
Re: How long should an xfs_freeze take?

To: Chris Pascoe <c.pascoe@xxxxxxxxxxxxxx>
Subject: Re: How long should an xfs_freeze take?
From: Steve Lord <lord@xxxxxxx>
Date: 30 Jan 2002 17:11:49 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <010601c1a9e0$f4210e20$47426682@xxxxxxxxxxxxxx>
References: <023501c1a953$f059a0f0$47426682@xxxxxxxxxxxxxx> <3C57F97A.7000400@xxxxxxx> <010601c1a9e0$f4210e20$47426682@xxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Wed, 2002-01-30 at 16:51, Chris Pascoe wrote:
> > How long freeze takes depends on how much data is dirty in the
> > filesystem, it should take a similar time to an unmount. However,
> > unless this was a very large and dirty filesystem this feels
> > like a long time - although it was all system time,
> > so something was going on.
> 6 hours is a long time for an unmount!  The volume is ~70GB, and had no
> activity on it before I tried to create the snapshots - the machine had just
> been rebooted before some attempts.

Whoa! It was early this morning and I missed a few digits when I read the
elapsed time! Yes, that is a leetle on the long side. There is a complex
loop in the xfs_syncsub function; I am not sure yet how it can get stuck.


> > The spot you kept seeing on the stack does not make much sense, vn_count is
> > basically an atomic_read.
> It just seems that's the place I've hit the break key the most.  A closer
> examination of my console log suggests it isn't stuck in vn_count; there are
> a few occurrences of it being somewhere else in xfs_iflush_all - so maybe
> it's got itself tangled in a loop for a few hours?
> 0xe95c3b88 0xc01a1a63 xfs_iflush_all+0xc3
>                         (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
> 0xe95c3b88 0xc01a1ab9 xfs_iflush_all+0x119
>                         (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
> 0xe95c3b88 0xc01a1ac3 xfs_iflush_all+0x123
>                         (0xf6d1f000, 0x1, 0xf6d1f000, 0xc, 0xc039ce80)
> > How repeatable is this?
> The problem was persistent across reboots - I rebooted to install a
> different LVM version at some stage (moving from 1.0.1 to 1.0.2), and it
> occurred five times in a row after that.  I just rebooted now to say that it
> still happens - but, alas, it doesn't want to any more.  I'll try again at
> some random times throughout the day.
> Chris

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx
