
Re: [PATCH] xfstests: Improve test 219 to work with all filesystems

To: Jan Kara <jack@xxxxxxx>
Subject: Re: [PATCH] xfstests: Improve test 219 to work with all filesystems
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 19 May 2011 21:15:21 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20110519104908.GB8417@xxxxxxxxxxxxx>
References: <1305644104-612-1-git-send-email-jack@xxxxxxx> <20110517231301.GX19446@dastard> <20110518082422.GD25632@xxxxxxxxxxxxx> <20110518234318.GA32466@dastard> <20110519104908.GB8417@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Thu, May 19, 2011 at 12:49:08PM +0200, Jan Kara wrote:
> On Thu 19-05-11 09:43:18, Dave Chinner wrote:
> > On Wed, May 18, 2011 at 10:24:22AM +0200, Jan Kara wrote:
> > > On Wed 18-05-11 09:13:01, Dave Chinner wrote:
> > > > On Tue, May 17, 2011 at 04:55:04PM +0200, Jan Kara wrote:
> > > > > Different filesystems account different amounts of metadata in
> > > > > quota. Thus it is impractical to check for a particular amount of
> > > > > space occupied by a file because there is no right value. Change
> > > > > the test to verify whether the amount of space before quotacheck
> > > > > and after quotacheck is the same, as other quota tests do.
> > > > 
> > > > Except that the purpose of the test is to check that the accounting
> > > > correctly matches the blocks allocated via direct IO, buffered IO
> > > > and mmap, not that quota is consistent over a remount.
> > > > 
> > > > IOWs, the numbers do actually matter - for example, the recent
> > > > changes to speculative delayed allocation beyond EOF for buffered IO
> > > > in XFS could be causing large numbers of blocks to be left after EOF
> > > > incorrectly, but the exact block number check used in the test would
> > > > catch that. The method you propose would not catch it at all, and
> > > > we'd be oblivious to an undesirable change in behaviour.
> > >   Hmm, I guess we are thinking of different errors to catch with the
> > > test. I was more thinking that the test tries to catch errors where
> > > we forget to account allocated blocks in quota, or so. But you are
> > > right, there are other tests to catch this, although they don't test
> > > e.g. direct IO, I think.
> > > 
> > > > IMO, a better filter function would be the way to go - one that
> > > > takes into account that there might be some metadata blocks
> > > > allocated, but that no less than 3x48k should have been allocated
> > > > to the quotas...
> > >   OK, but if I just check that the amount of space is >= 3x48k, your
> > > sample problem with xfs would pass anyway.
> > 
> > Not if it was done like we do with the randholes tests (008), where we
> > use the _within_tolerance function to determine if the number of holes
> > is acceptable.
> > 
> > >
> > > What would be nice is to know the right value, but that depends on
> > > fs type and also on fs parameters (fs block size in ext3/ext4), so it
> > > would be a fairly large chunk of code to compute the right value -
> > > that's why I chose quotacheck to do the work for me...  But I guess I
> > > can do that if you think it's worth it.
> > 
> > Yes, block size will alter the number of blocks, but remember that
> > repquota is reporting it in kilobytes, so the number should always be
> > around 144. Hence checking the result is 144 +/- 5% would probably
> > cover all filesystems and all configurations without an issue, and
> > it would still catch gross increases in disk usage...
>   Ah, OK, that makes sense and works for the common cases I'm aware of
> (it won't work, for example, for ext4 with 64k block size, or ocfs2 with
> a large cluster size, where the allocation unit can be up to 1M).
> Anyway, it probably has the best effort/result ratio, so I'll do this.
> Thanks for the ideas.
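The 144 +/- 5% check being discussed could look something like the
following; this is only a minimal standalone sketch in the spirit of
xfstests' _within_tolerance helper, not the real test code, and the
variable names and the example repquota value are assumptions:

```shell
# Sketch of a tolerance check (assumed, not the actual xfstests helper):
# pass if the usage repquota reports is within +/-5% of the expected
# 144k (three writes of 48k each).
expected=144
tolerance_pct=5
observed=150          # example value; would come from repquota output

min=$(( expected * (100 - tolerance_pct) / 100 ))   # 136
max=$(( expected * (100 + tolerance_pct) / 100 ))   # 151

if [ "$observed" -ge "$min" ] && [ "$observed" -le "$max" ]; then
    echo "usage ${observed}k within tolerance"
else
    echo "usage ${observed}k out of tolerance (${min}..${max})"
fi
```

A percentage band like this tolerates a few metadata blocks of slack per
filesystem while still failing on gross over-accounting.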

The write size of 48k is completely arbitrary, so if we need to
change it to work better with larger block sizes, then that is
easy to do...
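To see why large allocation units break a fixed expectation, some rough
arithmetic (the unit sizes here are illustrative, not taken from any
particular configuration): each 48k file gets rounded up to a whole
allocation unit, so the expected usage for the test's three files varies
widely with the unit size.

```shell
# Round each 48k file up to a whole allocation unit and sum the three
# files; illustrative unit sizes only.
for unit_kb in 4 64 1024; do
    per_file=$(( (48 + unit_kb - 1) / unit_kb * unit_kb ))
    echo "unit=${unit_kb}k: 3 files use $(( 3 * per_file ))k"
done
# unit=4k -> 144k, unit=64k -> 192k, unit=1024k -> 3072k
```

With a 1M allocation unit the three files consume 3072k, some 20x the
144k that a 4k-block filesystem would report, so no single fixed number
covers both.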

Anyway, there are lots of other tests that will break with 64k block
sizes, so this is the least of our worries at this point...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
