On Mon, Jul 07, 2008 at 10:04:09AM +0200, Jens Beyer wrote:
> On Fri, Jul 04, 2008 at 12:59:41AM -0700, Dave Chinner wrote:
> > On Fri, Jul 04, 2008 at 08:41:26AM +0200, Jens Beyer wrote:
> > >
> > > I have encountered a strange performance problem during some
> > > hardware evaluation tests:
> > >
> > > I am running a benchmark to measure especially random read/write
> > > I/O on a RAID device and found that (under some circumstances)
> > > the performance of random read I/O is inversely proportional to the
> > > size of the tested XFS filesystem.
> > >
> > > In numbers this means that on a 100GB partition I get a throughput
> > > of ~25 MB/s and on the same hardware at 1TB FS size only 18 MB/s
> > > (and at 2+ TB like 14 MB/s) (absolute values depend on options,
> > > kernel version and are for random read i/o at 8k test block size).
> > Of course - as the filesystem size grows, so does the amount of
> > each disk in use so the average seek distance increases and hence
> > read I/Os take longer.
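> > As a rough back-of-the-envelope illustration (not from the thread;
> > the function name and numbers are mine): if random read offsets are
> > drawn uniformly from the in-use span of the disk, the mean seek
> > distance between consecutive reads grows linearly with that span,
> > which is consistent with throughput dropping as the filesystem grows.
> >
> > ```python
> > import random
> >
> > def mean_seek_gib(span_gib, samples=100_000, seed=1):
> >     """Estimate the mean seek distance (in GiB) between consecutive
> >     random read offsets drawn uniformly from a span of span_gib."""
> >     rng = random.Random(seed)
> >     prev = rng.uniform(0, span_gib)
> >     total = 0.0
> >     for _ in range(samples):
> >         cur = rng.uniform(0, span_gib)
> >         total += abs(cur - prev)
> >         prev = cur
> >     return total / samples
> >
> > # Spans matching the 100GB / 1TB / 2TB tests above; the mean seek
> > # converges to span/3, i.e. it scales with filesystem size.
> > for span in (100, 1000, 2000):
> >     print(f"{span:5d} GiB span -> mean seek ~ {mean_seek_gib(span):.0f} GiB")
> > ```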
> But then - why does the rate of ext3 not decrease, instead staying at
> the higher value?
Because XFS spreads the data and metadata across the entire
filesystem, not just a small portion. It's one of the reasons XFS
can make effective use of lots of disks. Grab seekwatcher
traces from your workload for the different filesystems and you'll
see what I mean....