On Wednesday 23 September 2009 Eric Sandeen wrote:
> so that's slowed down a bit. Weird that more calls, originally, made
> it faster overall...?
You wrote in the patch description that bufsize became very large when it
went below zero. Could it be that at those times a big readahead was
issued, and that's why the speed improved?
Why were there more calls before that patch?
> But one thing I noticed is that we choose readahead based on a guess
> at the readdir buffer size, and at least for glibc's readdir it has
> const size_t default_allocation =
> (4 * BUFSIZ < sizeof (struct dirent64) ?
> sizeof (struct dirent64) : 4 * BUFSIZ);
> where BUFSIZ is a magical 8192.
> But we do at max PAGE_SIZE which gives us almost no readahead ...
> So bumping our "bufsize" up to 32k, things speed up nicely. Wonder
> if the stock broken bufsize method led to more inadvertent
Is it possible to increase it further, to see if things improve even more?
Maybe that is what made the difference in the old version?
In general, I'd opt for at least 64KB buffers; that's the smallest I/O
size that keeps hard disks busy, and RAIDs usually have stripe sizes of
64KB or bigger. But I don't know how scattered directories are on XFS, or
whether you can expect them to be sequential.
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4