> I'm going to be building an array later today, and, seeing as I
> won't have much time to test different combinations of options
> for performance (I'll be lucky, as usual, if I can get stability
> testing done before the box is wanted for production), I thought
> I'd ask about what might be good options to try.
> The box will have...
> - 16 75G IDE drives in a single software RAID 5 array
> - 2 30G IDE drives containing the boot, root, and the big
> array's logdev on RAID 1 partitions.
> It'll mostly be film frames - 6MB to 12MB, or thereabouts - with
> a comparatively small amount of metadata activity (mostly
> compositing work).
> Any suggestions on a likely good logdev size?
> Any other suggestions to improve XFS performance for this setup?
You may want to do a couple of experiments with different stripe sizes on
the RAID (the drawback being that this means doing rebuilds between
tests). With large files like these you are probably doing large I/O,
so the stripe size may make a difference, although it is not something
I have experimented with on Linux.
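As a hypothetical sketch (assuming the array is built with mdadm, and
with placeholder device names), the rebuilds between tests might look
like:

# Build the 16-drive RAID 5 array with a 64k chunk (stripe) size.
mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=64 /dev/hd[e-t]1

# Tear down and rebuild with a larger chunk size for comparison:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=256 /dev/hd[e-t]1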
Make sure you have the latest mkfs binary; more recent ones will make
a better attempt at laying out the filesystem on a large setup like this.
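A quick way to check what you have (assuming a reasonably recent
xfsprogs; the exact output format will vary by release):

# Print the mkfs.xfs version string.
mkfs.xfs -V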
You will be crossing the 1 Tbyte limit with this filesystem (16 drives
at 75 Gbytes is roughly 1.2 Tbytes raw, or about 1.1 Tbytes once one
drive's worth of RAID 5 parity is subtracted), so you should use a
larger than default inode size:
mkfs -t xfs -f -i size=512
In general you will probably not need a huge log; in fact, XFS logs are
never more than 128 Mbytes in size on Linux. A size of 64M is probably
ample, and you can get this with -l size=16384b (16384 blocks at the
default 4k block size is 64 Mbytes).
So a mkfs line along the lines of:
mkfs -t xfs -f -i size=512 -l size=16384b /dev/xxx
would be a good starting point.
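If the stripe size experiments above settle on a chunk size, you could
also tell mkfs about the RAID geometry, and the logdev you mention on
the RAID 1 pair can be named at the same time. This is only a sketch:
it assumes a 64k chunk, 15 data disks (16 minus one for parity), and
placeholder device names; sunit and swidth are given in 512-byte
sectors.

# sunit  = chunk / 512        = 65536 / 512 = 128 sectors
# swidth = sunit * data disks = 128 * 15    = 1920 sectors
# /dev/md1 stands in for the RAID 1 log partition, /dev/md0 for the array.
mkfs -t xfs -f -i size=512 -d sunit=128,swidth=1920 \
  -l logdev=/dev/md1,size=16384b /dev/md0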
I would mount with more than the default number of log buffers:
mount -o logbufs=4
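Spelled out in full (device names and mount point are placeholders,
and the logdev option only applies if an external log was specified at
mkfs time):

# Four in-core log buffers instead of the default, pointing the mount
# at the external log partition.
mount -t xfs -o logbufs=4,logdev=/dev/md1 /dev/md0 /mnt/frames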
> Andrew Klaassen