On 20 Mar 2001 at 8:28am, Martin K. Petersen wrote:
> >>>>> "Joshua" == Joshua Baker-LePain <jlb17@xxxxxxxx> writes:
> Joshua> My RAID is an IDE-SCSI system -- IDE disks, looks like SCSI to
> Joshua> the host. It has 8 80GB drives in RAID 5, for about 560GB of
> Joshua> usable space. I'd like it to be just one partition. My
> Joshua> question is, are there any special parameters I should pass to
> Joshua> mkfs for such a big (well, for me at least) partition? What
> Joshua> about to tailor it to the hardware RAID?
> Well. You could try and hint to the allocator how your RAID device is
> set up. Try and look up which chunk size it uses. You can pass that
> information on to mkfs using the sunit and swidth options. See the
> mkfs.xfs man page for details.
The array's default stripe (chunk) size is 64 blocks of 512 bytes each,
and it can be set anywhere between 4 and 128 blocks. For the default,
would this be correct:
mkfs.xfs -d sunit=1,swidth=64
Or should I multiply both of those numbers by 512? Also, should I make
the log any bigger than the default, given the size of the partition?
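For what it's worth, my reading of the man page is that sunit and swidth
are both given in 512-byte sectors: sunit should match the hardware chunk
size, and swidth should be sunit times the number of data disks (7 here,
since RAID5 gives up one disk's worth to parity). A sketch of the
arithmetic, assuming the controller really does use 64-sector (32 KB)
chunks -- the device name is a placeholder:

```shell
CHUNK_SECTORS=64             # hardware chunk size in 512-byte sectors (assumption)
NDISKS=8                     # total disks in the RAID5 set
DATA_DISKS=$((NDISKS - 1))   # RAID5 loses one disk's worth of space to parity
SUNIT=$CHUNK_SECTORS         # stripe unit, in 512-byte sectors
SWIDTH=$((SUNIT * DATA_DISKS))  # full stripe width across the data disks
echo "mkfs.xfs -d sunit=$SUNIT,swidth=$SWIDTH /dev/sdX1"
```

With those assumptions the printed command would be
mkfs.xfs -d sunit=64,swidth=448 /dev/sdX1 -- but please check the chunk
size your controller actually uses before running anything.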
> Please note that for a RAID5, choosing the right chunk size for the
> workload is crucial for performance. If you set it too high, you end
> up doing a lot of I/O on every write. If you set it too low, your
> sustained I/O performance may suffer.
> It all depends on your workload and the RAID array in question (how
> much cache it has, etc.). I recommend you experiment a bit.
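For the experimenting, a crude sequential-throughput check with dd is
probably the closest match to my large-file NFS workload. A sketch --
the mount point and file name are placeholders, and the file size is
chosen to be well past the 128 MB controller cache so the numbers
reflect the disks rather than the cache:

```shell
TARGET=${TARGET:-/mnt/raid}   # placeholder mount point for the array (assumption)
SIZE_MB=${SIZE_MB:-1024}      # write well past the 128 MB cache

# Write test: conv=fsync forces the data out before dd reports its timing.
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count="$SIZE_MB" conv=fsync

# Read test of the same file.
dd if="$TARGET/ddtest" of=/dev/null bs=1M

rm -f "$TARGET/ddtest"
```

Repeating this after remaking the filesystem with different sunit/swidth
(or different controller chunk sizes) should show which combination the
hardware likes.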
The system is maxed out at 128MB of cache RAM. The primary use will be
reading and writing (over NFS) lots of largish (250+MB) data files.
Sorry to pester everyone with my RAID naivete. I do indeed plan on doing
lots of testing. Thanks, all.
Department of Biomedical Engineering