
To: dgc@xxxxxxx
Subject: ***** SUSPECTED SPAM ***** Re: Optimal mkfs settings for md RAID0 over 2x3ware RAIDS
From: andrewl733@xxxxxxx
Date: Mon, 14 Jan 2008 22:02:16 -0500
Cc: xfs@xxxxxxxxxxx
Importance: Low
In-reply-to: <20080114225552.GU155259@xxxxxxx>
References: <8CA24C7D24953CD-93C-29C2@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <20080114225552.GU155259@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
Thanks for your speedy reply. Please see a follow-up question below.



> On Mon, Jan 14, 2008 at 08:23:44AM -0500, andrewl733@xxxxxxx wrote:
>> Hello XFS list,
>> 
>> I am trying to figure out the optimal mkfs settings for a large
>> array (i.e., 18 TB) consisting of 2 or 4 PHYSICAL 3ware RAID-5
>> arrays striped together with Linux software RAID-0. As far as I
>> can tell, this question about combining physical and software RAID
>> has not been asked or answered on the list.
>> 
>> As I understand it, for a SINGLE 12-drive 3ware PHYSICAL Hardware
>> RAID-5 created with a 3ware-defined "stripe size" of 64k, the
>> optimal mkfs setting should be:
>> 
>> mkfs.xfs -d su=64k,sw=11 /dev/sdX
>> 
>> The question is, what is optimal if I stripe together TWO of these
>> Physical Hardware RAID-5 arrays as a SOFTWARE RAID-0? Casual
>> testing shows that striping together two PHYSICAL RAIDs in this way
>> can yield a performance gain of approximately 60 percent versus a
>> single 12-drive array. But in order to optimize the RAID-0 device,
>> would the correct mkfs be:
>> 
>> mkfs.xfs -d su=64k,sw=22 /dev/mdX
>> 
>> There are now 24 drives minus two for parity. Is the logic correct here?

> Depends on your workload and file mix. For lots of small files,
> the above will work fine. For maximum bandwidth, it will suck.
> 
> For maximum bandwidth you want XFS to align to the start of a RAID5
> lun and do full RAID5 stripe width allocations so that large
> allocations do not partially overlap RAID5 luns.
> 
> i.e. with what you suggested, an allocation of 22x64k (full
> filesystem stripe width) will only be aligned to the underlying
> hardware in 2 of the possible 22 places it could be allocated with a
> 64k alignment. In the other 20 cases, you'll get one full RAID5
> write to one lun, and two sets of partial RMW cycles to the other
> lun because they are not full RAID5 stripe writes. That will be
> slow.
> 
> With su=11*64k,sw=2, a 22x64k allocation will always be aligned to
> the underlying geometry (until you start to run out of space) and
> hence both luns will do a full RAID5 stripe write and it will be
> fast.
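
So if I read that correctly, for the two-lun case the actual command would presumably be (my arithmetic, 11 * 64k = 704k):

mkfs.xfs -d su=704k,sw=2 /dev/mdX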


In fact, I am testing a 64-drive SAS array today -- 4 x 16-drive RAID-5
arrays striped together with Linux RAID-0. By your instructions, I
should do mkfs as follows:



mkfs.xfs -d su=960k,sw=4 /dev/mdX    (where su = 15 * 64k)



However, I get back the following message: 



mkfs.xfs: Specified data stripe unit 1920 is not the same as the volume stripe unit 512

mkfs.xfs: Specified data stripe width 7680 is not the same as the volume stripe width 2048



The filesystem gets created. What's wrong here? In this case I have
chosen to use a Linux md RAID-0 "chunk size" of 256k. I get a similar
message (with different numbers, of course) if I use a "chunk size" of
64k.
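
For what it's worth, if I decode those numbers on the assumption that mkfs.xfs reports them in 512-byte sectors, the complaint looks like a geometry mismatch rather than an invalid value:

1920 sectors = 960k   (the su I specified)
 512 sectors = 256k   (the md chunk size)
7680 sectors = 3840k = 4 x 960k   (the sw=4 width I specified)
2048 sectors = 1024k = 4 x 256k   (the md stripe width)

i.e. mkfs seems to probe a 256k x 4 geometry from the md device and warns because my su/sw values don't match it.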



Is there an optimal ratio of 3ware "stripe size" to Linux md "chunk size" that 
also must come into play here? 
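
My guess, extrapolating from the su=11*64k,sw=2 example above (and this is only a guess on my part), is that the md "chunk size" should equal one full RAID-5 data stripe so that the two layers line up:

3ware "stripe size"                 = 64k
data disks per 16-drive RAID-5 lun  = 15   (16 drives minus one parity)
full data stripe per lun            = 15 * 64k = 960k  -> md chunk size
RAID-0 members                      = 4                -> sw=4

mkfs.xfs -d su=960k,sw=4 /dev/mdX

assuming md will accept a 960k (non-power-of-two) chunk. Is that the right way to think about it?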



Thanks again in advance. 



Andrew





> Cheers,
> 
> Dave.


 





