
To: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Subject: FYI: LSI rebuilding; and XFS speed V. raw - hints on maxing out 'dd'....(if not already obvious)...
From: Linda Walsh <xfs@xxxxxxxxx>
Date: Fri, 10 Jun 2011 18:33:08 -0700
Cc: Paul Anderson <pha@xxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20110502191323.417ef644@xxxxxxxxxxxxxxxxxxxx>
References: <BANLkTik4YjSr7-VA+f9Sh+UxvKfFKMy=+w@xxxxxxxxxxxxxx> <20110502191323.417ef644@xxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.24) Thunderbird/2.0.0.24 Mnenhy/0.7.6.666


Emmanuel Florac wrote:
On Mon, 2 May 2011 11:47:48 -0400,
Paul Anderson <pha@xxxxxxxxx> wrote:

We are deploying five Dell 810s, 192GiB RAM, 12 core, each with three
LSI 9200-8E SAS controllers, and three SuperMicro 847 45 drive bay
cabinets with enterprise grade 2TB drives.

I have very little experience with these RAID controllers. However I
have a 9212 4i4e (same card generation and same chipset) in test, and so
far I must say it looks like _utter_ _crap_. The performance is abysmal
(it's been busy rebuilding a 20TB array for... 6 days!); the server
----
By default the card only allocates about 20% of its disk I/O capacity to
rebuilds, with the rest reserved for 'real work'.   It's not smart enough
to use 100% when there is no real work... If you enter the control software
(runs under X on Linux -- it even displays over Cygwin/X) and set something
like 90%, you'll find your rebuilds go much faster, but you can expect any
real access to the device to suffer accordingly.

I have a 9285-8E and have been pretty happy with its performance, but I only
have 10 data disks (2x6-disk RAID5 => RAID50) with 2TB SATAs and get about
1 GB/s -- roughly what I'd expect from disks that do around 120 MB/s each
(10 x 120 MB/s ~= 1.2 GB/s raw), minus the overhead of the two RAID5 parity
calculations...

----

The only other things I can think of when benchmarking XFS for max throughput:

1) A realtime partition might be an option (I've never tried one, but thought
I'd mention it)
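For reference -- and this is a sketch only, since I haven't tried it -- the
setup would look roughly like the following. The device names and mount point
are placeholders; the realtime (data-only) section needs its own device,
separate from the one holding metadata and the log:

```shell
# Hypothetical layout: /dev/sdb becomes the realtime data section,
# /dev/sda1 holds the metadata/log.  Both device names are placeholders.
mkfs.xfs -r rtdev=/dev/sdb /dev/sda1
mount -o rtdev=/dev/sdb /dev/sda1 /mnt/rt

# Files only land on the rt section if flagged; setting the rt-inherit
# bit ('t') on a directory makes files created inside it use the rt device.
xfs_io -c "chattr +t" /mnt/rt
```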

2) On "dd", if you are testing write performance, try pre-allocating the file
using (filling in the variables):

  xfs_io -f -c "truncate $size" -c "resvsp 0 $size" "$Newfile"

then test for fragmentation
(see whether it's all in one extent -- xfs_bmap prints one extent per line):

        xfs_bmap  "$Newfile"

if needed, try defragmenting it:

        xfs_fsr "$Newfile"

Then on "dd" use the conv=nocreat,notrunc flags -- that way you can dump I/O
directly into the file without it having to be created or allocated first...
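Pulling those steps together, a minimal sketch (the file path and size are
placeholders -- point it at your XFS filesystem and scale the size well past
RAM for a real benchmark; the fallback line is only there so the sketch also
runs on filesystems where resvsp isn't supported):

```shell
#!/bin/sh
# Placeholders -- adjust for your setup; use many GBs for a real test.
size=$((64 * 1024 * 1024))
Newfile="${TMPDIR:-/tmp}/ddtest.bin"

# Pre-allocate: create the file at full size and reserve its blocks.
# (resvsp is XFS-specific; fall back to a plain truncate elsewhere.)
xfs_io -f -c "truncate $size" -c "resvsp 0 $size" "$Newfile" ||
    truncate -s "$size" "$Newfile"

# Check fragmentation -- ideally a single extent, printed one per line.
xfs_bmap "$Newfile" || true   # only meaningful on XFS

# Write through the pre-allocated blocks; nocreat,notrunc keeps dd
# from re-creating or truncating (and thus de-allocating) the file.
dd if=/dev/zero of="$Newfile" bs=1M count=64 conv=nocreat,notrunc
```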
