On Fri, 18 Jan 2008, Greg Cormier wrote:
> Justin, thanks for the script. Here's my results. I ran it a few times
> with different tests, hence the small number of results you see here;
> I slowly trimmed out the obviously non-ideal sizes.
Nice, we all love benchmarks!! :)
> 4x500GB WD RAID Editions, RAID 5. sde is the old 4-platter version
> (5000YS); the others are the 3-platter version. Faster :-)
> Timing buffered disk reads: 240 MB in 3.00 seconds = 79.91 MB/sec
> Timing buffered disk reads: 248 MB in 3.01 seconds = 82.36 MB/sec
> Timing buffered disk reads: 248 MB in 3.02 seconds = 82.22 MB/sec
>
> /dev/sde: (older model, 4 platters instead of 3)
> Timing buffered disk reads: 210 MB in 3.01 seconds = 69.87 MB/sec
>
> Timing buffered disk reads: 628 MB in 3.00 seconds = 209.09 MB/sec
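Those lines look like `hdparm -t` output. A small helper (a sketch, not part of the original test script) can average the MB/sec figures from repeated runs; the sample lines below are copied from the results above:

```shell
# Average the MB/sec figures from repeated `hdparm -t` runs;
# the sample lines are copied from the results in this thread.
results='Timing buffered disk reads: 240 MB in 3.00 seconds = 79.91 MB/sec
Timing buffered disk reads: 248 MB in 3.01 seconds = 82.36 MB/sec
Timing buffered disk reads: 248 MB in 3.02 seconds = 82.22 MB/sec'

# Split each line on "= " so the second field starts with the MB/sec number.
avg=$(printf '%s\n' "$results" | awk -F'= ' '{sum += $2; n++} END {printf "%.2f", sum / n}')
echo "average: $avg MB/sec"
```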
> Test was: dd if=/dev/zero of=/r1/bigfile bs=1M count=10240; sync
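For reference, a scaled-down version of that write test, runnable anywhere: the original wrote 10 GiB (bs=1M count=10240) to the array mount; here we write 16 MiB to a temp file just to show the shape of the test.

```shell
# Scaled-down sketch of the sequential-write test quoted above.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=16 2>/dev/null
sync                       # flush, so timing covers the actual write
size=$(wc -c < "$tmp")
echo "wrote $size bytes"
rm -f "$tmp"
```

Wrapping the `dd`/`sync` pair in `time` gives the throughput figure per chunk size.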
For your configuration, a 64-256 KiB chunk seems optimal for this test.
> Test was: unraring multipart RARs, 1.2 gigabytes. Source and destination
> drive were the RAID array.
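A sketch of how that extraction test can be timed; the archive name is hypothetical, and `sleep` stands in for the real unrar run here so the sketch is runnable anywhere:

```shell
# Hedged sketch of timing the extraction test.
start=$(date +%s)
# unrar x /r1/archive.part01.rar /r1/out/   # the real test command
sleep 1                                     # stand-in for the extraction
end=$(date +%s)
elapsed=$((end - start))
echo "extraction took ${elapsed}s"
```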
> 1 meg looks like it's the best, which is what I use today.

A 1 MiB chunk offers the best performance by far, at least in all of my
testing (with big files) such as the tests you performed.
> So, there's a toss-up between 256 and 512.
Yeah, for dd performance, not real-life use.
> If I'm interpreting correctly here, raw throughput is better with 256,
> but 512 seems to work better with real-world stuff?
Look above, 1 MiB got you the fastest unrar time.
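The chunk size is fixed at array-creation time with mdadm's `--chunk` option (in KiB). A hedged example; the device names and md number are assumptions, and the command is only printed here, not executed:

```shell
# Hedged example of setting the chunk size at array-creation time.
chunk=1024   # KiB, i.e. the 1 MiB chunk that won the unrar test above
cmd="mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=$chunk /dev/sd[bcde]1"
echo "$cmd"
```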
Also, don't use ext*; XFS can be up to 2-3x faster (in many of the cases).
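If you do try XFS, its stripe unit/width can be matched to the array geometry (RAID 5 on 4 disks leaves 3 data disks). A sketch under those assumptions; the mkfs line is only printed, not executed:

```shell
# Hedged sketch: align XFS stripe unit (su) and width (sw) to the array.
chunk_kib=1024
ndisks=4
ndata=$((ndisks - 1))              # RAID 5 loses one disk to parity
stripe_kib=$((chunk_kib * ndata))  # full stripe = chunk * data disks
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${ndata} /dev/md0"
echo "full stripe: ${stripe_kib} KiB"
```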
> I'll try to think up another test or two perhaps, and remove 64 from the
> possible options to save time (mke2fs takes a while on 1.5 TB).
> Next step will be playing with read-aheads and stripe cache sizes, I
> guess! I'm open to any comments/suggestions you guys have!
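Those two knobs are the device read-ahead (`blockdev --setra`, in 512-byte sectors) and the md-layer `stripe_cache_size` sysfs entry. A hedged sketch; the md0 name and the starting values are assumptions, so the commands are printed rather than run:

```shell
# Hedged sketch of the two tuning knobs mentioned above.
ra_sectors=16384                # read-ahead in 512-byte sectors
ra_bytes=$((ra_sectors * 512))  # = 8 MiB
scache=4096                     # stripe_cache_size, in pages per device
echo "blockdev --setra $ra_sectors /dev/md0"
echo "echo $scache > /sys/block/md0/md/stripe_cache_size"
echo "read-ahead: $ra_bytes bytes"
```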