On Wed, 16 Jan 2008, Al Boldi wrote:
Remember, this is software RAID, so max_sectors_kb only affects the
individual disks underneath the array. I have benchmarked this in the past;
the defaults chosen by the kernel are optimal, and changing them did not
make any noticeable improvement.
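For reference, the per-disk limit Al is talking about lives in sysfs. A small sketch of reading it back for each member disk; the SYSROOT variable is a hypothetical knob (so the same code can be pointed at a fake tree), and the device names passed at the bottom are assumptions:

```shell
#!/bin/sh
# Read the kernel-chosen max_sectors_kb for each named block device.
# SYSROOT defaults to the real sysfs; override it for testing.
SYSROOT="${SYSROOT:-/sys}"

show_max_sectors() {
    for d in "$@"; do
        f="$SYSROOT/block/$d/queue/max_sectors_kb"
        if [ -r "$f" ]; then
            printf '%s: %s KiB\n' "$d" "$(cat "$f")"
        else
            printf '%s: no max_sectors_kb entry\n' "$d"
        fi
    done
}

show_max_sectors sda sdb sdc
```

Writing a new value back (echo N > .../max_sectors_kb) is what the retest request below is about.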
Justin Piszcz wrote:
For these benchmarks I timed how long it takes to extract a standard 4.4
Settings: Software RAID 5 with the following settings (until I change
blockdev --setra 65536 /dev/md3
echo 16384 > /sys/block/md3/md/stripe_cache_size
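A stripe_cache_size of 16384 is far above the md default of 256, and it is not free: the raid5 driver holds one page per member device per cached stripe. A back-of-envelope sketch of the memory cost (the 10-disk count is an assumption for illustration):

```shell
# raid5 stripe cache memory = stripe_cache_size * PAGE_SIZE * nr_disks.
STRIPE_CACHE=16384   # value echoed into stripe_cache_size above
PAGE_KIB=4           # typical x86 PAGE_SIZE
NDISKS=10            # assumed disk count, for illustration only
echo "$(( STRIPE_CACHE * PAGE_KIB * NDISKS / 1024 )) MiB"   # → 640 MiB
```

So on a box with modest RAM this setting alone can pin several hundred MiB.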
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
    echo "Disabling NCQ on $i"
    echo 1 > /sys/block/"$i"/device/queue_depth
done
p34:~# grep : *chunk* | sort -n
It would appear a 256 KiB chunk-size is optimal.
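Once a 256 KiB chunk is settled on, the filesystem can be told about it too. A sketch of deriving the ext2/3 mkfs RAID hints from that chunk (4 KiB filesystem block; the 3-disk RAID5 geometry is an assumption):

```shell
# mke2fs RAID hints: stride = chunk / fs block size;
# stripe-width = stride * data disks (N-1 for RAID5).
CHUNK_KIB=256
BLOCK_KIB=4
NDISKS=3                                     # assumed array width
STRIDE=$(( CHUNK_KIB / BLOCK_KIB ))          # 64
STRIPE_WIDTH=$(( STRIDE * (NDISKS - 1) ))    # 128
echo "mke2fs -b 4096 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH"
```

This keeps the allocator's idea of the stripe in line with what md is actually doing.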
> Can you retest with different max_sectors_kb on both md and sd?
> Also, can you retest using dd with different block-sizes?
I can do this, one moment..
I know about oflag=direct, but I chose to run dd followed by sync and
measure the total time it takes.
/usr/bin/time -f %E -o ~/$i=chunk.txt \
    bash -c 'dd if=/dev/zero of=/r1/bigfile bs=1M count=10240; sync'
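The requested block-size sweep can reuse the same time-then-sync method. A dry-run sketch that prints one command per block size, holding total data at 10 GiB so the runs stay comparable (paths are the ones from the command above):

```shell
# Print (not run) one dd command per block size; bs * count is held at
# 10 GiB for every run. Pipe the output to sh to actually execute it.
for bs in 64 256 1024 4096; do               # block size in KiB
    count=$(( 10 * 1024 * 1024 / bs ))       # bs KiB * count = 10 GiB
    echo "/usr/bin/time -f %E bash -c" \
         "'dd if=/dev/zero of=/r1/bigfile bs=${bs}k count=$count; sync'"
done
```

Capturing each %E line then gives a per-bs timing table like the per-chunk one below.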
So, as asked on the mailing list, I tested dd across the various chunk
sizes; here is how long it took to write 10 GiB and sync for each
chunk size: