[kwlug-disc] MDADM and RAID
Rashkae
rashkae at tigershaunt.com
Tue Mar 2 13:02:08 EST 2010
Chris Irwin wrote:
> My unscientific test command was the following:
>
> for i in $(seq 1 5); do
> sync;
> echo -e "\n===Iteration ${i}===";
> time $( dd bs=1M count=500 if=/dev/zero of=/home/chris/zero; sync );
> done
>
> For my raid5 test, I wrote to my home, which is on that array. For my
> non-raid comparison (non-raid disk plugged into same controller) I
> wrote to where that was mounted. Both are ext4 filesystems. I averaged
> the 'real' time over the five runs.
>
> raid5 gives: 12.232 seconds
> bare disk gives: 6.8586 seconds
>
> Now I'm not expecting miracles and I'm willing to take a performance
> hit to spare the cost of a dedicated raid controller, but is 50%
> throughput really the norm for raid5 with mdadm? And that is just with
> this simple test, I experienced much worse than 50% throughput with my
> lvm migration....
>
> For anybody who is curious, here is the raw output from my tests:
>
> [chris at jupitertwo:~]$ for i in $(seq 1 5); do sync; echo -e
> "\n===Iteration ${i}==="; time $( dd bs=1M count=500 if=/dev/zero
> of=/home/chris/zero; sync ); done
>
> ===Iteration 1===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.25245 s, 419 MB/s
>
> real 0m11.135s
> user 0m0.010s
> sys 0m3.430s
>
> ===Iteration 2===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.26102 s, 416 MB/s
>
> real 0m11.994s
> user 0m0.000s
> sys 0m4.710s
>
> ===Iteration 3===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.1752 s, 446 MB/s
>
> real 0m12.962s
> user 0m0.010s
> sys 0m5.180s
>
> ===Iteration 4===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.16783 s, 449 MB/s
>
> real 0m12.707s
> user 0m0.010s
> sys 0m5.160s
>
> ===Iteration 5===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.16648 s, 449 MB/s
>
> real 0m12.362s
> user 0m0.000s
> sys 0m4.710s
>
>
> [chris at jupitertwo:~]$ for i in $(seq 1 5); do sync; echo -e
> "\n===Iteration ${i}==="; time $( dd bs=1M count=500 if=/dev/zero
> of=/mnt/backup/chris/zero; sync ); done
>
> ===Iteration 1===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.14573 s, 458 MB/s
>
> real 0m6.909s
> user 0m0.000s
> sys 0m4.240s
>
> ===Iteration 2===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.24725 s, 420 MB/s
>
> real 0m6.886s
> user 0m0.010s
> sys 0m4.350s
>
> ===Iteration 3===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.20556 s, 435 MB/s
>
> real 0m6.856s
> user 0m0.000s
> sys 0m4.480s
>
> ===Iteration 4===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.11238 s, 471 MB/s
>
> real 0m6.663s
> user 0m0.000s
> sys 0m4.370s
>
> ===Iteration 5===
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.25068 s, 419 MB/s
>
> real 0m6.979s
> user 0m0.000s
> sys 0m4.530s
>
>
This kind of test is really not very helpful... The most obvious flaw
in the method is that you have very little control over where the
filesystem actually writes the data on the disk. There is a much greater
than 50% difference in throughput between the inner edge of a disk and
the outer edge. When doing a test like this, you either have to fill the
disk, or create very small partitions at the same location on each disk.
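If you were to try the small-partition approach, something along these
lines would do it. This is only a rough sketch: the device names
(/dev/sd[b-f]) and the 10 GiB size are placeholders, and it wipes
whatever is on those disks.

  # Hypothetical devices -- adjust before running; this destroys their contents.
  for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
      parted -s "$d" mklabel msdos mkpart primary 1MiB 10GiB
  done

  # Four partitions become the raid5 array; the fifth is the bare-disk control.
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  mkfs.ext4 /dev/md0
  mkfs.ext4 /dev/sdf1

Since every partition starts at the same offset, each test hits the same
physical region of its disk. (Let the initial resync finish -- watch
/proc/mdstat -- before timing anything.)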
That being said, I would normally expect a noticeable speed improvement
with a 4-device RAID 5 array, not a penalty. It's been a while since I
experimented with this, however, and I do not have 4 decent drives on
hand to create a comparison test for you. (Maybe a weekend project if I
have time and this hasn't been settled by someone knowledgeable on the
subject.)
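For the timing itself, a small variant of Chris's loop using dd's
conv=fdatasync flag would also help: dd then flushes the file to disk
before printing its numbers, so the MB/s figure reflects the drives
rather than the page cache, and the raid5 and bare-disk runs can be
compared directly. The mount points here are placeholders, not the ones
from his message.

  for target in /mnt/raidtest /mnt/baretest; do
      echo "=== $target ==="
      for i in $(seq 1 5); do
          # conv=fdatasync: flush to disk before dd reports its throughput
          dd bs=1M count=500 if=/dev/zero of="$target/zero" conv=fdatasync 2>&1 | tail -n 1
          rm -f "$target/zero"
      done
  done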