Benchmarking 3 disk RAID array speeds

I have experienced very poor performance with the RAID setup on my new computer, so I decided to do a short test of the speed and capacity of different RAID configurations.

For the test I used an old computer with an Intel Core 2 Q6600 processor, with the disks connected to the SATA controllers on the mainboard. I used two 1 TB HDDs and one 250 GB HDD, all at least ten years old. The 250 GB HDD was a bit slower than the 1 TB drives, so in many cases the faster disks had to slow down to keep pace with the 250 GB disk.

Let's look at the results right away; we can dig into the details later.



As you can see, if you do not need redundancy, then nothing beats the RAID0 setup.

If you want redundancy (if one disk fails, your data is still available), then with 3 disks RAID5 is the optimal solution, both in terms of capacity and in terms of speed. If you read a lot from the disks and write less, you may consider RAID10 f2 because of its good read performance.

It is also interesting to compare the array speed to the speed of one disk, to see the efficiency of the different arrays:



General conditions

I only measured sequential throughput with big files, because I intend to store large files on this setup. Before executing the tests I tested the 3 individual disks in parallel, and they were able to keep the same speed as when used alone, so nothing in the system limits the combined throughput.

For speed testing, I used pv /dev/zero > /dev/md1 and pv /dev/md1 > /dev/null.

In addition I used the command iostat -xym 3 to check the individual load on each disk.
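Putting it together, a full measurement run looked roughly like this (a sketch; /dev/md1 is the array under test, as above):

# Write test: stream zeroes to the array, pv reports the throughput
pv /dev/zero > /dev/md1

# Read test: stream the array back to /dev/null
pv /dev/md1 > /dev/null

# In a second terminal: per-disk load, extended stats, in MB, every 3 seconds
iostat -xym 3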

RAID 0

This is the striped version: all disks are used to write the data and all disks are used to read the data.
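A minimal sketch of creating such an array, assuming the whole disks /dev/sda, /dev/sdb and /dev/sdc are used (device names are assumptions):

# Create a 3-disk striped (RAID0) array
mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# Verify the array
cat /proc/mdstat
mdadm --detail /dev/md1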

RAID0 Write (287 MB/s)                  
Device        rMB/s     wMB/s 
sda            0.00     94.28 
sdb            0.00     94.17 
sdc            0.00     94.17 
         
RAID0 Read (275 MB/s)
Device        rMB/s     wMB/s 
sda           92.42      0.00 
sdb           92.33      0.00 
sdc           92.33      0.00 



From the individual disk speeds you can see that each disk is slowed down to the speed of the slowest disk, but all 3 disks are used at that full speed.

In RAID 0 we can use the full capacity of the included disks, but if one disk fails, the whole array is lost.

RAID 1

Mirrored configuration with 3 disks. All 3 disks contain the same data. When writing, data has to be written to all 3 disks in parallel, so the overall speed will be the speed of one disk. When reading, in theory it could read interleaved from all of the disks, but the Linux mdraid implementation only does this if separate processes are reading from different parts of the array.

RAID1 Write (92.9 MB/s)
Device    rMB/s     wMB/s
sda        0.00     93.41
sdb        0.00     93.19
sdc        0.00     93.19


RAID1 Read (92.6 MB/s)
Device    rMB/s     wMB/s
sda       94.46      0.00
sdb        0.00      0.00
sdc        0.00      0.00


RAID1 3-process parallel read
Device    rMB/s     wMB/s
sda       95.00      0.00
sdb      111.96      0.00
sdc      110.38      0.00


On writing, the speed is reduced to the slowest disk's speed and all 3 disks are writing at the same time. On reading, a single process reads at the full speed of one HDD; if we have more processes, they are distributed among the disks and each disk is read at its full speed.
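The parallel read test can be reproduced with something like the following sketch (offsets and sizes are only illustrative, not the values used here):

# Three processes reading different regions of the array at the same time
dd if=/dev/md1 of=/dev/null bs=1M count=10240 skip=0 &
dd if=/dev/md1 of=/dev/null bs=1M count=10240 skip=102400 &
dd if=/dev/md1 of=/dev/null bs=1M count=10240 skip=204800 &
wait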

RAID 5

Parity configuration, with parity spread evenly among the disks. With a 3-disk RAID5, data can be written to 2 disks while parity has to be written to the 3rd disk, so the theoretical writing speed would be the speed of 2 disks. On reading, in theory all 3 disks could be used; only the proper interleaving has to be taken care of.

Note about small writes to RAID5:
In RAID5 there is a "chunk size": this is the amount of data that is handled together on one disk. The "stripe size" is the total size of the data chunks stored in parallel across the disks. So for a default 3-disk RAID5 the chunk size is 512 KB and the stripe size is 1024 KB (2 data chunks per stripe, plus a parity chunk). If you write one byte in the middle of the array, the whole chunk has to be read first, the parity recalculated, and then written back to the disk. In these tests we are writing bigger portions of data to the RAID, so this does not happen and we do not consider it.
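If you want to control this, the chunk size can be given explicitly when creating the array; a sketch using the default 512 KB value (device names assumed as before):

# 3-disk RAID5 with an explicit 512 KB chunk size
mdadm --create /dev/md1 --level=5 --raid-devices=3 --chunk=512 /dev/sda /dev/sdb /dev/sdc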

RAID5 Write (176 MB/s)      
Device       rMB/s     wMB/s 
sda           0.08     86.84 
sdb           0.10     87.62 
sdc           0.10     87.58 
        
RAID5 Read (185 MB/s)       
Device       rMB/s     wMB/s 
sda          62.33      0.00 
sdb          62.25      0.00 
sdc          62.00      0.00 


When writing to the array, all disks are used at almost their full speed, but as 1/3 of the written data is parity, the resulting throughput is roughly the speed of 2 disks. When reading, because the disks have to skip the parity blocks, the resulting speed is only about 2/3 of the possible speed.

The available capacity is 100 GB, which is the best among the redundant configurations.

RAID 10

RAID10 is a special RAID level in the Linux implementation: it stores two or more copies of all the data, and it spreads the data over multiple disks to increase read and write speeds. It has the advantage that it can be used with an odd number of disks; thanks to the smart allocation algorithm it always holds two copies of the data and fully utilizes the available disk capacity (meaning that 1/2 of the raw disk capacity is usable).

This level has 3 different allocation modes:

near (n) : the copy of the data is stored as close as possible to the original copy; in practice this means it is in the same stripe (the same sectors on different disks).
offset (o) : the copy of the data is stored on the next stripe
far (f) : the copy of the data is stored far from the original copy (this has read performance advantages: the first copies are laid out like a RAID0 stripe across all disks, so sequential reads can use every disk at once)

In RAID 10 it is possible to specify the number of copies to store of each piece of data; in these tests I always used 2 copies to provide redundancy.
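As a sketch, the three layouts can be selected at creation time with mdadm's --layout option (device names are assumptions, as before):

# near layout with 2 copies (the default)
mdadm --create /dev/md1 --level=10 --layout=n2 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# offset layout with 2 copies
mdadm --create /dev/md1 --level=10 --layout=o2 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# far layout with 2 copies
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc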

Now let's see the actual throughput:

RAID10 n2 Write

Device         rMB/s     wMB/s
sda             0.00     93.62
sdb             0.00     94.33
sdc             0.00     94.28

RAID10 o2 Write

Device     rMB/s     wMB/s
sda         0.00     94.72
sde         0.00     90.05
sdc         0.00     86.96

RAID10 f2 Write

Device     rMB/s     wMB/s
sda         0.00     84.44
sde         0.00     87.10
sdc         0.00     88.04


As explained above, for writing the fastest is the n2 version, where the copies are written next to each other. This utilizes the full speed of the slowest disk on all disks. o2 is only a little bit slower, as the copy is stored close by. f2 is the slowest, because it has to seek to the far locations, but the performance difference is not big.

RAID10 n2 Read

Device         rMB/s     wMB/s
sda            15.17      0.00
sde            87.08      0.00
sdc            54.00      0.00

RAID10 o2 Read

Device     rMB/s     wMB/s
sda        46.83      0.00
sde        46.83      0.00
sdc        47.00      0.00


RAID10 f2 Read

Device     rMB/s     wMB/s
sda        95.00      0.00
sde        95.00      0.00
sdc        94.67      0.00


The read operation produced a very uneven disk load in the n2 case, because the primary copy of the data chunks is not distributed evenly across the disks; a stable load at about half the disk speed in the o2 case; and very fast reads, maxing out at the slowest physical disk's speed, in the f2 case.

Next steps

While writing this text I realized that there are some more tests to do:
  1. Test RAID5 read performance with multiple threads (probably it will max out disk read performance)
  2. Test RAID4; for large block reads it probably outperforms RAID5
  3. Test RAID10 with 3 copies of data
  4. Do the tests with only 2 disks

