This section contains a number of benchmarks from a real-world system using software RAID.
Benchmarks were done with the bonnie
program, and at all times on files at least twice the size of the physical RAM in the machine.
The benchmarks here measure only input and output bandwidth on one single large file. This is a nice thing to know, if it's maximum I/O throughput for large reads/writes one is interested in. However, such numbers tell us little about what the performance would be if the array was used for a news spool, a web server, etc. Always keep in mind that benchmark numbers are the result of running a ``synthetic'' program. Few real-world programs do what bonnie
does, and although these I/O numbers are nice to look at, they are not ultimate real-world application performance indicators. Not even close.
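For the record, a typical bonnie invocation for these tests could look like the sketch below. The scratch directory and machine label are purely illustrative; -s gives the test file size in MB (1024 MB here, to match the 1 GB files used):

  bonnie -d /mnt/raid -s 1024 -m raidbox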
For now, I only have results from my own machine. The setup is three U2W disks hanging off a U2W controller, and one UW disk hanging off a UW controller.
It seems to be impossible to push much more than 30 MB/s through the SCSI buses on this system, with or without RAID. My guess is that, because the system is fairly old, the memory bandwidth is poor, and thus limits what can be sent through the SCSI controllers.
Read is sequential block input, and Write is sequential block output. File size was 1 GB in all tests. The tests were done in single-user mode. The SCSI driver was configured not to use tagged command queuing (TCQ).
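For reference, the chunk size is set when the array is created. A minimal /etc/raidtab sketch for a RAID-0 array like the one tested here, assuming the raidtools of that era; the device names are purely illustrative:

  raiddev /dev/md0
          raid-level            0
          nr-raid-disks         3
          persistent-superblock 1
          chunk-size            32        # chunk size in KB, varied per test
          device                /dev/sda1
          raid-disk             0
          device                /dev/sdb1
          raid-disk             1
          device                /dev/sdc1
          raid-disk             2

The array is then created with mkraid /dev/md0.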
Chunk size | Block size | Read KB/s | Write KB/s
4k         | 1k         | 19712     | 18035
4k         | 4k         | 34048     | 27061
8k         | 1k         | 19301     | 18091
8k         | 4k         | 33920     | 27118
16k        | 1k         | 19330     | 18179
16k        | 2k         | 28161     | 23682
16k        | 4k         | 33990     | 27229
32k        | 1k         | 19251     | 18194
32k        | 4k         | 34071     | 26976
From this it seems that the RAID chunk size doesn't make that much of a difference. However, the ext2fs block size should be as large as possible, which is 4 KB (i.e. the page size) on IA-32.
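The block size is chosen when the filesystem is created and cannot be changed afterwards; for example (device name again illustrative):

  mke2fs -b 4096 /dev/md0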
This time, the SCSI driver was configured to use tagged command queuing, with a queue depth of 8. Otherwise, everything's the same as before.
Chunk size | Block size | Read KB/s | Write KB/s
32k        | 4k         | 33617     | 27215
No more tests were done. TCQ seemed to slightly increase write performance, but there really wasn't much of a difference at all.
The array was configured to run in RAID-5 mode, and similar tests were done.
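In /etc/raidtab terms, this amounts to changing the raid-level and adding a parity algorithm; a sketch, again with illustrative device names:

  raiddev /dev/md0
          raid-level            5
          nr-raid-disks         3
          persistent-superblock 1
          parity-algorithm      left-symmetric
          chunk-size            32        # chunk size in KB, varied per test
          device                /dev/sda1
          raid-disk             0
          device                /dev/sdb1
          raid-disk             1
          device                /dev/sdc1
          raid-disk             2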
Chunk size | Block size | Read KB/s | Write KB/s
8k         | 1k         | 11090     | 6874
8k         | 4k         | 13474     | 12229
32k        | 1k         | 11442     | 8291
32k        | 2k         | 16089     | 10926
32k        | 4k         | 18724     | 12627
Now, both the chunk size and the block size seem to actually make a difference.
RAID-10 is ``mirrored stripes'', or, a RAID-1 array of two RAID-0 arrays. The chunk size is the chunk size of both the RAID-1 array and the two RAID-0 arrays. I did not do tests where those chunk sizes differ, although that should be a perfectly valid setup.
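Such a nested array can be described in /etc/raidtab by building the RAID-1 on top of two RAID-0 md devices, roughly as in this sketch (all device names illustrative):

  raiddev /dev/md0
          raid-level            0
          nr-raid-disks         2
          persistent-superblock 1
          chunk-size            32
          device                /dev/sda1
          raid-disk             0
          device                /dev/sdb1
          raid-disk             1

  raiddev /dev/md1
          raid-level            0
          nr-raid-disks         2
          persistent-superblock 1
          chunk-size            32
          device                /dev/sdc1
          raid-disk             0
          device                /dev/sdd1
          raid-disk             1

  raiddev /dev/md2
          raid-level            1
          nr-raid-disks         2
          persistent-superblock 1
          chunk-size            32
          device                /dev/md0
          raid-disk             0
          device                /dev/md1
          raid-disk             1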
Chunk size | Block size | Read KB/s | Write KB/s
32k        | 1k         | 13753     | 11580
32k        | 4k         | 23432     | 22249
No more tests were done. The file size was 900 MB, because the four partitions involved were 500 MB each, which doesn't give room for a 1 GB file in this setup (RAID-1 on two 1000 MB arrays).