This is what you need for any of the RAID levels: a kernel, the RAID patches, and the RAID tools.
All this software can be found at ftp://ftp.fi.kernel.org/pub/linux
The RAID tools and patches are in the daemons/raid/alpha
subdirectory. The kernels are found in the kernel
subdirectory.
Patch the kernel, configure it to include RAID support for the level you want to use, and compile and install it.
Then unpack, configure, compile and install the RAID tools.
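As a rough sketch only - the exact patch and package names depend on the versions you downloaded, and the paths below are just examples - the procedure looks something like this:

  cd /usr/src/linux
  patch -p1 < /path/to/the-raid-patch     # apply the RAID patch matching your kernel version
  make menuconfig                         # enable the RAID levels you need, compiled in (not as modules)
  make dep bzImage modules modules_install
  # install the new kernel and re-run lilo as you normally would

  cd /usr/src/raidtools-0.90              # wherever you unpacked the raidtools
  ./configure
  make
  make install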
Ok, so far so good. If you reboot now, you should have a file called /proc/mdstat. Remember it, that file is your friend. See what it contains by doing a cat /proc/mdstat. It should tell you that you have the right RAID personality (i.e. RAID mode) registered, and that no RAID devices are currently active.
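On a freshly booted RAID-patched kernel with no arrays configured yet, the output will look something like the following (the exact format varies a bit between kernel versions):

  Personalities : [linear] [raid0] [raid1] [raid5]
  read_ahead not set
  unused devices: <none>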
Create the partitions you want to include in your RAID set.
Now, let's go mode-specific.
Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.
Set up the /etc/raidtab
file to describe your setup. I set up a raidtab for two disks in linear mode, and the file looked like this:
raiddev /dev/md0
        raid-level      linear
        nr-raid-disks   2
        chunk-size      32
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.
You're probably wondering why we specify a chunk-size
here when linear mode just appends the disks into one large array with no parallelism. Well, you're completely right, it's odd. Just put in some chunk size and don't worry about this any more.
Ok, let's create the array. Run the command
mkraid /dev/md0
This will initialize your array, write the persistent superblocks, and start the array.
Have a look in /proc/mdstat. You should see that the array is running.
Now, you can create a filesystem, just like you would on any other device, mount it, include it in your fstab and so on.
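For example, to put an ext2 filesystem on the new array and mount it (the mount point is just an example):

  mke2fs /dev/md0
  mkdir /mnt/md0
  mount /dev/md0 /mnt/md0

and a corresponding line in /etc/fstab could look like:

  /dev/md0    /mnt/md0    ext2    defaults    0 2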
You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.
Set up the /etc/raidtab
file to describe your configuration. An example raidtab looks like:
raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        persistent-superblock 1
        chunk-size      4
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

Like in linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it.
Again, you just run

mkraid /dev/md0

to initialize the array. This should initialize the superblocks and start the RAID device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.
/dev/md0 is now ready to be formatted, mounted, used and abused.
You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare disks, and which will automatically become part of the mirror if one of the active devices breaks.
Set up the /etc/raidtab
file like this:
raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

If you have spare disks, you can add them to the end of the device specification like

        device          /dev/sdd5
        spare-disk      0

Remember to set the nr-spare-disks entry correspondingly.
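For instance, a complete raidtab for the same two-disk mirror with one spare could look like this (the device names are of course just examples):

  raiddev /dev/md0
          raid-level      1
          nr-raid-disks   2
          nr-spare-disks  1
          chunk-size      4
          persistent-superblock 1
          device          /dev/sdb6
          raid-disk       0
          device          /dev/sdc5
          raid-disk       1
          device          /dev/sdd5
          spare-disk      0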
Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents of the two devices (however unimportant now, since the device is still not formatted) must be synchronized.
Issue the

mkraid /dev/md0

command to begin the mirror initialization.
Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and give you an ETA for the completion of the reconstruction.
Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.
The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.
Try formatting the device while the reconstruction is running. It will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.
Note! I haven't tested this setup myself. The setup below is my best guess, not something I have actually had up running.
You have three or more devices of roughly the same size, one device is significantly faster than the other devices, and you want to combine them all into one larger device, still maintaining some redundancy information. You may also have a number of devices you wish to use as spare disks.
Set up the /etc/raidtab file like this:
raiddev /dev/md0
        raid-level      4
        nr-raid-disks   4
        nr-spare-disks  0
        persistent-superblock 1
        chunk-size      32
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sdc1
        raid-disk       1
        device          /dev/sdd1
        raid-disk       2
        device          /dev/sde1
        raid-disk       3

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

        device          /dev/sdf1
        spare-disk      0

as usual.
Your array can be initialized with the

mkraid /dev/md0

command as usual.
You should see the section on special options for mke2fs before formatting the device.
You have three or more devices of roughly the same size, you want to combine them into a larger device, but you still want to maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, which will not take part in the array until another device fails.
If you use N devices where the smallest has size S, the size of the entire array will be (N-1)*S. This ``missing'' space is used for parity (redundancy) information. Thus, if any disk fails, all data stay intact. But if two disks fail, all data is lost.
Set up the /etc/raidtab file like this:
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   7
        nr-spare-disks  0
        persistent-superblock 1
        parity-algorithm      left-symmetric
        chunk-size      32
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb1
        raid-disk       1
        device          /dev/sdc1
        raid-disk       2
        device          /dev/sdd1
        raid-disk       3
        device          /dev/sde1
        raid-disk       4
        device          /dev/sdf1
        raid-disk       5
        device          /dev/sdg1
        raid-disk       6

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

        device          /dev/sdh1
        spare-disk      0

And so on.
A chunk size of 32 KB is a good default for many general purpose filesystems of this size. The array described by the raidtab above consists of seven 6 GB disks, giving (N-1)*S = (7-1)*6 GB = 36 GB of usable space. It holds an ext2 filesystem with a 4 KB block size. You could go higher with both array chunk-size and filesystem block-size if your filesystem is either much larger, or just holds very large files.
Ok, enough talking. You set up the raidtab, so let's see if it works. Run the

mkraid /dev/md0

command, and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.
If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.
See the section on special options for mke2fs before formatting the array.
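To anticipate that section a little: for the example array above (32 KB chunk-size, 4 KB ext2 blocks) there are 32/4 = 8 filesystem blocks per chunk, so the mke2fs invocation would be something like:

  mke2fs -b 4096 -R stride=8 /dev/md0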
Ok, now when you have your RAID device running, you can always stop it or re-start it using the

raidstop /dev/md0

or

raidstart /dev/md0

commands.
Instead of putting these into init-files and rebooting a zillion times to make that work, read on, and get autodetection running.
Back in ``The Good Old Days'' (TM), the raidtools would read your /etc/raidtab
file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab
resided was mounted. This is unfortunate if you want to boot on a RAID.
Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab
file as usual, but would have to be mounted from the init-scripts.
The persistent superblocks solve these problems. When an array is initialized with the persistent-superblock
option in the /etc/raidtab
file, a special superblock is written in the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.
You should however still maintain a consistent /etc/raidtab
file, since you may need this file for later reconstruction of the array.
The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the Autodetection section.
The chunk-size deserves an explanation. You can never write completely in parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk; in fact, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest ``atomic'' amount of data that can be written to the devices. A write of 16 KB with a chunk size of 4 KB will, in the RAID-0 case with two disks, cause the first and the third 4 KB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk size.
Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.
For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.
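A crude way to compare two settings is to time a large sequential write and read on the mounted array; the mount point, file name and sizes below are only examples, and remember that buffering makes the write numbers look optimistic unless you sync:

  # write a 512 MB test file, then read it back
  time dd if=/dev/zero of=/mnt/md0/testfile bs=1024k count=512
  sync
  time dd if=/mnt/md0/testfile of=/dev/null bs=1024k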
The argument to the chunk-size option in /etc/raidtab
specifies the chunk-size in kilobytes. So ``4'' means ``4 KB''.
Data is written ``almost'' in parallel to the disks in the array. Actually, chunk-size
bytes are written to each disk, serially.
If you specify a 4 KB chunk size, and write 16 KB to an array of three disks, the RAID system will write 4 KB to disks 0, 1 and 2, in parallel, then the remaining 4 KB to disk 0.
A 32 KB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.
For writes on RAID-1, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads, however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, reads can be done in a parallel, RAID-0-like manner.
When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well. The chunk-size is the size of the parity blocks. If one byte is written to a RAID-4 array, then chunk-size
bytes will be read from the N-1 disks, the parity information will be calculated, and chunk-size
bytes written to the parity disk.
The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.
On RAID-5 the chunk-size has exactly the same meaning as in RAID-4.
A reasonable chunk-size for RAID-5 is 128 KB, but as always, you may want to experiment with this.
Also see the section on special options for mke2fs. This affects RAID-5 performance.
There is a special option available when formatting RAID-4 or -5 devices with mke2fs. The -R stride=nn option allows mke2fs to place the ext2-specific data structures intelligently on the RAID device.
If the chunk-size is 32 KB, it means, that 32 KB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 KB block-size, we realize that there will be eight filesystem blocks in one array chunk. We can pass this information on the mke2fs utility, when creating the filesystem:
mke2fs -b 4096 -R stride=8 /dev/md0
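Similarly, if you go with the 128 KB chunk-size suggested for RAID-5 above and keep the 4 KB block size, the stride would be 128/4 = 32:

  mke2fs -b 4096 -R stride=32 /dev/md0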
RAID-{4,5} performance is severely influenced by this option. I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction.
The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4KB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.
Autodetection allows the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done.
This requires several things:

1. You need autodetection support in the kernel. Check this.
2. You must have created the RAID devices using the persistent superblock.
3. The partition type of the devices used in the RAID must be set to 0xFD (Linux raid autodetect). Use fdisk and set the type to ``fd''.
NOTE: Be sure that your RAID is NOT RUNNING before changing the partition types. Use raidstop /dev/md0 to stop the device.
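Changing the partition type with fdisk looks roughly like this (the device and partition number are just examples):

  fdisk /dev/sdb
  # then, at the fdisk prompt:
  #   t      - change a partition's system id
  #   1      - the partition to change (here sdb1)
  #   fd     - the id for Linux raid autodetect
  #   w      - write the partition table and quit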
If you set up 1, 2 and 3 from above, autodetection should be set up. Try rebooting. When the system comes up, cat'ing /proc/mdstat
should tell you that your RAID is running.
During boot, you could see messages similar to these:
Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512 bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
Oct 22 00:51:59 malthe kernel: Partition check:
Oct 22 00:51:59 malthe kernel: sda: sda1 sda2 sda3 sda4
Oct 22 00:51:59 malthe kernel: sdb: sdb1 sdb2
Oct 22 00:51:59 malthe kernel: sdc: sdc1 sdc2
Oct 22 00:51:59 malthe kernel: sdd: sdd1 sdd2
Oct 22 00:51:59 malthe kernel: sde: sde1 sde2
Oct 22 00:51:59 malthe kernel: sdf: sdf1 sdf2
Oct 22 00:51:59 malthe kernel: sdg: sdg1 sdg2
Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sde1,4>
Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
Oct 22 00:51:59 malthe kernel: autorunning md0
Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
Oct 22 00:51:59 malthe kernel: now!
Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean -- starting background reconstruction

This is output from the autodetection of a RAID-5 array that was not cleanly shut down (e.g. the machine crashed). Reconstruction is automatically initiated. Mounting this device is perfectly safe, since reconstruction is transparent and all data are consistent (it's only the parity information that is inconsistent - but that isn't needed until a device fails).
Autostarted devices are also automatically stopped at shutdown. Don't worry about init scripts. Just use the /dev/md devices as any other /dev/sd or /dev/hd devices.
Yes, it really is that easy.
You may want to look in your init-scripts for any raidstart/raidstop commands. These are often found in the standard RedHat init scripts. They are used for old-style RAID, and have no use in new-style RAID with autodetection. Just remove the lines, and everything will be just fine.
There are several ways to set up a system that mounts its root filesystem on a RAID device. At the moment, only the graphical install of RedHat Linux 6.1 allows direct installation to a RAID device. So most likely you're in for a little tweaking if you want this, but it is indeed possible.
The latest official lilo distribution (Version 21) doesn't handle RAID devices, and thus the kernel cannot be loaded at boot-time from a RAID device. If you use this version, your /boot filesystem will have to reside on a non-RAID device. One way to ensure that your system boots no matter what is to create similar /boot partitions on all drives in your RAID; that way the BIOS can always load data from, for example, the first drive available. This requires that you do not boot with a failed disk in your system.
With RedHat 6.1 a patch to lilo 21 has become available that can handle /boot on RAID-1. Note that it doesn't work for any other level; RAID-1 (mirroring) is the only supported RAID level. This patch (lilo.raid1) can be found in dist/redhat-6.1/SRPMS/SRPMS/lilo-0.21-10.src.rpm on any RedHat mirror. The patched version of LILO will accept boot=/dev/md0 in lilo.conf and will make each disk in the mirror bootable.
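As a rough sketch (assuming the patched LILO; the image name is just an example), the relevant lilo.conf could contain something like:

  boot=/dev/md0
  image=/boot/vmlinuz
          label=linux
          root=/dev/md0
          read-only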
Another way of ensuring that your system can always boot is to create a boot floppy when all the setup is done. If the disk on which the /boot filesystem resides dies, you can always boot from the floppy.
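On a RedHat system this can be as simple as (the kernel version is just an example):

  mkbootdisk --device /dev/fd0 2.2.5-22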
In order to have a system booting on RAID, the root filesystem (/) must be mounted on a RAID device. Two methods for achieving this are supplied below. Because none of the current distributions (that I know of, at least) support installing directly onto a RAID device, the methods assume that you install on a normal partition and then, when the installation is complete, move the contents of your non-RAID root filesystem onto a new RAID device.
This method assumes you have a spare disk you can install the system on, which is not part of the RAID you will be configuring.
- Install a normal system on your extra disk, get a RAID-aware kernel running on it, and create the RAID you plan to use for the root filesystem, as described earlier in this document.
- Put a filesystem on the new array (using mke2fs), and mount it under /mnt/newroot.
- Copy the contents of your current root filesystem to the new one, for example with:

    cd /
    find . -xdev | cpio -pm /mnt/newroot

- Modify the /mnt/newroot/etc/fstab file to use the correct device (the /dev/md? root device) for the root filesystem.
- Unmount the current /boot filesystem, and mount the boot device on /mnt/newroot/boot instead. This is required for LILO to run successfully in the next step.
- Update /mnt/newroot/etc/lilo.conf to point to the right devices. The boot device must still be a regular disk (non-RAID device), but the root device should point to your new RAID. When done, run

    lilo -r /mnt/newroot

  This LILO run should complete with no errors.
- Reboot the system, and watch everything come up as expected.
If you're doing this with IDE disks, be sure to tell your BIOS that all disks are ``auto-detect'' types, so that the BIOS will allow your machine to boot even when a disk is missing.
This method requires that you use a raidtools/patch that includes the failed-disk directive. This will be the tools/patch for all kernels from 2.2.10 and later.
You can only use this method on RAID levels 1 and above. The idea is to install a system on a disk which is purposely marked as failed in the RAID, then copy the system to the RAID, which will be running in degraded mode, and finally make the RAID use the no-longer-needed ``install-disk'', zapping the old installation but letting the RAID run in non-degraded mode.
- Install a normal system on a disk that will later become part of your RAID, and get a RAID-aware kernel running on it.
- Set up the RAID with your current root-device as the failed-disk in the raidtab file. Don't put the failed-disk as the first disk in the raidtab; that will give you problems with starting the RAID. Create the RAID, and put a filesystem on it.
- Copy the system to the new (degraded) RAID and reconfigure it to use the RAID as the root device, as described in the previous method.
- When the system boots successfully from the RAID, you can modify the raidtab file to include the previously failed-disk as a normal raid-disk. Now, raidhotadd the disk to your RAID. You should end up with a system that boots from a non-degraded RAID.

For the kernel to be able to mount the root filesystem, all support for the device on which the root filesystem resides must be present in the kernel. Therefore, in order to mount the root filesystem on a RAID device, the kernel must have RAID support.
The normal way of ensuring that the kernel can see the RAID device is to simply compile a kernel with all necessary RAID support compiled in. Make sure that you compile the RAID support into the kernel, and not as loadable modules. The kernel cannot load a module (from the root filesystem) before the root filesystem is mounted.
However, since RedHat-6.0 ships with a kernel that has new-style RAID support as modules, I here describe how one can use the standard RedHat-6.0 kernel and still have the system boot on RAID.
You will have to instruct LILO to use a RAM-disk in order to achieve this. Use the mkinitrd command to create a ramdisk containing all kernel modules needed to mount the root partition. This can be done as:

mkinitrd --with=<module> <ramdisk name> <kernel>

For example:

mkinitrd --with=raid5 raid-ramdisk 2.2.5-22
This will ensure that the specified RAID module is present at boot-time, for the kernel to use when mounting the root device.
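LILO then has to be told to load that ramdisk at boot. Assuming the ramdisk ended up as /boot/raid-ramdisk (the paths and kernel version are just examples), the relevant image section in lilo.conf would contain something like the following, and you must re-run lilo after editing the file:

  image=/boot/vmlinuz-2.2.5-22
          label=linux
          root=/dev/md0
          initrd=/boot/raid-ramdisk
          read-only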
Never NEVER never re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is a part of a RAID, stop the array first, then repartition.
It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s which is less than many disks can do alone today. Putting six such disks on the bus will of course not give you the expected performance boost.
More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.
If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab.
If a RAID-5 fails to reconstruct after a disk was removed and re-inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first ``device ...'' and ``raid-disk ...'' pair to the bottom of the array description in the raidtab file.
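For illustration (the device names are hypothetical), if the raidtab originally listed

  device          /dev/sda1
  raid-disk       0
  device          /dev/sdb1
  raid-disk       1
  device          /dev/sdc1
  raid-disk       2

you would move the first pair to the end, keeping the raid-disk numbers unchanged:

  device          /dev/sdb1
  raid-disk       1
  device          /dev/sdc1
  raid-disk       2
  device          /dev/sda1
  raid-disk       0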
Most of the ``error reports'' we see on linux-kernel are from people who somehow failed to use the right RAID-patch with the right version of the raidtools. Make sure that if you're running 0.90 RAID, you're using the raidtools for it.