Software RAID 0 Configuration in Linux
RAID is one of the most heavily used technologies for improving disk performance and providing redundancy. RAID is divided into different levels based on the functionality each level offers, and selecting a level always depends on the kind of operations you want to perform on the disks.
We have already written an article about the differences between RAID levels and the internal details of how each one works. The same RAID model that is implemented entirely by a hardware controller can also be managed in software (although the efficiency and performance of hardware RAID is generally better).
Read: Different Levels of RAID Explained
Watch: How to Install Linux over Software RAID
Read: Configuring LVM on top of RAID
Read: Configure RAID on Loop Devices and LVM on top of RAID
We will be publishing a series of posts on configuring different levels of RAID using their software implementation in Linux. In this post we will go through the steps to configure software RAID level 0 on Linux.
RAID 0 was designed purely with performance in mind; it provides no redundancy at all. Data in RAID 0 is striped across multiple disks for faster access: with a 64K chunk size, for example, the first 64K of data lands on the first disk, the next 64K on the second, and so on.
So let's start configuring our software RAID 0.
- RAID 0 requires a minimum of two disks (or partitions).
- Disks or partitions of different sizes may be used, but the size of the smallest disk/partition limits the amount of space usable on each of the disks.
- It improves read and write performance by distributing reads and writes across multiple disks.
- It works by striping data across multiple disks (/dev/sda6 and /dev/sda7 in our example) simultaneously.
- It is known primarily for its performance.
- RAID 0 is not suitable for critical data storage because it provides no fault tolerance or data protection of any kind.
- If any one disk fails, the whole array becomes unusable, because none of the data is duplicated.
As mentioned above, we need a minimum of two disks for a RAID 0 configuration. Their sizes may differ, subject to the condition mentioned above. Since we are going to configure it on a single hard disk, we can test it by creating two partitions. So let's create two partitions here: /dev/sda6 and /dev/sda7.
Note: Software RAID works on exactly the same model as hardware RAID, except that the entire show is run by software (the kernel md driver and the mdadm tool) instead of a dedicated RAID controller.
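mdadm normally loads the RAID personality it needs on its own, but if you want to confirm that the kernel's raid0 module is available before going further, a quick optional sanity check looks like this:

[root@localhost ~]# modprobe raid0
[root@localhost ~]# lsmod | grep raid0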
Let's look at the partition table first.
[root@localhost ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
The partition table above clearly shows that there is only one Linux partition, /dev/sda5.
So now we have to create two more partitions for the RAID array.
Creating partitions /dev/sda6 and /dev/sda7 and changing their type to Linux raid autodetect
[root@localhost ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (18869-19457, default 18869):
Using default value 18869
Last cylinder or +size or +sizeM or +sizeK (18869-19457, default 19457): +100M

Command (m for help): n
First cylinder (18882-19457, default 18882):
Using default value 18882
Last cylinder or +size or +sizeM or +sizeK (18882-19457, default 19457): +100M

Command (m for help): t
Partition number (1-7): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-7): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
In the example shown above, I have created two partitions, each 100M in size. These two partitions will act as the physical disks that make up our RAID 0 configuration.
Changing the type of a partition in fdisk is done by typing "t" at the fdisk prompt, which then asks for the number of the partition whose type you want to change (in my case, 6 and 7).
The "fd" type stands for "Linux raid autodetect".
Now let's update the kernel's view of the partition table with the help of the partprobe command.
[root@localhost ~]# partprobe
[root@localhost ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
/dev/sda6           18869       18881      104391   fd  Linux raid autodetect
/dev/sda7           18882       18894      104391   fd  Linux raid autodetect
We can now see the two new partitions, /dev/sda6 and /dev/sda7, created with the type Linux raid autodetect.
Creating software RAID 0 with the mdadm command
[root@localhost ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda6 /dev/sda7
The command line options of the mdadm command are quite straightforward:
--level specifies the RAID level to create
--raid-devices specifies the number of devices in the array, followed by the devices themselves (/dev/sda6 and /dev/sda7 in our case)
With that command, our RAID 0 device /dev/md0 is ready.
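As a side note, mdadm picks a default chunk (stripe) size for the array (64K here, as the --detail output below shows). If you want to set it explicitly, the --chunk option can be passed at creation time. A sketch only, not something you need for this walkthrough:

[root@localhost ~]# mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sda6 /dev/sda7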
How to see detailed information about a software RAID device
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Apr  9 14:13:53 2013
     Raid Level : raid0
     Array Size : 208640 (203.78 MiB 213.65 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Apr  9 14:13:53 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : db877de6:0a0be5f2:c22d99c7:e07fda85
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sda7[1] sda6[0]
      208640 blocks 64k chunks

unused devices: <none>
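Optionally, the array definition can be recorded in mdadm's configuration file so that it is assembled automatically at boot. A minimal sketch, assuming the file lives at /etc/mdadm.conf (on some distributions it is /etc/mdadm/mdadm.conf):

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf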
The /dev/md0 device we just created behaves like a virtual disk. In order to read and write data on it, we need to format it with a filesystem of our choice. Let's format the RAID 0 partition with the ext3 filesystem using the mke2fs command.
[root@localhost ~]# mke2fs -j /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208640 blocks
10432 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
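If you prefer the mkfs family of commands, the following should be equivalent to mke2fs -j here, since mkfs.ext3 is essentially a front end to the same tool:

[root@localhost ~]# mkfs.ext3 /dev/md0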
Mounting the RAID device partition on a mount point
Create a directory /raid0 and mount the RAID filesystem /dev/md0 on it.
[root@localhost ~]# mkdir /raid0
[root@localhost ~]# mount /dev/md0 /raid0
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              55G   18G   35G  34% /
tmpfs                 502M     0  502M   0% /dev/shm
/dev/sda3              59G   31G   28G  53% /root/Desktop/win7
/dev/md0              198M  5.8M  182M   4% /raid0
We took two partitions of roughly 100 MB each, and /dev/md0 shows up as 198M (approximately 200M), which confirms that RAID 0 makes close to 100% of the combined disk size usable (the small difference is filesystem overhead).
Hence RAID 0 is created and mounted on /raid0.
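Note that the mount we did above will not survive a reboot. If you want /dev/md0 mounted on /raid0 automatically at every boot, an /etc/fstab entry along the following lines can be added (a minimal sketch; adjust the filesystem type and mount options to your setup):

/dev/md0                /raid0                  ext3    defaults        0 0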
Note: RAID 0 provides no data redundancy when a hard disk fails.
Neither /dev/sda6 nor /dev/sda7 contains the complete data, so the failure of either one will result in the loss of all data on the array.
Advantages of RAID 0:
- RAID 0 gives excellent performance for both reads and writes (a rough way to check this yourself is sketched just after this list).
- There is no overhead caused by parity calculations.
- RAID 0 is quite easy to implement.
- The entire storage capacity is usable; there is no disk space lost to redundancy.
- Because data is striped, the load is spread across all member disks.
- Different segments of data can be read from and written to multiple disks simultaneously.
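If you want a rough feel for the write performance of the new array, a quick dd run like the one below can be used (a crude illustration only, not a proper benchmark; the file name and sizes are just examples, and remember our test array is only about 200 MB). oflag=direct bypasses the page cache; if your version of dd does not support it, drop the flag and treat the figure as even more approximate.

[root@localhost ~]# dd if=/dev/zero of=/raid0/testfile bs=1M count=100 oflag=direct
[root@localhost ~]# rm /raid0/testfile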
Disadvantages of RAID 0:
- When using RAID 0, be aware that it is no substitute for backups.
- RAID 0 should not be used on mission-critical systems.
- It is not fault tolerant: a single disk failure results in complete data loss.
Attention please: I have used a single hard disk with multiple partitions in this article to demonstrate the configuration of software RAID 0; in real life you would use two (or more) separate hard disks.