How to Create RAID on Loop Devices and LVM on Top of RAID
Before you go through this article, let me inform you that this is completely experimental. So read it, enjoy it, but use it in a real environment at your own risk. I do many experimental things in my lab, and this is just one of them. We have already studied Partitions, RAID, and LVM; this time I am going to write about what comes out of putting all of these together. As you know, once we have created LVM on top of RAID, it becomes easy to add another volume to the RAID. Instead of using a real device or a real partition, here we are going to use loop devices. After creating the loop devices, we will create a RAID array on them, and then create LVM on top of the RAID.
Step-by-step explanation:
- First you need to create the loop devices.
- Then you create a new array from these loop devices; in other words, you create the RAID device.
- After that you initialize the physical volume (PV).
- Then you create the volume group (VG).
- After that you create the logical volume (LV).
- Now you can grow the RAID array.
- You can also resize the LVM and the RAID, and verify their sizes after every change you make.
- After you complete this experimental lab, you can delete everything by following the steps given at the end of this article.
First, create three 200 MB files using the dd command; each of them will later back one loop device.
[root@satish ~]# dd if=/dev/zero of=raid-0 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.554498 seconds, 378 MB/s
[root@satish ~]# cp raid-0 raid-1
[root@satish ~]# cp raid-0 raid-2
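As a quick sanity check (this verification step is mine, not part of the original transcript), the byte count dd reports is just bs × count, and stat can confirm the size on disk; a 1 MiB scratch file keeps the demonstration fast:

```shell
# dd reported 209715200 bytes for each raid-N file: bs * count = 1 MiB * 200
echo $(( 1024 * 1024 * 200 ))        # prints 209715200

# The same check on disk, with a 1 MiB scratch file:
dd if=/dev/zero of=scratch.img bs=1M count=1 2>/dev/null
stat -c %s scratch.img               # prints 1048576 (exactly 1 MiB)
rm -f scratch.img
```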
Now build the initial array by converting the three 200 MB files into loop devices with the losetup command.
The losetup command is used to set up loop devices in Linux.
[root@satish ~]# losetup /dev/loop0 raid-0
[root@satish ~]# losetup /dev/loop1 raid-1
[root@satish ~]# losetup /dev/loop2 raid-2
Create the new array using these three loop devices.
[root@satish ~]# mdadm --create --verbose /dev/md5 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: size set to 204736K
mdadm: array /dev/md5 started.
Using the mdadm command, we have created a software RAID5 array from the three loop devices /dev/loop0, /dev/loop1, and /dev/loop2. After creating the array, we need to confirm that it is properly configured. We can examine it in a very simple way using the mdadm command with the --detail option, as shown below.
Now examine the created RAID device in detail.
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Wed Jul 31 16:21:01 2013
     Raid Level : raid5
     Array Size : 409472 (399.94 MiB 419.30 MB)
  Used Dev Size : 204736 (199.97 MiB 209.65 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Wed Jul 31 16:22:36 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 592f15a1:dba02592:c3b644ca:412e7cf4
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
The above output clearly shows that the three devices /dev/loop0, /dev/loop1, and /dev/loop2 are active in the RAID array.
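The Array Size line can also be checked by hand: a RAID5 array of n members stores n − 1 members' worth of data, with the remaining capacity used for parity. With three members of 204736 KB each:

```shell
# RAID5 usable capacity = (members - 1) * per-member size (in KB here)
echo $(( (3 - 1) * 204736 ))   # prints 409472, matching "Array Size : 409472"
```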
Now initialize the physical volume (PV) and create the volume group (VG).
[root@satish ~]# pvcreate /dev/md5
  Physical volume "/dev/md5" successfully created
[root@satish ~]# pvdisplay
  "/dev/md5" is a new physical volume of "399.88 MB"
  --- NEW Physical volume ---
  PV Name               /dev/md5
  VG Name
  PV Size               399.88 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               ytsDuk-Z1BG-S3tO-3mMy-wAQv-kuTZ-3a5TbM
[root@satish ~]# vgcreate lvm-satish /dev/md5
  Volume group "lvm-satish" successfully created
[root@satish ~]# vgdisplay
  --- Volume group ---
  VG Name               lvm-satish
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               396.00 MB
  PE Size               4.00 MB
  Total PE              99
  Alloc PE / Size       0 / 0
  Free PE / Size        99 / 396.00 MB
  VG UUID               N0d0YK-9sGO-v8jY-9kSL-rFfx-lLCA-VnCqYP
Create and examine the logical volume.
[root@satish ~]# lvcreate -l 60 lvm-satish -n lvm0
  Logical volume "lvm0" created
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/lvm-satish/lvm0
  VG Name                lvm-satish
  LV UUID                yMOTeD-1X3S-W5G7-arSn-s6tf-PKD1-y6Bv54
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                240.00 MB
  Current LE             60
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           253:0
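The LV Size reported above follows directly from the extent count: lvcreate -l 60 asked for 60 logical extents, and this VG uses 4 MB physical extents:

```shell
# LV size = logical extents * extent size (4 MB in this VG)
echo $(( 60 * 4 ))   # prints 240, matching "LV Size 240.00 MB"
```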
Now format the volume and mount it on a directory. Here I am going to mount it on the slashroot directory.
[root@satish ~]# mkfs.ext3 /dev/lvm-satish/lvm0
[root@satish ~]# mount /dev/lvm-satish/lvm0 slashroot/
[root@satish ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              24G   12G   11G  54% /
tmpfs                 502M     0  502M   0% /dev/shm
/dev/mapper/lvm--satish-lvm0
                      233M  6.1M  215M   3% /root/slashroot
Now add a file to it.
[root@satish ~]# dd if=/dev/zero of=slashroot/ironman3.avi bs=10240 count=10240
10240+0 records in
10240+0 records out
104857600 bytes (105 MB) copied, 0.714737 seconds, 147 MB/s
[root@satish ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/lvm--satish-lvm0
                      233M  107M  114M  49% /root/slashroot
Now create a new 200 MB loop device and use it to expand the RAID array.
[root@satish ~]# mdadm --grow /dev/md5 -n4 --backup-file=raid5backup
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
[root@satish ~]# dd if=/dev/zero of=raid-3 bs=1M count=200
[root@satish ~]# losetup /dev/loop3 raid-3
[root@satish ~]# mdadm --add /dev/md5 /dev/loop3
mdadm: added /dev/loop3
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Wed Jul 31 16:21:01 2013
     Raid Level : raid5
     Array Size : 614208 (599.91 MiB 628.95 MB)
  Used Dev Size : 204736 (199.97 MiB 209.65 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Wed Jul 31 17:14:20 2013
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 14% complete
           UUID : 592f15a1:dba02592:c3b644ca:412e7cf4
         Events : 0.168

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       4       7        3        3      spare rebuilding   /dev/loop3
Resize the PV so that LVM sees the grown RAID array, then resize the logical volume and examine the sizes. Be warned that the example below reduces the logical volume: reducing an LV that holds a filesystem destroys data unless the filesystem is shrunk first (with resize2fs for ext3), so never do this on a volume you care about.
[root@satish ~]# pvresize /dev/md5
  Physical volume "/dev/md5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@satish ~]# vgdisplay lvm-satish
  --- Volume group ---
  VG Name               lvm-satish
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               596.00 MB
  PE Size               4.00 MB
  Total PE              149
  Alloc PE / Size       60 / 240.00 MB
  Free PE / Size        89 / 356.00 MB
  VG UUID               N0d0YK-9sGO-v8jY-9kSL-rFfx-lLCA-VnCqYP
[root@satish ~]# lvresize -l 50 /dev/lvm-satish/lvm0
  WARNING: Reducing active and open logical volume to 200.00 MB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvm0? [y/n]: y
  Reducing logical volume lvm0 to 200.00 MB
  Logical volume lvm0 successfully resized
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/lvm-satish/lvm0
  VG Name                lvm-satish
  LV UUID                yMOTeD-1X3S-W5G7-arSn-s6tf-PKD1-y6Bv54
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                200.00 MB
  Current LE             50
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           253:0
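To shrink safely, the ext3 filesystem would have to be reduced with resize2fs before the lvresize above, never after. The ordering can be rehearsed without root on a throwaway image file (the file name demo-fs.img is made up for this sketch, and it assumes e2fsprogs is installed):

```shell
# Build an 8 MB ext3 filesystem inside a plain file
dd if=/dev/zero of=demo-fs.img bs=1M count=8 2>/dev/null
mkfs.ext3 -F -q demo-fs.img

# Safe shrink order: check the filesystem, then shrink the filesystem...
e2fsck -f -p demo-fs.img >/dev/null
resize2fs demo-fs.img 4M
# ...and only THEN shrink the container underneath it, e.g.:
#   lvresize -l 50 /dev/lvm-satish/lvm0
rm -f demo-fs.img
```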
Now learn how to delete all of this.
To delete everything we have done in this lab, we follow the reverse process.
First we need to unmount the filesystem before we stop the array.
If you want to remove individual devices from the array, you first need to mark them as failed; any healthy device can be failed manually with mdadm --fail and then removed with mdadm --remove.
[root@satish ~]# umount slashroot
[root@satish ~]# lvremove -f /dev/lvm-satish/lvm0
[root@satish ~]# vgremove lvm-satish
Stop and remove the RAID array.
[root@satish ~]# mdadm --stop /dev/md5
[root@satish ~]# mdadm --remove /dev/md5
Now detach all the loop devices.
[root@satish ~]# losetup -d /dev/loop0
[root@satish ~]# losetup -d /dev/loop1
[root@satish ~]# losetup -d /dev/loop2
[root@satish ~]# losetup -d /dev/loop3
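As a final check (my addition, not part of the original steps), losetup -a lists any loop devices that are still attached; after the cleanup above it should print nothing:

```shell
# Lists every attached loop device; empty output means all are detached
losetup -a
```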
Now delete all the files that were used to back the loop devices.
[root@satish ~]#rm -rf raid-3 raid-2 raid-1 raid-0
Finally, delete the directory on which the volume was mounted.
[root@satish ~]# rmdir slashroot
If you are more interested in storage management, you can read our articles on RAID and LVM. Very soon we will write about how to create and configure an encrypted RAID device and how to create an encrypted loop device.
Our articles related to RAID and LVM:
How to configure software RAID0.
How to configure software RAID1.
How to configure software RAID5.