Configuring LVM on Top of RAID: An Alternative Way to Partition RAID Devices
Is partitioning a RAID device the same as partitioning an ordinary disk and formatting it with ext2, ext3 or ext4?
The answer is no.
Partitioning a RAID device is not the same as partitioning a simple disk; the method is different from the familiar file system partitioning procedure.
Why would we need to partition a RAID device?
Since partitioning a RAID device is not the same as partitioning a simple disk, what do you do when you need to divide a RAID device into smaller pieces, or resize it later?
Creating LVM on top of RAID is one alternative way to "partition" a RAID device.
As we all know, a logical volume can be created from a single physical volume or from multiple physical volumes. So suppose we create one 100 GB logical volume using a single physical volume and another 100 GB logical volume using multiple physical volumes. In that case:
- Which logical volume gives higher performance?
- Which logical volume is more flexible?
- Which logical volume is more stable (i.e. has the smaller chance of data loss)?
- Is there any difference in performance at all?
You can answer these questions yourself after going through this experimental lab. Test the scenario on your own machine and share your views and answers here for the rest of the world.
This is an experimental lab, but it reveals some concepts of RAID and LVM, which is why I decided to publish it. Everyone is free to comment on or appreciate it through the comment section, but since this is entirely my article I reserve the right to ignore comments I don't like, and I appreciate those who practice this lab themselves and share their views, results and ideas here with the rest of the world.
To create LVM on top of software RAID 5 we need to go through a few simple steps, listed below.
- Partitioning
- Changing the partition type to Linux RAID autodetect
- Configuring software RAID 5
- Creating the MD device /dev/mdX
- Choosing the device type
- Choosing the number of devices to be used in the RAID 5 array
- Choosing the spare device to be used in the RAID 5 array
- Configuring the layout of the RAID 5 array
- Configuring the mount point
- Creating a physical volume on the RAID device
- Creating a volume group on the RAID device
- Creating a logical volume
- Formatting the logical volume and configuring its mount point
- Adding an entry to /etc/fstab so the mount is permanent
- Testing a disk failure and its effect on the RAID array and the logical volume
Step1:- Create four partitions, /dev/sda6, /dev/sda7, /dev/sda8 and /dev/sda9, each of size 100 MB.
[root@satish ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (18869-19457, default 18869):
Using default value 18869
Last cylinder or +size or +sizeM or +sizeK (18869-19457, default 19457): +100M

Command (m for help): n
First cylinder (18882-19457, default 18882):
Using default value 18882
Last cylinder or +size or +sizeM or +sizeK (18882-19457, default 19457): +100M

Command (m for help): n
First cylinder (18895-19457, default 18895):
Using default value 18895
Last cylinder or +size or +sizeM or +sizeK (18895-19457, default 19457): +100M

Command (m for help): n
First cylinder (18908-19457, default 18908):
Using default value 18908
Last cylinder or +size or +sizeM or +sizeK (18908-19457, default 19457): +100M

Command (m for help): p

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
/dev/sda6           18869       18881      104391   83  Linux
/dev/sda7           18882       18894      104391   83  Linux
/dev/sda8           18895       18907      104391   83  Linux
/dev/sda9           18908       18920      104391   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@satish ~]# partprobe
Step2:- Change the partition type of the four new partitions to fd (Linux raid autodetect).
[root@satish ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
/dev/sda6           18869       18881      104391   fd  Linux raid autodetect
/dev/sda7           18882       18894      104391   fd  Linux raid autodetect
/dev/sda8           18895       18907      104391   fd  Linux raid autodetect
/dev/sda9           18908       18920      104391   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@satish ~]# partprobe
Step3:- Create the initial RAID array using the three 100 MB partitions /dev/sda6, /dev/sda7 and /dev/sda8. At the same time we add /dev/sda9 as a spare device.
[root@satish ~]# mdadm --create --verbose /dev/md5 --chunk=128 --level=5 --layout=right-asymmetric --raid-devices=3 /dev/sda6 /dev/sda7 /dev/sda8 --spare-devices=1 /dev/sda9
mdadm: /dev/sda6 appears to contain an ext2fs file system
    size=208640K  mtime=Mon May 27 08:20:52 2013
mdadm: /dev/sda6 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Mon May 27 08:11:06 2013
mdadm: /dev/sda7 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Mon May 27 08:11:06 2013
mdadm: /dev/sda8 appears to contain an ext2fs file system
    size=104320K  mtime=Tue May 28 07:49:32 2013
mdadm: /dev/sda8 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue May 28 07:48:03 2013
mdadm: /dev/sda9 appears to contain an ext2fs file system
    size=104320K  mtime=Tue May 28 07:49:32 2013
mdadm: /dev/sda9 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue May 28 07:48:03 2013
mdadm: size set to 104320K
Continue creating array? y
mdadm: array /dev/md5 started.
EXPLANATION OF THE ABOVE COMMAND:
--create: creates a new RAID device.
--verbose: prints detailed information while the operation runs.
--chunk=128: sets the chunk (stripe unit) size to 128 KB.
--level=5: defines the RAID level; here it is RAID 5.
--raid-devices=3: the number of devices (disks) used in the array; here it is three.
/dev/sda6 /dev/sda7 /dev/sda8: the partitions that make up the array.
--spare-devices=1: adds a spare disk while creating the array, so that if a member disk fails the spare is synchronized into the array automatically.
--layout=right-asymmetric: selects the parity layout (symmetry) of the RAID 5 array.
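Note that the new array starts synchronizing in the background as soon as it is created. As an optional check that is not shown in the original output, you can follow the progress until the array reports a clean state:

# watch the resync progress, refreshing every 2 seconds (press Ctrl+C to exit)
watch -n 2 cat /proc/mdstat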
Step4:- Format the RAID device with a journaling file system.
[root@satish ~]# mkfs.ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208640 blocks
10432 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Step5:- Reviewing the RAID configuration.
View the basic information of all currently active RAID devices:
[root@satish ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md5 : active raid5 sda8[2] sda9[3](S) sda7[1] sda6[0]
      208640 blocks level 5, 128k chunk, algorithm 1 [3/3] [UUU]

unused devices: <none>
View detailed information about a RAID device:
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 01:30:52 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : right-asymmetric
     Chunk Size : 128K

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8

       3       8        9        -      spare   /dev/sda9
Determine whether a given device is a component device or a RAID device:
[root@satish ~]# mdadm --query /dev/sda9
/dev/sda9: is not an md array
/dev/sda9: device 3 in 3 device active raid5 /dev/md5.  Use mdadm --examine for more detail.
[root@satish ~]# mdadm --query /dev/sda6
/dev/sda6: is not an md array
/dev/sda6: device 0 in 3 device active raid5 /dev/md5.  Use mdadm --examine for more detail.
Query the RAID device itself in the same way:
[root@satish ~]# mdadm --query /dev/md5
/dev/md5: 203.75MiB raid5 3 devices, 1 spare. Use mdadm --detail for more detail.
/dev/md5: No md super block found, not an md component.
Examine a device used in the array in more detail:
[root@satish ~]# mdadm --examine /dev/sda9
/dev/sda9:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
     Array Size : 208640 (203.78 MiB 213.65 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5

    Update Time : Mon Jun 3 01:22:28 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 9fb5233f - correct
         Events : 2

         Layout : right-asymmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     3       8        9        3      spare   /dev/sda9

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8        7        1      active sync   /dev/sda7
   2     2       8        8        2      active sync   /dev/sda8
   3     3       8        9        3      spare   /dev/sda9
List the array lines for all active arrays:
[root@satish ~]# mdadm --detail --scan
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=74a1ed87:c7567887:280dbe38:ef27c774
List the array line for a particular device:
[root@satish ~]# mdadm --detail --brief /dev/md5
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=74a1ed87:c7567887:280dbe38:ef27c774
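The array line printed above is exactly the format mdadm expects in its configuration file. As an optional extra step that the original output does not show, you can append it to /etc/mdadm.conf so the array is assembled automatically at boot:

# record the array definition so it can be assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm.conf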
Step6:- Create a physical volume on the RAID 5 array.
[root@satish ~]# pvcreate /dev/md5
  Physical volume "/dev/md5" successfully created
Check Physical volume attributes using pvs.
[root@satish ~]# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/md5         lvm2 --   203.75M 203.75M
Check Physical Volume information in detail using pvdisplay command.
[root@satish ~]# pvdisplay
  "/dev/md5" is a new physical volume of "203.75 MB"
  --- NEW Physical volume ---
  PV Name               /dev/md5
  VG Name
  PV Size               203.75 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               e5YCQh-0IFd-MYv2-2WzC-KHEx-pys3-z8w2Ud
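Both pvs and pvdisplay accept options to narrow the output. As an optional example that is not part of the original lab, pvs can print just the columns you care about:

# show only the name, volume group, size and free space of this PV
pvs -o pv_name,vg_name,pv_size,pv_free /dev/md5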
Step7:- Create a volume group named raid5 using the vgcreate command.
[root@satish ~]# vgcreate raid5 /dev/md5
  Volume group "raid5" successfully created
You have new mail in /var/spool/mail/root
See Volume group attributes using vgs command.
[root@satish ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  raid5   1   0   0 wz--n- 200.00M 200.00M
See the volume group information in detail using vgdisplay.
[root@satish ~]# vgdisplay
  --- Volume group ---
  VG Name               raid5
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               200.00 MB
  PE Size               4.00 MB
  Total PE              50
  Alloc PE / Size       0 / 0
  Free  PE / Size       50 / 200.00 MB
  VG UUID               om3xvw-CGQX-mMwx-K03R-jf2p-zaqM-xjswMZ
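One advantage of putting LVM on top of the RAID device is that the volume group can be grown later without touching existing data. A minimal sketch, assuming a second array /dev/md6 exists (it is hypothetical and not created in this lab):

# initialise the (hypothetical) second RAID device as a PV and add it to the VG
pvcreate /dev/md6
vgextend raid5 /dev/md6
vgs        # VSize and VFree should now include the new space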
Step8:- Logical Volume Creation using lvcreate.
[root@satish ~]# lvcreate -L 150M raid5 -n lvm0
  Rounding up size to full physical extent 152.00 MB
  Logical volume "lvm0" created
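As a side note, lvcreate can also size the logical volume in extents rather than megabytes. A sketch of an alternative invocation, not used in this lab, that would consume all free space in the volume group:

# allocate every remaining free extent of the raid5 volume group to the LV
lvcreate -l 100%FREE raid5 -n lvm0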
View the attributes of Logical Volume.
[root@satish ~]# lvs
  LV   VG    Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm0 raid5 -wi-a- 152.00M
View Logical Volume information in detail.
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raid5/lvm0
  VG Name                raid5
  LV UUID                UCrVf9-3cJx-0TlU-aSl0-Glqg-jOec-UHtVgg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                152.00 MB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
Step9:- Format the logical volume.
[root@satish ~]# mkfs.ext3 /dev/raid5/lvm0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
38912 inodes, 155648 blocks
7782 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
19 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Step10:- Configure mount point.
[root@satish ~]# mkdir /raid5
[root@satish ~]# mount /dev/raid5/lvm0 /raid5
[root@satish ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda5                55G   22G   31G  41% /
tmpfs                   502M     0  502M   0% /dev/shm
/dev/mapper/raid5-lvm0  148M  5.6M  135M   4% /raid5
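Because the volume group still has roughly 48 MB unallocated, the mounted logical volume could later be grown without unmounting it. A minimal sketch of such a resize (not performed in this lab):

# grow the LV by 40 MB, then grow the ext3 file system to fill it
lvextend -L +40M /dev/raid5/lvm0
resize2fs /dev/raid5/lvm0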
Now you can scan all block devices visible to LVM.
[root@satish ~]# lvmdiskscan
  /dev/ramdisk     [       16.00 MB]
  /dev/raid5/lvm0  [      152.00 MB]
  /dev/ram         [       16.00 MB]
  /dev/sda1        [      100.00 MB]
  /dev/ram2        [       16.00 MB]
  /dev/sda2        [       29.20 GB]
  /dev/ram3        [       16.00 MB]
  /dev/sda3        [       58.59 GB]
  /dev/ram4        [       16.00 MB]
  /dev/ram5        [       16.00 MB]
  /dev/root        [       56.64 GB]
  /dev/md5         [      203.75 MB] LVM physical volume
  /dev/ram6        [       16.00 MB]
  /dev/ram7        [       16.00 MB]
  /dev/ram8        [       16.00 MB]
  /dev/ram9        [       16.00 MB]
  /dev/ram10       [       16.00 MB]
  /dev/ram11       [       16.00 MB]
  /dev/ram12       [       16.00 MB]
  /dev/ram13       [       16.00 MB]
  /dev/ram14       [       16.00 MB]
  /dev/ram15       [       16.00 MB]
  3 disks
  18 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
Note: the configuration file for LVM is:
[root@satish ~]# vim /etc/lvm/lvm.conf
If you want detailed information about a physical volume, including which drives participate in it, you can find it in the files under /etc/lvm/archive. These files help you understand how a volume group was built and are useful for troubleshooting. The example below is from an earlier lab in which a volume group named vg00 was created directly on /dev/sda6, /dev/sda7 and /dev/sda8.
[root@satish ~]# vim /etc/lvm/archive/vg00_00000.vg

# Generated by LVM2 version 2.02.46-RHEL5 (2009-06-18): Sat Apr 27 12:45:46 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'vgcreate vg00 /dev/sda6 /dev/sda7 /dev/sda8'"

creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686
creation_time = 1367081146      # Sat Apr 27 12:45:46 2013

vg00 {
        id = "H3FYcT-1u28-i8ln-ehNm-DbFM-nelQ-3UFSnw"
        seqno = 0
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192      # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "cfz6P0-VVhD-fWUs-sbRj-0pgM-F0JM-76iVOg"
                        device = "/dev/sda6"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }

                pv1 {
                        id = "FiouR5-VRUL-uoFp-6DCS-fJG0-cbUx-7S0gzk"
                        device = "/dev/sda7"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }

                pv2 {
                        id = "oxIjRC-rQGQ-4kHH-K8xR-lJmn-lYOb-x3nYFR"
                        device = "/dev/sda8"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }
        }

}
Step11:- For permanent mounting, make an entry in the /etc/fstab file.
Add the line below to /etc/fstab:
/dev/raid5/lvm0 /raid5 ext3 defaults 0 0
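To confirm the entry works without rebooting, you can let mount pick the file system up again from /etc/fstab; an optional quick check would be:

# unmount the LV, remount everything listed in /etc/fstab, then verify
umount /raid5
mount -a
df -h /raid5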
EXPERIMENTAL FACTS:
What happens if one of the partitions involved in the RAID configuration becomes a faulty spare?
For testing purposes, let's manually fail the partition /dev/sda8 and watch the effect on the RAID array and the logical volume.
[root@satish ~]# mdadm /dev/md5 --fail /dev/sda8
mdadm: set /dev/sda8 faulty in /dev/md5
Now look at the RAID array information. It clearly shows that the spare device we specified at creation time automatically replaces the faulty device; you can see it in the "spare rebuilding" state.
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 02:29:18 2013
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : right-asymmetric
     Chunk Size : 128K

 Rebuild Status : 21% complete

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       4       8        9        2      spare rebuilding   /dev/sda9

       3       8        8        -      faulty spare   /dev/sda8
Rebuilding the data takes some time. When you check again a little later, you will find that the spare partition has been completely synchronized into the RAID array.
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 02:29:31 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : right-asymmetric
     Chunk Size : 128K

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        9        2      active sync   /dev/sda9

       3       8        8        -      faulty spare   /dev/sda8
Now you can see the list of active devices and faulty devices here.
[root@satish ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md5 : active raid5 sda8[3](F) sda9[2] sda7[1] sda6[0]
      208640 blocks level 5, 128k chunk, algorithm 1 [3/3] [UUU]

unused devices: <none>
You will find no change in the logical volume.
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raid5/lvm0
  VG Name                raid5
  LV UUID                UCrVf9-3cJx-0TlU-aSl0-Glqg-jOec-UHtVgg
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                152.00 MB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
[root@satish ~]# pvck /dev/md5
  Found label on /dev/md5, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=258048
There is no loss of data.
[root@satish raid5]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda5                55G   22G   31G  41% /
tmpfs                   502M     0  502M   0% /dev/shm
/dev/mapper/raid5-lvm0  148M   56M   84M  40% /raid5
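To finish the failure test you would normally remove the faulty member from the array and, once the partition (or a replacement disk) is ready, add it back as a new spare. A minimal sketch of that cleanup, not shown in the original lab:

# drop the failed partition from the array, then re-add it (or its replacement) as a spare
mdadm /dev/md5 --remove /dev/sda8
mdadm /dev/md5 --add /dev/sda8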
Linux Software RAID and Data Recovery:
We have configured LVM on top of RAID quite easily, but in the event of a crash or data loss we need to be able to recover our data, so the data itself becomes important. Here we look at the backup file that LVM writes when the volume group is created on top of the RAID array, because this file helps us understand the LVM layout in detail.
[root@satish ~]# vim /etc/lvm/backup/raid5

# Generated by LVM2 version 2.02.46-RHEL5 (2009-06-18): Mon Jun 3 02:08:05 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvcreate -L 150M raid5 -n lvm0'"

creation_host = "satish.com"    # Linux satish.com 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686
creation_time = 1370239685      # Mon Jun 3 02:08:05 2013

raid5 {
        id = "om3xvw-CGQX-mMwx-K03R-jf2p-zaqM-xjswMZ"
        seqno = 2
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192      # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "e5YCQh-0IFd-MYv2-2WzC-KHEx-pys3-z8w2Ud"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 417280       # 203.75 Megabytes
                        pe_start = 512
                        pe_count = 50   # 200 Megabytes
                }
        }

        logical_volumes {

                lvm0 {
                        id = "UCrVf9-3cJx-0TlU-aSl0-Glqg-jOec-UHtVgg"
                        status = ["READ", "WRITE", "VISIBLE"]
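If the LVM metadata on /dev/md5 were ever damaged, this backup file is what the recovery tooling reads. A hedged sketch of restoring it, to be attempted only when the underlying physical volume itself is intact:

# restore the raid5 volume group metadata from its automatic backup
vgcfgrestore -f /etc/lvm/backup/raid5 raid5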
How to DELETE the above scenario now.
Deletion is done in a few simple steps (a command sketch follows the list):
- Step1: Remove the line from the /etc/fstab file.
- Step2: Unmount the logical volume.
- Step3: Remove the logical volume using the lvremove command.
- Step4: Remove the volume group using the vgremove command.
- Step5: Remove the physical volume using the pvremove command.
- Step6: Fail the partitions used in the RAID array.
- Step7: Stop the array.
- Step8: Remove the array.
- Step9: Delete the partitions using the fdisk utility.
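A minimal command sketch of the teardown, assuming the /etc/fstab line has already been removed and that the four partitions from Step 1 are the ones being cleaned up:

# tear down the LVM stack
umount /raid5
lvremove /dev/raid5/lvm0
vgremove raid5
pvremove /dev/md5

# stop the array and wipe the RAID superblocks from its former members
mdadm --stop /dev/md5
mdadm --zero-superblock /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9

# finally, delete /dev/sda6 through /dev/sda9 with fdisk (the d command) and run partprobe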
The above process can also be done using a loop device instead of any real partition or disk. To learn how to do the same lab without creating any new partition or using any new disk, read the article given below: