In the previous article, we recompiled our software and kernel so that we could remove our /dev/sde drive. [Note: Before you do anything involving your partitions or volumes, make a complete backup. Make sure to set up a test system with the same kernel and distribution to see how this stuff works. This article was written using our lab box. Read our terms of use.] Let’s try out our new system:
[root@srv-1 LVM2.2.00.25]# pvmove --version
  LVM version:     2.00.25 (2004-09-29)
  Library version: 1.00.19-ioctl (2004-07-03)
  /dev/mapper/control: open failed: No such file or directory
  Is device-mapper driver missing from kernel?
[root@srv-1 LVM2.2.00.25]#
It turns out that the device-mapper source ships with a script that you have to run at startup:
[root@srv-1 device-mapper.1.00.19]# sh scripts/devmap_mknod.sh
Creating /dev/mapper/control character device with major:10 minor:63.
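As the output shows, the script's job is essentially to create the /dev/mapper/control character device, here with major number 10 and minor number 63. If you ever need to recreate that node by hand, something along these lines should do it; double-check the minor number against /proc/misc on your own box, since it can differ:

# Recreate the device-mapper control node by hand, using the
# major:minor numbers reported by devmap_mknod.sh above
mkdir -p /dev/mapper
mknod /dev/mapper/control c 10 63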
Let’s try it now:
[root@srv-1 device-mapper.1.00.19]# pvmove --version
  LVM version:     2.00.25 (2004-09-29)
  Library version: 1.00.19-ioctl (2004-07-03)
  Driver version:  4.1.1
[root@srv-1 device-mapper.1.00.19]#
Let’s see how our volume group looks:
[root@srv-1 root]# vgdisplay
  2 PV(s) found for VG volgroup: expected 3
  Volume group "volgroup" doesn't exist
[root@srv-1 root]#
Hmmmm… bet that the RAID device hasn’t started. Let’s start it up:
[root@srv-1 root]# raidstart /dev/md0
[root@srv-1 root]# vgdisplay
  --- Volume group ---
  VG Name               volgroup
  System ID             srv-11096897733
  Format                lvm1
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               0
  Max PV                256
  Cur PV                3
  Act PV                3
  VG Size               11.98 GB
  PE Size               4.00 MB
  Total PE              3066
  Alloc PE / Size       768 / 3.00 GB
  Free  PE / Size       2298 / 8.98 GB
  VG UUID               FHGe16-ATie-2TsY-CkCo-R2PO-0kv2-VpUvUG
[root@srv-1 root]#
[root@srv-1 root]# lvmdiskscan
  /dev/sda  [       4.00 GB]
  /dev/md0  [       4.00 GB] LVM physical volume
  /dev/sda1 [       3.75 GB]
  /dev/sda2 [     250.98 MB]
  /dev/sdb  [       4.00 GB]
  /dev/sdb1 [       4.00 GB]
  /dev/sdc  [       4.00 GB]
  /dev/sdc1 [       4.00 GB]
  /dev/sdd  [       4.00 GB]
  /dev/sdd1 [       4.00 GB]
  /dev/sde  [       4.00 GB] LVM physical volume
  /dev/sdf  [       4.00 GB] LVM physical volume
  4 disks
  5 partitions
  2 LVM physical volume whole disks
  1 LVM physical volume
[root@srv-1 root]#
Let’s make sure that this all happens correctly at boot. There are many different ways to do this, and rc.local has plenty of drawbacks, but it is just fine for the purposes of this article:
[root@srv-1 rc.d]# cat rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
/usr/src/device-mapper.1.00.19/scripts/devmap_mknod.sh
/sbin/raidstart /dev/md0
/sbin/vgchange -a y
[root@srv-1 rc.d]#
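If you would rather not pile things into rc.local, the same three commands can live in a small init script of their own. The sketch below is just that, a sketch: the script name, the chkconfig line, and the runlevels are our own choices, not something that ships with LVM2 or device-mapper:

#!/bin/sh
# /etc/init.d/lvm-raid (hypothetical) -- replaces our rc.local entries
# chkconfig: 345 25 75
# description: create the device-mapper node, start the RAID set,
#              and activate the LVM volume groups
case "$1" in
  start)
        /usr/src/device-mapper.1.00.19/scripts/devmap_mknod.sh
        /sbin/raidstart /dev/md0
        /sbin/vgchange -a y
        ;;
  stop)
        /sbin/vgchange -a n
        ;;
  *)
        echo "Usage: $0 {start|stop}"
        ;;
esac

On a Red Hat-style box you would make it executable and register it with chkconfig --add lvm-raid.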
Let’s move /dev/sde:
[root@srv-1 root]# pvmove /dev/sde -v
    Finding volume group "volgroup"
    Archiving volume group "volgroup" metadata.
    Creating logical volume pvmove0
  Metadata format (lvm1) does not support required LV segment type (mirror).
  Consider changing the metadata format by running vgconvert.
  Unable to allocate temporary LV for pvmove.
[root@srv-1 root]#
I guess we’ll have to convert it!
[root@srv-1 root]# vgconvert -M2 volgroup
  Volume group volgroup successfully converted
[root@srv-1 root]#
Try again:
[root@srv-1 root]# pvmove /dev/sde -v
    Finding volume group "volgroup"
    Archiving volume group "volgroup" metadata.
    Creating logical volume pvmove0
    Moving 768 extents of logical volume volgroup/logicalvol
    Found volume group "volgroup"
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/volgroup"
    Found volume group "volgroup"
    Found volume group "volgroup"
    Loading volgroup-pvmove0
    Found volume group "volgroup"
    Loading volgroup-logicalvol
    Checking progress every 15 seconds
  /dev/sde: Moved: 0.8%
  . . .
  /dev/sde: Moved: 99.9%
  /dev/sde: Moved: 100.0%
    Found volume group "volgroup"
    Found volume group "volgroup"
    Found volume group "volgroup"
    Loading volgroup-pvmove0
    Found volume group "volgroup"
    Loading volgroup-logicalvol
    Found volume group "volgroup"
    Found volume group "volgroup"
    Removing temporary pvmove LV
    Writing out final volume group after pvmove
    Creating volume group backup "/etc/lvm/backup/volgroup"
[root@srv-1 root]#
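Before we actually pull /dev/sde out of the volume group, it is worth a quick sanity check that pvmove really left nothing behind. pvdisplay on the drive should report zero allocated extents; the exact output format varies between LVM versions, so treat the field name as approximate:

# "Allocated PE" should read 0 now that pvmove has finished
pvdisplay /dev/sde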
Now let’s remove /dev/sde and /dev/sdf from the volume group, build a new RAID1 device out of them, and add that device back to the volume group:
[root@srv-1 root]# vgreduce volgroup /dev/sde
  Removed "/dev/sde" from volume group "volgroup"
[root@srv-1 root]# vgreduce volgroup /dev/sdf
  Removed "/dev/sdf" from volume group "volgroup"
[root@srv-1 root]#
[root@srv-1 root]# vi /etc/raidtab
[root@srv-1 root]# cat /etc/raidtab
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sdg
        raid-disk               0
        device                  /dev/sdh
        raid-disk               1
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sde
        raid-disk               0
        device                  /dev/sdf
        raid-disk               1
[root@srv-1 root]#
[root@srv-1 root]# mkraid /dev/md1
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sde, 4194157kB, raid superblock at 4194048kB
disk 1: /dev/sdf, 4194157kB, raid superblock at 4194048kB
[root@srv-1 root]#
[root@srv-1 root]# raidstart /dev/md1
/dev/md1: already running
[root@srv-1 root]#
[root@srv-1 root]# pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created
[root@srv-1 root]#
[root@srv-1 root]# vgextend volgroup /dev/md1
  Volume group "volgroup" successfully extended
[root@srv-1 root]#
[root@srv-1 root]# pvscan
  . . .
  PV /dev/md0   VG volgroup   lvm2 [3.99 GB / 1016.00 MB free]
  PV /dev/md1   VG volgroup   lvm2 [4.00 GB / 4.00 GB free]
  Total: 2 [7.99 GB] / in use: 2 [7.99 GB] / in no VG: 0 [0   ]
[root@srv-1 root]#
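One thing to keep in mind: a freshly built RAID1 set spends a while resynchronizing its mirror in the background. You can watch that (and check on /dev/md0 at the same time) through /proc/mdstat before you start loading the new device with data:

# A new mirror shows a resync progress indicator here until it finishes
cat /proc/mdstat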
Nice. We have two RAID1 devices in our volume group. Let’s extend our filesystem:
[root@srv-1 root]# e2fsck -f /dev/volgroup/logicalvol
e2fsck 1.34 (25-Jul-2003)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/volgroup/logicalvol: 12/32768 files (0.0% non-contiguous), 9240/40000 blocks
[root@srv-1 root]#
[root@srv-1 root]# resize2fs /dev/volgroup/logicalvol
resize2fs 1.34 (25-Jul-2003)
Resizing the filesystem on /dev/volgroup/logicalvol to 786432 (4k) blocks.
The filesystem on /dev/volgroup/logicalvol is now 786432 blocks long.
[root@srv-1 root]#
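Note that resize2fs with no size argument only grows the filesystem out to the current size of the logical volume, which is still 3 GB. If we wanted to hand it some of the free extents sitting in the volume group, we would extend the logical volume first and then resize again. A quick sketch, with the 1 GB figure chosen purely as an example rather than something we did on the lab box:

# Hypothetical: grow the logical volume by 1 GB, then grow the
# (still unmounted) ext3 filesystem to fill it
lvextend -L +1G /dev/volgroup/logicalvol
e2fsck -f /dev/volgroup/logicalvol
resize2fs /dev/volgroup/logicalvol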
Now, after all of this, let’s see if our test file is still good:
[root@srv-1 root]# mount -t ext3 /dev/volgroup/logicalvol /mnt
[root@srv-1 root]# cat /mnt/ruk.txt
test
[root@srv-1 root]#
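One last housekeeping note: we mounted the volume by hand, so it will not come back on its own after a reboot. An /etc/fstab entry is the usual answer, but remember that on this box the volume group is only activated from rc.local, which runs after the normal filesystem mounts, so an automatic mount would fail at boot. A reasonable compromise is a noauto entry like the hypothetical one below, with a matching mount command added to the end of rc.local:

# Example /etc/fstab line; noauto because the volume group is only
# activated late in the boot from rc.local
/dev/volgroup/logicalvol   /mnt   ext3   noauto   0 0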
Cool, the test file survived everything we threw at it. Over the course of this series we have created a volume group out of three drives with a filesystem on top of it, expanded that filesystem, shrunk it, and removed a drive from the group. We then took that drive, built a RAID1 set with another unused drive, added the set to the volume group, moved the data off of the remaining non-RAIDed drive, built another RAID1 set from the original two drives, and ended up with both RAID1 sets behind a single logical volume. All of this happened without any kind of corruption, and we even upgraded our volume group from LVM1 to LVM2 metadata along the way. Not too bad.
There are six articles in this series:
Setting Up Logical Volume Manager
Extending a Logical Volume
Shrinking a Logical Volume With LVM
Adding a RAID1 Device to a Volume With LVM
Upgrading LVM To Version 2 and Patching The Linux Kernel
Finish Conversion And Expansion to Two RAID1 Devices With LVM