LVM Physical Volume Migration
In production you frequently run into maintenance that cannot be done with the system down, or a downtime window that is simply too short: when replacing a storage array, for example, tens of terabytes of data or more cannot be migrated in half a day, so the only realistic option is to do it online.
The following test uses LVM physical volume migration (pvmove) to move the data online:
1) The physical machine has been allocated two disks, sdc and sdd
[root@test-204 ~]# fdisk -l
Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
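As a quick sanity check before touching the disks, newer distributions also ship lsblk, which shows size, type, and mount status in one line per device (output omitted here; device names assumed to match the fdisk listing above):
[root@test-204 ~]# lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdc /dev/sdd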
2) First create an LVM logical volume
[root@test-204 ~]# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
[root@test-204 ~]# vgcreate /dev/mapper/vg_test /dev/sdc
Volume group "vg_test" successfully created
[root@test-204 ~]# lvcreate /dev/mapper/vg_test -L 7G -n lv
Logical volume "lv" created
[root@test-204 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vg_test/lv
LV Name lv
VG Name vg_test
LV UUID XV4oMz-RAZN-WmOK-b1tR-dDWh-Vl2m-SuVvAN
LV Write Access read/write
LV Creation host, time test-204, 2017-04-12 14:06:15 +0800
LV Status available
# open 0
LV Size 7.00 GiB
Current LE 1792
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
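The verbose lvdisplay output above can be condensed with the one-line-per-object reporting commands, which are also easier to parse in scripts (output omitted):
[root@test-204 ~]# pvs
[root@test-204 ~]# vgs vg_test
[root@test-204 ~]# lvs vg_test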
[root@test-204 ~]# mkfs -t ext4 /dev/vg_test/lv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
458752 inodes, 1835008 blocks
91750 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1879048192
56 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
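If those periodic forced checks are unwanted, they can be disabled exactly as the message suggests (an optional step, not part of the original test):
[root@test-204 ~]# tune2fs -c 0 -i 0 /dev/vg_test/lv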
[root@test-204 ~]# mkdir /mnt/lv
[root@test-204 ~]# mount /dev/vg_test/lv /mnt/lv
[root@test-204 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_test201-lv_root
105G 23G 78G 23% /
tmpfs 939M 0 939M 0% /dev/shm
/dev/sda1 477M 57M 395M 13% /boot
/dev/mapper/vg_test-lv
6.8G 16M 6.4G 1% /mnt/lv
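The mount above is manual; to make it survive a reboot, an /etc/fstab entry along these lines would be added (a sketch with default options):
/dev/vg_test/lv  /mnt/lv  ext4  defaults  0 0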
3) Test pvmove
While data was simultaneously being copied onto this logical volume, the test showed I/O wait staying just under 30%, so an online migration is quite workable. A similar test on an HP-UX system showed much lower I/O wait, under 10%, probably because that machine was backed by a storage array.
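For reference, one simple way to generate that kind of concurrent load (the file name and sizes here are arbitrary) is to keep writing to the mounted LV in one terminal while watching iowait and per-device utilization in another:
[root@test-204 ~]# dd if=/dev/zero of=/mnt/lv/loadtest bs=1M count=4096 oflag=direct
[root@test-204 ~]# iostat -x 1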
[root@test-204 ~]# pvcreate /dev/sdd
Physical volume "/dev/sdd" successfully created
[root@test-204 ~]# pvmove /dev/sdc /dev/sdd
Physical Volume "/dev/sdd" not found in Volume Group "vg_test"
pvmove can only move extents between physical volumes that belong to the same volume group, so the new disk has to be added to vg_test with vgextend first:
[root@test-204 ~]# vgextend vg_test /dev/sdd
Volume group "vg_test" successfully extended
[root@test-204 ~]# pvmove /dev/sdc /dev/sdd
/dev/sdc: Moved: 1.8%
/dev/sdc: Moved: 19.6%
/dev/sdc: Moved: 37.0%
/dev/sdc: Moved: 52.9%
/dev/sdc: Moved: 64.7%
/dev/sdc: Moved: 71.5%
/dev/sdc: Moved: 79.4%
/dev/sdc: Moved: 88.7%
/dev/sdc: Moved: 100.0%
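pvmove copies extents through a temporary mirror, so it is also restartable: per the pvmove man page, running pvmove with no arguments resumes any interrupted move, pvmove --abort cancels one, and -i sets the progress-report interval in seconds. On a long migration something like this is convenient (a sketch; behaviour may differ slightly across LVM2 versions):
[root@test-204 ~]# pvmove -i 60 /dev/sdc /dev/sdd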
[root@test-204 ~]# vgdisplay -v vg_test
DEGRADED MODE. Incomplete RAID LVs will be processed.
Using volume group(s) on command line
Finding volume group "vg_test"
--- Volume group ---
VG Name vg_test
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 15.99 GiB
PE Size 4.00 MiB
Total PE 4094
Alloc PE / Size 1792 / 7.00 GiB
Free PE / Size 2302 / 8.99 GiB
VG UUID b1pnZ1-byAA-6OpR-4nDD-sPhx-PX6F-JGOvC2
--- Logical volume ---
LV Path /dev/vg_test/lv
LV Name lv
VG Name vg_test
LV UUID XV4oMz-RAZN-WmOK-b1tR-dDWh-Vl2m-SuVvAN
LV Write Access read/write
LV Creation host, time test-204, 2017-04-12 14:06:15 +0800
LV Status available
# open 1
LV Size 7.00 GiB
Current LE 1792
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Physical volumes ---
PV Name /dev/sdc
PV UUID iN2rC3-RfXa-zlN6-LxNN-zexj-uFI3-Cy0Frv
PV Status allocatable
Total PE / Free PE 2047 / 2047
PV Name /dev/sdd
PV UUID Mf2aVw-yNk2-roNE-D0zu-QGkT-Wly0-Wg5GNR
PV Status allocatable
Total PE / Free PE 2047 / 255
4) Remove the old physical volume
[root@test-204 ~]# vgreduce vg_test /dev/sdc
Removed "/dev/sdc" from volume group "vg_test"
[root@test-204 ~]# vgdisplay -v vg_test
DEGRADED MODE. Incomplete RAID LVs will be processed.
Using volume group(s) on command line
Finding volume group "vg_test"
--- Volume group ---
VG Name vg_test
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 8.00 GiB
PE Size 4.00 MiB
Total PE 2047
Alloc PE / Size 1792 / 7.00 GiB
Free PE / Size 255 / 1020.00 MiB
VG UUID b1pnZ1-byAA-6OpR-4nDD-sPhx-PX6F-JGOvC2
--- Logical volume ---
LV Path /dev/vg_test/lv
LV Name lv
VG Name vg_test
LV UUID XV4oMz-RAZN-WmOK-b1tR-dDWh-Vl2m-SuVvAN
LV Write Access read/write
LV Creation host, time test-204, 2017-04-12 14:06:15 +0800
LV Status available
# open 1
LV Size 7.00 GiB
Current LE 1792
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Physical volumes ---
PV Name /dev/sdd
PV UUID Mf2aVw-yNk2-roNE-D0zu-QGkT-Wly0-Wg5GNR
PV Status allocatable
Total PE / Free PE 2047 / 255
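Note that after the vgreduce, /dev/sdc still carries an LVM label. If the old disk is going to be detached or reused elsewhere, the label should be wiped as well (a final step not shown in the original test):
[root@test-204 ~]# pvremove /dev/sdc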
All in all, the whole process is fairly safe.
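For reference, the whole online migration boils down to this short sequence (old disk sdc, new disk sdd):
[root@test-204 ~]# pvcreate /dev/sdd
[root@test-204 ~]# vgextend vg_test /dev/sdd
[root@test-204 ~]# pvmove /dev/sdc /dev/sdd
[root@test-204 ~]# vgreduce vg_test /dev/sdc
[root@test-204 ~]# pvremove /dev/sdc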