2.0 LVM Snapshots
[iyunv@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               vbirdvg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  18
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               20.00 GiB
  PE Size               16.00 MiB
  Total PE              1280
  Alloc PE / Size       1280 / 20.00 GiB
  Free  PE / Size       0 / 0             ← no free PE available
  VG UUID               QaQ6X4-U9RP-wW7m-bgWA-UJvY-4Kg5-6jmvye
Note: since we removed /dev/sdb5 in the previous part, we now add /dev/sdb5 back.
[iyunv@localhost ~]# pvcreate /dev/sdb5
  Physical volume "/dev/sdb5" successfully created
[iyunv@localhost ~]# vgextend vbirdvg /dev/sdb5
  Volume group "vbirdvg" successfully extended
[iyunv@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               vbirdvg
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               23.00 GiB
  PE Size               16.00 MiB
  Total PE              1472
  Alloc PE / Size       1280 / 20.00 GiB
  Free  PE / Size       192 / 3.00 GiB    ← 192 PE are now available
  VG UUID               QaQ6X4-U9RP-wW7m-bgWA-UJvY-4Kg5-6jmvye
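For a quicker check that the VG now has room for a snapshot, the short-form reporting commands give the same numbers as vgdisplay; a minimal sketch (the -o column names assume a reasonably recent LVM2):

pvs                                            # lists the PVs, including the newly added /dev/sdb5
vgs vbirdvg                                    # VFree should show the 3.00 GiB of free space
vgs -o vg_name,vg_free,vg_free_count vbirdvg   # free space and the free PE count explicitly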
[iyunv@localhost ~]# lvcreate -L 2G -s -n vbirdss /dev/vbirdvg/vbirdlv
  Logical volume "vbirdss" created
Note: the key option in the command above is -s, which asks for the snapshot function.
-n: the name of the snapshot device; the /dev/... argument is the full name of the LV to be snapshotted.
-L: how much capacity to allocate to the snapshot; alternatively use -l followed by a number of PEs.
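Since the PE size of vbirdvg is 16 MiB, the same 2 GiB snapshot area could also be requested with -l and a PE count instead of -L; a sketch of the equivalent call:

# 128 PE x 16 MiB = 2 GiB, equivalent to -L 2G on this VG
lvcreate -l 128 -s -n vbirdss /dev/vbirdvg/vbirdlv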
[iyunv@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vbirdvg/vbirdss
  LV Name                vbirdss
  VG Name                vbirdvg
  LV UUID                oLtuEe-viTM-ccuU-PkVb-47qc-Swc1-VqtHsv
  LV Write Access        read/write
  LV Creation host, time localhost, 2016-07-02 00:23:06 +0800
  LV snapshot status     active destination for vbirdlv
  LV Status              available
  # open                 0
  LV Size                20.00 GiB      ← size of the original LV being snapshotted
  Current LE             1280
  COW-table size         2.00 GiB       ← actual capacity of the snapshot area
  COW-table LE           128            ← number of PEs used by the snapshot area
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
Note: the snapshot /dev/vbirdvg/vbirdss has now been created, and its size is, surprisingly, the same as the original /dev/vbirdvg/vbirdlv. In other words, if you actually mount this device, the data you see will be identical to what is on vbirdlv.
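Before the df comparison below, the snapshot has to be mounted somewhere; a minimal sketch, assuming /mnt/snapshot as the mount point:

mkdir -p /mnt/snapshot
mount /dev/vbirdvg/vbirdss /mnt/snapshot      # add -o ro if you only need to read from the snapshot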
[iyunv@localhost mnt]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  38776280 5020792  31785724  14% /
tmpfs                           243140      72    243068   1% /dev/shm
/dev/sda1                       495844   34907    435337   8% /boot
/dev/mapper/vbirdvg-vbirdlv   20642428  176200  19417652   1% /mnt/lvm
/dev/mapper/vbirdvg-vbirdss   20642428  176200  19417652   1% /mnt/snapshot
Note: the two filesystems look exactly the same!
[iyunv@localhost mnt]# df /mnt/lvm
Filesystem                  1K-blocks   Used Available Use% Mounted on
/dev/mapper/vbirdvg-vbirdlv  20642428 176200  19417652   1% /mnt/lvm
[iyunv@localhost mnt]# ll /mnt/lvm
total 20
drwx------. 2 root root 16384 Jun 27 22:17 lost+found
-rwxr-xr-x. 1 root root    23 Jun 27 22:29 test.sh
[iyunv@localhost mnt]# rm -rf /mnt/lvm/test.sh        ← make some changes to the contents of /dev/vbirdvg/vbirdlv
[iyunv@localhost mnt]# cp /etc/inittab /mnt/lvm
[iyunv@localhost snapshot]# cp -a /boot /lib /sbin /mnt/lvm
[iyunv@localhost lvm]# ls -l
total 40
dr-xr-xr-x.  5 root root  4096 Apr  7 07:06 boot
-rw-r--r--.  1 root root   884 Jul  2 00:43 inittab
dr-xr-xr-x. 11 root root  4096 Apr  7 06:51 lib
drwx------.  2 root root 16384 Jun 27 22:17 lost+found
dr-xr-xr-x.  2 root root 12288 May 28 19:40 sbin      ← changes complete
Note: keep in mind that the amount of changed data must not exceed what the snapshot area can actually hold. Because the original blocks are moved into the snapshot area as they are modified, if the snapshot area is too small and more data is changed on the origin than it can hold, the snapshot overflows and the snapshot function is invalidated (a quick way to monitor this is shown after the lvdisplay output below).
[iyunv@localhost mnt]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vbirdvg/vbirdss
  LV Name                vbirdss
  VG Name                vbirdvg
  LV UUID                oLtuEe-viTM-ccuU-PkVb-47qc-Swc1-VqtHsv
  LV Write Access        read/write
  LV Creation host, time localhost, 2016-07-02 00:23:06 +0800
  LV snapshot status     active destination for vbirdlv
  LV Status              available
  # open                 1
  LV Size                20.00 GiB
  Current LE             1280
  COW-table size         2.00 GiB
  COW-table LE           128
  Allocated to snapshot  9.63%   ← the snapshot area is already 9.63% used, because the original filesystem has been modified
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
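To keep an eye on how full the snapshot area gets while the origin is being modified, lvs is handier than the full lvdisplay listing; a small sketch (the snapshot usage column is labelled Snap% or Data% depending on the LVM2 release, and corresponds to "Allocated to snapshot" above):

lvs vbirdvg                # one-line summary per LV, including snapshot usage
watch -n 30 lvs vbirdvg    # refresh every 30 seconds while the copy jobs run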
[iyunv@localhost mnt]# mount /dev/vbirdvg/vbirdss /mnt/snapshot/
[iyunv@localhost lvm]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  38776280 5020808  31785708  14% /
tmpfs                           243140      72    243068   1% /dev/shm
/dev/sda1                       495844   34907    435337   8% /boot
/dev/mapper/vbirdvg-vbirdlv   20642428  370036  19223816   2% /mnt/lvm
/dev/mapper/vbirdvg-vbirdss   20642428  176200  19417652   1% /mnt/snapshot
[iyunv@localhost ~]# mkdir -p /backups            ← make sure this directory really exists
[iyunv@localhost ~]# cd /mnt/snapshot/
[iyunv@localhost snapshot]# tar -jvc -f /backups/lvm.tar.bz2 *
lost+found/
test.sh
Note: we can now see that the two are different; next we copy the contents of the snapshot area out.
- Why make a backup at all? Why not simply format /dev/vbirdvg/vbirdlv and then copy /dev/vbirdvg/vbirdss straight back onto vbirdlv? Remember that vbirdss is a snapshot of vbirdlv, so if you format the whole of vbirdlv, all of the original filesystem's data gets copied into vbirdss. If vbirdss is not large enough (and most of the time it is not), part of that data cannot be copied into vbirdss, and the data can then no longer be fully restored.
That is why we created a backup archive in the listing above.
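A quick way to see why the direct-copy idea would fail here is to compare the snapshot's COW capacity with the size of the origin it would have to absorb; a sketch, with the grep pattern assuming the English lvdisplay field names shown earlier:

lvdisplay /dev/vbirdvg/vbirdss | grep -E 'LV Size|COW-table size'
# LV Size          20.00 GiB   ← everything on the origin that could be overwritten
# COW-table size    2.00 GiB   ← all the snapshot can absorb before it is invalidated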
- Unmount and remove vbirdss (its contents have been backed up, so vbirdss is no longer of any use).
[iyunv@localhost ~]# umount /mnt/snapshot/
[iyunv@localhost ~]# lvremove /dev/vbirdvg/vbirdss
Do you really want to remove active logical volume vbirdss? [y/n]: y
  Logical volume "vbirdss" successfully removed
[iyunv@localhost ~]# umount /mnt/lvm
[iyunv@localhost ~]# mkfs -t ext3 /dev/vbirdvg/vbirdlv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[iyunv@localhost ~]# mount /dev/vbirdvg/vbirdlv /mnt/lvm
[iyunv@localhost ~]# tar -jxf /backups/lvm.tar.bz2 -C /mnt/lvm
[iyunv@localhost ~]# ll /mnt/lvm
total 20
drwx------. 2 root root 16384 Jun 27 22:17 lost+found
-rwxr-xr-x. 1 root root    23 Jun 27 22:29 test.sh
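For reference, the restore half used above can be collected into one short sequence; a sketch that simply strings together the commands from this walkthrough (it wipes the current contents of vbirdlv, so only run it once the tar archive is known to be good):

umount /mnt/lvm                            # the origin must not be mounted while re-creating the filesystem
mkfs -t ext3 /dev/vbirdvg/vbirdlv          # wipe and re-create the filesystem on the origin LV
mount /dev/vbirdvg/vbirdlv /mnt/lvm
tar -jxf /backups/lvm.tar.bz2 -C /mnt/lvm  # unpack the backup that was taken from the snapshot
ls -l /mnt/lvm                             # verify that test.sh is back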
That finally wraps up LVM. Let's keep at it together; there is still a long way to go, and if there are any mistakes, please point them out!