Operations:
I. Assume a new disk, sdb, has been added and a single partition /dev/sdb1 created on it.
II. Create an LVM logical volume
【pv】
[iyunv@tvm-test ~]# pvcreate /dev/sdb1
【vg】
[iyunv@tvm-test ~]# vgcreate vg01 /dev/sdb1
【lv】
[iyunv@tvm-test ~]# lvcreate -L 10G -n lv01 vg01
【mkfs】
[iyunv@tvm-test ~]# mkfs.ext4 /dev/vg01/lv01
【mount】
[iyunv@tvm-test ~]# mkdir /DATA01 && mount /dev/vg01/lv01 /DATA01
[iyunv@tvm-test ~]# echo "UUID=$(blkid /dev/vg01/lv01 |cut -d'"' -f2) /DATA01 ext4 defaults 1 2" >>/etc/fstab
[iyunv@tvm-test ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 3.8G 1.7G 2.0G 46% /
tmpfs 246M 12K 246M 1% /dev/shm
/dev/sda1 194M 29M 155M 16% /boot
/dev/mapper/vg01-lv01 9.9G 151M 9.2G 2% /DATA01
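By the way, each layer of the new stack can be verified with the standard LVM reporting tools, and blkid's -s/-o options are a sturdier way to grab the UUID than slicing the output with cut. A minimal sketch (not from the original run, just a suggested check):
# Verify each layer of the new LVM stack
pvs /dev/sdb1        # physical volume
vgs vg01             # volume group
lvs vg01             # logical volume(s)
# Fetch only the UUID value for the /etc/fstab entry
blkid -s UUID -o value /dev/vg01/lv01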
Write a few test files to /DATA01/:
[iyunv@tvm-test ~]# touch /DATA01/file2_{1..5} && mkdir /DATA01/dir2_{1,2,3} && touch /DATA01/dir2_{1,2}/file3.{aa,bb,cc}
[iyunv@tvm-test ~]# dd if=/dev/zero of=/DATA01/dir2_3/test1.dd bs=512 count=100000
100000+0 records in
100000+0 records out
51200000 bytes (51 MB) copied, 0.186818 s, 274 MB/s
[iyunv@tvm-test ~]# dd if=/dev/zero of=/DATA01/dir2_3/test2.dd bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.230891 s, 443 MB/s
[iyunv@tvm-test ~]# dd if=/dev/zero of=/DATA01/dir2_3/test2.dd bs=2048 count=100000
100000+0 records in
100000+0 records out
204800000 bytes (205 MB) copied, 0.319986 s, 640 MB/s
[iyunv@tvm-test ~]# dd if=/dev/zero of=/DATA01/dir2_3/test2.dd bs=4096 count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 0.572126 s, 716 MB/s
[iyunv@tvm-test ~]# tree /DATA01/
/DATA01/
├── dir2_1
│ ├── file3.aa
│ ├── file3.bb
│ └── file3.cc
├── dir2_2
│ ├── file3.aa
│ ├── file3.bb
│ └── file3.cc
├── dir2_3
│ ├── test1.dd
│ └── test2.dd
├── file2_1
├── file2_2
├── file2_3
├── file2_4
└── file2_5
3 directories, 13 files
[iyunv@tvm-test ~]# df -h /DATA01/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 9.9G 590M 8.8G 7% /DATA01
III. Test creating a snapshot of lv01
1. Create the snapshot
[iyunv@tvm-test /]# lvcreate -L 2G -s -n snap_lv01 /dev/vg01/lv01
Logical volume "snap_lv01" created
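To confirm the snapshot and see how much of its space is in use, the lvs report fields origin and data_percent can be queried. A minimal sketch (not part of the original run):
# Show each LV, its origin (for snapshots), and how full the snapshot is
lvs -o lv_name,origin,data_percent vg01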
2. Mount and make a compressed backup
[iyunv@tvm-test ~]# lv_snap="/dev/vg01/snap_lv01" \
&& d_backup='/data/backup/snapshot' \
&& f_snap="/data/backup/snapshot/lv01_$(date +%F).tar.gz" \
&& { test -d ${d_backup}/mnt || mkdir -p ${d_backup}/mnt; } \
&& mount -o ro ${lv_snap} ${d_backup}/mnt \
&& cd ${d_backup} \
&& tar zcf ${f_snap} mnt/* \
&& ls -lh ${f_snap} \
&& df -h
-rw-r--r-- 1 root root 438K Jul 31 16:23 /data/backup/snapshot/lv01_2015-07-31.tar.gz
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 3.8G 1.7G 2.0G 46% /
tmpfs 246M 12K 246M 1% /dev/shm
/dev/sda1 194M 29M 155M 16% /boot
/dev/mapper/vg01-lv01 9.9G 590M 8.8G 7% /DATA01
/dev/mapper/vg01-snap_lv01 9.9G 590M 8.8G 7% /data/backup/snapshot/mnt
3. Unmount and remove the snapshot
[iyunv@tvm-test /]# umount ${d_backup}/mnt && lvremove -f ${lv_snap}
Logical volume "snap_lv01" successfully removed
4. Summary of the steps
#1 create the snapshot
#2 mount it and make a compressed tar backup
#3 unmount and remove the snapshot
(This can now be turned into a script.)
[iyunv@tvm-test bin]# cat lvm_test_snapshot.sh
#!/bin/bash
#
# 2015/7/31
n_size='2G'
f_vg='vg01'
f_orig='lv01'
n_snap='snap_lv01'
lv_orig="/dev/${f_vg}/${f_orig}"
lv_snap="/dev/${f_vg}/${n_snap}"
d_backup='/data/backup/snapshot'
f_backup="${d_backup}/${f_orig}_$(date +%F).tar.gz"
#1 create the snapshot
lvcreate -L ${n_size} -s -n ${n_snap} ${lv_orig}
#2 mount it and make a compressed tar backup
test -d ${d_backup}/mnt || mkdir -p ${d_backup}/mnt
mount -o ro ${lv_snap} ${d_backup}/mnt \
&& df -h
cd ${d_backup} \
&& tar zcf ${f_backup} mnt/* \
&& ls -lh ${f_backup}
#3 unmount and remove the snapshot
umount ${d_backup}/mnt && lvremove -f ${lv_snap}
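If the script is saved at, say, /root/bin/lvm_test_snapshot.sh (a hypothetical path), it could be run nightly from cron; a sketch:
# Hypothetical crontab entry: run the snapshot backup at 02:30 every day
30 2 * * * /bin/bash /root/bin/lvm_test_snapshot.sh >>/var/log/lvm_snapshot.log 2>&1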
IV. Questions
Q: How large should the snapshot be when it is created?
A: Check the man page for lvcreate; the description of the -s option contains this passage:
The non thin volume snapshot with the specified size does not need the
same amount of storage the origin has. In a typical scenario, 15-20%
might be enough. In case the snapshot runs out of storage, use lvextend(8)
to grow it. Shrinking a snapshot is supported by lvreduce(8) as well.
Run lvs(8) on the snapshot in order to check how much data is allocated
to it. Note: a small amount of the space you allocate to the snapshot is
used to track the locations of the chunks of data, so you should allocate
slightly more space than you actually need and monitor (--monitor) the rate
at which the snapshot data is growing so you can avoid running out of space.
So allocating roughly 20% of the origin's capacity to the snapshot is enough.
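Following the man page's advice, the snapshot's fill level can be watched with lvs and grown with lvextend before it overflows. A minimal sketch (commands not from the original run):
# Check how full the snapshot is (the Data% column)
lvs /dev/vg01/snap_lv01
# If Data% approaches 100%, grow the snapshot by another 1G
lvextend -L +1G /dev/vg01/snap_lv01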
Test:
[iyunv@tvm-test /]# dd if=/dev/zero of=/DATA01/dir2_3/test3.dd bs=4096 count=200000
200000+0 records in
200000+0 records out
819200000 bytes (819 MB) copied, 1.21615 s, 674 MB/s
[iyunv@tvm-test snapshot]# rm -f /DATA01/dir2_3/test2.dd
[iyunv@tvm-test snapshot]# pwd
/data/backup/snapshot
[iyunv@tvm-test snapshot]# ls
lv01_2015-07-31.tar.gz mnt
[iyunv@tvm-test snapshot]# tar zxvf lv01_2015-07-31.tar.gz
mnt/dir2_1/
mnt/dir2_1/file3.aa
mnt/dir2_1/file3.bb
mnt/dir2_1/file3.cc
mnt/dir2_2/
mnt/dir2_2/file3.aa
mnt/dir2_2/file3.bb
mnt/dir2_2/file3.cc
mnt/dir2_3/
mnt/dir2_3/test2.dd
mnt/dir2_3/test1.dd
mnt/file2_1
mnt/file2_2
mnt/file2_3
mnt/file2_4
mnt/file2_5
[iyunv@tvm-test snapshot]# ls mnt/
dir2_1 dir2_2 dir2_3 file2_1 file2_2 file2_3 file2_4 file2_5
[iyunv@tvm-test snapshot]# ll -h mnt/dir2_3/
total 440M
-rw-r--r-- 1 root root 49M Jul 31 15:35 test1.dd
-rw-r--r-- 1 root root 391M Jul 31 15:36 test2.dd
Copy the backup to another host:
[iyunv@tvm-test snapshot]# rsync -avzP lv01_2015-07-31.tar.gz 192.168.56.253:/data/testarea
The authenticity of host '192.168.56.253 (192.168.56.253)' can't be established.
RSA key fingerprint is 35:10:6d:25:4e:3d:6f:59:86:87:e3:88:9f:9c:81:a8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.253' (RSA) to the list of known hosts.
root@192.168.56.253's password:
sending incremental file list
lv01_2015-07-31.tar.gz
447649 100% 79.13MB/s 0:00:00 (xfer#1, to-check=0/1)
sent 447890 bytes received 31 bytes 68910.92 bytes/sec
total size is 447649 speedup is 1.00
On the other host:
[iyunv@tvm-saltmaster testarea]# ll -h
total 440K
-rw-r--r-- 1 root root 438K Jul 31 16:23 lv01_2015-07-31.tar.gz
[iyunv@tvm-saltmaster testarea]# tar zxvf lv01_2015-07-31.tar.gz
mnt/dir2_1/
mnt/dir2_1/file3.aa
mnt/dir2_1/file3.bb
mnt/dir2_1/file3.cc
mnt/dir2_2/
mnt/dir2_2/file3.aa
mnt/dir2_2/file3.bb
mnt/dir2_2/file3.cc
mnt/dir2_3/
mnt/dir2_3/test2.dd
mnt/dir2_3/test1.dd
mnt/file2_1
mnt/file2_2
mnt/file2_3
mnt/file2_4
mnt/file2_5
[iyunv@tvm-saltmaster testarea]# ls mnt/
dir2_1 dir2_2 dir2_3 file2_1 file2_2 file2_3 file2_4 file2_5
[iyunv@tvm-saltmaster testarea]# ll -h mnt/dir2_3/
total 440M
-rw-r--r-- 1 root root 49M Jul 31 15:35 test1.dd
-rw-r--r-- 1 root root 391M Jul 31 15:36 test2.dd