RAID10 part
1. Prepare the disks for the experiment
[iyunv@localhost ~]# ls /dev/sd*   # six disks (sdb sdc sdd sde sdf sdg) were added and partitioned with fdisk; the result is shown below
/dev/sda /dev/sda2 /dev/sdb1 /dev/sdc /dev/sdc2 /dev/sdd1 /dev/sde /dev/sde2 /dev/sdf1 /dev/sdg /dev/sdg2
/dev/sda1 /dev/sdb /dev/sdb2 /dev/sdc1 /dev/sdd /dev/sdd2 /dev/sde1 /dev/sdf /dev/sdf2 /dev/sdg1
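As an aside (not what was done above): if you would rather not drive fdisk interactively on each of the six disks, sfdisk can clone one disk's partition table to the others. A minimal sketch, assuming /dev/sdb has already been partitioned the way you want:

# Clone /dev/sdb's partition table onto the other five test disks,
# then ask the kernel to re-read the new tables.
for d in sdc sdd sde sdf sdg; do
    sfdisk -d /dev/sdb | sfdisk /dev/$d
done
partprobe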
2. Build the RAID10 stack: first create two RAID1 arrays, then create a RAID0 on top of them to form the RAID10
[iyunv@localhost ~]# mdadm -C /dev/md101 -l 1 -n 2 /dev/sd{b1,c1}   # create the first RAID1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array?
Continue creating array? (y/n) y   # confirm that the array should be created
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md101 started.
[iyunv@localhost ~]# mdadm -C /dev/md102 -l 1 -n 2 /dev/sd{d1,e1}   # create the second RAID1
[iyunv@localhost ~]# mdadm -C /dev/md10 -l 0 -n 2 /dev/md{101,102}   # create the RAID0 on top of the two RAID1 arrays
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
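For comparison, mdadm can also build the equivalent layout in a single step with its native raid10 level instead of nesting RAID1 under RAID0. A minimal sketch, assuming the same four partitions and that the /dev/md10 name is free (this would replace the nested approach above, not supplement it):

# One-step RAID10 across the four partitions (default near layout, 2 copies)
mdadm -C /dev/md10 -l 10 -n 4 /dev/sd{b1,c1,d1,e1}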
3. Generate the configuration file mdadm.conf
[iyunv@localhost ~]# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/md101 /dev/md102" >> /etc/mdadm.conf
[iyunv@localhost ~]# mdadm -Ds >> /etc/mdadm.conf
[iyunv@localhost ~]# cat /proc/mdstat   # check RAID status; "active" means the array is running normally
Personalities : [raid1] [raid0]
md10 : active raid0 md101[0] md102[1] #raid10
4188160 blocks super 1.2 512k chunks
md102 : active raid1 sdd1[0] sde1[1]   # one of the RAID1 arrays underneath the RAID10
2095104 blocks super 1.2 [2/2] [UU]
md101 : active raid1 sdb1[0] sdc1[1]   # one of the RAID1 arrays underneath the RAID10
2095104 blocks super 1.2 [2/2] [UU]
unused devices: <none>
[iyunv@localhost ~]#
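For reference, the resulting /etc/mdadm.conf is simply the DEVICE line echoed above plus one ARRAY line per array written by mdadm -Ds. It should look roughly like the following sketch; the UUIDs and names here are placeholders, yours will differ:

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/md101 /dev/md102
ARRAY /dev/md101 metadata=1.2 name=localhost:101 UUID=<uuid-of-md101>
ARRAY /dev/md102 metadata=1.2 name=localhost:102 UUID=<uuid-of-md102>
ARRAY /dev/md10  metadata=1.2 name=localhost:10  UUID=<uuid-of-md10>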
4. Stop the RAID10 stack and re-activate it
[iyunv@localhost ~]# mdadm -Ss   # stop all arrays
[iyunv@localhost ~]# mdadm -As   # re-assemble/activate them from the config file
mdadm: /dev/md101 has been started with 2 drives.
mdadm: /dev/md102 has been started with 2 drives.
mdadm: /dev/md10 has been started with 2 drives.
[iyunv@localhost ~]#
[iyunv@localhost ~]# ls /dev/md10*
/dev/md10 /dev/md101 /dev/md102
LVM part
5. Carve three partitions out of the RAID device for LVM to use. (You could also skip partitioning here and use /dev/md10 directly; see the sketch after the partitioning output below.)
[iyunv@localhost ~]# fdisk /dev/md10
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-8376319, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-8376319, default 8376319): +500M
Partition 1 of type Linux and of size 500 MiB is set
(The steps for the remaining partitions are omitted here ......)
Command (m for help): p
Disk /dev/md10: 4288 MB, 4288675840 bytes, 8376320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk label type: dos
Disk identifier: 0xe9ef5c16
Device Boot Start End Blocks Id System
/dev/md10p1 2048 1026047 512000 83 Linux
/dev/md10p2 1026048 2050047 512000 83 Linux
/dev/md10p3 2050048 3074047 512000 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
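As mentioned in the section heading, the partitioning step is optional. If you would rather hand the whole array to LVM, a minimal sketch of that alternative (instead of, not in addition to, the partitions created above):

# Use the entire RAID10 device as a single physical volume and build the VG on it
pvcreate /dev/md10
vgcreate vg_md10 /dev/md10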
6. Create the physical volumes (PV)
[iyunv@localhost ~]# pvcreate /dev/md{10p1,10p2,10p3}
Physical volume "/dev/md10p1" successfully created
Physical volume "/dev/md10p2" successfully created
Physical volume "/dev/md10p3" successfully created
7. Create the volume group (VG)
[iyunv@localhost ~]# vgcreate vg_md10 /dev/md10{p1,p2,p3}
Volume group "vg_md10" successfully created
[iyunv@localhost ~]# vgdisplay   # inspect the VG: shows its name, UUID, size and so on; vgs gives a compact view of used/free space
--- Volume group ---
VG Name rhel
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 14.00 GiB
PE Size 4.00 MiB
Total PE 3585
Alloc PE / Size 3584 / 14.00 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID FlLk0M-Tzpg-wwGU-LEY6-enhG-uwYO-selaJr
| --- Volume group ---
VG Name vg_md10
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 1.45 GiB
PE Size 4.00 MiB
Total PE 372
Alloc PE / Size 0 / 0
Free PE / Size 372 / 1.45 GiB
VG UUID oFxmNh-X5cv-cdXo-YsH4-ahSw-5Wb7-MlGPZu
8. Create the logical volumes (LV)
[iyunv@localhost ~]#
[iyunv@localhost ~]# lvcreate -L 1G -n web vg_md10   # create the "web" logical volume
Logical volume "web" created.
[iyunv@localhost ~]# lvcreate -L 300M -n data vg_md10   # create the "data" logical volume
Logical volume "data" created.
[iyunv@localhost ~]# lvdisplay   # list the logical volumes; lvs gives a compact view of LV sizes
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID FJLJ2i-nNOj-kSq0-053A-jCFT-bp0l-ByVBG0
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2016-06-02 16:48:04 +0800
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
| --- Logical volume ---
LV Path /dev/rhel/swap
LV Name swap
VG Name rhel
LV UUID UuE9XV-krhI-9bMp-Pdsz-umux-tT8g-FWI6uu
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2016-06-02 16:48:04 +0800
LV Status available
# open 2
LV Size 4.00 GiB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
| --- Logical volume ---
LV Path /dev/vg_md10/web
LV Name web
VG Name vg_md10
LV UUID 0Mdc58-CV2l-1mdi-5DN9-X1CC-V1c3-ZJ5KDq
LV Write Access read/write
LV Creation host, time localhost, 2016-06-20 15:09:07 +0800
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 256
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
| --- Logical volume ---
LV Path /dev/vg_md10/data
LV Name data
VG Name vg_md10
LV UUID etznxv-xobT-rihm-kLXt-m0XM-NgRw-YRIj7n
LV Write Access read/write
LV Creation host, time localhost, 2016-06-20 15:09:43 +0800
LV Status available
# open 0
LV Size 300.00 MiB
Current LE 75
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:3
[iyunv@localhost ~]#
[iyunv@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- 14.00g 4.00m
vg_md10 3 2 0 wz--n- 1.45g 164.00m
[iyunv@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 4.00g
data vg_md10 -wi-a----- 300.00m
web vg_md10 -wi-a----- 1.00g
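lvcreate can also size a volume in physical extents, which is handy when you want to consume exactly what is left in the VG. A minimal sketch (the LV name "backup" is just an example, not part of the walkthrough):

# Allocate 50 PEs (= 200 MiB at the default 4 MiB PE size) ...
lvcreate -l 50 -n backup vg_md10
# ... or, instead, take every remaining free extent in the volume group
lvcreate -l 100%FREE -n backup vg_md10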
9. Format the logical volumes and mount them
[iyunv@localhost ~]# mkfs.xfs /dev/vg_md10/web
meta-data=/dev/vg_md10/web isize=256 agcount=8, agsize=32640 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=261120, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=624, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[iyunv@localhost ~]# mkfs.xfs /dev/vg_md10/data
meta-data=/dev/vg_md10/data isize=256 agcount=8, agsize=9600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=76800, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=624, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[iyunv@localhost ~]# mkdir -p /opt/test/{web,data}   # create the two mount points
[iyunv@localhost ~]# cat >> /etc/fstab << EOF   # add two entries so they are mounted at boot
> /dev/vg_md10/web /opt/test/web/ xfs defaults 0 0
> /dev/vg_md10/data /opt/test/data/ xfs defaults 0 0
> EOF
[iyunv@localhost ~]# mount -a   # re-read fstab and mount anything not yet mounted
[iyunv@localhost ~]# df -h   # check the sizes of the mounted logical volumes
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 10G 3.2G 6.9G 32% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 913M 84K 913M 1% /dev/shm
tmpfs 913M 9.0M 904M 1% /run
tmpfs 913M 0 913M 0% /sys/fs/cgroup
/dev/sr0 3.8G 3.8G 0 100% /mnt
/dev/sda1 497M 158M 340M 32% /boot
tmpfs 183M 16K 183M 1% /run/user/42
tmpfs 183M 0 183M 0% /run/user/0
/dev/mapper/vg_md10-web 1018M 33M 986M 4% /opt/test/web
/dev/mapper/vg_md10-data 298M 16M 283M 6% /opt/test/data
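If you prefer fstab entries that do not depend on the device path, blkid reports each filesystem's UUID and the entries can reference that instead. A minimal sketch; the UUID values below are placeholders, use the ones blkid prints on your system:

blkid /dev/vg_md10/web /dev/vg_md10/data
# the fstab entries would then look like:
# UUID=<uuid-of-web>   /opt/test/web   xfs  defaults  0 0
# UUID=<uuid-of-data>  /opt/test/data  xfs  defaults  0 0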
10. Copy some sample data into web and data
[iyunv@localhost ~]#
[iyunv@localhost ~]# cp -r /etc/ /opt/test/web/
[iyunv@localhost ~]# cp -r /etc/ /opt/test/data/
[iyunv@localhost ~]# ls !$
ls /opt/test/data/
etc
[iyunv@localhost ~]# systemctl reboot   # reboot the system
[iyunv@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md10p1 vg_md10 lvm2 a-- 496.00m 0
/dev/md10p2 vg_md10 lvm2 a-- 496.00m 0
/dev/md10p3 vg_md10 lvm2 a-- 496.00m 164.00m
/dev/sda2 rhel lvm2 a-- 14.00g 4.00m
[iyunv@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- 14.00g 4.00m
vg_md10 3 2 0 wz--n- 1.45g 164.00m
[iyunv@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 4.00g
data vg_md10 -wi-ao---- 300.00m
web vg_md10 -wi-ao---- 1.00g
[iyunv@localhost ~]#
11. Simulate a RAID10 failure: disk /dev/sdb1 fails
[iyunv@localhost ~]# mdadm /dev/md101 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md101
[iyunv@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md101[0] md102[1]
4188160 blocks super 1.2 512k chunks
md102 : active raid1 sdd1[0] sde1[1]
2095104 blocks super 1.2 [2/2] [UU]
md101 : active raid1 sdc1[1] sdb1[0](F)   # (F) marks the faulty member; this RAID1 now shows one failed disk
2095104 blocks super 1.2 [2/1] [_U]
unused devices: <none>
[iyunv@localhost ~]# mdadm /dev/md101 -r /dev/sdb1   # remove the failed disk from the array
mdadm: hot removed /dev/sdb1 from /dev/md101
[iyunv@localhost ~]# more /proc/mdstat   # check RAID status
Personalities : [raid1] [raid0]
md10 : active raid0 md101[0] md102[1]
4188160 blocks super 1.2 512k chunks
md102 : active raid1 sdd1[0] sde1[1]
2095104 blocks super 1.2 [2/2] [UU]
| md101 : active raid1 sdc1[1]
2095104 blocks super 1.2 [2/1] [_U]
unused devices: <none>
[iyunv@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 4.00g
data vg_md10 -wi-ao---- 300.00m
web vg_md10 -wi-ao---- 1.00g
[iyunv@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- 14.00g 4.00m
vg_md10 3 2 0 wz--n- 1.45g 164.00m
[iyunv@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md10p1 vg_md10 lvm2 a-- 496.00m 0
/dev/md10p2 vg_md10 lvm2 a-- 496.00m 0
/dev/md10p3 vg_md10 lvm2 a-- 496.00m 164.00m
/dev/sda2 rhel lvm2 a-- 14.00g 4.00m
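mdadm -D gives a more detailed picture of a degraded array than /proc/mdstat (state, which slot was removed, event counters). A minimal sketch:

# Detailed status of the degraded mirror
mdadm -D /dev/md101
# The metadata of an individual member can be inspected the same way
mdadm -E /dev/sdc1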
12. Add a new disk into the array
[iyunv@localhost ~]# mdadm /dev/md101 -a /dev/sdf1
mdadm: added /dev/sdf1
######### Open another terminal and run "watch -n 1 cat /proc/mdstat" to watch the rebuild progress onto the newly added disk #########
##################### The RAID membership has changed, so mdadm.conf needs to be regenerated #####################
Testing the boot failure (a new device was added, so a new configuration file should be generated; I did not regenerate it here). Explanation below:
If /etc/mdadm.conf was not saved properly when the arrays were created, the system will not start your newly built arrays at boot. What then? We can activate the RAID another way, by re-assembling arrays that were created previously.
--assemble (short form -A) reads the metadata on the underlying devices and assembles them into an active array. If we already know which devices the array consists of, we can list those devices explicitly to start it.
[iyunv@localhost ~]# mdadm -A /dev/md101 /dev/sd{c1,d1,e1,f1}   # md101 is the array that had the failure
[iyunv@localhost ~]# mdadm -A /dev/md10 /dev/md{101,102}
If you do not know which devices a RAID array is made of, you can use:
[iyunv@localhost ~]# mdadm -E /dev/md101   # or md102, or /dev/sdb1, c1, d1, e1, f1
My own fix was less cumbersome than that, though: just put "DEVICE /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1" in /etc/mdadm.conf (the line written earlier, which caused the error, listed /dev/sdb1; change it to the current /dev/sdf1 and the array starts again!). (Add it in ahead of time.)
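The least error-prone fix, though, is simply to regenerate the whole file once the new membership is in place. A minimal sketch, assuming all three arrays are currently assembled:

# Rewrite /etc/mdadm.conf from the arrays that are actually running now
echo "DEVICE /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/md101 /dev/md102" > /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf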
####################### Here too you can run "watch -n 1 cat /proc/mdstat" to watch the state change in real time #######################
Once the rebuild completes, run diff to check whether the files are still identical.
13. Compare the test files
[iyunv@localhost ~]# diff -r /etc /opt/test/web/etc/   # on my system this printed the differences below; try it yourself if you are interested
diff -r /etc/cups/subscriptions.conf /opt/test/web/etc/cups/subscriptions.conf
2,12c2,3
< # Written by cupsd on 2016-06-20 15:22
< NextSubscriptionId 9
< <Subscription 8>
< Events printer-state-changed printer-restarted printer-shutdown printer-stopped printer-added printer-deleted job-state-changed job-created job-completed job-stopped
< Owner root
< Recipient dbus://
< LeaseDuration 3600
< Interval 0
< ExpirationTime 1466410890
< NextEventId 1
< </Subscription>
---
> # Written by cupsd on 2016-06-20 09:52
> NextSubscriptionId 8
diff -r /etc/cups/subscriptions.conf.O /opt/test/web/etc/cups/subscriptions.conf.O
2c2
< # Written by cupsd on 2016-06-20 09:52
---
> # Written by cupsd on 2016-06-20 01:19
3a4,12
> <Subscription 7>
> Events printer-state-changed printer-restarted printer-shutdown printer-stopped printer-added printer-deleted job-state-changed job-created job-completed job-stopped
| > Owner root
> Recipient dbus://
> LeaseDuration 3600
> Interval 0
> ExpirationTime 1466360342
> NextEventId 1
> </Subscription>
diff: /opt/test/web/etc/grub2.cfg: No such file or directory
diff: /opt/test/web/etc/localtime: No such file or directory
diff: /opt/test/web/etc/sysconfig/network-scripts/ifdown: No such file or directory
diff: /opt/test/web/etc/sysconfig/network-scripts/ifup: No such file or directory
[iyunv@localhost ~]#
[iyunv@localhost ~]#
[iyunv@localhost ~]#
14. The file sizes have not changed
[iyunv@localhost ~]# ll -d /opt/test/web/etc/ /opt/test/data/etc/ /etc
drwxr-xr-x. 137 root root 8192 Jun 20 15:20 /etc
drwxr-xr-x 137 root root 8192 Jun 20 15:18 /opt/test/data/etc/
drwxr-xr-x 137 root root 8192 Jun 20 15:18 /opt/test/web/etc/
15. Extend the VG
[iyunv@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- 14.00g 4.00m
vg_md10 3 2 0 wz--n- 1.45g 164.00m
The volume group no longer has enough space; only 164 MB remains free.
[iyunv@localhost ~]# fdisk /dev/md10   # check the space available on the RAID device and create more partitions to add to LVM
Add the new partitions:
Device Boot Start End Blocks Id System
/dev/md10p1 2048 1026047 512000 83 Linux
/dev/md10p2 1026048 2050047 512000 83 Linux
/dev/md10p3 2050048 3074047 512000 83 Linux
/dev/md10p4 3074048 8376319 2651136 5 Extended
/dev/md10p5 3076096 4100095 512000 83 Linux
/dev/md10p6 4102144 5126143 512000 83 Linux
/dev/md10p7 5128192 6152191 512000 83 Linux
[iyunv@localhost ~]# partprobe   # the new partitions were not detected here, so re-read the partition table
[iyunv@localhost ~]# ls /dev/md10*   # now they show up
/dev/md10 /dev/md101 /dev/md102 /dev/md10p1 /dev/md10p2 /dev/md10p3 /dev/md10p4 /dev/md10p5 /dev/md10p6 /dev/md10p7
Add partitions 5, 6 and 7:
[iyunv@localhost ~]# pvcreate /dev/md10p{5,6,7}   # create the PVs
Physical volume "/dev/md10p5" successfully created
Physical volume "/dev/md10p6" successfully created
Physical volume "/dev/md10p7" successfully created
[iyunv@localhost ~]# vgextend vg_md10 /dev/md10p{5,6,7}   # add the PVs to the VG
Volume group "vg_md10" successfully extended
[iyunv@localhost ~]#
[iyunv@localhost ~]# vgs   # the VG has changed
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- 14.00g 4.00m
vg_md10 6 2 0 wz--n- 2.91g 1.61g
[iyunv@localhost ~]# lvextend -L +1G /dev/vg_md10/web   # grow web by 1 GB
Size of logical volume vg_md10/web changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
Logical volume web successfully resized.
[iyunv@localhost ~]# lvextend -L +500M /dev/vg_md10/data   # grow data by 500 MB
Size of logical volume vg_md10/data changed from 300.00 MiB (75 extents) to 800.00 MiB (200 extents).
Logical volume data successfully resized.
[iyunv@localhost ~]# xfs_growfs /dev/vg_md10/web   # grow the XFS filesystem; on RHEL6 you would use resize2fs for ext4 (note the difference)
meta-data=/dev/mapper/vg_md10-web isize=256 agcount=8, agsize=32640 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=261120, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=624, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 261120 to 524288
[iyunv@localhost ~]# xfs_growfs /dev/vg_md10/data
meta-data=/dev/mapper/vg_md10-data isize=256 agcount=8, agsize=9600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=76800, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=624, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 76800 to 204800
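As a side note, lvextend can grow the filesystem in the same step with -r/--resizefs (it calls fsadm, which in turn runs xfs_growfs or resize2fs as appropriate), so the extend-then-grow pair of commands above can be collapsed. A minimal sketch:

# Grow the LV and the XFS filesystem on it in one step
lvextend -r -L +1G /dev/vg_md10/web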
[iyunv@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 10G 3.2G 6.9G 32% /
devtmpfs 898M 0 898M 0% /dev
tmpfs 913M 88K 913M 1% /dev/shm
tmpfs 913M 9.0M 904M 1% /run
tmpfs 913M 0 913M 0% /sys/fs/cgroup
/dev/sr0 3.8G 3.8G 0 100% /mnt
/dev/mapper/vg_md10-data 798M 51M 747M 7% /opt/test/data
/dev/mapper/vg_md10-web 2.0G 68M 2.0G 4% /opt/test/web
/dev/sda1 497M 158M 340M 32% /boot
tmpfs 183M 4.0K 183M 1% /run/user/42
tmpfs 183M 12K 183M 1% /run/user/0
[iyunv@localhost ~]#
[iyunv@localhost ~]# diff /etc/passwd /opt/test/web/etc/passwd   # compare file contents: no differences
[iyunv@localhost ~]# diff /etc/passwd /opt/test/data/etc/passwd
[iyunv@localhost ~]#