Introduction to RAID:
RAID is short for Redundant Array of Independent Disks, sometimes simply called a disk array. In short, RAID is a technique that combines multiple independent physical disks in different ways into one disk group (a logical disk), providing higher storage performance than a single disk as well as data redundancy. The different ways of organizing the member disks are called RAID levels; common levels include RAID 0, RAID 1, RAID 5, and RAID 10.
RAID 0 (striping): improved read and write performance; no redundancy; space efficiency N (all capacity usable); at least 2 disks.
RAID 1 (mirroring): improved read performance, reduced write performance; redundant; space efficiency 1/2; at least 2 disks.
RAID 0+1 / RAID 1+0: improved read performance, reduced write performance; redundant; space efficiency 1/2; at least 4 disks. RAID 10 generally performs better than RAID 01.
RAID 5: improved read and write performance; redundant; space efficiency (N-1)/N; at least 3 disks.
Lab environment:
Virtualization: VMware
OS: CentOS 6.6, minimal install
Goals:
1. Use software RAID (mdadm) to build RAID 0, RAID 1, and RAID 5 arrays.
2. Simulate disk changes: expand the RAID capacity and replace a single failed disk.
3. Unmount and delete a RAID array.
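As a quick cross-check of the per-level capacity rules listed above, the usable capacity can be computed from the disk count and per-disk size. The helper below is hypothetical (not part of the original lab); it simply encodes the table: RAID 0 uses all N disks, RAID 1 half, RAID 5 loses one disk to parity, RAID 10 half.

```shell
# Hypothetical helper: usable capacity in GB for a given RAID level,
# member count n, and per-disk size in GB.
usable_capacity() {
  level=$1; n=$2; size_gb=$3
  case $level in
    raid0)  echo $(( n * size_gb )) ;;        # striping: all space usable
    raid1)  echo "$size_gb" ;;                # mirroring: one disk's worth
    raid5)  echo $(( (n - 1) * size_gb )) ;;  # one disk's worth of parity
    raid10) echo $(( n / 2 * size_gb )) ;;    # striped mirrors: half
    *)      echo "unknown level: $level" >&2; return 1 ;;
  esac
}

usable_capacity raid0 2 20   # 40
usable_capacity raid5 3 20   # 40
usable_capacity raid1 2 20   # 20
```

With the eight 20 GB disks used in this lab, this matches the sizes reported by mdadm later on (about 40 GB for the two-disk RAID 0 and the three-disk RAID 5).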
1. Set up the disks (add eight 20 GB virtual disks in the VM) and install the mdadm management tool.
[iyunv@study ~]# fdisk -l    # check disk status
...    # output omitted
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4a536351
...    # output omitted
Disk /dev/sdh: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[iyunv@study ~]# yum install mdadm -y    # install the RAID management tool
2. Creating RAID 0
2.1 Create partitions and change the partition type. /dev/sdb and /dev/sdc will be used for the array.
[iyunv@study ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x4a536351.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4a536351
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +
Using default value 2610
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4a536351
Device Boot Start End Blocks Id System
/dev/sdb1 1 2610 20964793+ 83 Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4a536351
Device Boot Start End Blocks Id System
/dev/sdb1 1 2610 20964793+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
|
Notes: `fdisk /dev/sdb` partitions /dev/sdb;
enter "m" for help;
enter "p" to print the partition table before the changes;
enter "n" to create a new partition ("e" creates an extended partition, "p" a primary partition);
enter "t" to change the partition type;
enter "fd" to set the type to Linux raid autodetect;
enter "p" to show the resulting partition table;
enter "w" to write the changes and exit.
Partition /dev/sdc the same way.
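The interactive session above can also be scripted when the same layout has to be applied to several disks. A sketch (this is an assumption about scripting fdisk, not part of the original article): build the exact keystroke sequence and pipe it into fdisk. Repartitioning is destructive, so the actual fdisk invocation is left commented out.

```shell
# The keystroke sequence from the walkthrough above: n (new partition),
# p (primary), 1 (partition number), two empty lines (accept the default
# first and last cylinder), t (change type), fd (Linux raid autodetect),
# w (write and exit).
fdisk_keys() {
  printf 'n\np\n1\n\n\nt\nfd\nw\n'
}

# Destructive; uncomment only for a disk you intend to repartition:
# fdisk_keys | fdisk /dev/sdc
fdisk_keys | od -c | head -n 2   # inspect the byte sequence instead
```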
2.2 Reboot to reload the partition table, then check the disk status
[iyunv@study ~]# fdisk -l    # check partition status
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4a536351
Device Boot Start End Blocks Id System
/dev/sdb1 1 2610 20964793+ fd Linux raid autodetect
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2440f7ee
Device Boot Start End Blocks Id System
/dev/sdc1 1 2610 20964793+ fd Linux raid autodetect
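A small pre-flight check can confirm that every partition intended for the array carries type fd (Linux raid autodetect). The sketch below (a hypothetical helper, not from the article) runs against a captured excerpt of the listing above so it is self-contained; on the live system you would feed it the output of `fdisk -l` itself.

```shell
# Write a sample of the partition-table lines shown above.
cat > /tmp/parts.sample <<'EOF'
/dev/sdb1               1        2610    20964793+  fd  Linux raid autodetect
/dev/sdc1               1        2610    20964793+  fd  Linux raid autodetect
EOF

# Field 5 is the partition type id; print only partitions already set to fd.
awk '$5 == "fd" { print $1 " ok" }' /tmp/parts.sample
```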
2.3 Create the RAID 0 array and check its status
[iyunv@study ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sd{b,c}1    # create an array named /dev/md0: auto-create the device node, level RAID 0, with the 2 member devices /dev/sdb1 and /dev/sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[iyunv@study ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
41910272 blocks super 1.2 512k chunks
unused devices: <none>
[iyunv@study ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Sep 7 22:54:41 2017
Raid Level : raid0
Array Size : 41910272 (39.97 GiB 42.92 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Sep 7 22:54:41 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : study:0 (local to host study)
UUID : f5f06342:4d47f03d:260bde9d:5a678d17
Events : 0
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
Notes:
-C / --create: create an array.
-a / --auto: agree to create the device node; without it you would first have to create a RAID device with mknod, so -a yes is recommended.
-l / --level: array mode; supported modes are linear, raid0, raid1, raid4, raid5, raid6, raid10, multipath, faulty, and container.
-n / --raid-devices: number of active devices in the array; this number plus the number of spares should equal the total number of devices.
/dev/md0: device name of the array.
/dev/sd{b,c}1: the member partitions used to build the array.
Fields in the mdadm -D output:
Raid Level: array level.
Array Size: array capacity.
Raid Devices: number of active RAID members.
Total Devices: total number of member devices, including redundant disks or partitions such as spares.
State: one of clean, degraded, or recovering; clean means healthy, degraded means a member has a problem, recovering means the array is being rebuilt or constructed.
Active Devices: number of activated members.
Working Devices: number of members working normally.
Failed Devices: number of failed members.
Spare Devices: number of spares, which take over with another disk or partition when a member fails.
UUID: the array's UUID, unique within the system.

mdadm (short for multiple devices admin) is the standard software RAID management tool on Linux.
Syntax: mdadm [mode] <raiddevice> [options] <component-devices>
Create mode: -C creates an array; with it, -l sets the level, -n the number of devices, -a yes/no auto-creates the device file, -c sets the chunk size, and -x the number of hot spares.
Manage mode: --add | --remove | --fail, e.g. mdadm /dev/md# --fail /dev/sdX# to simulate the failure of a member.
Monitor mode: -F.
-D / --detail: show detailed information about an array.
Grow mode: -G.
Assemble mode: -A, to restart an existing array from its members.
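Besides `mdadm -D`, the kernel's own view in /proc/mdstat is the quickest health check. The helper below is hypothetical (not from the article) and condenses it to one line per array: name, level, and member list. It is demonstrated against a sample copied from the transcript so it runs anywhere; on a live system you would pass /proc/mdstat instead.

```shell
# Summarize /proc/mdstat: for each "mdN : active LEVEL members..." line,
# print the array name ($1), the level ($4), and the member devices.
summarize_mdstat() {
  awk '/^md/ { printf "%s level=%s members=", $1, $4
               for (i = 5; i <= NF; i++) printf "%s ", $i
               print "" }' "$1"
}

cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41895936 blocks super 1.2 512k chunks
unused devices: <none>
EOF

summarize_mdstat /tmp/mdstat.sample
```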
2.4 Format the RAID 0 array and mount it
[iyunv@study dev]# mkfs.ext4 /dev/md0    # create an ext4 filesystem on the array
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477568 blocks
523878 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[iyunv@study dev]# mkdir /raid{0,1,5,10}
[iyunv@study dev]# mount /dev/md0 /raid0    # mount the array /dev/md0
[iyunv@study dev]# cd /raid0
[iyunv@study raid0]# ls
lost+found
[iyunv@study raid0]# df -l    # /dev/md0 is mounted, with about 40 GB of space
Filesystem 1K-blocks Used Available Use% Mounted on
...
/dev/md0 41121088 49032 38976544 1% /raid0
2.5 Configure mounting at boot, then reboot to confirm that RAID 0 mounts automatically
[iyunv@study ~]# echo 'DEVICE /dev/sdb1 /dev/sdc1' >> /etc/mdadm.conf
[iyunv@study ~]# mdadm -Ds /dev/md0 >> /etc/mdadm.conf
[iyunv@study raid0]# vi /etc/mdadm.conf    # check the configuration file
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 metadata=1.2 name=study:0 UUID=ee22d16e:210d6495:3b128114:aa36657f
[iyunv@study raid0]# vi /etc/fstab    # edit the boot-time mount table and append the last line below
/dev/md0 /raid0 ext4 defaults 1 1
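Before rebooting, it is worth validating the new entry: a bad /etc/fstab line can leave the system unbootable. A minimal sketch (hypothetical check, not from the article): confirm the line has the six fields fstab expects, then on the real system run `mount -a`, which applies every fstab entry and surfaces errors immediately.

```shell
# fstab expects six fields per line:
# device, mount point, fs type, options, dump flag, fsck pass.
echo '/dev/md0 /raid0 ext4 defaults 1 1' > /tmp/fstab.line
awk 'NF != 6 { print "bad entry: " $0; exit 1 }
     { print "ok: " $1 " -> " $2 }' /tmp/fstab.line

# On the live system, apply fstab without rebooting:
# mount -a        # mounts every fstab entry that is not yet mounted
```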
3. Creating RAID 1
3.1 Create partitions and change the partition type. /dev/sdd and /dev/sde will be used for RAID 1.
[iyunv@study ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x3208652c.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +
Using default value 2610
Command (m for help): p
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3208652c
Device Boot Start End Blocks Id System
/dev/sdd1 1 2610 20964793+ 83 Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3208652c
Device Boot Start End Blocks Id System
/dev/sdd1 1 2610 20964793+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
|
Notes: `fdisk /dev/sdd` partitions /dev/sdd; the keystrokes are the same as in section 2.1 (m for help, p to print, n then p for a new primary partition, t then fd to set the type to Linux raid autodetect, w to write). Partition /dev/sde the same way.
3.2 Check the partition status
[iyunv@study raid0]# fdisk -l
...
Disk /dev/sde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6e08acaf
Device Boot Start End Blocks Id System
/dev/sde1 1 2610 20964793+ fd Linux raid autodetect
Disk /dev/sdd: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3208652c
Device Boot Start End Blocks Id System
/dev/sdd1 1 2610 20964793+ fd Linux raid autodetect
3.3 Create the RAID 1 array and check its status
[iyunv@study raid0]# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sd{d,e}1
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[iyunv@study raid0]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde1[1] sdd1[0]
20948352 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdc1[1] sdb1[0]
41895936 blocks super 1.2 512k chunks
unused devices: <none>
[iyunv@study raid0]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Fri Sep 8 18:21:30 2017
Raid Level : raid1
Array Size : 20948352 (19.98 GiB 21.45 GB)
Used Dev Size : 20948352 (19.98 GiB 21.45 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Sep 8 18:26:33 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : study:1 (local to host study)
UUID : 4ea56c89:08d9c291:0c965748:e2235b14
Events : 17
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
3.4 Format the RAID 1 array and mount it
[iyunv@study raid0]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5237088 blocks
261854 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[iyunv@study raid0]# mount /dev/md1 /raid1
3.5 Configure mounting at boot and reboot to confirm
[iyunv@study ~]# echo 'DEVICE /dev/sdd1 /dev/sde1' >> /etc/mdadm.conf
[iyunv@study ~]# mdadm -Ds /dev/md1 >> /etc/mdadm.conf
[iyunv@study raid1]# vi /etc/fstab    # edit the boot-time mount table and append the line below
/dev/md1 /raid1 ext4 defaults 1 1
4. Creating RAID 5
4.1 Create partitions and change the partition type. The resulting partition layout:
Disk /dev/sdf: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x795371df
Device Boot Start End Blocks Id System
/dev/sdf1 1 2610 20964793+ fd Linux raid autodetect
Disk /dev/sdg: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x794d2917
Device Boot Start End Blocks Id System
/dev/sdg1 1 2610 20964793+ fd Linux raid autodetect
Disk /dev/sdh: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x5c8d97bd
Device Boot Start End Blocks Id System
/dev/sdh1 1 2610 20964793+ fd Linux raid autodetect
Disk /dev/sdi: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x46a04759
Device Boot Start End Blocks Id System
/dev/sdi1 1 2610 20964793+ fd Linux raid autodetect
4.2 Create the RAID 5 array and check its status
[iyunv@study raid1]# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sd{f,g,h,i}1
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[iyunv@study raid1]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdh1[4] sdi1[3](S) sdg1[1] sdf1[0]
41895936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=>...................] recovery = 8.8% (1856000/20947968) finish=2.0min speed=154666K/sec
md1 : active raid1 sde1[1] sdd1[0]
20948352 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdb1[0] sdc1[1]
41895936 blocks super 1.2 512k chunks
[iyunv@study raid1]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Sep 8 22:06:07 2017
Raid Level : raid5
Array Size : 41895936 (39.96 GiB 42.90 GB)
Used Dev Size : 20947968 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Sep 8 22:08:18 2017
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : study:5 (local to host study)
UUID : 66e3edbd:85def510:abece399:ae8c12dc
Events : 18
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
1 8 97 1 active sync /dev/sdg1
4 8 113 2 spare rebuilding /dev/sdh1
3 8 129 - spare /dev/sdi1
Notes: "4 8 113 2 spare rebuilding /dev/sdh1" is a member that is not yet active; it is still being built and data is being copied onto it. "3 8 129 - spare /dev/sdi1" is the hot spare.
4.3 Format the RAID 5 array and mount it
[iyunv@study raid1]# mkfs.ext4 /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10473984 blocks
523699 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[iyunv@study raid1]# mount /dev/md5 /raid5
[iyunv@study raid1]# cd /raid5
[iyunv@study raid5]# ls
lost+found
[iyunv@study raid5]# df -l
Filesystem 1K-blocks Used Available Use% Mounted on
...
/dev/md0 41106752 49032 38962924 1% /raid0
/dev/md1 20488188 44992 19395780 1% /raid1
/dev/md5 41106752 49032 38962924 1% /raid5
4.4 Configure mounting at boot and reboot to confirm
[iyunv@study raid5]# echo 'DEVICE /dev/sdh1 /dev/sdf1 /dev/sdg1 /dev/sdi1' >> /etc/mdadm.conf
[iyunv@study raid5]# mdadm -Ds /dev/md5 >> /etc/mdadm.conf
[iyunv@study raid5]# vi /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
DEVICE /dev/sdd1 /dev/sde1
DEVICE /dev/sdh1 /dev/sdi1 /dev/sdf1 /dev/sdg1
ARRAY /dev/md0 metadata=1.2 name=study:0 UUID=ee22d16e:210d6495:3b128114:aa36657f
ARRAY /dev/md1 metadata=1.2 name=study:1 UUID=4ea56c89:08d9c291:0c965748:e2235b14
ARRAY /dev/md5 metadata=1.2 spares=1 name=study:5 UUID=66e3edbd:85def510:abece399:ae8c12dc
[iyunv@study raid5]# vi /etc/fstab    # append the line below
/dev/md5 /raid5 ext4 defaults 1 1
[iyunv@study raid5]# reboot
Broadcast message from root@study
(/dev/pts/0) at 22:43 ...
The system is going down for reboot NOW!
[iyunv@study raid5]#
Last login: Fri Sep 8 20:59:57 2017 from 172.16.10.229
[iyunv@study ~]# cd /raid5
[iyunv@study raid5]# ls
lost+found
[iyunv@study raid5]# df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 20027260 707640 18295620 4% /
tmpfs 1954372 0 1954372 0% /dev/shm
/dev/sda6 9948012 22488 9413524 1% /home
/dev/sda5 20027260 45120 18958140 1% /usr/local
/dev/md0 41106752 49032 38962924 1% /raid0
/dev/md1 20488188 44992 19395780 1% /raid1
/dev/md5 41106752 49032 38962924 1% /raid5
5. Simulating disk changes: single-disk failure and capacity expansion
5.1 Using RAID 5 as the example, simulate a single-disk failure
[iyunv@study raid5]# mdadm /dev/md5 -f /dev/sdg1    # simulate a failure of /dev/sdg1
mdadm: set /dev/sdg1 faulty in /dev/md5
[iyunv@study raid5]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Sep 8 22:06:07 2017
Raid Level : raid5
Array Size : 41895936 (39.96 GiB 42.90 GB)
Used Dev Size : 20947968 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Sep 8 23:07:31 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : study:5 (local to host study)
UUID : 66e3edbd:85def510:abece399:ae8c12dc
Events : 38
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
3 8 129 1 active sync /dev/sdi1
4 8 113 2 active sync /dev/sdh1
1 8 97 - faulty /dev/sdg1
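Spotting a faulty member in a long `mdadm -D` listing can be automated. The sketch below is a hypothetical helper (not from the article): it flags any device line not in the "active sync" state. It runs against a captured sample mirroring the transcript above; on a live system you would feed it `mdadm -D /dev/md5` directly.

```shell
# Sample of the device-state table from mdadm -D, as shown above.
cat > /tmp/md5.detail <<'EOF'
    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       3       8      129        1      active sync   /dev/sdi1
       4       8      113        2      active sync   /dev/sdh1
       1       8       97        -      faulty   /dev/sdg1
EOF

# The device path is the last field on each line; flag faulty members.
awk '/faulty/ { print "needs replacement: " $NF }' /tmp/md5.detail
```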
The experiment shows that when a member disk fails, the hot spare automatically takes over for it, and the array rebuilds itself within a short time.
5.2 Remove the failed /dev/sdg1, then add it back to the RAID 5 array, simulating the replacement of the failed disk with a new one.
[iyunv@study raid5]# mdadm /dev/md5 -r /dev/sdg1    # remove the failed disk /dev/sdg1
mdadm: hot removed /dev/sdg1 from /dev/md5
[iyunv@study raid5]# mdadm /dev/md5 -a /dev/sdg1    # simulate a new /dev/sdg1 replacing the removed disk
mdadm: added /dev/sdg1
[iyunv@study raid5]# cat /proc/mdstat    # within the array, /dev/sdg1 is now a spare
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdg1[5](S) sdi1[3] sdh1[4] sdf1[0]
41895936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
[iyunv@study raid5]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Sep 8 22:06:07 2017
Raid Level : raid5
Array Size : 41895936 (39.96 GiB 42.90 GB)
Used Dev Size : 20947968 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sat Sep 9 15:03:18 2017
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : study:5 (local to host study)
UUID : 66e3edbd:85def510:abece399:ae8c12dc
Events : 40
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
3 8 129 1 active sync /dev/sdi1
4 8 113 2 active sync /dev/sdh1
5 8 97 - spare /dev/sdg1
5.3 Simulate adding a new disk, /dev/sdj, to expand the total RAID 5 capacity.
[iyunv@study ~]# fdisk /dev/sdj    # partition the new disk and change the partition type
...    # steps omitted; see the partitioning walkthrough above
[iyunv@study ~]# fdisk -l    # check the state of /dev/sdj
Disk /dev/sdj: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd377667a
Device Boot Start End Blocks Id System
/dev/sdj1 1 2610 20964793+ fd Linux raid autodetect
[iyunv@study ~]# mdadm /dev/md5 -a /dev/sdj1    # add partition /dev/sdj1 to the RAID 5 array
mdadm: added /dev/sdj1
[iyunv@study ~]# mdadm -D /dev/md5    # /dev/md5 status: a newly added device defaults to being a spare
/dev/md5:
Version : 1.2
Creation Time : Fri Sep 8 22:06:07 2017
Raid Level : raid5
Array Size : 41895936 (39.96 GiB 42.90 GB)
Used Dev Size : 20947968 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sat Sep 9 15:22:28 2017
State : clean
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 512K
Name : study:5 (local to host study)
UUID : 66e3edbd:85def510:abece399:ae8c12dc
Events : 41
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
3 8 129 1 active sync /dev/sdi1
4 8 113 2 active sync /dev/sdh1
5 8 97 - spare /dev/sdg1
6 8 145 - spare /dev/sdj1
[iyunv@study ~]# mdadm -G /dev/md5 -n 4    # grow the array: raise the number of active devices to 4, promoting a spare
[iyunv@study ~]# mdadm -D /dev/md5    # /dev/sdj1 is now an active member
/dev/md5:
Version : 1.2
Creation Time : Fri Sep 8 22:06:07 2017
Raid Level : raid5
Array Size : 41895936 (39.96 GiB 42.90 GB)
Used Dev Size : 20947968 (19.98 GiB 21.45 GB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sat Sep 9 15:24:36 2017
State : clean, reshaping    # the array is being reshaped; wait for the reshape to finish
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Reshape Status : 0% complete
Delta Devices : 1, (3->4)
Name : study:5 (local to host study)
UUID : 66e3edbd:85def510:abece399:ae8c12dc
Events : 55
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
3 8 129 1 active sync /dev/sdi1
4 8 113 2 active sync /dev/sdh1
6 8 145 3 active sync /dev/sdj1
5 8 97 - spare /dev/sdg1
5.4 After growing the array, the filesystem must be grown as well
[iyunv@study ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
...
/dev/md0 ext4 43G 51M 40G 1% /raid0
/dev/md1 ext4 21G 47M 20G 1% /raid1
/dev/md5 ext4 43G 51M 40G 1% /raid5
[iyunv@study ~]# resize2fs /dev/md5    # grow the RAID 5 filesystem
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md5 is mounted on /raid5; on-line resizing required
old desc_blocks = 3, new_desc_blocks = 4
Performing an on-line resize of /dev/md5 to 15710976 (4k) blocks.
[iyunv@study ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
...
/dev/md0 ext4 43G 51M 40G 1% /raid0
/dev/md1 ext4 21G 47M 20G 1% /raid1
/dev/md5 ext4 64G 55M 60G 1% /raid5
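The sizes reported above are mutually consistent, which makes a good sanity check after a grow: a RAID 5 exposes (active devices - 1) times the used device size, and resize2fs grew the filesystem to 15710976 blocks of 4 KiB. A quick cross-check with shell arithmetic, using the values from the transcripts:

```shell
# Values taken from the mdadm -D and resize2fs output above.
dev_size_k=20947968                    # Used Dev Size reported by mdadm -D, in KiB
old_k=$(( (3 - 1) * dev_size_k ))      # 3 active devices before the grow
new_k=$(( (4 - 1) * dev_size_k ))      # 4 active devices after mdadm -G -n 4
fs_k=$(( 15710976 * 4 ))               # resize2fs target size, in KiB

echo "old=${old_k}K new=${new_k}K fs=${fs_k}K"
```

old_k matches the 41895936-block Array Size at creation, and new_k equals fs_k, confirming that the filesystem now fills the grown array.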
This completes the configuration and testing of software RAID on Linux.