[Experience Share] VMware Workstation 8.0 + RHEL 5.6: software RAID5 with mdadm

  1. Check the current disk layout
  [iyunv@rhel5 ~]#fdisk -l
  Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1318    10482412+  83  Linux
/dev/sda3            1319        1579     2096482+  82  Linux swap / Solaris
/dev/sda4            1580        3916    18771952+   5  Extended
/dev/sda5            1580        3916    18771921   83  Linux
  2. With the virtual machine running, add the new disks in VMware Workstation: VM -> Settings -> Hardware -> Hard Disk -> Add -> Next -> Create a new virtual disk -> Next -> Next -> enter the disk size -> under Disk file give the file a name such as raid5-0. Repeat until four new disks have been added.
(Screenshots of the VMware add-disk wizard steps.)
  3. Reboot and check the disks again
  [iyunv@rhel5 ~]#reboot
  Broadcast message from root (pts/2) (Mon Nov 12 11:46:29 2012):
  The system is going down for reboot NOW!
[iyunv@rhel5 ~]#
Last login: Mon Nov 12 11:43:15 2012 from 192.168.1.100
[iyunv@rhel5 ~]#fdisk -l
  Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1318    10482412+  83  Linux
/dev/sda3            1319        1579     2096482+  82  Linux swap / Solaris
/dev/sda4            1580        3916    18771952+   5  Extended
/dev/sda5            1580        3916    18771921   83  Linux
  Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk /dev/sdb doesn't contain a valid partition table
  Disk /dev/sdc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk /dev/sdc doesn't contain a valid partition table
  Disk /dev/sdd: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk /dev/sdd doesn't contain a valid partition table
  Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk /dev/sde doesn't contain a valid partition table
  The four new disks /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are now visible.
  4. Partition each of the four new disks
  [iyunv@rhel5 ~]#fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
  Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  Command (m for help): m  // enter m to display the help menu
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
  Command (m for help): n  // enter n to create a new partition; on a blank disk only primary partitions (numbers 1-4) can be created
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652):
Using default value 652
  Change the partition's system type to Linux raid autodetect (hex code fd), and do not forget the final, essential step: w to write the table and exit.
  Handle the remaining new disks in exactly the same way. When you are done, run fdisk again to confirm that all four disks have been partitioned and are ready.
  Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
  Command (m for help): w
The partition table has been altered!
  Calling ioctl() to re-read partition table.
Syncing disks.
[iyunv@rhel5 ~]#fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
  Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652):
Using default value 652
  Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
  Command (m for help): p
  Disk /dev/sdc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         652     5237158+  fd  Linux raid autodetect
  Command (m for help): w
The partition table has been altered!
  Calling ioctl() to re-read partition table.
Syncing disks.
[iyunv@rhel5 ~]#fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
  Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652):
Using default value 652
  Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
  Command (m for help): p
  Disk /dev/sdd: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         652     5237158+  fd  Linux raid autodetect
  Command (m for help): w
The partition table has been altered!
  Calling ioctl() to re-read partition table.
Syncing disks.
[iyunv@rhel5 ~]#fdisk /dev/sde
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
  Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652):
Using default value 652
  Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
  Command (m for help): p
  Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
  Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         652     5237158+  fd  Linux raid autodetect
  Command (m for help): w
The partition table has been altered!
  Calling ioctl() to re-read partition table.
Syncing disks.
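  As an optional shortcut (my own sketch, not part of the original walkthrough), the same keystroke sequence shown above can be fed to fdisk in a loop so that every blank disk receives one identical full-size partition of type fd; this assumes the new disks really are /dev/sdb through /dev/sde and hold no data:
  # n, p, 1, Enter, Enter create one full-size primary partition;
  # t + fd marks it Linux raid autodetect; w writes the table to disk.
for d in sdb sdc sdd sde; do
fdisk /dev/$d <<EOF
n
p
1


t
fd
w
EOF
done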
  5. Make the kernel re-read the partition tables
  [iyunv@rhel5 ~]#partprobe
  Note: the (virtual) CD-ROM has to be ejected first here, otherwise partprobe may complain about that device.
  6. Verify that the kernel now sees the new partitions
  [iyunv@rhel5 ~]# cat /proc/partitions
major minor  #blocks  name
  8     0   31457280 sda
   8     1     104391 sda1
   8     2   10482412 sda2
   8     3    2096482 sda3
   8     4          0 sda4
   8     5   18771921 sda5
   8    16    5242880 sdb
   8    17    5237158 sdb1
   8    32    5242880 sdc
   8    33    5237158 sdc1
   8    48    5242880 sdd
   8    49    5237158 sdd1
   8    64    5242880 sde
   8    65    5237158 sde1
  7. Arrays are created with the --create option (run mdadm --help first to get familiar with mdadm's options and arguments). View the create-mode help:
  [iyunv@rhel5 ~]#mdadm --create --help
Usage:  mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
  This usage will initialise a new md array, associate some
devices with it, and activate the array.   In order to create an
array with some devices missing, use the special word 'missing' in
place of the relevant device name.
  Before devices are added, they are checked to see if they already contain
raid superblocks or filesystems.  They are also checked to see if
the variance in device size exceeds 1%.
If any discrepancy is found, the user will be prompted for confirmation
before the array is created.  The presence of a '--run' can override this
caution.
  If the --size option is given then only that many kilobytes of each
device is used, no matter how big each device is.
If no --size is given, the apparent size of the smallest drive given
is used for raid level 1 and greater, and the full device is used for
other levels.
  Options that are valid with --create (-C) are:
  --bitmap=          : Create a bitmap for the array with the given filename
  --chunk=      -c   : chunk size of kibibytes
  --rounding=        : rounding factor for linear array (==chunk size)
  --level=      -l   : raid level: 0,1,4,5,6,linear,multipath and synonyms
  --parity=     -p   : raid5/6 parity algorithm: {left,right}-{,a}symmetric
  --layout=          : same as --parity
  --raid-devices= -n : number of active devices in array
  --spare-devices= -x: number of spares (eXtras) devices in initial array
  --size=       -z   : Size (in K) of each drive in RAID1/4/5/6/10 - optional
  --force       -f   : Honour devices as listed on command line.  Don't
                     : insert a missing drive for RAID5.
  --run         -R   : insist of running the array even if not all
                     : devices are present or some look odd.
  --readonly    -o   : start the array readonly - not supported yet.
  --name=       -N   : Textual name for array - max 32 characters
  --bitmap-chunk=    : bitmap chunksize in Kilobytes.
  --delay=      -d   : bitmap update delay in seconds.
  8. Now use mdadm to create a RAID5 array on /dev/md0 out of the three disks sdb, sdc and sdd, with the fourth disk, sde, as a hot spare
  [iyunv@rhel5 ~]#mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd --spare-devices=1 /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: size set to 5242816K
mdadm: array /dev/md0 started.
As you can see from the help, most options have a short form. The command above can be read as follows (an equivalent short-form version is shown after this list):

-Cv        create the array and print verbose output

/dev/md0   the device name of the array

-l5        RAID level 5

-n3        three active disks: /dev/sdb /dev/sdc /dev/sdd

-x1        one spare disk: /dev/sde
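Putting those short options together, the long command from step 8 is equivalent to this one-liner (same devices as above):

mdadm -Cv /dev/md0 -l5 -n3 /dev/sdb /dev/sdc /dev/sdd -x1 /dev/sde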

9. Check the result of creating the RAID; you can see some information about your array.

Next use cat /proc/mdstat to look at the array status; watch can be used to refresh the /proc/mdstat output at a fixed interval (press Ctrl+C to stop).

  [iyunv@rhel5 ~]#watch -n 10 'cat /proc/mdstat'
  Every 10.0s: cat /proc/mdstat Mon Nov 12 11:57:41 2012
  Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[2] sde[3](S) sdc[1] sdb[0]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
  unused devices: <none>

10. Create a filesystem on the array:

[iyunv@rhel5 ~]# mkfs.ext3 /dev/md0

11. Mount the array

[iyunv@rhel5 ~]#mount /dev/md0 /data
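The mount point must exist beforehand (mkdir /data if it does not). A quick check that the new filesystem is really mounted, as a small sketch of my own rather than part of the original transcript:

df -h /data       # expect an ext3 filesystem of roughly 10 GB mounted on /data
mount | grep md0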

12. Examine the state of the RAID device

  [iyunv@rhel5 ~]#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 12:52:34 2012
          State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric
     Chunk Size : 64K
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.14
  Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  3       8       64        -      spare   /dev/sde
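  The sizes above line up with what RAID5 predicts: with three active members, one member's worth of space holds parity, so the usable capacity is (3 - 1) x 5242816 KB = 10485632 KB, exactly the Array Size reported, while the hot spare contributes nothing until it is needed.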
  13. Generally, once a new array has been created it is best to create an /etc/mdadm.conf file. Without it we would have to supply much more detail when assembling the array later. For convenience, use:
  [iyunv@rhel5 ~]#mdadm --detail --scan
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=23542a8f:14e76650:90c202c4:b0bd7c76
  [iyunv@rhel5 ~]#mdadm --detail --scan >> /etc/mdadm.conf
  [iyunv@rhel5 ~]#cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=23542a8f:14e76650:90c202c4:b0bd7c76
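  Optionally (my own note, not from the original post), a DEVICE line listing the member devices can be placed above the ARRAY line so that mdadm only scans those devices when assembling; the troubleshooting notes at the end of this post show the kind of error a malformed keyword on that line produces. A sketch for this array:

DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=23542a8f:14e76650:90c202c4:b0bd7c76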
  14. Stopping and starting the RAID (it must be unmounted before it is stopped)
  [iyunv@rhel5 ~]#umount /data
  [iyunv@rhel5 ~]#mdadm -S /dev/md0  
mdadm: stopped /dev/md0
  To start it again (-As assembles by scanning the ARRAY entries in the /etc/mdadm.conf created in step 13):
  [iyunv@rhel5 ~]# mdadm -As /dev/md0
mdadm: /dev/md0 has been started with 3 drives and 1 spare.
  15. Simulate a disk failure
  [iyunv@rhel5 ~]#mdadm --set-faulty --help
Usage: mdadm arraydevice options component devices...
  This usage is for managing the component devices within an array.
The --manage option is not needed and is assumed if the first argument
is a device name or a management option.
The first device listed will be taken to be an md array device, and
subsequent devices are (potential) components of that array.
  Options that are valid with management mode are:
  --add         -a   : hotadd subsequent devices to the array
  --remove      -r   : remove subsequent devices, which must not be active
  --fail        -f   : mark subsequent devices a faulty
  --set-faulty       : same as --fail
  --run         -R   : start a partially built array
  --stop        -S   : deactivate array, releasing all resources
  --readonly    -o   : mark array as readonly
  --readwrite   -w   : mark array as readwrite
  Mark /dev/sdb as failed:
  [iyunv@rhel5 ~]#mdadm --manage --set-faulty /dev/md0 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[iyunv@rhel5 ~]#mdadm --detail /dev/md0                     
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 13:48:19 2012
          State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
  Spare Devices : 1
  Layout : left-symmetric
     Chunk Size : 64K
  Rebuild Status : 13% complete
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.16
  Number   Major   Minor   RaidDevice State
       4       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  3       8       16        -      faulty spare   /dev/sdb
  Check /proc/mdstat; if a configured spare disk is available, the array may already have started rebuilding onto it:
  [iyunv@rhel5 ~]#cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[3](F) sde[4] sdd[2] sdc[1]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      [=========>...........]  recovery = 48.6% (2549344/5242816) finish=1.0min speed=42230K/sec
      
unused devices: <none>
  Take a look at the log messages:
  [iyunv@rhel5 ~]#tail /var/log/messages
Nov 12 13:48:20 rhel5 kernel: md: syncing RAID array md0
Nov 12 13:48:20 rhel5 kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Nov 12 13:48:20 rhel5 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Nov 12 13:48:20 rhel5 kernel: md: using 128k window, over a total of 5242816 blocks.
Nov 12 13:50:31 rhel5 kernel: md: md0: sync done.
Nov 12 13:50:31 rhel5 kernel: RAID5 conf printout:
Nov 12 13:50:31 rhel5 kernel:  --- rd:3 wd:3 fd:0
Nov 12 13:50:31 rhel5 kernel:  disk 0, o:1, dev:sde
Nov 12 13:50:31 rhel5 kernel:  disk 1, o:1, dev:sdc
Nov 12 13:50:31 rhel5 kernel:  disk 2, o:1, dev:sdd
  Use mdadm -E to examine /dev/sdb's RAID superblock:
  [iyunv@rhel5 ~]# mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
     Array Size : 10485632 (10.00 GiB 10.74 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
  Update Time : Mon Nov 12 12:52:34 2012
          State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
  Spare Devices : 0
       Checksum : c47853e8 - correct
         Events : 14
  Layout : left-symmetric
     Chunk Size : 64K
  Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb
  0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
  After the automatic rebuild completes, check the state of the RAID again:
  [iyunv@rhel5 ~]#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 13:50:31 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
  Layout : left-symmetric
     Chunk Size : 64K
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.18
  Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  3       8       16        -      faulty spare   /dev/sdb
  We can see that /dev/sde has replaced /dev/sdb.
  Now /dev/sdb can be removed from /dev/md0:
  
  [iyunv@rhel5 ~]#mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb
  Check the array details:
  [iyunv@rhel5 ~]#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 13:54:18 2012
          State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
  Spare Devices : 0
  Layout : left-symmetric
     Chunk Size : 64K
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.20
  Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  A device can be added back to /dev/md0 with the following command:
  [iyunv@rhel5 ~]#mdadm /dev/md0 --add /dev/sdb
mdadm: added /dev/sdb
  Check the array details once more:
  [iyunv@rhel5 ~]#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 13:54:18 2012
          State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric
     Chunk Size : 64K
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.20
  Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  3       8       16        -      spare   /dev/sdb
  As you can see, /dev/sdb has been added back and now serves as the spare.
  Now fail /dev/sde in the same way: sdb kicks in automatically and the array recovers onto it, so afterwards sdb is an active member and /dev/sde is once again the spare, as shown below.
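  A sketch of what this round involves, mirroring the fail, remove and re-add sequence from step 15 (the explicit remove and re-add of sde is an assumption based on the end state shown below; the post itself only shows the final status):

mdadm /dev/md0 --fail /dev/sde      # mark sde faulty; the array rebuilds onto sdb
mdadm /dev/md0 --remove /dev/sde    # after the rebuild finishes, remove the failed device
mdadm /dev/md0 --add /dev/sde       # re-add it so that it becomes the hot spare again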
  [iyunv@rhel5 ~]#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 12 11:55:51 2012
     Raid Level : raid5
     Array Size : 10485632 (10.00 GiB 10.74 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
  Update Time : Mon Nov 12 14:11:29 2012
          State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
  Spare Devices : 1
  Layout : left-symmetric
     Chunk Size : 64K
  UUID : 23542a8f:14e76650:90c202c4:b0bd7c76
         Events : 0.26
  Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
  3       8       64        -      spare   /dev/sde
  16. Monitor the RAID
  mdadm's monitor mode provides some useful functionality. You can use the command below to monitor /dev/md0; the delay parameter is the polling interval in seconds, so that urgent events and serious errors are mailed to the system administrator promptly.
  (The following example is reposted from another write-up, hence the different host and user in the prompt:)

[iyunv@localhost eric4ever]# mdadm --monitor --mail=eric4ever@localhost --delay=300 /dev/md0  
When running in monitor mode mdadm does not exit, so you can put it in the background with nohup:


[iyunv@localhost eric4ever]# nohup mdadm --monitor --mail=eric4ever@localhost --delay=300 /dev/md0 &
[1] 3113
[iyunv@localhost eric4ever]# nohup: appending output to `nohup.out'
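As an optional sanity check (my own addition, reusing the same mail address), mdadm can generate a one-off test alert so you can confirm that alert mail is actually delivered before a real failure occurs:

# --oneshot checks the arrays once and exits; --test forces a TestMessage alert for each array found
mdadm --monitor --scan --oneshot --test --mail=eric4ever@localhost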


Once everything is set up, configure the RAID mount point to be mounted automatically at boot.
more /etc/mtab shows the corresponding mount entry for the array; copy that information into /etc/fstab (vi /etc/fstab).

To mount automatically at boot, add a line like this to /etc/fstab:

/dev/md0                /mnt/mdadm                auto    defaults        0 0

In practice this alone will not mount the array at boot, because the software array was stopped at shutdown and /dev/md0 cannot be found yet when fstab is read. The workaround is to add the following to /etc/rc.local (commands stored in rc.local are executed at the very end of the Linux boot sequence):

mdadm -As /dev/md0
mount -a
Two errors you may run into:
  1. mdadm: no such device: /dev/md0. Fix: the software RAID5 array has to be re-created.
  2. mdadm: md device /dev/md0 does not appear to be active.
  To start the specified array, re-assemble it into the system (--assemble):
  [iyunv@flyer ~]# mdadm --assemble --scan /dev/md0  // fails because of an error in the /etc/mdadm.conf configuration file
  mdadm: Unknown keyword devices
  mdadm: no devices found for /dev/md0  
  [iyunv@flyer ~]# vi /etc/mdadm.conf     
  devices /dev/sdb1 /dev/sdc1 /dev/sdd1   // the keyword should be device, not devices; a small detail, but many online tutorials get it wrong
  ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a
  
  [iyunv@flyer ~]# mdadm -Av /dev/md0 /dev/sd{b,c,d}1  // starting it with the member devices given explicitly works fine
  
  Fix: correct the /etc/mdadm.conf file.
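  Putting the fixes together, a working /etc/mdadm.conf for this example would look roughly like the following (the keyword is case-insensitive, so lowercase device works as well; the UUID is taken from the faulty file shown above, so substitute your own array's values):

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a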