Preparing the environment
node1: 192.168.139.2 (target side)
node2: 192.168.139.4
node4: 192.168.139.8
node5: 192.168.139.9
node2 | node4 | node5: initiator side
Install scsi-target-utils on node1. Because the discovered and logged-in target will be turned into a cluster file system on top of cLVM, node2, node4 and node5 must have gfs2-utils, lvm2-cluster and iscsi-initiator-utils installed. In addition, install cman + rgmanager on node2, node4 and node5 and configure them as a three-node RHCS high-availability cluster: GFS2 is a cluster file system, so it needs the HA cluster both to fence failed nodes and to pass node state information through the Message Layer.
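The package installation itself is not shown in this transcript; a minimal sketch, assuming the stock CentOS 6 package names (cman, rgmanager and the rest are installed on the initiators later in this walkthrough):

yum install -y scsi-target-utils \\on node1, the target side
yum install -y iscsi-initiator-utils gfs2-utils lvm2-cluster \\on node2, node4 and node5, the initiator side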
This is the cluster created in the previous experiment; tear it down first.

[iyunv@node2 ~]# clustat
Cluster Status for mycluster @ Wed Dec 21 21:58:31 2016
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node2.zxl.com                1    Online, Local
 node4.zxl.com                2    Online
 node5.zxl.com                3    Online

[iyunv@node2 ~]# service cman stop
[iyunv@node2 ~]# rm -f /etc/cluster/*
Create the target and its LUN automatically by editing the configuration file.

[iyunv@node1 tgt]# vim /etc/tgt/targets.conf
<target iqn.2016-12.com.zxl:disk1.1>
    <backing-store /dev/sdc>
        vendor_id zxl
        lun 1
    </backing-store>
    initiator-address 192.168.139.0/24
    incominguser zxl 888
</target>

[iyunv@node1 tgt]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016-12.com.zxl:disk1.1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdc
            Backing store flags:
    Account information:
        zxl
    ACL information:
        192.168.139.0/24

[iyunv@node1 tgt]# netstat -tnlp | grep tgtd
tcp        0      0 0.0.0.0:3260      0.0.0.0:*      LISTEN      1937/tgtd
tcp        0      0 :::3260           :::*           LISTEN      1937/tgtd
[iyunv@node1 tgt]# chkconfig tgtd on
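For reference, the same target can also be assembled at runtime with tgtadm instead of targets.conf; changes made that way do not survive a tgtd restart, which is why the configuration file is used above. A minimal sketch using the same names:

tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2016-12.com.zxl:disk1.1
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/sdc
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 192.168.139.0/24
tgtadm --lld iscsi --mode account --op new --user zxl --password 888
tgtadm --lld iscsi --mode account --op bind --tid 1 --user zxl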
Delete the discovery database directly to clean out the data from the previous experiment.

[iyunv@node2 iscsi]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -o delete
[iyunv@node2 iscsi]# rm -rf /var/lib/iscsi/send_targets/192.168.139.2,3260/ \\the 192.168.139.2,3260 directory no longer holds any records, so remove the directory as well
[iyunv@node4 iscsi]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -o delete
[iyunv@node4 iscsi]# rm -rf /var/lib/iscsi/send_targets/192.168.139.2,3260/
[iyunv@node5 iscsi]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -o delete
[iyunv@node5 iscsi]# rm -rf /var/lib/iscsi/send_targets/192.168.139.2,3260/
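To confirm the cleanup worked, listing the node records on each initiator should now come back empty; a sketch (the exact message varies by iscsiadm version):

iscsiadm -m node
\\iscsiadm: No records found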
Give the initiator a name.

[iyunv@node2 iscsi]# iscsi-iname -p iqn.2016-12.com.zxl \\you supply the prefix, the system generates the suffix
iqn.2016-12.com.zxl:79883141ce9
Write the initiator name into the file. Each node runs iscsi-iname itself, so each gets a unique InitiatorName.

[iyunv@node2 iscsi]# echo "InitiatorName=`iscsi-iname -p iqn.2016-12.com.zxl`" > /etc/iscsi/initiatorname.iscsi
[iyunv@node2 iscsi]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-12.com.zxl:31dcbfbcf845
[iyunv@node4 iscsi]# echo "InitiatorName=`iscsi-iname -p iqn.2016-12.com.zxl`" > /etc/iscsi/initiatorname.iscsi
[iyunv@node5 iscsi]# echo "InitiatorName=`iscsi-iname -p iqn.2016-12.com.zxl`" > /etc/iscsi/initiatorname.iscsi
Edit the configuration file to enable user-based (CHAP) authentication; the username and password must match the incominguser defined on the target. Repeat this on node4 and node5 as well, since they will also log in.

[iyunv@node2 iscsi]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = zxl
node.session.auth.password = 888
Perform the following steps on all three nodes.

[iyunv@node2 iscsi]# service iscsi start
[iyunv@node2 iscsi]# chkconfig iscsi on

Discover the target:
[iyunv@node2 iscsi]# iscsiadm -m discovery -t st -p 192.168.139.2
192.168.139.2:3260,1 iqn.2016-12.com.zxl:disk1.1

Log in to the target:
[iyunv@node2 iscsi]# iscsiadm -m node -T iqn.2016-12.com.zxl:disk1.1 -p 192.168.139.2 -l
Logging in to [iface: default, target: iqn.2016-12.com.zxl:disk1.1, portal: 192.168.139.2,3260] (multiple)
Login to [iface: default, target: iqn.2016-12.com.zxl:disk1.1, portal: 192.168.139.2,3260] successful.

The new disk now shows up locally:
[iyunv@node2 iscsi]# fdisk -l
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
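To double-check the session on each initiator, iscsiadm can list what is currently logged in; a sketch (the session ID in brackets will differ per node):

iscsiadm -m session
\\tcp: [1] 192.168.139.2:3260,1 iqn.2016-12.com.zxl:disk1.1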
Install cman, rgmanager, gfs2-utils and lvm2-cluster on all three nodes.
[iyunv@node2 iscsi]# yum install -y cman rgmanager gfs2-utils lvm2-cluster
cLVM turns the shared storage into LVM. It relies on the HA cluster's heartbeat transport, split-brain prevention and related mechanisms so that multiple nodes can use the same LVM: an operation one node performs on the LVM is immediately propagated to the other nodes through the cluster's Message Layer. cLVM runs a clvmd service on each node, and these services communicate with one another.
Enable LVM's cluster locking on every node.

[iyunv@node2 mnt]# vim /etc/lvm/lvm.conf
locking_type = 3

Or change it with a command instead:
[iyunv@node2 cluster]# lvmconf --enable-cluster
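A quick way to verify the change took effect on each node; a sketch:

grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
\\locking_type = 3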
Create the cluster, add a fence device, and add the cluster nodes.

[iyunv@node4 cluster]# ccs_tool create mycluster
[iyunv@node4 cluster]# ccs_tool addfence meatware fence_manual
[iyunv@node4 cluster]# ccs_tool addnode -v 1 -n 1 -f meatware node2.zxl.com
[iyunv@node4 cluster]# ccs_tool addnode -v 1 -n 2 -f meatware node4.zxl.com
[iyunv@node4 cluster]# ccs_tool addnode -v 1 -n 3 -f meatware node5.zxl.com
[iyunv@node4 cluster]# ccs_tool lsnode

Cluster name: mycluster, config_version: 5

Nodename                        Votes Nodeid Fencetype
node2.zxl.com                      1    1    meatware
node4.zxl.com                      1    2    meatware
node5.zxl.com                      1    3    meatware
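The fence device can be checked the same way; a sketch (ccs_tool lsfence reads the same cluster.conf):

ccs_tool lsfence
\\Name             Agent
\\meatware         fence_manual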
Start cman and rgmanager.

[iyunv@node2 mnt]# service cman start

If cman will not start, the cause may be a multicast conflict: a node that boots without a cluster.conf may accept a configuration file from any multicast sender (for example, when many people in the same classroom run this experiment at once). The fix is either to switch to an unused multicast address in advance, or to copy the configuration file to the other nodes manually with scp instead of relying on ccsd's automatic synchronization.

[iyunv@node2 mnt]# service rgmanager start
[iyunv@node4 mnt]# service cman start
[iyunv@node4 mnt]# service rgmanager start
[iyunv@node5 mnt]# service cman start
[iyunv@node5 mnt]# service rgmanager start
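A minimal sketch of the manual-copy approach mentioned above, run from the node that holds the finished cluster.conf (assumes root SSH access between the nodes):

scp /etc/cluster/cluster.conf node4.zxl.com:/etc/cluster/
scp /etc/cluster/cluster.conf node5.zxl.com:/etc/cluster/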
[iyunv@node2 cluster]# clustat
Cluster Status for mycluster @ Wed Dec 21 23:06:16 2016
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node2.zxl.com                1    Online, Local
 node4.zxl.com                2    Online
 node5.zxl.com                3    Online

Perform the following on all three nodes:
[iyunv@node2 cluster]# service clvmd start
[iyunv@node2 cluster]# chkconfig clvmd on
[iyunv@node2 cluster]# chkconfig cman on
[iyunv@node2 cluster]# chkconfig rgmanager on
Create the LVM volumes.

[iyunv@node2 cluster]# fdisk -l
Disk /dev/sdc: 21.5 GB, 21474836480 bytes
[iyunv@node2 cluster]# pvcreate /dev/sdc
[iyunv@node2 cluster]# vgcreate clustervg /dev/sdc
[iyunv@node2 cluster]# lvcreate -L 10G -n clusterlv clustervg
  Logical volume "clusterlv" created.
[iyunv@node2 cluster]# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv clustervg -wi-a----- 10.00g
Switch to another node: the newly created LV is visible there too.

[iyunv@node4 cluster]# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv clustervg -wi-a----- 10.00g
Format it as a GFS2 file system.

[iyunv@node2 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t mycluster:lock1 /dev/clustervg/clusterlv
[iyunv@node2 ~]# mkdir /mydata
[iyunv@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[iyunv@node2 ~]# cd /mydata/
[iyunv@node2 mydata]# touch a.txt
[iyunv@node2 mydata]# ll
total 8
-rw-r--r--. 1 root root 0 Dec 22 08:57 a.txt
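The mkfs.gfs2 options are worth spelling out: -j 2 creates two journals, and every node that mounts the file system concurrently needs its own journal (a third is added for node5 below); -p lock_dlm selects the distributed lock manager protocol; -t takes clustername:locktablename, and the cluster name must match the one in cluster.conf. The lock table stored in the superblock can be read back; a sketch:

gfs2_tool sb /dev/clustervg/clusterlv table
\\current lock table name = "mycluster:lock1"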
Show all tunable parameters.

[iyunv@node2 mydata]# gfs2_tool gettune /mydata
incore_log_blocks = 8192
log_flush_secs = 60 \\flush the log every 60 seconds
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000 (1, 1)
new_files_directio = 0 \\0: do not write straight to disk, the other nodes are notified first; 1: write directly to disk, which performs worse

Change the parameter's value to 1:
[iyunv@node2 mydata]# gfs2_tool settune /mydata new_files_directio 1
Freeze the cluster file system, which is effectively a read-only mode (handy when taking backups).

[iyunv@node2 mydata]# gfs2_tool freeze /mydata
[iyunv@node2 mydata]# touch b.txt \\hangs, the file cannot be created

Open another session and unfreeze:
[iyunv@node2 ~]# gfs2_tool unfreeze /mydata/

The file is created immediately:
[iyunv@node2 mydata]# ll
total 16
-rw-r--r--. 1 root root 0 Dec 22 08:57 a.txt
-rw-r--r--. 1 root root 0 Dec 22 09:25 b.txt
Mount and test on node4.

[iyunv@node4 ~]# mount /dev/clustervg/clusterlv /mydata/
[iyunv@node4 ~]# cd /mydata/
[iyunv@node4 mydata]# touch c.txt
[iyunv@node4 mydata]# ll
total 24
-rw-r--r--. 1 root root 0 Dec 22 08:57 a.txt
-rw-r--r--. 1 root root 0 Dec 22 09:25 b.txt
-rw-r--r--. 1 root root 0 Dec 22 09:29 c.txt
Add one more journal, then mount a third node, node5 (with only two journals, a third concurrent mount would fail).

[iyunv@node2 mydata]# gfs2_jadd -j 1 /dev/clustervg/clusterlv
[iyunv@node5 ~]# mount /dev/clustervg/clusterlv /mydata/
[iyunv@node5 ~]# cd /mydata/
[iyunv@node5 mydata]# touch d.txt
[iyunv@node5 mydata]# ll
total 32
-rw-r--r--. 1 root root 0 Dec 22 08:57 a.txt
-rw-r--r--. 1 root root 0 Dec 22 09:25 b.txt
-rw-r--r--. 1 root root 0 Dec 22 09:29 c.txt
-rw-r--r--. 1 root root 0 Dec 22 09:31 d.txt
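The journal count can be confirmed after the add; a sketch (the sizes shown assume the default 128MB journals):

gfs2_tool journals /mydata
\\journal2 - 128MB
\\journal1 - 128MB
\\journal0 - 128MB
\\3 journal(s) found.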
Extend the LV (the physical boundary).

[iyunv@node2 mydata]# lvextend -L +5G /dev/clustervg/clusterlv
[iyunv@node2 mydata]# lvs \\now extended to 15G
  clusterlv clustervg -wi-ao---- 15.00g

Switch to another node:
[iyunv@node4 mydata]# lvs \\the other nodes see the size change immediately
  clusterlv clustervg -wi-ao---- 15.00g
Grow the GFS2 file system (the logical boundary).

[iyunv@node2 mydata]# gfs2_grow /dev/clustervg/clusterlv
The file system grew by 5120MB.
gfs2_grow complete.
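A final check that the extra space is visible from the mount point; a sketch:

df -h /mydata
\\/dev/mapper/clustervg-clusterlv   15G   ...   /mydata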
This concludes the experiment.