
GlusterFS Deployment, Installation, and Cluster Testing

  
  GlusterFS cluster deployment
  Suited to relatively large storage workloads, such as KVM images.
  Official documentation:
  http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/#purpose-of-this-document
[Figure: http://s1.运维网.com/images/20171226/1514265511180845.png]
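The environment below consists of two CentOS 6 nodes: 192.168.80.123 (machine 1) and 192.168.80.201 (machine 2). If you would rather address the nodes by hostname instead of IP, a minimal sketch is to map them in /etc/hosts on both machines first (the names gluster01 and gluster02 are only examples, not taken from the original setup):
# cat >> /etc/hosts <<EOF
192.168.80.123 gluster01
192.168.80.201 gluster02
EOF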
Chapter 1: Installation and deployment (run on both nodes)
1.1 Update the repo
  
# yum install centos-release-gluster
# yum install glusterfs-server

  
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo

1.2 Install with yum
# yum install -y glusterfs-server   (start the service once installation completes)
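The mount tests later in this walkthrough use mount.glusterfs, which is provided by the GlusterFS FUSE client. On CentOS this is normally the glusterfs-fuse package (package name assumed here; it is often pulled in as a dependency of glusterfs-server anyway):
# yum install -y glusterfs-fuse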
1.3 Start the service
# /etc/init.d/glusterd start
Starting glusterd:
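To have glusterd come back after a reboot on CentOS 6, it can also be enabled with the usual SysV tooling (a small sketch, not part of the original steps):
# chkconfig glusterd on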
1.4 Form the trusted storage pool
# gluster peer probe 192.168.80.123   (the other node's IP)
peer probe: success.

1.4.1 Check the trusted pool
# gluster peer status   (can be run on either server)
Number of Peers: 1

Hostname: 192.168.80.123
Uuid: 49e15d0e-d499-427a-87aa-fe573a7fd345
State: Peer in Cluster (Connected)
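For completeness, the reverse operation is peer detach; if a node ever needs to leave the trusted pool it would look roughly like this (sketch only, not run during this test):
# gluster peer detach 192.168.80.123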
1.5 Create a distributed volume
[Figure: http://s1.运维网.com/images/20171226/1514266070728459.png]
192.168.80.123   machine 1
192.168.80.201   machine 2

Test:
Create the brick directories on each of the two servers:
# mkdir -p /data/exp1   (machine 1)
# mkdir -p /data/exp2   (machine 2)
Then, on machine 1:
# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2
volume create: test-volume: failed: The brick 192.168.80.123:/data/exp1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2 force
volume create: test-volume: success: please start the volume to access data
The distributed volume has been created.
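The root-partition warning above is why force is required: the bricks here live under / purely for testing. In a real deployment each brick would normally sit on its own filesystem, for example (the device name /dev/sdb1 is hypothetical; the mkfs options follow the official quick-start guide):
# mkfs.xfs -i size=512 /dev/sdb1
# mkdir -p /data
# mount /dev/sdb1 /data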
1.5.1 View the distributed volume
# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

1.6 Create a replicated volume (similar to RAID 1)
Test:
Create the brick directories on each of the two servers:
# mkdir -p /data/exp3   (machine 1)
# mkdir -p /data/exp4   (machine 2)

# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: repl-volume: failed: The brick 192.168.80.123:/data/exp3 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4 force
volume create: repl-volume: success: please start the volume to access data

1.6.1 View the replicated volume
# gluster volume info repl-volume
Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
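The split-brain warning printed during creation recommends replica 3 or an arbiter instead of replica 2. With a third node (192.168.80.202 is hypothetical here), an arbiter variant of the same volume could be created along these lines (a sketch for reference only):
# gluster volume create repl-volume replica 3 arbiter 1 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4 192.168.80.202:/data/arb1 force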
1.7 Create a striped volume
Test:
Create the brick directories on each of the two machines:
# mkdir -p /data/exp5   (machine 1)
# mkdir -p /data/exp6   (machine 2)
Create the volume:
# gluster volume create raid0-volume stripe 2 transport tcp 192.168.80.123:/data/exp5/ 192.168.80.201:/data/exp6 force
volume create: raid0-volume: success: please start the volume to access data

1.7.1 View the raid0 (striped) volume
# gluster volume info raid0-volume
Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

1.8 Volumes must be started before they can be used
Check the current state:
# gluster volume status
Volume raid0-volume is not started
Volume repl-volume is not started
Volume test-volume is not started

Start them:
# gluster volume start raid0-volume
volume start: raid0-volume: success
# gluster volume start repl-volume
volume start: repl-volume: success
# gluster volume start test-volume
volume start: test-volume: success

Check again:
# gluster volume status
Status of volume: raid0-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp5             49152   0          Y       43622
Brick 192.168.80.201:/data/exp6             49152   0          Y       43507
Task Status of Volume raid0-volume
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: repl-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp3             49153   0          Y       43657
Brick 192.168.80.201:/data/exp4             49153   0          Y       43548
Self-heal Daemon on localhost               N/A       N/A      Y       43569
Self-heal Daemon on 192.168.80.123          N/A       N/A      Y       43678
Task Status of Volume repl-volume
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49154   0          Y       43704
Brick 192.168.80.201:/data/exp2             49154   0          Y       43608
Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks

View with volume info:
# gluster volume info
Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Test mounting
Mount on any one of the servers (this requires the GlusterFS client to be installed):
# mount.glusterfs 192.168.80.123:/test-volume /mnt/g1/
# mount.glusterfs 192.168.80.123:/repl-volume /mnt/g2
# mount.glusterfs 192.168.80.123:/raid0-volume /mnt/g3
# df -h|column -t
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/sda3                     18G   5.7G  12G    34%   /
tmpfs                         491M  0     491M   0%    /dev/shm
/dev/sda1                     190M  31M   150M   17%   /boot
192.168.80.123:/test-volume   36G   7.8G  27G    23%   /mnt/g1
192.168.80.123:/repl-volume   18G   5.7G  12G    34%   /mnt/g2
192.168.80.123:/raid0-volume  36G   7.8G  27G    23%   /mnt/g3
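These mounts do not persist across reboots. If they should, /etc/fstab entries of the following shape are the usual approach (a sketch; adjust paths as needed):
192.168.80.123:/test-volume   /mnt/g1  glusterfs  defaults,_netdev  0 0
192.168.80.123:/repl-volume   /mnt/g2  glusterfs  defaults,_netdev  0 0
192.168.80.123:/raid0-volume  /mnt/g3  glusterfs  defaults,_netdev  0 0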
1.9 Testing the three volume types
Distributed volume: for each file one of the two servers is picked, and the file is written to /data/exp1 or /data/exp2.
Replicated volume: each file is written to both servers under /data/exp3 and /data/exp4, equivalent to RAID 1 (two copies).
Striped volume: a single file is split into stripes across /data/exp5 and /data/exp6, similar to RAID 0.
A quick write test for these behaviours is sketched below.
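To observe the behaviours, write a few files into each mount and then list the bricks on both machines; a minimal sketch (file names are arbitrary):
# for i in 1 2 3 4 5; do man tcp > /mnt/g1/dist-$i.txt; done   (distributed volume)
# for i in 1 2 3 4 5; do man tcp > /mnt/g2/repl-$i.txt; done   (replicated volume)
# ls /data/exp1   (machine 1: only some of the dist-* files)
# ls /data/exp2   (machine 2: the remaining dist-* files)
# ls /data/exp3   (machine 1: every repl-* file)
# ls /data/exp4   (machine 2: the same repl-* files again)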
1.10 Test distributed + replicated
Run on both servers:
# mkdir /exp1 /exp2

Create the volume (append force to override the root-partition warning):
[root@lanyezi ~]# gluster volume create hehe-volume replica 2 transport tcp 192.168.80.123:/exp1/ 192.168.80.123:/exp2/ 192.168.80.201:/exp1/ 192.168.80.201:/exp2/ force
volume create: hehe-volume: success: please start the volume to access data

Start it:
# gluster volume start hehe-volume
volume start: hehe-volume: success

Query it:
# gluster volume info hehe-volume
Volume Name: hehe-volume
Type: Distributed-Replicate
Volume ID: 321c8da7-43cd-40ad-a187-277018e43c9e
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp1
Brick2: 192.168.80.123:/exp2
Brick3: 192.168.80.201:/exp1
Brick4: 192.168.80.201:/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Create a mount directory and mount the volume:
# mkdir /mnt/g5
# mount.glusterfs 192.168.80.123:/hehe-volume /mnt/g5/
# df -h
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/sda3                     18G   2.2G  15G    13%   /
tmpfs                         491M  0     491M   0%    /dev/shm
/dev/sda1                     190M  31M   150M   17%   /boot
192.168.80.123:/hehe-volume   18G   3.9G  14G    23%   /mnt/g5

Test the result:
# man tcp > /mnt/g5/tcp1.txt
# man tcp > /mnt/g5/tcp2.txt
# man tcp > /mnt/g5/tcpe.txt
# man tcp > /mnt/g5/tcp4.txt

Machine 1:
# tree /exp*
/exp1
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt
/exp2
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt

Machine 2:
# tree /exp*
/exp1
└── tcp1.txt
/exp2
└── tcp1.txt
0 directories, 2 files

This distribution is uneven: which brick a file ends up on depends on the order in which the bricks were listed when the volume was created.
  
Next, test a replicated volume created with the bricks listed in a different order:
# gluster volume create hehehe-volume replica 2 transport tcp 192.168.80.123:/exp3/ 192.168.80.201:/exp3/ 192.168.80.123:/exp4/ 192.168.80.201:/exp4/ force
volume create: hehehe-volume: success: please start the volume to access data

Start it:
# gluster volume start hehehe-volume
volume start: hehehe-volume: success

Check its status:
# gluster volume info hehehe-volume
Volume Name: hehehe-volume
Type: Distributed-Replicate
Volume ID: 2f24e2cf-bb86-4fe8-a2bc-23f3d07f6f86
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp3
Brick2: 192.168.80.201:/exp3
Brick3: 192.168.80.123:/exp4
Brick4: 192.168.80.201:/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Mount it:
# mkdir /mnt/gg
# mount.glusterfs 192.168.80.123:/hehehe-volume /mnt/gg

Write some test files:
# man tcp > /mnt/gg/tcp1.txt
# man tcp > /mnt/gg/tcp2.txt
# man tcp > /mnt/gg/tcp3.txt
# man tcp > /mnt/gg/tcp4.txt
Check machine 1:
# ll /exp3
total 168
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp2.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp3.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp4.txt
# ll /exp4
total 56
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp1.txt

Check machine 2:
# ll /exp3/
total 168
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp2.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp3.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp4.txt
# ll /exp4/
total 56
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp1.txt

With this brick order the files are spread evenly, and the data is successfully both distributed and replicated.
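On replicated volumes, the self-heal daemon can also confirm that the copies are in sync; a quick check looks like this (sketch):
# gluster volume heal hehehe-volume info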
1.11 Volume expansion test
# mkdir /data/exp9
# gluster volume add-brick test-volume 192.168.80.201:/data/exp9/ force    (the brick is added to an already existing volume)
volume add-brick: success

Check the status:
# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Brick3: 192.168.80.201:/data/exp9   (the newly added brick)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Rebalance the distributed volume:
# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: aa05486b-11df-4bac-9ac7-2237a8c12ad6
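As the message above suggests, progress of the rebalance can be followed with the status subcommand (sketch):
# gluster volume rebalance test-volume status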
1.11.1 Remove a brick from the test volume
Removing a brick can cause data loss, so start a remove-brick task first:
# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 start
volume remove-brick start: success
ID: 4f16428a-7e9f-4b7b-bb07-2917a2f14323

Check again:
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158   0          Y       1237
Brick 192.168.80.201:/data/exp2             49154   0          Y       43608
Brick 192.168.80.201:/data/exp9             49159   0          Y       44717
Task Status of Volume test-volume
------------------------------------------------------------------------------
Task               : Remove brick      
ID                   : 4f16428a-7e9f-4b7b-bb07-2917a2f14323
Removed bricks:   
192.168.80.201:/data/exp9
Status               : completed
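Since the task already reports completed, the data has been migrated off the brick; the non-destructive way to finish at this point is to commit the pending remove-brick rather than force it (sketch):
# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 commit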
Remove the brick again, this time with force:
# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

Check the status again:
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158   0          Y       1237
Brick 192.168.80.201:/data/exp2             49154   0          Y       43608
Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks

After removing the brick, rebalance once more:
# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 747d499c-8f20-4514-b3af-29d93ce3a995
# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
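Once testing is finished, the volumes created above can be torn down by unmounting, stopping, and deleting them; for example, for one of them (a sketch, repeat for the rest):
# umount /mnt/g1
# gluster volume stop test-volume
# gluster volume delete test-volume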

