[Experience Sharing] GlusterFS Deployment, Installation, and Cluster Testing

  
  GlusterFS Cluster Deployment
  Suitable for fairly large storage back ends, for example KVM images.
  Official documentation:
  http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/#purpose-of-this-document
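  Before building the trusted pool it helps to make sure the two nodes can reach each other. A minimal preparation sketch (the hostnames and iptables rules are assumptions for a CentOS 6 lab; the brick port range matches the 49152+ ports seen in the volume status output further down):

# Optional: name the nodes in /etc/hosts on both machines (hostnames are hypothetical)
echo "192.168.80.123  gluster1" >> /etc/hosts
echo "192.168.80.201  gluster2" >> /etc/hosts
# Open the management port and a range of brick ports, or simply stop iptables in a test lab
iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT    # brick processes
service iptables save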

Chapter 1  Installation and Deployment (perform on both nodes)
1.1 Update the repos
  Install the Gluster repo and the server package with yum:
#yum install centos-release-gluster
#yum install glusterfs-server
  Also add the EPEL repo:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
1.2 Install with yum
[root@master ~]# yum install -y glusterfs-server   start the service after installation completes
1.3 Start the service
[root@master ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
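  So that glusterd also comes back after a reboot, it can be enabled in the SysV runlevels (a small addition, assuming CentOS 6 style init as used above):

chkconfig glusterd on        # start glusterd automatically at boot
chkconfig --list glusterd    # verify the runlevels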
1.4 Form the trusted storage pool
[root@master ~]# gluster peer probe 192.168.80.123   IP of the other node
peer probe: success.
1.4.1 View the trusted pool
[root@master ~]# gluster peer status   can be run on either server
Number of Peers: 1
Hostname: 192.168.80.123
Uuid: 49e15d0e-d499-427a-87aa-fe573a7fd345
State: Peer in Cluster (Connected)
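  The official quick-start also suggests running one probe in the opposite direction when hostnames are used, so that both peers know each other by name; a sketch (run once on 192.168.80.123):

gluster peer probe 192.168.80.201    # register the other peer from machine 1 as well
gluster peer status                  # both sides should now show State: Peer in Cluster (Connected)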
  
1.5 Create a distributed volume
  192.168.80.123   machine 1
  192.168.80.201   machine 2   (test setup)
  Create one brick directory on each server:
[root@master ~]#  mkdir /data/exp1 -p   machine 1
[root@master ~]#  mkdir /data/exp2 -p   machine 2
  Then run the create command on machine 1:
[root@lanyezi yum.repos.d]# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2
volume create: test-volume: failed: The brick 192.168.80.123:/data/exp1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
[root@lanyezi yum.repos.d]# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2 force
volume create: test-volume: success: please start the volume to access data   the distributed volume has been created
1.5.1 View the distributed volume
[root@lanyezi yum.repos.d]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
1.6 Create a replicated volume (similar to RAID 1)
  Test: create one brick directory on each server.
[root@master ~]#  mkdir /data/exp3 -p   machine 1
[root@master ~]#  mkdir /data/exp4 -p   machine 2
[root@lanyezi yum.repos.d]# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: repl-volume: failed: The brick 192.168.80.123:/data/exp3 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
[root@lanyezi yum.repos.d]# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4 force
volume create: repl-volume: success: please start the volume to access data
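  As the warning above notes, replica 2 volumes are prone to split-brain. For production, replica 3 or an arbiter brick is recommended; a purely hypothetical sketch assuming a third node 192.168.80.250 existed (the paths are made up):

gluster volume create safe-volume replica 3 transport tcp 192.168.80.123:/data/r1 192.168.80.201:/data/r2 192.168.80.250:/data/r3 force
# or keep only metadata on the third brick by declaring it an arbiter:
gluster volume create safe-volume replica 3 arbiter 1 transport tcp 192.168.80.123:/data/r1 192.168.80.201:/data/r2 192.168.80.250:/data/r3 force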
1.6.1 View the replicated volume
[root@lanyezi yum.repos.d]# gluster volume info repl-volume
Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
1.7 Create a striped volume
  Test: create one brick directory on each server.
[root@master ~]#  mkdir /data/exp5 -p   machine 1
[root@master ~]#  mkdir /data/exp6 -p   machine 2
  Create the volume:
[root@lanyezi exp3]# gluster volume create raid0-volume stripe 2 transport tcp 192.168.80.123:/data/exp5/ 192.168.80.201:/data/exp6 force
volume create: raid0-volume: success: please start the volume to access data
1.7.1 View the raid0 (striped) volume
[root@lanyezi exp3]# gluster volume info raid0-volume
Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
1.8 A volume must be started before it can be used
  Check which volumes are started:
[root@lanyezi exp3]# gluster volume status
Volume raid0-volume is not started
Volume repl-volume is not started
Volume test-volume is not started
  Start them:
[root@master exp4]# gluster volume start raid0-volume
volume start: raid0-volume: success
[root@master exp4]# gluster volume start repl-volume
volume start: repl-volume: success
[root@master exp4]# gluster volume start test-volume
volume start: test-volume: success
  Check again:
[root@master exp4]# gluster volume status
Status of volume: raid0-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp5             49152     0          Y       43622
Brick 192.168.80.201:/data/exp6             49152     0          Y       43507
Task Status of Volume raid0-volume
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: repl-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp3             49153     0          Y       43657
Brick 192.168.80.201:/data/exp4             49153     0          Y       43548
Self-heal Daemon on localhost               N/A       N/A        Y       43569
Self-heal Daemon on 192.168.80.123          N/A       N/A        Y       43678
Task Status of Volume repl-volume
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49154     0          Y       43704
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608
Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks
  View via volume info:
[root@master exp4]# gluster volume info
Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
  Test mounting
  Mount on any server (prerequisite: the glusterfs client package must be installed).
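  If mount.glusterfs is not present on the client, a minimal install sketch (package names as shipped in the CentOS gluster repo; glusterfs-fuse provides /sbin/mount.glusterfs):

yum install -y glusterfs glusterfs-fuse    # native FUSE client
mkdir -p /mnt/g1 /mnt/g2 /mnt/g3           # mount points used below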
[root@master exp4]# mount.glusterfs 192.168.80.123:/test-volume /mnt/g1/
[root@master exp4]# mount.glusterfs 192.168.80.123:/repl-volume /mnt/g2
[root@master exp4]# mount.glusterfs 192.168.80.123:/raid0-volume /mnt/g3
[root@master exp4]# df -h|column -t
Filesystem                    Size  Used  Avail  Use%     Mounted   on
/dev/sda3                     18G   5.7G  12G    34%      /
tmpfs                         491M  0     491M   0%       /dev/shm
/dev/sda1                     190M  31M   150M   17%      /boot
192.168.80.123:/test-volume   36G   7.8G  27G    23%      /mnt/g1
192.168.80.123:/repl-volume   18G   5.7G  12G    34%      /mnt/g2
192.168.80.123:/raid0-volume  36G   7.8G  27G    23%      /mnt/g3
1.9 Testing the three volume types
  Distributed volume: each file is written to /data/exp1 or /data/exp2 on one of the servers, chosen by hash.
  Replicated volume: every file is written to /data/exp3 and /data/exp4 on both servers, like RAID 1 (two copies).
  Striped volume: each file's data is split across /data/exp5 and /data/exp6, like RAID 0.
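  A quick way to see these behaviours is to copy a few files in through each mount and then look at the bricks directly on both servers (a sketch; the file names are arbitrary):

for i in 1 2 3 4; do
    cp /etc/hosts /mnt/g1/dist$i.txt    # distributed volume: each file lands on exactly one brick
    cp /etc/hosts /mnt/g2/repl$i.txt    # replicated volume: each file shows up on both bricks
done
ls /data/exp*    # run on each server: dist*.txt are split between the bricks, repl*.txt appear on both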
1.10 Test distributed + replicated
  Run on both servers:
[root@lanyezi exp3]# mkdir /exp1 /exp2
  Create the volume (append force to override the root-partition warning):
[root@lanyezi ~]# gluster volume create hehe-volume replica 2 transport tcp 192.168.80.123:/exp1/ 192.168.80.123:/exp2/ 192.168.80.201:/exp1/ 192.168.80.201:/exp2/  force
volume create: hehe-volume: success: please start the volume to access data
  Start it:
[root@lanyezi ~]# gluster volume start hehe-volume
volume start: hehe-volume: success
  Query it:
[root@lanyezi ~]# gluster volume info hehe-volume
Volume Name: hehe-volume
Type: Distributed-Replicate
Volume ID: 321c8da7-43cd-40ad-a187-277018e43c9e
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp1
Brick2: 192.168.80.123:/exp2
Brick3: 192.168.80.201:/exp1
Brick4: 192.168.80.201:/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
  Create a mount point and mount:
[root@lanyezi ~]# mkdir /mnt/g5
[root@lanyezi ~]# mount.glusterfs 192.168.80.123:/hehe-volume /mnt/g5/
[root@lanyezi ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  2.2G   15G  13% /
tmpfs                 491M     0  491M   0% /dev/shm
/dev/sda1             190M   31M  150M  17% /boot
192.168.80.123:/hehe-volume
                       18G  3.9G   14G  23% /mnt/g5
  Test the result:
[root@lanyezi ~]# man tcp > /mnt/g5/tcp1.txt
[root@lanyezi ~]# man tcp > /mnt/g5/tcp2.txt
[root@lanyezi ~]# man tcp > /mnt/g5/tcpe.txt
[root@lanyezi ~]# man tcp > /mnt/g5/tcp4.txt
  Machine 1:
[root@lanyezi ~]# tree /exp*
/exp1
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt
/exp2
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt
  Machine 2:
[root@master ~]# tree /exp*
/exp1
└── tcp1.txt
/exp2
└── tcp1.txt
0 directories, 2 files
  This distribution is uneven: with replica 2, bricks are paired in the order they are listed, so here the pairs are (123:/exp1, 123:/exp2) and (201:/exp1, 201:/exp2) and both copies of a file end up on the same host. Where files land therefore depends on the brick order used when the volume was created.
  
  Now let's test creating a replicated volume with the bricks listed in a different order (create /exp3 and /exp4 on both servers first):
[root@lanyezi ~]# gluster volume create hehehe-volume replica 2 transport tcp 192.168.80.123:/exp3/ 192.168.80.201:/exp3/ 192.168.80.123:/exp4/ 192.168.80.201:/exp4/ force
volume create: hehehe-volume: success: please start the volume to access data
  Start it:
[root@lanyezi ~]# gluster volume start hehehe-volume
volume start: hehehe-volume: success
  Check the status:
[root@lanyezi ~]# gluster volume info hehehe-volume
Volume Name: hehehe-volume
Type: Distributed-Replicate
Volume ID: 2f24e2cf-bb86-4fe8-a2bc-23f3d07f6f86
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp3
Brick2: 192.168.80.201:/exp3
Brick3: 192.168.80.123:/exp4
Brick4: 192.168.80.201:/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
  Mount it:
[root@lanyezi ~]# mkdir /mnt/gg
[root@lanyezi ~]# mount.glusterfs 192.168.80.123:/hehehe-volume /mnt/gg
  Test writing files:
[root@lanyezi gg]# man tcp  > /mnt/gg/tcp1.txt
[root@lanyezi gg]# man tcp  > /mnt/gg/tcp2.txt
[root@lanyezi gg]# man tcp  > /mnt/gg/tcp3.txt
[root@lanyezi gg]# man tcp  > /mnt/gg/tcp4.txt  
  View on machine 1:
[root@lanyezi gg]# ll /exp3
total 168
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp2.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp3.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp4.txt
[root@lanyezi gg]# ll /exp4
total 56
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp1.txt
  View on machine 2:
[root@master ~]# ll /exp3/
total 168
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp2.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp3.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp4.txt
[root@master ~]# ll /exp4/
total 56
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp1.txt
  Now the distribution is even: the replica pairs are (123:/exp3, 201:/exp3) and (123:/exp4, 201:/exp4), so each file is distributed across the pairs and replicated across the two hosts.
1.11 Test expanding a volume
[root@master ~]# mkdir /data/exp9
[root@master ~]# gluster volume add-brick test-volume 192.168.80.201:/data/exp9/ force    add a new brick to the existing volume
volume add-brick: success
  Check the status:
[root@master ~]# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Brick3: 192.168.80.201:/data/exp9   the newly added brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
  Rebalance the distributed volume:
[root@master g1]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: aa05486b-11df-4bac-9ac7-2237a8c12ad6
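  As the message says, progress can be followed with the rebalance status sub-command (output varies with the amount of data):

gluster volume rebalance test-volume status    # shows files scanned/rebalanced and the status per node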
1.11.1 Remove a brick (test)
  Note: removing a brick can result in data loss.
[root@lanyezi gg]# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 start
volume remove-brick start: success
ID: 4f16428a-7e9f-4b7b-bb07-2917a2f14323
  Check again:
[root@master g1]# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158     0          Y       1237
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608
Brick 192.168.80.201:/data/exp9             49159     0          Y       44717
Task Status of Volume test-volume
------------------------------------------------------------------------------
Task                 : Remove brick      
ID                   : 4f16428a-7e9f-4b7b-bb07-2917a2f14323
Removed bricks:   
192.168.80.201:/data/exp9
Status               : completed
  Remove it again (with force) to finish:
[root@master g1]# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
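  For reference, once remove-brick ... start reports completed, the data has been migrated off the brick and it can be detached cleanly with commit; force (as used above) skips the migration step and can drop data. A sketch of the gentler sequence:

gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 status    # wait until Status shows completed
gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 commit    # detach the brick after migration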
  Check the status again:
[root@master g1]# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158     0          Y       1237
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608
Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks
  Rebalance again after removing the brick:
[root@lanyezi gg]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 747d499c-8f20-4514-b3af-29d93ce3a995
[root@lanyezi gg]# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
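  Finally, a test volume can be torn down once it is no longer needed (a sketch; the brick directories and any data left in them stay on disk and have to be removed by hand):

umount /mnt/g3                         # unmount on every client first
gluster volume stop raid0-volume       # answer y at the prompt
gluster volume delete raid0-volume     # removes only the volume definition
gluster peer detach 192.168.80.201     # optional: dissolve the trusted pool when done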


