[Experience Share] GlusterFS Installation and Configuration

Posted on 2019-2-1 10:45:56
  GlusterFS is an open-source distributed file system, acquired by Red Hat in 2011. It is highly scalable, performant, and available, and scales out elastically; because its design uses no metadata server, GlusterFS has no single point of failure. For details, see the official site: www.gluster.org.
  Deployment environment:

  OS: CentOS 6.5 x64
  Servers:
  c1: 192.168.242.132
  c2: 192.168.242.133
  c3: 192.168.242.134
  c4: 192.168.242.135
  /etc/hosts entries (on every node):
  192.168.242.132 c1
  192.168.242.133 c2
  192.168.242.134 c3
  192.168.242.135 c4
  Steps:
  Run the following on c1/c2/c3/c4:
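The hosts entries above follow a simple pattern (node names c1-c4 mapped to 192.168.242.132-135, taken directly from this tutorial). A small sketch that generates the block rather than typing it four times:

```shell
# Generate the /etc/hosts lines used by this tutorial. Appending them
# to /etc/hosts requires root, so this only builds and prints the text.
hosts_block=""
i=132
for h in c1 c2 c3 c4; do
  hosts_block="${hosts_block}192.168.242.$i $h
"
  i=$((i + 1))
done
printf '%s' "$hosts_block"
# As root on each node:  printf '%s' "$hosts_block" >> /etc/hosts
```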
  [root@c1 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  [root@c1 yum.repos.d]# yum install -y glusterfs glusterfs-server glusterfs-fuse
  [root@c1 yum.repos.d]# /etc/init.d/glusterd start
  Starting glusterd: [ OK ]
  [root@c1 yum.repos.d]# chkconfig glusterd on
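The repo download, package install, and service setup must be repeated on all four nodes. A dry-run sketch that prints the per-node commands instead of executing them (remove the leading echo to run them for real; password-less root SSH to each node is an assumption this tutorial does not cover):

```shell
# Dry run: print the setup commands for each node instead of executing
# them. Drop the "echo" to actually run them over SSH.
nodes="c1 c2 c3 c4"
for node in $nodes; do
  echo "ssh root@$node \"yum install -y glusterfs glusterfs-server glusterfs-fuse\""
  echo "ssh root@$node \"/etc/init.d/glusterd start && chkconfig glusterd on\""
done
```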
  Configure the cluster on c1:
  [root@c1 ~]# gluster peer probe c1
  peer probe: success. Probe on localhost not needed
  [root@c1 ~]# gluster peer probe c2
  peer probe: success.
  [root@c1 ~]# gluster peer probe c3
  peer probe: success.
  [root@c1 ~]# gluster peer probe c4
  peer probe: success.
  If c1 shows up in the peer list as an IP address instead of its hostname, it may cause communication problems later on.
  This can be fixed by detaching the IP entry and re-probing it by hostname:
  [root@c3 ~]# gluster peer status
  Number of Peers: 3
  Hostname: 192.168.242.132
  Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be
  State: Peer in Cluster (Connected)
  Hostname: c2
  Uuid: 9a722f50-911e-4181-823d-572296640486
  State: Peer in Cluster (Connected)
  Hostname: c4
  Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd
  State: Peer in Cluster (Connected)
  [root@c3 ~]# gluster peer detach 192.168.242.132
  peer detach: success
  [root@c3 ~]# gluster peer probe c1
  peer probe: success.
  [root@c3 ~]# gluster peer status
  Number of Peers: 3
  Hostname: c2
  Uuid: 9a722f50-911e-4181-823d-572296640486
  State: Peer in Cluster (Connected)
  Hostname: c4
  Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd
  State: Peer in Cluster (Connected)
  Hostname: c1
  Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be
  State: Peer in Cluster (Connected)
  Create the cluster volumes on c1:
  [root@c1 ~]# gluster volume create datavolume1 replica 2 transport tcp c1:/usr/local/share/datavolume1 c2:/usr/local/share/datavolume1 c3:/usr/local/share/datavolume1 c4:/usr/local/share/datavolume1 force
  volume create: datavolume1: success: please start the volume to access data
  [root@c1 ~]# gluster volume create datavolume2 replica 2 transport tcp c1:/usr/local/share/datavolume2 c2:/usr/local/share/datavolume2 c3:/usr/local/share/datavolume2 c4:/usr/local/share/datavolume2 force
  volume create: datavolume2: success: please start the volume to access data
  [root@c1 ~]# gluster volume create datavolume3 replica 2 transport tcp c1:/usr/local/share/datavolume3 c2:/usr/local/share/datavolume3 c3:/usr/local/share/datavolume3 c4:/usr/local/share/datavolume3 force
  volume create: datavolume3: success: please start the volume to access data
  [root@c1 ~]# gluster volume start datavolume1
  volume start: datavolume1: success
  [root@c1 ~]# gluster volume start datavolume2
  volume start: datavolume2: success
  [root@c1 ~]# gluster volume start datavolume3
  volume start: datavolume3: success
  [root@c1 ~]# gluster volume info
  Volume Name: datavolume1
  Type: Distributed-Replicate
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: c1:/usr/local/share/datavolume1
  Brick2: c2:/usr/local/share/datavolume1
  Brick3: c3:/usr/local/share/datavolume1
  Brick4: c4:/usr/local/share/datavolume1
  Volume Name: datavolume2
  Type: Distributed-Replicate
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: c1:/usr/local/share/datavolume2
  Brick2: c2:/usr/local/share/datavolume2
  Brick3: c3:/usr/local/share/datavolume2
  Brick4: c4:/usr/local/share/datavolume2
  Volume Name: datavolume3
  Type: Distributed-Replicate
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: c1:/usr/local/share/datavolume3
  Brick2: c2:/usr/local/share/datavolume3
  Brick3: c3:/usr/local/share/datavolume3
  Brick4: c4:/usr/local/share/datavolume3
  Client setup
  CentOS 6.5 x64, with the same hosts entries added:
  [root@c5 ~]#wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  [root@c5 ~]#yum install -y glusterfs glusterfs-fuse
  [root@c5 ~]# mkdir -p /mnt/{datavolume1,datavolume2,datavolume3}
  [root@c5 ~]# mount -t glusterfs -o ro c1:datavolume1 /mnt/datavolume1/
  [root@c5 ~]# mount -t glusterfs -o ro c1:datavolume2 /mnt/datavolume2/
  [root@c5 ~]# mount -t glusterfs -o ro c1:datavolume3 /mnt/datavolume3/
  [root@c5 ~]# df -h

  Filesystem                    Size  Used Avail Use% Mounted on
  /dev/mapper/VolGroup-lv_root   38G  840M   36G   3% /
  tmpfs 242M 0 242M 0% /dev/shm
  /dev/sda1 485M 32M 429M 7% /boot
  c1:datavolume1 57G 2.4G 52G 5% /mnt/datavolume1
  c1:datavolume2 57G 2.4G 52G 5% /mnt/datavolume2
  c1:datavolume3 57G 2.4G 52G 5% /mnt/datavolume3
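The mounts above do not survive a reboot. A sketch of the matching /etc/fstab entries, written to a scratch file here so they can be reviewed before appending to /etc/fstab as root (`_netdev` defers the mount until the network is up):

```shell
# Build fstab entries for the three tutorial volumes.
FSTAB=/tmp/fstab.gluster
: > "$FSTAB"
for v in datavolume1 datavolume2 datavolume3; do
  printf 'c1:/%s /mnt/%s glusterfs defaults,_netdev 0 0\n' "$v" "$v" >> "$FSTAB"
done
cat "$FSTAB"
# Once reviewed, as root:  cat "$FSTAB" >> /etc/fstab
```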
  Client test:
  [root@c5 ~]# umount /mnt/datavolume1/
  [root@c5 ~]# mount -t glusterfs c1:datavolume1 /mnt/datavolume1/
  [root@c5 ~]# touch /mnt/datavolume1/test.txt
  [root@c5 ~]# ls /mnt/datavolume1/test.txt
  /mnt/datavolume1/test.txt
  [root@c2 ~]# ls -al /usr/local/share/datavolume1/
  total 16
  drwxr-xr-x. 3 root root 4096 May 15 03:50 .
  drwxr-xr-x. 8 root root 4096 May 15 02:28 ..
  drw-------. 6 root root 4096 May 15 03:50 .glusterfs
  -rw-r--r--. 2 root root 0 May 20 2014 test.txt
  [root@c1 ~]# ls -al /usr/local/share/datavolume1/
  total 16
  drwxr-xr-x. 3 root root 4096 May 15 03:50 .
  drwxr-xr-x. 8 root root 4096 May 15 02:28 ..
  drw-------. 6 root root 4096 May 15 03:50 .glusterfs
  -rw-r--r--. 2 root root 0 May 20 2014 test.txt
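Note that test.txt lands on c1 and c2 but not on c3 or c4: with `replica 2`, bricks pair up in the order they were given to `volume create`, so c1/c2 form one replica set and c3/c4 the other, and each file is distributed to exactly one set. A sketch of that grouping for the tutorial's brick list:

```shell
# Group the bricks into replica sets of size 2, in command-line order,
# mirroring how "replica 2" paired the four bricks above.
replica=2
set -- c1 c2 c3 c4
pairs=""
while [ "$#" -ge "$replica" ]; do
  pairs="$pairs($1,$2) "
  shift "$replica"
done
echo "replica sets: $pairs"
```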
  Deleting a GlusterFS volume:
  gluster volume stop datavolume1
  gluster volume delete datavolume1
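Deleting a volume does not remove the data in the brick directories, and GlusterFS refuses to reuse a brick that still carries old volume metadata. A cleanup sketch to run on each node before reusing a brick path (the BRICK path matches this tutorial; errors from attributes that are already absent are ignored):

```shell
# Clean a brick directory so it can be reused in a new volume.
BRICK=${BRICK:-/usr/local/share/datavolume1}
# Strip the volume-id / gfid extended attributes if present.
setfattr -x trusted.glusterfs.volume-id "$BRICK" 2>/dev/null || true
setfattr -x trusted.gfid "$BRICK" 2>/dev/null || true
# Remove the internal metadata tree the brick left behind.
rm -rf "$BRICK/.glusterfs"
```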
  Detaching a GlusterFS node:

  gluster peer detach <hostname>
  Access control (restrict which client networks may mount the volume):
  gluster volume set datavolume1 auth.allow 192.168.242.*,192.168.241.*
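`auth.allow` takes comma-separated glob patterns matched against the client's IP. A pure-shell illustration of which clients the patterns above would admit (a stand-in using case globs, not Gluster's actual matcher):

```shell
# Illustrate matching against the patterns 192.168.242.*,192.168.241.*
allowed() {
  case "$1" in
    192.168.242.*|192.168.241.*) echo allow ;;
    *) echo deny ;;
  esac
}
allowed 192.168.242.136   # client inside an allowed range
allowed 10.0.0.5          # client outside both ranges
```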
  Adding GlusterFS nodes:
  gluster peer probe c6
  gluster peer probe c7
  gluster volume add-brick datavolume1 c6:/usr/local/share/datavolume1 c7:/usr/local/share/datavolume1
  Migrating GlusterFS volume data (removing bricks):
  gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 start
  gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 status
  gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit
  Rebalancing data:
  gluster volume rebalance datavolume1 start
  gluster volume rebalance datavolume1 status
  gluster volume rebalance datavolume1 stop
  Repairing GlusterFS volume data (for example, when c1 has gone down):
  gluster volume replace-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit force
  gluster volume heal datavolume1 full
  Source: http://www.btschina.com/home/index.php/glusterfs-an-zhuang-yu-pei-zhi.html

