Posted by 221rrere on 2016-06-27 09:53:29

GlusterFS Installation and Configuration


Operating system: CentOS 6.4
GlusterFS: 3.4.2
Test tools: atop, iperf, iozone, fio, postmark

Configuration plan

Download and install

1. Download URL: http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/

2. RPM packages to download
glusterfs-3.4.2-1.el6.x86_64.rpm
glusterfs-api-3.4.2-1.el6.x86_64.rpm
glusterfs-cli-3.4.2-1.el6.x86_64.rpm
glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
glusterfs-libs-3.4.2-1.el6.x86_64.rpm
glusterfs-server-3.4.2-1.el6.x86_64.rpm
Download the packages:
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-3.4.2-1.el6.x86_64.rpm
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-api-3.4.2-1.el6.x86_64.rpm
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-cli-3.4.2-1.el6.x86_64.rpm
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-libs-3.4.2-1.el6.x86_64.rpm
wget http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64/glusterfs-server-3.4.2-1.el6.x86_64.rpm
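The six wget calls above differ only in the sub-package name, so they can be collapsed into a loop. A minimal sketch (the base URL is the one from the download step; the loop only prints the URLs, which can then be piped into `xargs -n1 wget`):

```shell
# Build the download URL for each GlusterFS 3.4.2 sub-package.
base=http://bits.gluster.org/pub/gluster/glusterfs/3.4.2/x86_64
for sub in "" -api -cli -fuse -libs -server; do
    echo "$base/glusterfs$sub-3.4.2-1.el6.x86_64.rpm"
done
# To fetch them all: re-run the loop and pipe it into `xargs -n1 wget`.
```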




3. Dependencies
rpcbind
libaio
lvm2-devel
Tip: in general, install the GlusterFS RPMs first and add dependencies only if errors are reported.
# yum install rpcbind libaio lvm2-devel -y




4. Install the software
Server packages:
glusterfs-3.4.2-1.el6.x86_64.rpm
glusterfs-cli-3.4.2-1.el6.x86_64.rpm
glusterfs-libs-3.4.2-1.el6.x86_64.rpm
glusterfs-api-3.4.2-1.el6.x86_64.rpm
glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
glusterfs-server-3.4.2-1.el6.x86_64.rpm
Note: install on all three servers in the same way; only one server's installation is shown here.
# rpm -ivh *.rpm
Preparing...                ###########################################
   1:glusterfs-libs         ########################################### [ 17%]
   2:glusterfs            ########################################### [ 33%]
   3:glusterfs-cli          ########################################### [ 50%]
   4:glusterfs-fuse         ########################################### [ 67%]
   5:glusterfs-server       ########################################### [ 83%]
error reading information on service glusterfsd: No such file or directory
   6:glusterfs-api          ###########################################




Client packages:
glusterfs-libs-3.4.2-1.el6.x86_64.rpm
glusterfs-3.4.2-1.el6.x86_64.rpm
glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
Note: used on the single client in this test.
# yum install rpcbind libaio lvm2-devel -y
# rpm -ivh glusterfs-libs-3.4.2-1.el6.x86_64.rpm
Preparing...                ###########################################
   1:glusterfs-libs         ###########################################
# rpm -ivh glusterfs-3.4.2-1.el6.x86_64.rpm
Preparing...                ###########################################
   1:glusterfs            ###########################################
# rpm -ivh glusterfs-fuse-3.4.2-1.el6.x86_64.rpm
Preparing...                ###########################################
   1:glusterfs-fuse         ###########################################




5. Format the disks, mount them, and enable mounting at boot
1) Create the partitions; the same steps are run on every server. Here /dev/sdb1 and /dev/sdc1 are created.
Note: the example below is repeated on every Gluster server, creating /dev/sdb1 and /dev/sdc1 on each.
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305
Command (m for help): wq
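Since the same partitioning has to be repeated on every server, the interactive fdisk dialog above can be replaced by feeding the answers (n, p, 1, two defaults, wq) to fdisk on stdin. A sketch that only generates the answer file, since running fdisk against a real disk is destructive:

```shell
# Store the fdisk answers from the dialog above in a file. The two blank
# lines accept the default first and last cylinder.
cat > fdisk-answers.txt <<'EOF'
n
p
1


wq
EOF
# On a server (DESTRUCTIVE, check the device name first):
#   fdisk /dev/sdb < fdisk-answers.txt
```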




2) Format the disks as ext4; the same steps are run on every server. Here /dev/sdb1 and /dev/sdc1 are formatted.
Note: the example below is repeated on every Gluster server for /dev/sdb1 and /dev/sdc1.
# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2620595 blocks
131029 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
      32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
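The same mkfs step applies to both brick partitions, so it can be looped. A sketch with `echo` left in front so it prints the commands instead of wiping disks; drop the echo on the real servers after double-checking the device names:

```shell
# Print the mkfs command for each brick partition (remove `echo` to run
# them for real; mkfs.ext4 destroys any existing data on the partition).
for dev in /dev/sdb1 /dev/sdc1; do
    echo mkfs.ext4 "$dev"
done
```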




3) Mount the bricks and add them to fstab
Mount:
Note: run the commands below once per storage node.
# mkdir -p /brick1 /brick2
# mount /dev/sdb1 /brick1/
# mount /dev/sdc1 /brick2/
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        17G  1.7G   15G  11% /
tmpfs           935M     0  935M   0% /dev/shm
/dev/sda1       194M   34M  151M  19% /boot
/dev/sdb1       9.9G  151M  9.2G   2% /brick1
/dev/sdc1       9.9G  151M  9.2G   2% /brick2




Mount automatically at boot:
Note: run the commands below once per storage node.
# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Dec 22 11:46:04 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=5b700fc3-d716-4f56-be1b-22b16ee2921c /                     ext4    defaults      1 1
UUID=38903668-682e-4a31-a131-fe65e3213d10 /boot                   ext4    defaults      1 2
UUID=5591bf6f-0509-4153-9cfc-10cd7eb3b4ee swap                  swap    defaults      0 0
tmpfs                   /dev/shm                tmpfs   defaults      0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                  sysfs   defaults      0 0
proc                  /proc                   proc    defaults      0 0
/dev/sdb1               /brick1/                ext4    defaults      0 0
/dev/sdc1               /brick2/                ext4    defaults      0 0
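Before relying on the new fstab entries at boot, it is worth checking that both brick lines are really there. A sketch that extracts the ext4 entries whose mount point starts with /brick; /tmp/fstab.test stands in for /etc/fstab here so the sketch is self-contained:

```shell
# Write a stand-in fstab containing only the two new brick lines.
cat > /tmp/fstab.test <<'EOF'
/dev/sdb1               /brick1/                ext4    defaults        0 0
/dev/sdc1               /brick2/                ext4    defaults        0 0
EOF
# Print the mount point of every ext4 entry under /brick; expect two lines.
awk '$3 == "ext4" && $2 ~ /^\/brick/ {print $2}' /tmp/fstab.test
```

On a real node, run the awk check against /etc/fstab and then `mount -a` to confirm the entries mount cleanly before a reboot.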




4) Start the gluster service at boot
Note: run the commands below once per storage node.
# chkconfig glusterd on
# chkconfig | grep glusterd
glusterd      0:off   1:off   2:on    3:on    4:on    5:on    6:off




5) Put the cluster node names in /etc/hosts so hostnames can be used instead of IPs
Edit hosts:
Note: run the commands below once per storage node.
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.210 server1
192.168.1.220 server2
192.168.1.230 server3




Ping test:
Note: run the commands below once per storage node.
# ping server2
PING server2 (192.168.1.220) 56(84) bytes of data.
64 bytes from server2 (192.168.1.220): icmp_seq=1 ttl=64 time=1.22 ms
64 bytes from server2 (192.168.1.220): icmp_seq=2 ttl=64 time=1.57 ms




Gluster deployment

1. Use Qinglin-A (192.168.1.210) as the current cluster node
Build the cluster with gluster peer probe:
gluster peer probe server2
gluster peer probe server3
# gluster peer probe server2
peer probe: success
# gluster peer probe server3
peer probe: success




Verify the two nodes that were just added:
# gluster peer status
Number of Peers: 2

Hostname: server2
Port: 24007
Uuid: b35ae918-1c7d-4ab2-9c1f-0c909309142e
State: Peer in Cluster (Connected)

Hostname: server3
Port: 24007
Uuid: b9d523ff-2637-40bd-b544-ae7852ad8834
State: Peer in Cluster (Connected)




2. Create a volume
Decide the volume type
Decide the brick list for the volume
Decide the transport type (TCP/RDMA)
Create the volume with gluster volume create; "testvol" below is a name chosen for this test.
# gluster volume create testvol server1:/brick1/b1
volume create: testvol: success: please start the volume to access data
# gluster volume start testvol
volume start: testvol: success
# gluster volume info

Volume Name: testvol
Type: Distribute
Volume ID: ea92d7e0-637e-4562-b528-56b508db0f1d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server1:/brick1/b1
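The volume above is a plain single-brick distributed volume. For redundancy across the three servers, a replicated volume could be created instead. A hypothetical sketch: the name "repvol" and brick path /brick1/r1 are made up here, and the command is only printed, not executed:

```shell
# Assemble one brick per server and print the volume-create command for a
# 3-way replicated volume (replica count must equal the number of bricks).
bricks=""
for s in server1 server2 server3; do
    bricks="$bricks $s:/brick1/r1"
done
echo "gluster volume create repvol replica 3$bricks"
# On the cluster, run the printed command and then:
#   gluster volume start repvol
```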




3. Mount on the client
Edit /etc/hosts, or access the servers by IP:
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.210 server1
192.168.1.220 server2
192.168.1.230 server3




Then mount the volume:
# mount -t glusterfs server1:/testvol /mnt/
# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda2             19G  1.7G   16G  10% /
tmpfs                495M     0  495M   0% /dev/shm
/dev/sda1            194M   29M  155M  16% /boot
192.168.1.210:/data   17G  1.7G   15G  11% /data
server1:/testvol     9.9G  151M  9.2G   2% /mnt
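To make the client mount survive a reboot, an /etc/fstab entry can be added on the client. A sketch; `_netdev` defers the mount until the network is up, which a GlusterFS mount needs:

```
server1:/testvol        /mnt                    glusterfs  defaults,_netdev  0 0
```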




4. Test
Write a file on the client, then look for it on the server.
Client:
# touch file
# mkdir dir
# ls
dir  file
# pwd
/mnt




Server QINGLIN-A:
# cd /brick1/b1/
# ls
dir  file


