Linux and Cloud Computing, Stage 2, Chapter 5: Storage Server Setup - Distributed Storage with GlusterFS Basics
1 Installing GlusterFS
Install GlusterFS to configure a storage cluster.
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
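If the sdb1 brick partition does not exist yet, it can be prepared roughly as follows (a sketch, assuming an empty second disk /dev/sdb; adjust device names to your environment before running anything):

```
# Create one partition on the spare disk, format it as XFS
# (a filesystem commonly used for Gluster bricks), and mount it.
parted --script /dev/sdb mklabel gpt mkpart primary 0% 100%
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /glusterfs
mount /dev/sdb1 /glusterfs
# Make the mount persistent across reboots.
echo '/dev/sdb1 /glusterfs xfs defaults 0 0' >> /etc/fstab
```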
Install the GlusterFS server on all nodes in the cluster.
# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo
# enable EPEL, too
# yum --enablerepo=epel -y install glusterfs-server
# systemctl start glusterd
# systemctl enable glusterd
If firewalld is running, allow the GlusterFS service on all nodes.
# firewall-cmd --add-service=glusterfs --permanent
success
# firewall-cmd --reload
success
If clients mount GlusterFS volumes with the GlusterFS Native Client, no further setup is needed.
GlusterFS also supports NFS (v3); to let clients mount volumes over NFS, additionally configure as follows.
# yum -y install rpcbind
# systemctl start rpcbind
# systemctl enable rpcbind
# systemctl restart glusterd
Installation and basic setup of GlusterFS are now complete. Refer to the following sections to configure clustering.
2 Configuring a Distributed Volume
Configure storage clustering. This example creates a distributed volume with 2 servers, but it is possible to use 3 or more.
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.51 | 10.0.0.52|                      |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
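In a distributed volume, each file is stored whole on exactly one brick, chosen by hashing the file name; capacity adds up, but there is no redundancy. A toy sketch of hash-based placement (purely illustrative, not Gluster's actual DHT; the brick names are the ones from this example):

```shell
# Pick a brick for a file name by hashing it: same name, same brick.
brick_for() {
    sum=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    if [ $((sum % 2)) -eq 0 ]; then
        echo "node01:/glusterfs/distributed"
    else
        echo "node02:/glusterfs/distributed"
    fi
}

# Each file lands on exactly one of the two bricks.
for f in file1 file2 file3 file4; do
    echo "$f -> $(brick_for "$f")"
done
```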
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
Install the GlusterFS server on all nodes, as described in section 1.
Create a Directory for GlusterFS Volume on all Nodes.
# mkdir /glusterfs/distributed
Configure clustering as follows on one of the nodes (any node is fine).
# probe the node
# gluster peer probe node02
peer probe: success.
# show status
# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
# gluster volume create vol_distributed transport tcp \
node01:/glusterfs/distributed \
node02:/glusterfs/distributed
volume create: vol_distributed: success: please start the volume to access data
# start volume
# gluster volume start vol_distributed
volume start: vol_distributed: success
# show volume info
# gluster volume info
Volume Name: vol_distributed
Type: Distribute
Volume ID: 6677caa9-9aab-4c1a-83e5-2921ee78150d
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.readdir-ahead: on
To mount the GlusterFS volume on clients, refer to section 7 (Client Settings).
3 Configuring a Replicated Volume
Configure storage clustering. This example creates a replicated volume with 2 servers, but it is possible to use 3 or more.
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.51 | 10.0.0.52|                      |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
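With replica 2, every write goes to both bricks, so each file exists in full on both servers and usable capacity is that of a single brick. A toy sketch of that behavior (local directories under /tmp stand in for the bricks; they are assumptions, not real Gluster bricks):

```shell
# Two local directories play the role of the two replica bricks.
mkdir -p /tmp/brick1 /tmp/brick2

replicated_write() {
    # A replicated volume writes every file to ALL bricks in the set.
    printf '%s\n' "$2" > "/tmp/brick1/$1"
    printf '%s\n' "$2" > "/tmp/brick2/$1"
}

replicated_write hello.txt "hello gluster"
# The copies are identical, so losing one brick loses no data.
cmp /tmp/brick1/hello.txt /tmp/brick2/hello.txt && echo "replicas match"
```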
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
Install the GlusterFS server on all nodes, as described in section 1.
Create a Directory for GlusterFS Volume on all Nodes.
# mkdir /glusterfs/replica
Configure clustering as follows on one of the nodes (any node is fine).
# probe the node
# gluster peer probe node02
peer probe: success.
# show status
# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
# gluster volume create vol_replica replica 2 transport tcp \
node01:/glusterfs/replica \
node02:/glusterfs/replica
volume create: vol_replica: success: please start the volume to access data
# start volume
# gluster volume start vol_replica
volume start: vol_replica: success
# show volume info
# gluster volume info
Volume Name: vol_replica
Type: Replicate
Volume ID: 0d5d5ef7-bdfa-416c-8046-205c4d9766e6
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/replica
Brick2: node02:/glusterfs/replica
Options Reconfigured:
performance.readdir-ahead: on
To mount the GlusterFS volume on clients, refer to section 7 (Client Settings).
4 Configuring a Striped Volume
Configure storage clustering. This example creates a striped volume with 2 servers, but it is possible to use 3 or more.
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.51 | 10.0.0.52|                      |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
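With stripe 2, each file is cut into fixed-size chunks spread across both bricks, which helps with very large files but means losing one brick corrupts every striped file. A toy sketch of chunking (local directories stand in for bricks; the 1 MB chunk size is an arbitrary choice for illustration, not Gluster's stripe size):

```shell
# Two local directories play the role of the two stripe bricks.
mkdir -p /tmp/sbrick1 /tmp/sbrick2
# Create a 4 MB sample file and cut it into 1 MB chunks.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=4 2>/dev/null
split -b 1M /tmp/bigfile /tmp/chunk.
# Deal the chunks round-robin across the two bricks.
i=0
for c in /tmp/chunk.*; do
    if [ $((i % 2)) -eq 0 ]; then mv "$c" /tmp/sbrick1/; else mv "$c" /tmp/sbrick2/; fi
    i=$((i + 1))
done
# Each brick now holds half of the file's chunks.
ls /tmp/sbrick1 /tmp/sbrick2
```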
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
Install the GlusterFS server on all nodes, as described in section 1.
Create a Directory for GlusterFS Volume on all Nodes.
# mkdir /glusterfs/striped
Configure clustering as follows on one of the nodes (any node is fine).
# probe the node
# gluster peer probe node02
peer probe: success.
# show status
# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
# create volume
# gluster volume create vol_striped stripe 2 transport tcp \
node01:/glusterfs/striped \
node02:/glusterfs/striped
volume create: vol_striped: success: please start the volume to access data
# start volume
# gluster volume start vol_striped
volume start: vol_striped: success
# show volume info
# gluster volume info
Volume Name: vol_striped
Type: Stripe
Volume ID: b6f6b090-3856-418c-aed3-bc430db91dc6
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/striped
Brick2: node02:/glusterfs/striped
Options Reconfigured:
performance.readdir-ahead: on
To mount the GlusterFS volume on clients, refer to section 7 (Client Settings).
5 Configuring a Distributed + Replicated Volume
Configure storage clustering. This example creates a distributed + replicated volume with 4 servers.
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.51 | 10.0.0.52|                      |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.53 | 10.0.0.54|                      |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
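Note that brick order matters here: with replica 2, each consecutive pair of bricks in the create command forms one mirror set, and files are then distributed across those sets. The pairing for the command below can be sketched as follows (pure string handling, no Gluster calls):

```shell
# With "replica 2", bricks are taken two at a time, in the order given:
# node01/node02 form one mirror pair, node03/node04 the other.
REPLICA=2
set -- node01:/glusterfs/dist-replica node02:/glusterfs/dist-replica \
       node03:/glusterfs/dist-replica node04:/glusterfs/dist-replica
n=1
while [ $# -ge $REPLICA ]; do
    echo "replica set $n: $1 <-> $2"
    shift $REPLICA
    n=$((n + 1))
done
```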
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
Install the GlusterFS server on all nodes, as described in section 1.
Create a Directory for GlusterFS Volume on all Nodes.
# mkdir /glusterfs/dist-replica
Configure clustering as follows on one of the nodes (any node is fine).
# probe the node
# gluster peer probe node02
peer probe: success.
# gluster peer probe node03
peer probe: success.
# gluster peer probe node04
peer probe: success.
# show status
# gluster peer status
Number of Peers: 3
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a
State: Peer in Cluster (Connected)
Hostname: node04
Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638
State: Peer in Cluster (Connected)
# create volume
# gluster volume create vol_dist-replica replica 2 transport tcp \
node01:/glusterfs/dist-replica \
node02:/glusterfs/dist-replica \
node03:/glusterfs/dist-replica \
node04:/glusterfs/dist-replica
volume create: vol_dist-replica: success: please start the volume to access data
# start volume
# gluster volume start vol_dist-replica
volume start: vol_dist-replica: success
# show volume info
# gluster volume info
Volume Name: vol_dist-replica
Type: Distributed-Replicate
Volume ID: 784d2953-6599-4102-afc2-9069932894cc
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/dist-replica
Brick2: node02:/glusterfs/dist-replica
Brick3: node03:/glusterfs/dist-replica
Brick4: node04:/glusterfs/dist-replica
Options Reconfigured:
performance.readdir-ahead: on
To mount the GlusterFS volume on clients, refer to section 7 (Client Settings).
6 Configuring a Striped + Replicated Volume
Configure storage clustering. This example creates a striped + replicated volume with 4 servers.
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.51 | 10.0.0.52|                      |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
|                      |10.0.0.53 | 10.0.0.54|                      |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
It is recommended to place GlusterFS volumes on partitions separate from the / partition.
In this example, sdb1 is mounted at /glusterfs on all nodes for the GlusterFS configuration.
Install the GlusterFS server on all nodes, as described in section 1.
Create a Directory for GlusterFS Volume on all Nodes.
# mkdir /glusterfs/strip-replica
Configure clustering as follows on one of the nodes (any node is fine).
# probe the node
# gluster peer probe node02
peer probe: success.
# gluster peer probe node03
peer probe: success.
# gluster peer probe node04
peer probe: success.
# show status
# gluster peer status
Number of Peers: 3
Hostname: node02
Uuid: 2ca22769-28a1-4204-9957-886579db2231
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a
State: Peer in Cluster (Connected)
Hostname: node04
Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638
State: Peer in Cluster (Connected)
# create volume
# gluster volume create vol_strip-replica stripe 2 replica 2 transport tcp \
node01:/glusterfs/strip-replica \
node02:/glusterfs/strip-replica \
node03:/glusterfs/strip-replica \
node04:/glusterfs/strip-replica
volume create: vol_strip-replica: success: please start the volume to access data
# start volume
# gluster volume start vol_strip-replica
volume start: vol_strip-replica: success
# show volume info
# gluster volume info
Volume Name: vol_strip-replica
Type: Striped-Replicate
Volume ID: ec36b0d3-8467-47f6-aa83-1020555f58b6
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/strip-replica
Brick2: node02:/glusterfs/strip-replica
Brick3: node03:/glusterfs/strip-replica
Brick4: node04:/glusterfs/strip-replica
Options Reconfigured:
performance.readdir-ahead: on
To mount the GlusterFS volume on clients, refer to section 7 (Client Settings).
7 Client Settings
These are the settings for GlusterFS clients to mount GlusterFS volumes.
To mount with the GlusterFS Native Client, configure as follows.
# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo
# yum -y install glusterfs glusterfs-fuse
# mount vol_distributed volume on /mnt
# mount -t glusterfs node01.srv.world:/vol_distributed /mnt
# df -hT
Filesystem                         Type            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root            xfs              27G  1.1G   26G   5% /
devtmpfs                           devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs                              tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs                              tmpfs           2.0G  8.3M  2.0G   1% /run
tmpfs                              tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                          xfs             497M  151M  347M  31% /boot
node01.srv.world:/vol_distributed  fuse.glusterfs   40G   65M   40G   1% /mnt
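To make the native-client mount persistent across reboots, an /etc/fstab entry like the following can be used (a sketch; the _netdev option delays mounting until the network is up):

```
node01.srv.world:/vol_distributed /mnt glusterfs defaults,_netdev 0 0
```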
NFS (v3) is also supported, so it is possible to mount over NFS. Configure the GlusterFS servers for NFS first, as shown in section 1.
# yum -y install nfs-utils
# systemctl start rpcbind rpc-statd
# systemctl enable rpcbind rpc-statd
# mount -t nfs -o mountvers=3 node01.srv.world:/vol_distributed /mnt
# df -hT
Filesystem                         Type            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root            xfs              27G  1.1G   26G   5% /
devtmpfs                           devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs                              tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs                              tmpfs           2.0G  8.3M  2.0G   1% /run
tmpfs                              tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                          xfs             497M  151M  347M  31% /boot
node01.srv.world:/vol_distributed  nfs              40G   64M   40G   1% /mnt