1. Environment:
Cloudstack-Management:10.12.0.10
Gluster-Management:10.12.0.11
Node:10.12.0.12
Node:10.12.0.13
Node:10.12.0.14
Node:10.12.0.15
Node:10.12.0.16
Node:10.12.0.17
Node:10.12.0.18
Node:10.12.0.19
Node:10.12.0.20
Node:10.12.0.21
Node:10.12.0.22
Node:10.12.0.23
Node:10.12.0.24
Node:10.12.0.25
Node:10.12.0.26
2. Installation:
First set up the GlusterFS storage environment (on 10.12.0.11):
Building the GlusterFS distributed storage:
wget -P /etc/yum.repos.d/ http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
yum -y install epel-release.noarch
yum clean all
yum makecache
yum -y install glusterfs glusterfs-server
Enable the service at boot and start it:
chkconfig glusterd on
service glusterd start
Create the brick directories:
mkdir -p /home/primary (directory used for primary storage)
mkdir -p /home/secondary (directory used for secondary storage)
mkdir -p /home/ctdb (directory used for ctdb)
Allow the cluster subnet through the firewall:
sed -i "/OUTPUT/a -A INPUT -s 10.12.0.0/16 -j ACCEPT" /etc/sysconfig/iptables
Add the Gluster peers:
gluster peer probe 10.12.0.12
gluster peer probe 10.12.0.13
gluster peer probe 10.12.0.14
gluster peer probe 10.12.0.15
gluster peer probe 10.12.0.16
gluster peer probe 10.12.0.17
gluster peer probe 10.12.0.18
gluster peer probe 10.12.0.19
gluster peer probe 10.12.0.20
gluster peer probe 10.12.0.21
gluster peer probe 10.12.0.22
gluster peer probe 10.12.0.23
gluster peer probe 10.12.0.24
gluster peer probe 10.12.0.25
gluster peer probe 10.12.0.26
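The fifteen probe commands above can also be generated with a short loop. A sketch, assuming the node addresses run contiguously from 10.12.0.12 to 10.12.0.26; pipe the output to sh to actually run the probes:

```shell
# Print one "gluster peer probe" command per node (10.12.0.12 .. 10.12.0.26).
probe_cmds=$(for i in $(seq 12 26); do
  printf 'gluster peer probe 10.12.0.%s\n' "$i"
done)
echo "$probe_cmds"
```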
Check peer status:
gluster peer status
gluster volume info
Create a volume:
gluster volume create <volume-name> stripe 3 replica 2 10.12.0.12:/home/primary
{
distribute = distributed, stripe = striped, replica = replicated; the types can be combined.
"replica 2" keeps every file as two copies and needs at least 2 servers.
"stripe 2 replica 2" splits every file into 2 stripes and keeps each stripe as two copies, needing at least 4 servers.
If a brick directory sits on the root partition, append "force" to the command.
}
Example:
gluster volume create primary stripe 8 replica 2 10.12.0.11:/home/primary/ 10.12.0.12:/home/primary/ 10.12.0.13:/home/primary/ 10.12.0.14:/home/primary/ 10.12.0.15:/home/primary/ 10.12.0.16:/home/primary/ 10.12.0.17:/home/primary/ 10.12.0.18:/home/primary/ 10.12.0.19:/home/primary/ 10.12.0.20:/home/primary/ 10.12.0.21:/home/primary/ 10.12.0.22:/home/primary/ 10.12.0.23:/home/primary/ 10.12.0.24:/home/primary/ 10.12.0.25:/home/primary/ 10.12.0.26:/home/primary/
gluster volume create secondary stripe 8 replica 2 10.12.0.11:/home/secondary/ 10.12.0.12:/home/secondary/ 10.12.0.13:/home/secondary/ 10.12.0.14:/home/secondary/ 10.12.0.15:/home/secondary/ 10.12.0.16:/home/secondary/ 10.12.0.17:/home/secondary/ 10.12.0.18:/home/secondary/ 10.12.0.19:/home/secondary/ 10.12.0.20:/home/secondary/ 10.12.0.21:/home/secondary/ 10.12.0.22:/home/secondary/ 10.12.0.23:/home/secondary/ 10.12.0.24:/home/secondary/ 10.12.0.25:/home/secondary/ 10.12.0.26:/home/secondary/
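The long brick lists above are easy to mistype, so a loop can assemble them instead. A sketch, assuming the same 16 servers (10.12.0.11 to 10.12.0.26) and brick paths; change `vol` from primary to secondary for the second volume, and pipe the output to sh to run it:

```shell
# Assemble the 16-brick list, then print the full volume-create command.
vol=primary
bricks=""
for i in $(seq 11 26); do
  bricks="$bricks 10.12.0.$i:/home/$vol/"
done
echo "gluster volume create $vol stripe 8 replica 2$bricks"
```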
Start the volumes:
gluster volume start primary
gluster volume start secondary
Set access permissions:
gluster volume set primary auth.allow 10.12.0.* (addresses allowed to connect; join multiple addresses with commas; if unset, all addresses are allowed)
gluster volume set secondary auth.allow 10.12.0.*
Set up ctdb to provide the NFS share:
Install ctdb (also on 10.12.0.11):
yum install nfs-utils
yum install ctdb
Create a local mount point for the GlusterFS secondary volume and mount it:
mkdir /secondary
mount -t glusterfs 10.12.0.11:/secondary /secondary
Make the mount persistent across reboots by appending the following line to /etc/fstab:
vim /etc/fstab
10.12.0.11:/secondary /secondary glusterfs defaults 0 0
Create the ctdb configuration files:
1. Create the nfs file:
mv /etc/sysconfig/nfs /etc/sysconfig/nfs.bak
vim /secondary/nfs
CTDB_MANAGES_NFS=yes
NFS_TICKLE_SHARED_DIRECTORY=/secondary/nfs-tickles
STATD_PORT=595
STATD_OUTGOING_PORT=596
MOUNTD_PORT=597
RQUOTAD_PORT=598
LOCKD_UDPPORT=599
LOCKD_TCPPORT=599
STATD_SHARED_DIRECTORY=/secondary/lock/nfs-state
NFS_HOSTNAME="Node11.test.com"
STATD_HOSTNAME="$NFS_HOSTNAME -P "$STATD_SHARED_DIRECTORY/$PUBLIC_IP" -H /etc/ctdb/statd-callout -p 97"
RPCNFSDARGS="-N 4"
2. Link the shared nfs file into place as the NFS configuration file:
ln -s /secondary/nfs /etc/sysconfig/nfs
3. Create the NFS exports configuration file:
vi /secondary/exports
/secondary *(fsid=1235,insecure,rw,async,no_root_squash,no_subtree_check)
rm -rf /etc/exports
ln -s /secondary/exports /etc/exports
Add firewall rules:
vim /etc/sysconfig/iptables
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 595 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p udp --dport 595 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 596 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 597 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p udp --dport 597 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 598 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 599 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p udp --dport 599 -j ACCEPT
-A INPUT -s 10.12.0.0/16 -m state --state NEW -p tcp --dport 4379 -j ACCEPT
(one ACCEPT rule per service port used above: 111 for the portmapper, 2049 for NFS, 595-599 for statd/mountd/rquotad/lockd as configured in the nfs file, and 4379 for ctdb)
Create the ctdb configuration file; it also goes on the shared volume so the other servers can share it:
vim /secondary/ctdb
CTDB_RECOVERY_LOCK=/secondary/lockfile
CTDB_PUBLIC_INTERFACE=eth0
CTDB_PUBLIC_ADDRESSES=/secondary/public_addresses
CTDB_MANAGES_NFS=yes
CTDB_NODES=/secondary/nodes
CTDB_DEBUGLEVEL=ERR
(Note: ctdb may interfere with bridged networking; a directly attached NIC, i.e. eth0-eth3, is recommended.)
mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.bak
ln -s /secondary/ctdb /etc/sysconfig/ctdb
Create the ctdb virtual address (VIP) file:
vim /secondary/public_addresses
10.12.0.10/24 eth0
Create the ctdb nodes file:
vim /secondary/nodes
10.12.0.11
10.12.0.12
10.12.0.13
10.12.0.14
10.12.0.15
10.12.0.16
10.12.0.17
10.12.0.18
10.12.0.19
10.12.0.20
10.12.0.21
10.12.0.22
10.12.0.23
10.12.0.24
10.12.0.25
10.12.0.26
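Because the node list is a contiguous range, the file can be generated instead of typed out. A sketch (written to /tmp here so it can be tried anywhere; in this setup the file lives at /secondary/nodes on the shared volume):

```shell
# Generate one node IP per line for 10.12.0.11 .. 10.12.0.26.
nodes_file=/tmp/nodes          # use /secondary/nodes in production
seq -f '10.12.0.%g' 11 26 > "$nodes_file"
cat "$nodes_file"
```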
Enable the service at boot and start it:
chkconfig ctdb on
chkconfig nfs off
/etc/init.d/ctdb start
Check the ctdb status:
ctdb status
ctdb ping -n all
ctdb ip
Public IPs on node 1
10.12.0.10 node[0] active[] available[eth0] configured[eth0] (the public address 10.12.0.10 is currently served by node 0)
ctdb pnn
PNN:1 (this machine is node 1)
The following steps are performed on every node server (i.e. 10.12.0.12 - 10.12.0.26):
yum -y install nfs-utils
yum -y install ctdb
mkdir /secondary
mount -t glusterfs 10.12.0.11:/secondary /secondary
Make the mount persistent across reboots by appending the following line to /etc/fstab:
vim /etc/fstab
10.12.0.11:/secondary /secondary glusterfs defaults 0 0
mv /etc/sysconfig/nfs /etc/sysconfig/nfs.bak
ln -s /secondary/nfs /etc/sysconfig/nfs
rm -rf /etc/exports
ln -s /secondary/exports /etc/exports
mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.bak
ln -s /secondary/ctdb /etc/sysconfig/ctdb
Enable the services at boot and start ctdb:
chkconfig ctdb on
chkconfig rpcbind on
chkconfig nfs off
/etc/init.d/ctdb start
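The per-node steps above have to be repeated on 15 machines. A loop can at least print the ssh one-liners for replaying them; this is a sketch assuming passwordless root ssh to each node (pipe the output to sh to execute, and note it omits the fstab edit, which still needs doing per node):

```shell
# Emit one ssh command per node replaying the client-side setup steps.
setup='yum -y install nfs-utils ctdb; mkdir -p /secondary; mount -t glusterfs 10.12.0.11:/secondary /secondary; mv /etc/sysconfig/nfs /etc/sysconfig/nfs.bak; ln -s /secondary/nfs /etc/sysconfig/nfs; rm -f /etc/exports; ln -s /secondary/exports /etc/exports; mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.bak; ln -s /secondary/ctdb /etc/sysconfig/ctdb; chkconfig ctdb on; chkconfig rpcbind on; chkconfig nfs off; /etc/init.d/ctdb start'
node_cmds=$(for i in $(seq 12 26); do
  printf 'ssh root@10.12.0.%s "%s"\n' "$i" "$setup"
done)
echo "$node_cmds"
```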
Finally, restart ctdb on 10.12.0.11:
/etc/init.d/ctdb restart
ctdb status
ctdb ping -n all
ctdb ip
ctdb pnn
Use ip r to check whether the VIP is present:
ip r
Testing:
Pick any machine, mount the export, create a file, and check that the file shows up on the other machines.
Example:
mount -t nfs 10.12.0.10:/secondary /mnt
touch /mnt/a.txt
Installation on the other nodes (10.12.0.12 - 10.12.0.26):
wget -P /etc/yum.repos.d/ http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
yum -y install epel-release.noarch
yum clean all
yum makecache
yum -y install glusterfs glusterfs-server
chkconfig glusterd on
service glusterd start
Installation on Cloudstack-Management (10.12.0.10):
Network configuration
Modify it with a script:
Script download: http://pan.baidu.com/s/1pLaNjZ1
Or modify it manually as follows:
cd /etc/sysconfig/network-scripts/