server2:/gluster/nas2
Creation of volume nas_volume has been successful. Please start the volume to access data.
[root@server1 gluster]# gluster volume create lock_volume stripe 2 server1:/gluster/lock1 server2:/gluster/lock2
Creation of volume lock_volume has been successful. Please start the volume to access data.
[root@server1 gluster]# gluster volume start nas_volume
Starting volume nas_volume has been successful
[root@server1 gluster]# gluster volume start lock_volume
Starting volume lock_volume has been successful
[root@server1 gluster]# gluster volume status
Status of volume: nas_volume
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick server1:/gluster/nas1                  24011   Y       3762
Brick server2:/gluster/nas2                  24012   Y       3768
NFS Server on 192.168.142.142                38467   Y       8140
NFS Server on 192.168.142.142                38467   Y       3806

Status of volume: lock_volume
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick server1:/gluster/lock1                 24013   Y       3794
Brick server2:/gluster/lock2                 24014   Y       3800
NFS Server on 192.168.142.142                38467   Y       8140
NFS Server on 192.168.142.142                38467   Y       3806
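Besides gluster volume status, the configured layout of each volume can be inspected at any time with gluster volume info (commands only; the exact output depends on the Gluster release):

gluster volume info nas_volume
gluster volume info lock_volume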
1.2 Mount the Volumes
Mount the NAS and CTDB volumes created above on both nodes:
[root@server1 gluster]# mount -t glusterfs server1:nas_volume /gluster/share        (used by the clustered NAS)
[root@server1 gluster]# mount -t glusterfs server1:lock_volume /gluster/ctdb_lock/  (used by CTDB)
[root@server2 gluster]# mount -t glusterfs server1:nas_volume /gluster/share        (used by the clustered NAS)
[root@server2 gluster]# mount -t glusterfs server1:lock_volume /gluster/ctdb_lock/  (used by CTDB)
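These mounts do not persist across reboots. As a minimal sketch, assuming the mount points above, the equivalent /etc/fstab entries on each node would look like this (the _netdev option defers mounting until the network is up):

server1:/nas_volume   /gluster/share      glusterfs  defaults,_netdev  0 0
server1:/lock_volume  /gluster/ctdb_lock  glusterfs  defaults,_netdev  0 0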
[global]
        workgroup = MYGROUP
        netbios name = MYSERVER
        server string = Samba Server Version %v
        private dir = /gluster/ctdb_lock
        log file = /var/log/samba/log.%m
        max log size = 50
        clustering = Yes          # clustering = yes instructs Samba to use CTDB
        idmap backend = tdb2
        guest ok = Yes
        cups options = raw

[homes]
        comment = Home Directories
        read only = No
        browseable = No

[resource]
        comment = CTDB/GlusterFS resource
        path = /gluster/share
        valid users = centostest
        read only = No
        ea support = Yes          # ea support is required if you plan to use extended attributes
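As a quick sanity check (not part of the original steps), testparm can validate the file and smbpasswd can create the Samba password entry for the centostest account referenced by valid users, assuming that system user already exists on both nodes:

testparm -s /etc/samba/smb.conf      # parse and print the effective configuration
smbpasswd -a centostest              # add a Samba password entry for the share user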
3. CTDB Configuration
CTDB is a lightweight clustered database implementation on top of which many application clusters can be built. It currently supports Samba, NFS, HTTPD, VSFTPD, iSCSI, and WINBIND, and the shared cluster storage can be GPFS, GFS(2), GlusterFS, Lustre, or OCFS(2). CTDB itself is not an HA solution, but combined with a cluster file system it provides a simple and efficient HA cluster solution.
The cluster is configured with two sets of IP addresses: private IPs are used for heartbeat and internal cluster communication, while public IPs provide access for external services. Public IPs are distributed dynamically across all cluster nodes. When a node fails, CTDB has another available node take over the public IPs that were assigned to it; once the failed node recovers, the floated public IPs are handed back. The process is transparent to clients, so the application is never interrupted. This is the high availability (HA) we are talking about here.
The detailed installation and configuration of the highly available clustered NAS is as follows:
3.1 Prerequisites
- Upgrade Gluster
- Install CTDB
- Configure GlusterFS and mount the volume used by CTDB on all nodes: a Gluster replicated volume for the CTDB lock file. The lock file is only required for CIFS. The best practice is to have a volume dedicated to the lock file.
- Install and configure Samba
- Port 4379 open between the Gluster servers
3.2 CTDB Setup
The configuration itself is fairly simple. The Samba service must be stopped on all nodes, since from now on it is controlled by CTDB. Then configure three files on every node and make sure they are identical; keeping them on shared storage is the easiest way to do that. Here we place /etc/sysconfig/ctdb, /etc/ctdb/public_addresses, and /etc/ctdb/nodes on the CTDB lock volume and create symbolic links to them on all nodes.
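A minimal sketch of the three files, assuming the private IPs 192.168.72.128/129 and public IPs 192.168.142.100/101 that appear elsewhere in this article, and eth0 as the public interface (adjust for your environment):

# /gluster/ctdb_lock/ctdb (symlinked to /etc/sysconfig/ctdb on every node)
CTDB_RECOVERY_LOCK=/gluster/ctdb_lock/lockfile
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes

# /gluster/ctdb_lock/nodes (symlinked to /etc/ctdb/nodes): private IPs, one per line
192.168.72.128
192.168.72.129

# /gluster/ctdb_lock/public_addresses (symlinked to /etc/ctdb/public_addresses)
192.168.142.100/24 eth0
192.168.142.101/24 eth0

# on every node, replace the local copies with symbolic links
ln -sf /gluster/ctdb_lock/ctdb /etc/sysconfig/ctdb
ln -sf /gluster/ctdb_lock/nodes /etc/ctdb/nodes
ln -sf /gluster/ctdb_lock/public_addresses /etc/ctdb/public_addresses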
chkconfig ctdb on
service ctdb start
ctdb ip
ctdb ping -n all
ctdb status

[root@server1 ~]# ctdb status
Number of nodes:2
pnn:0 192.168.72.129     OK
pnn:1 192.168.72.128     OK (THIS NODE)
Generation:585626714
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0
From the CTDB log we can see:
[root@server1 ~]# tail -f /var/log/log.ctdb
2012/07/31 22:45:14.907926 [19762]: Node became HEALTHY. Ask recovery master 0 to perform ip reallocation
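Since both nodes share the same configuration, the onnode helper that ships with CTDB can run a quick check on every node at once (a sketch; it relies on passwordless ssh to the addresses in the nodes file):

onnode all service ctdb status
onnode all pidof ctdbd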
3.4 Access Test
Windows CIFS access:
\\192.168.142.100
\\192.168.142.101
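The same shares can also be checked from a Linux client with smbclient (a sketch; resource is the share name defined in smb.conf and centostest the valid user):

smbclient //192.168.142.100/resource -U centostest
smbclient //192.168.142.101/resource -U centostest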
Copy a 65 MB movie file: