Building Highly Available Shared File Storage with heartbeat + DRBD + NFS
Server1: ha1.a.com  eth0 192.168.1.40/27, eth1 192.168.20.100/24 (heartbeat NIC)
Server2: ha2.a.com  eth0 192.168.1.41/27, eth1 192.168.20.101/24 (heartbeat NIC)
VIP: 192.168.1.38/27
Installing and configuring the NAS (DRBD):
Because the nodes find each other by hostname, /etc/hosts must also be edited on both machines.
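Based on the addresses listed at the top, a minimal /etc/hosts for both nodes would look like this (the short aliases are an assumption, not taken from the original screenshot):

```
192.168.1.40   ha1.a.com   ha1
192.168.1.41   ha2.a.com   ha2
```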
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393282EgmE.png
Ha-server1 (the configuration on ha-server2 is essentially identical and is not repeated here):
First create a new partition to serve as the shared storage disk.
# fdisk /dev/sda
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393285I7ty.png
# partprobe /dev/sda    # reload the partition table
# rpm -ivh drbd83-8.3.8-1.el5.centos.i386.rpm
# rpm -ivh kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/    # copy the sample configuration file
# vim /etc/drbd.conf
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_13363932860eJG.png
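The screenshot is not reproduced here, but with the split layout used below (/etc/drbd.d/), /etc/drbd.conf in DRBD 8.3 typically contains nothing more than two include directives:

```
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
```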
# vim /etc/drbd.d/global_common.conf    # edit the global and common configuration file
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393290xras.png
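A plausible sketch of global_common.conf for this setup; the specific values (usage-count, sync rate) are assumptions, since the original configuration is only shown in the screenshot:

```
global {
        usage-count no;       # do not report usage statistics to LINBIT
}
common {
        protocol C;           # fully synchronous replication
        syncer {
                rate 100M;    # cap resync bandwidth (assumed value)
        }
}
```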
# vim /etc/drbd.d/web.res    # edit the configuration file for the replicated resource
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_13363932920AeS.png
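A sketch of web.res consistent with the hosts and the partition created above; the partition number (/dev/sda4) and the choice to replicate over the dedicated eth1 link are assumptions:

```
resource web {
        on ha1.a.com {
                device    /dev/drbd0;
                disk      /dev/sda4;              # partition created with fdisk above (number assumed)
                address   192.168.20.100:7789;    # replicate over the dedicated link
                meta-disk internal;
        }
        on ha2.a.com {
                device    /dev/drbd0;
                disk      /dev/sda4;
                address   192.168.20.101:7789;
                meta-disk internal;
        }
}
```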
# drbdadm create-md web    # initialize the DRBD metadata for the web resource (run on both nodes)
# service drbd start    # start the service on both hosts; if only one is started, it will wait for the peer
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393295L91a.png
# cat /proc/drbd    # check the DRBD status
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393297Ripy.png
# drbdadm -- --overwrite-data-of-peer primary web    # make this node primary for the web resource and start the initial sync; run on one node only (done on ha2 here)
# watch -n 1 'cat /proc/drbd'    # watch the synchronization progress
# mkfs -t ext3 -L drbdweb /dev/drbd0    # format drbd0 as ext3 with the label drbdweb (on the primary only)
# mkdir /mnt/1    # create the mount point
Once both machines are configured, you can already switch roles manually to verify DRBD synchronization and its locking behavior; for automatic failover, heartbeat still needs to be installed and configured.
Installing and configuring HA (heartbeat):
IP configuration of Ha-server1:
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393306VF1E.png
IP configuration of Ha-server2:
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393316NZBV.png
Configuration on Ha-server1 (the configuration on Ha-server2 is essentially the same):
# yum localinstall heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck
# cd /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.4/ha.cf ./
# cp /usr/share/doc/heartbeat-2.1.4/haresources ./
# cp /usr/share/doc/heartbeat-2.1.4/authkeys ./
# vim ha.cf
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393317Dg6q.png
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_13363933187JMq.png
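The essential ha.cf settings for a two-node cluster like this one; the timer values are assumptions, while the node names and heartbeat interface come from the setup above:

```
logfile /var/log/ha-log
keepalive 2            # heartbeat interval, seconds (assumed)
deadtime 30            # declare the peer dead after 30 s of silence (assumed)
warntime 10
udpport 694
bcast eth1             # heartbeats go over the dedicated NIC
auto_failback on       # resources return to the preferred node when it recovers
node ha1.a.com
node ha2.a.com
```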
# vim haresources
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393321kr7w.png
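The haresources line ties everything together: preferred node, VIP, DRBD promotion, filesystem mount, and the nfs init script managed by heartbeat. A version matching this setup (ha2 was made primary above) would read:

```
ha2.a.com IPaddr::192.168.1.38/27/eth0 drbddisk::web Filesystem::/dev/drbd0::/mnt/1::ext3 nfs
```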
# vim authkeys
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393322T7fR.png
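authkeys needs exactly one active key; the key string below is a placeholder, replace it with your own secret:

```
auth 1
1 sha1 some-secret-string
```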
# chmod 600 /etc/ha.d/authkeys    # tighten the file permissions; heartbeat refuses to start otherwise
# cp /etc/init.d/nfs /etc/ha.d/resource.d/    # copy the nfs init script so heartbeat can manage it
# service heartbeat start
Installing and configuring NFS:
1. On both servers, edit the NFS exports file; the exported directory must be the DRBD mount point (/mnt/1 in this setup), since that is what clients will mount:
# vi /etc/exports
/mnt/1 *(rw,sync,insecure,no_root_squash,no_wdelay)
# service portmap start && chkconfig portmap on
# service nfs start && chkconfig nfs on
2. On both servers, modify the NFS init script: in the stop section of /etc/init.d/nfs, change the `killproc nfsd -2` line to use -9.
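The edit amounts to changing the signal passed to killproc in the script's stop section, so that nfsd is killed immediately (SIGKILL) rather than asked to shut down, which speeds up failover:

```
# before:
killproc nfsd -2
# after:
killproc nfsd -9
```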
The following is a test of the setup:
First, check the state on ha-server2 (the current primary node):
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393323OSxW.png
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393326sHhn.png
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393328HLdj.png
Then simulate a failure of ha-server2:
# ./hb_standby    # hand all resources over to the standby node (script located in /usr/lib/heartbeat/)
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393335Iuq2.png
Check on ha-server1:
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393339ZObF.png
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_13363933418SUY.png
# ./hb_takeover    # simulate the recovery of ha-server2 by taking the resources back
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393346hfWw.png
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393348OdYC.png
Then test the NFS service from a separate client machine (replication and the locking behavior can be verified the same way and are not shown here):
# mount 192.168.1.38:/mnt/1 /mnt/nfs/    # mount the remote NFS export locally
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393350J9kK.png
# cd /mnt/nfs/
http://chenyz.blog.运维网.com/attachment/201205/7/4394827_1336393351x0vt.png