keepalived: a lightweight high-availability solution
keepalived is software written in C that provides simple and robust high availability and load balancing for Linux systems and Linux-based infrastructure. It implements layer-4 load balancing on top of the ipvs module in the Linux kernel, can apply a range of health-check mechanisms, and provides service high availability based on the VRRP protocol.

I. The VRRP protocol
[Figure: http://s5.运维网.com/wyfs02/M00/80/EC/wKioL1dFCdHjMLxOAASnch3umek110.png]
VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol. Normally, every host on a network is configured with a single default route, so packets destined for addresses outside the local segment are forwarded through the default route to router RouterA, which gives the hosts connectivity to external networks. If RouterA fails, every host on the segment that uses RouterA as its default next hop loses external connectivity; this is a single point of failure. VRRP was designed to solve exactly this problem.
VRRP organizes a group of routers on a LAN into one virtual router. The virtual router serves clients through a virtual IP, while internally several physical routers cooperate: at any given moment only one of them holds the virtual IP and, as the master, actually answers ARP requests and forwards packets; the other physical routers act as backups, provide no service, and only listen for the master's VRRP state advertisements.
The master is chosen by election on priority. Each physical router has a priority between 1 and 255; the one with the highest priority becomes master, and on a tie the higher IP address wins.
In VRRP, all protocol messages are sent via IP multicast. Within one virtual router, only the router in the master role keeps sending VRRP advertisements; routers in the backup role only receive the master's advertisements and use them to monitor the master's health, so a backup never preempts the master unless it has a higher priority. When the master becomes unavailable, the backups stop receiving its advertisements, conclude that the master has failed, and hold a new election among themselves: the backup with the highest priority becomes the new master, which keeps the service continuously available.
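VRRP advertisements are IP protocol 112 packets multicast to 224.0.0.18, so the election traffic is easy to observe. A quick sketch (run on any node in the segment; the interface name eth0 matches the examples below):
# tcpdump -nn -i eth0 vrrp
# equivalently, filter on the multicast group:
# tcpdump -nn -i eth0 host 224.0.0.18
While the master is healthy, advertisements should be seen only from its address, roughly once per second (the protocol's default advertisement interval).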
II. Components of keepalived
[Figure: http://s5.运维网.com/wyfs02/M01/82/0A/wKiom1dIQUeguHDsAAQNuHeofkc671.png]
The main components of keepalived:
WatchDog: monitors the checkers and the VRRP process
Checkers: performs health checks on real servers and isolates failed ones
VRRP Stack: implements the VRRP protocol, i.e. the failover function of the HA cluster
IPVS wrapper: hands the configured ipvs rules to the kernel's ipvs module
Netlink Reflector: sets and moves the virtual IP
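The VRRP stack and the checkers run as two child processes forked by the main keepalived daemon, and it is this pair that the watchdog supervises (the startup logs later in this article show both children being spawned). A quick way to confirm, sketched here with the output trimmed:
# ps aux | grep [k]eepalived
root ... /usr/sbin/keepalived -D    # parent (watchdog)
root ... /usr/sbin/keepalived -D    # VRRP child
root ... /usr/sbin/keepalived -D    # healthcheck child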
III. Where keepalived fits
In theory keepalived can provide high availability for services such as mysqld and httpd, but HA for those services usually requires shared storage, which keepalived can only manage through extra commands or scripts; in that scenario its capabilities clearly fall short of heartbeat or corosync. keepalived is best suited to lightweight HA scenarios, such as making a reverse proxy (nginx, haproxy) or ipvs highly available, none of which needs shared storage. For ipvs in particular, keepalived can both provide HA and drive the load balancing itself, applying a range of health checks to learn the state of the back-end servers and maintain the server pool dynamically.
IV. Using keepalived to make an nginx reverse proxy highly available
1. Experiment topology
[Figure: http://s3.运维网.com/wyfs02/M00/82/07/wKioL1dIBJXRJQcPAAA2uzZ3aME045.png]
2. Install httpd on node3; install nginx on node1 and node2 and configure it as a reverse proxy to node3. Also make sure the clocks of the two HA nodes are synchronized.
# yum -y install httpd
...
# vim /var/www/html/test.html # create a test page
hello,keepalived
# service httpd start
...
# yum -y install nginx;ssh root@node2 'yum -y install nginx'
...
# vim /etc/nginx/conf.d/default.conf
...
location ~ \.html$ {
proxy_pass http://192.168.30.13;
}
...
# scp /etc/nginx/conf.d/default.conf root@node2:/etc/nginx/conf.d/
...
# service nginx start;ssh root@node2 'service nginx start' # start nginx on both nodes
...

3. Install and configure keepalived on node1 and node2
yum -y install keepalived
Main configuration file: /etc/keepalived/keepalived.conf
Configuration reference: man keepalived.conf
keepalived writes its log messages to /var/log/messages
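Since /var/log/messages mixes keepalived's messages with everything else, it can help to give keepalived its own log file. A sketch, assuming the stock rsyslog setup of CentOS 6 (the file name and facility choice are up to you):
# vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 0" # -S 0: log to syslog facility local0
# vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
# service rsyslog restart; service keepalived restart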
# yum -y install keepalived;ssh root@node2 'yum -y install keepalived'
...
# rpm -ql keepalived
/etc/keepalived
/etc/keepalived/keepalived.conf # the configuration file
/etc/rc.d/init.d/keepalived
/etc/sysconfig/keepalived
/usr/bin/genhash
/usr/libexec/keepalived
/usr/sbin/keepalived
...
/usr/share/doc/keepalived-1.2.13/samples # sample files for reference
...
# ls /usr/share/doc/keepalived-1.2.13/samples
keepalived.conf.fwmark keepalived.conf.SMTP_CHECK keepalived.conf.vrrp.lvs_syncd
keepalived.conf.HTTP_GET.port keepalived.conf.SSL_GET keepalived.conf.vrrp.routes
keepalived.conf.inhibit keepalived.conf.status_code keepalived.conf.vrrp.scripts
keepalived.conf.IPv6 keepalived.conf.track_interface keepalived.conf.vrrp.static_ipaddress
keepalived.conf.misc_check keepalived.conf.virtualhost keepalived.conf.vrrp.sync
keepalived.conf.misc_check_arg keepalived.conf.virtual_server_group sample.misccheck.smbcheck.sh
keepalived.conf.quorum keepalived.conf.vrrp
keepalived.conf.sample keepalived.conf.vrrp.localcheck
A keepalived configuration file consists of four major sections:
global_defs: global section, mainly mail notification and machine identification
vrrp_instance: vrrp instance section, configures the high-availability cluster
virtual_server: lvs section, configures the load-balancing cluster
vrrp_script: vrrp script section, monitors a cluster resource; it only takes effect when referenced from a vrrp_instance section
Note: not all of these sections are mandatory; configure them as needed. For a pure load-balancing cluster, vrrp_instance can be omitted, and vice versa; if mail notification is not wanted, even global_defs can be left out.
# cd /etc/keepalived/
# mv keepalived.conf keepalived.conf.back
# vim keepalived.conf
! Configuration File for keepalived # lines starting with "!" are comments
global_defs { # global section: mail notification and machine identification
notification_email { # recipients keepalived mails when an event (failover, fault) occurs; one per line
root@localhost # a recipient
magedu@126.com
}
notification_email_from kanotify@magedu.com # sender
smtp_connect_timeout 10 # timeout for connecting to the smtp server
smtp_server 127.0.0.1 # smtp server address
router_id nginx-node1 # machine identifier; usually the hostname, though it need not be; it appears in the mail subject
}
vrrp_script chk_nginx { # vrrp script section; only takes effect when referenced from a vrrp instance
script "killall -0 nginx" # signal 0 probes process state: exit code 0 means healthy, non-zero means failed
interval 1 # check interval
weight -2 # subtract 2 from the priority once the check is considered failed
fall 3 # consecutive failures required to consider the check failed
rise 1 # consecutive successes required to consider it healthy again
}
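The check works because killall -0 sends no signal at all; its exit status merely reports whether any process with the given name exists. This is easy to verify from the shell:
# killall -0 nginx; echo $?
0 # nginx is running: the check succeeds
# service nginx stop &>/dev/null; killall -0 nginx; echo $?
nginx: no process found
1 # non-zero exit: keepalived counts this toward a failure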
vrrp_script chk_mantaince_down { # this script lets us demote a node manually by creating a file
script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight -2
fall 3
rise 1
}
vrrp_instance VI_1 { # vrrp instance section; several instances may be defined, each with a unique name
interface eth0 # physical interface vrrp binds to
state MASTER # the node's initial state; only a hint: once keepalived is running, the actual role is decided by the nodes' priorities
priority 100 # priority, 1-255
virtual_router_id 11 # virtual router id, distinguishing the VRRP multicast of different instances
garp_master_delay 1 # delay before refreshing ARP caches after switching to master; default 5 seconds
smtp_alert # on state changes, send mail using the settings from global_defs
authentication { # inter-node authentication; the types are PASS and AH
auth_type PASS
auth_pass magedu # within one vrrp instance, master and backup must use the same password to communicate
}
track_interface { # tracked interfaces; if one fails, the node goes into FAULT state
eth0
}
virtual_ipaddress {
192.168.30.30/24 dev eth0 label eth0:0 # the virtual IP; it is added on the master
}
track_script { # reference the vrrp scripts
chk_nginx
chk_mantaince_down
}
}
# Other common parameters of a vrrp instance section:
nopreempt: disable preemption, i.e. allow a lower-priority node to stay master even after a higher-priority node has recovered.
A switchover inevitably interrupts the service briefly and carries some risk and instability, so switchovers should be kept to a minimum.
To configure non-preemptive mode, put in the configuration of the higher-priority node:
state BACKUP
nopreempt
preempt_delay: preemption delay; sometimes a system needs a while after a reboot before it can do real work, and an immediate switchover in that situation is pointless
notify_master/backup/fault: scripts executed when the node enters the master/backup/fault state, respectively
use_vmac: whether to use VRRP's virtual MAC address

# vim /etc/keepalived/keepalived.conf # adjust the keepalived configuration on node2 accordingly
...
router_id nginx-node2
...
state BACKUP
priority 99 # the backup node gets a lower priority
...
# service keepalived start # start the backup node first
Starting keepalived:
# tail -f /var/log/messages
...
May 27 21:34:15 node2 Keepalived: Starting Keepalived v1.2.13 (03/19,2015)
May 27 21:34:15 node2 Keepalived: Starting Healthcheck child process, pid=17799
May 27 21:34:15 node2 Keepalived: Starting VRRP child process, pid=17801
...
May 27 21:34:15 node2 Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
...
May 27 21:34:15 node2 Keepalived_vrrp: Configuration is using : 67460 Bytes
...
May 27 21:34:15 node2 Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
May 27 21:34:15 node2 Keepalived_healthcheckers: Configuration is using : 7507 Bytes
May 27 21:34:15 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 27 21:34:15 node2 Keepalived_vrrp: VRRP sockpool:
May 27 21:34:15 node2 Keepalived_vrrp: Remote SMTP server :25 connected.
May 27 21:34:15 node2 Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
May 27 21:34:15 node2 Keepalived_vrrp: VRRP_Script(chk_mantaince_down) succeeded
May 27 21:34:15 node2 Keepalived_vrrp: VRRP_Script(chk_nginx) succeeded
May 27 21:34:16 node2 Keepalived_vrrp: SMTP alert successfully sent.
May 27 21:34:19 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 27 21:34:20 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 27 21:34:20 node2 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 27 21:34:20 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
May 27 21:34:20 node2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.30.30 added
May 27 21:34:20 node2 Keepalived_vrrp: Remote SMTP server :25 connected.
May 27 21:34:20 node2 Keepalived_vrrp: SMTP alert successfully sent.
May 27 21:34:21 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
# keepalived starts two child processes, Healthcheck and VRRP;
# having started in the backup role, the node first enters the BACKUP state and then runs its vrrp_script checks;
if nginx is healthy, each check logs "succeeded".
# Because keepalived on the master node is not yet running, the backup finds that it has the highest priority, transitions to the MASTER state and configures the vip.
# ip addr show # the vip has been configured
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
inet 192.168.30.30/24 scope global secondary eth0:0
inet6 fe80::20c:29ff:febd:6823/64 scope link
valid_lft forever preferred_lft forever
# mail # the notification mails have arrived
Heirloom Mail version 12.4 7/29/08. Type ? for help.
"/var/spool/mail/root": 2 messages 2 new
>N 1 kanotify@magedu.com Fri May 27 21:34 14/505 " VRRP Instance VI_1 - Entering BACKUP state"
 N 2 kanotify@magedu.com Fri May 27 21:34 14/500 " VRRP Instance VI_1 - Entering MASTER state"

# service keepalived start # now start keepalived on node1, the intended master
Starting keepalived:
# tail -f /var/log/messages
...
May 27 21:55:06 node1 Keepalived_vrrp: VRRP_Script(chk_mantaince_down) succeeded
May 27 21:55:06 node1 Keepalived_vrrp: VRRP_Script(chk_nginx) succeeded
May 27 21:55:07 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 27 21:55:07 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
May 27 21:55:08 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 27 21:55:08 node1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 27 21:55:08 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
May 27 21:55:08 node1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.30.30 added
May 27 21:55:08 node1 Keepalived_vrrp: Remote SMTP server :25 connected.
May 27 21:55:08 node1 Keepalived_vrrp: SMTP alert successfully sent.
May 27 21:55:09 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.30
# The log shows that after keepalived starts on the master node, whose role is master, it first runs its vrrp_script checks and only transitions toward the MASTER state once they pass.
# On receiving an advertisement with a lower priority, it forces a new election and then settles into the MASTER state.
# ip addr show # node1 has taken over the vip
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
inet 192.168.30.30/24 scope global secondary eth0:0
inet6 fe80::20c:29ff:fe40:359d/64 scope link
valid_lft forever preferred_lft forever
Heirloom Mail version 12.4 7/29/08. Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N 1 kanotify@magedu.com Fri May 27 21:55 14/500 " VRRP Instance VI_1 - Entering MASTER state"

# tail -f /var/log/messages # on node2: the vip configured there has been removed
...
May 27 21:55:07 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
May 27 21:55:07 node2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 27 21:55:07 node2 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
May 27 21:55:07 node2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.30.30 removed
May 27 21:55:07 node2 Keepalived_vrrp: Remote SMTP server :25 connected.
May 27 21:55:07 node2 Keepalived_vrrp: SMTP alert successfully sent.

4. Testing
# curl 192.168.30.30/test.html
hello,keepalived

Simulating a failover:
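To watch the switchover from a client's point of view, a simple polling loop (an illustrative helper, not part of the original run) can be left running on a test host while nginx is stopped below:
# while :; do curl -s --connect-timeout 1 http://192.168.30.30/test.html; sleep 1; done
During a clean failover, only a second or two of requests should fail before the peer starts answering on the vip.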
# service nginx stop
Stopping nginx:
# tail -f /var/log/messages
...
May 27 22:06:17 node1 Keepalived_vrrp: VRRP_Script(chk_nginx) failed
May 27 22:06:19 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
May 27 22:06:19 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 27 22:06:19 node1 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
May 27 22:06:19 node1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.30.30 removed
May 27 22:06:19 node1 Keepalived_vrrp: Remote SMTP server :25 connected.
May 27 22:06:19 node1 Keepalived_vrrp: SMTP alert successfully sent.
# With nginx stopped, vrrp_script(chk_nginx) fails, the node's priority drops by 2 below the peer's, and it transitions to the BACKUP state
# ip addr show # the vip has been removed
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
inet6 fe80::20c:29ff:fe40:359d/64 scope link
valid_lft forever preferred_lft forever

# ip addr show # on node2: the vip has moved here
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
inet 192.168.30.30/24 scope global secondary eth0:0
inet6 fe80::20c:29ff:febd:6823/64 scope link
valid_lft forever preferred_lft forever

# curl 192.168.30.30/test.html
hello,keepalived

5. Tuning the configuration
# vim keepalived.conf # modify the configuration file as follows
...
vrrp_script chk_nginx { # with the weight removed, a failed check now puts keepalived into the FAULT state
script "killall -0 nginx"
interval 1
fall 3
rise 1
}
...
vrrp_instance VI_1 {
...
preempt_delay 300 # set a preemption delay
...
notify_fault "/etc/rc.d/init.d/nginx restart"
# in the FAULT state, try restarting nginx; if the restart fails, the peer node takes over
}
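notify_fault here simply restarts nginx, but the notify_* hooks accept any executable. As a sketch of a more general handler (notify.sh is a hypothetical name and the mail recipient an assumption) that could be wired to notify_master, notify_backup and notify_fault:
#!/bin/bash
# /etc/keepalived/notify.sh: mail the admin on every VRRP state change
contact='root@localhost'
notify() {
    mailsubject="$(hostname) changed to be $1, VIP floating"
    mailbody="$(date '+%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case "$1" in
    master) notify master ;;
    backup) notify backup ;;
    fault)  notify fault ;;
    *) echo "Usage: $(basename $0) {master|backup|fault}"; exit 1 ;;
esac
It would then be referenced in the instance as, e.g., notify_master "/etc/keepalived/notify.sh master".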
# make the same changes in node2's configuration file
# service nginx status;ssh root@node2 'service nginx status'
nginx (pid 27762) is running...
nginx (pid 10962) is running...
# service keepalived start;ssh root@node2 'service nginx start'
...
Starting nginx:
# ifconfig
...
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:40:35:9D
          inet addr:192.168.30.30  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

Simulating a transient failure:
# service nginx stop
Stopping nginx:
# tail -f /var/log/messages
...
May 27 22:46:58 node1 Keepalived_vrrp: VRRP_Script(chk_nginx) failed
May 27 22:46:59 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering FAULT STATE
May 27 22:46:59 node1 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
May 27 22:46:59 node1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.30.30 removed
May 27 22:46:59 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Now in FAULT state
May 27 22:46:59 node1 Keepalived_vrrp: VRRP_Script(chk_nginx) succeeded
May 27 22:47:00 node1 Keepalived_vrrp: VRRP_Instance(VI_1) prio is higher than received advert
May 27 22:47:00 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 27 22:47:00 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
May 27 22:47:01 node1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 27 22:47:01 node1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
...
# After nginx is stopped, vrrp_script(chk_nginx) fails and the instance enters the FAULT state, which triggers the restart configured in notify_fault; the next check then succeeds again
# service nginx status
nginx (pid 29373) is running...
V. Using keepalived to make ipvs highly available, in a dual-master model
1. Experiment topology
[Figure: http://s2.运维网.com/wyfs02/M01/82/08/wKiom1dIBGijl8hoAABLl5uM6iA507.png]
2. Configure the two back-end real servers
# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
# ifconfig lo:0 192.168.30.31 netmask 255.255.255.255 broadcast 192.168.30.31 up
# ifconfig lo:1 192.168.30.32 netmask 255.255.255.255 broadcast 192.168.30.32 up
# route add -host 192.168.30.31 dev lo:0
# route add -host 192.168.30.32 dev lo:1
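In the LVS-DR model the real servers hold the VIPs on their loopback interface, so the arp_ignore/arp_announce settings above are what stop them from answering ARP requests for the VIPs. The echo commands do not survive a reboot; a sketch of making them persistent, assuming the standard /etc/sysctl.conf:
# vim /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
# sysctl -p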
# cd /var/www/html
# ls
# vim index.html
hello
# vim test.html
hello,this is node3
# service httpd start
...
# Perform the same steps on node4

3. Configure dual-master HA for ipvs, with the two ipvs nodes also serving as fallback servers in case the back-end RSs fail
For a dual-master model, two vrrp instances are needed; the two nodes take the master/backup roles in one instance and the reverse, backup/master, roles in the other.
# yum -y install httpd;ssh root@node2 'yum -y install httpd'
...
# vim /var/www/html/test.html
fallback1
# service httpd start
...
# cd /etc/keepalived/
# vim keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
magedu@126.com
}
notification_email_from kanotify@magedu.com
smtp_connect_timeout 10
smtp_server 127.0.0.1
router_id LVS_DEVEL
}
vrrp_script chk_schedown1 {
script "[[ -f /etc/keepalived/down1 ]] && exit 1 || exit 0"
interval 2
weight -2
}
vrrp_script chk_schedown2 {
script "[[ -f /etc/keepalived/down2 ]] && exit 1 || exit 0"
interval 2
weight -2
}
vrrp_instance VI_1 { # the first vrrp instance
interface eth0
state MASTER # node1 is the master of the first instance
priority 100
virtual_router_id 51
garp_master_delay 1
smtp_alert # enable mail notification
authentication {
auth_type PASS
auth_pass magedu
}
track_interface {
eth0
}
virtual_ipaddress {
192.168.30.31/24 dev eth0 label eth0:0 # vip1
}
track_script {
chk_schedown1
}
}
virtual_server 192.168.30.31 80 { # define a cluster service
delay_loop 6 # health-check interval
lb_algo rr # lvs scheduling algorithm
lb_kind DR # lvs forwarding model
persistence_timeout 30 # persistent connections
protocol TCP
sorry_server 127.0.0.1 80 # fallback server used when all back-end real servers are down
real_server 192.168.30.13 80 { # define a back-end real server
weight 1
HTTP_GET { # health-check method for this RS
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3 # number of retries
delay_before_retry 3 # delay between retries
}
}
# Several RS health-check methods are available: HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK; see man keepalived.conf for details
# To check each real server with TCP_CHECK instead, the HTTP_GET block above could be replaced with:
# TCP_CHECK {
# connect_port 80
# connect_timeout 3
# }
real_server 192.168.30.14 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
vrrp_instance VI_2 { # the second vrrp instance
interface eth0
state BACKUP
priority 99
virtual_router_id 52 # every vrrp instance needs a unique virtual router id
garp_master_delay 1
smtp_alert
authentication {
auth_type PASS
auth_pass magedu
}
track_interface {
eth0
}
virtual_ipaddress {
192.168.30.32/24 dev eth0 label eth0:1 # vip2
}
track_script {
chk_schedown2
}
}
virtual_server 192.168.30.32 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 30
protocol TCP
sorry_server 127.0.0.1 80
real_server 192.168.30.13 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.30.14 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
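Besides matching a status_code, HTTP_GET/SSL_GET can verify the content of a page by MD5 digest; the genhash tool that ships with keepalived (it appeared in the rpm -ql output earlier) computes the value. A sketch, with the hash value trimmed:
# genhash -s 192.168.30.13 -p 80 -u /test.html
MD5SUM = ... # copy the real output into the config
The url block would then read:
url {
    path /test.html
    digest <MD5SUM value from genhash>
}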
# scp keepalived.conf root@node2:/etc/keepalived/
keepalived.conf 100% 2694 2.6KB/s 00:00

# vim /var/www/html/test.html # on node2, create its own fallback page
fallback2
# service httpd start
...
# cd /etc/keepalived/
# vim keepalived.conf # adjust node2's keepalived configuration accordingly
...
vrrp_instance VI_1 {
...
state BACKUP
priority 99
...
}
...
vrrp_instance VI_2 {
...
state MASTER
priority 100
...
}
# service keepalived start;ssh root@node1 'service keepalived start'
Starting keepalived:
Starting keepalived:
# yum -y install ipvsadm;ssh root@node1 'yum -y install ipvsadm'
...
# installing ipvsadm is only needed to inspect the ipvs rules; it is optional
# ip addr show # node2 has the higher priority in the second vrrp instance, so vip2 is configured on node2
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:bd:68:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.30.20/24 brd 192.168.30.255 scope global eth0
inet 192.168.30.32/24 scope global secondary eth0:1
inet6 fe80::20c:29ff:febd:6823/64 scope link
valid_lft forever preferred_lft forever
# ipvsadm -L -n # the ipvs rules have been generated
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          0
  -> 192.168.30.14:80    Route   1      0          0
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          0
  -> 192.168.30.14:80    Route   1      0          0

# ip addr show # on node1: vip1 is configured here
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:35:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.30.10/24 brd 192.168.30.255 scope global eth0
inet 192.168.30.31/24 scope global secondary eth0:0
inet6 fe80::20c:29ff:fe40:359d/64 scope link
valid_lft forever preferred_lft forever
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          0
  -> 192.168.30.14:80    Route   1      0          0
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          0
  -> 192.168.30.14:80    Route   1      0          0

4. Testing
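Because both virtual servers use a 30-second persistence timeout, repeated requests from one client tend to land on the same RS, which explains the pattern in the curl output below. The persistence templates and connection entries behind this can be inspected while testing (not part of the original run) with:
# ipvsadm -L -n -c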
# curl 192.168.30.31/test.html # with persistence enabled, requests may keep being directed to the same RS for a while
hello,this is node4
# curl 192.168.30.31/test.html
hello,this is node4
# curl 192.168.30.31/test.html
hello,this is node3
# curl 192.168.30.32/test.html
hello,this is node3
# curl 192.168.30.32/test.html
hello,this is node3

# service httpd stop # stop the service on one RS
Stopping httpd:
# mail # a notification mail has arrived
Heirloom Mail version 12.4 7/29/08. Type ? for help.
"/var/spool/mail/root": 3 messages 1 new 3 unread
 U 1 kanotify@magedu.com Fri May 27 11:53 15/508 " VRRP Instance VI_2 - Entering BACKUP state"
 U 2 kanotify@magedu.com Fri May 27 11:53 15/503 " VRRP Instance VI_1 - Entering MASTER state"
>N 3 kanotify@magedu.com Fri May 27 12:03 14/492 " Realserver :80 - DOWN"
# ipvsadm -L -n # keepalived has detected the failed RS and removed it from the rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          0
TCP  192.168.30.32:80 rr persistent 30
  -> 192.168.30.13:80    Route   1      0          1

# curl 192.168.30.31/test.html
hello,this is node3
# curl 192.168.30.31/test.html
hello,this is node3
# curl 192.168.30.32/test.html
hello,this is node3

# service httpd stop # stop the service on the other RS as well
Stopping httpd:
# ipvsadm -L -n # the sorry server has been added to the ipvs rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.30.31:80 rr persistent 30
  -> 127.0.0.1:80        Local   1      0          0
TCP  192.168.30.32:80 rr persistent 30
  -> 127.0.0.1:80        Local   1      0          0

# curl 192.168.30.31/test.html
fallback1
# curl 192.168.30.32/test.html
fallback2