2. Provide both Realservers with a SysV-style init script that adjusts the kernel ARP parameters and configures the virtual IP, then run it:
#vim rs.sh
#!/bin/bash
. /etc/rc.d/init.d/functions
VIP=172.16.31.188
host=`/bin/hostname`
case "$1" in
start)
# Start LVS-DR real server on this machine.
/sbin/ifconfig lo down
/sbin/ifconfig lo up
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
;;
stop)
# Stop LVS-DR real server loopback device(s).
/sbin/ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
;;
status)
# Status of LVS-DR real server.
islothere=`/sbin/ifconfig lo:0 | grep $VIP`
isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
if [ ! "$islothere" -o ! "$isrothere" ]; then
# Either the route or the lo:0 device
# not found.
echo "LVS-DR real server Stopped."
else
echo "LVS-DR real server Running."
fi
;;
*)
# Invalid entry.
echo "$0: Usage: $0 {start|status|stop}"
exit 1
;;
esac
Note: the VIP variable in the script holds the virtual IP address.
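The echo lines above take effect immediately but are lost on reboot. As a sketch (assuming a stock RHEL/CentOS sysctl layout), the same settings can be persisted in /etc/sysctl.conf and applied with `sysctl -p`:

```
# /etc/sysctl.conf -- persist the LVS-DR real-server ARP settings
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

arp_ignore=1 makes the Realserver answer ARP requests only for addresses configured on the interface that received the request, and arp_announce=2 makes it prefer the outgoing interface's primary address in ARP announcements, so the Realservers never advertise the VIP and client traffic always reaches the director first.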
3. Place a copy of the script on both nodes, then run it:
#sh rs.sh start
4. Check the status after the setup:
# sh rs.sh status
LVS-DR real server Running.
Verify that the VIP has been configured on each node. Node tom1:
[root@tom1 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 172.16.31.188/32 brd 172.16.31.188 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
Node tom2:
[root@tom2 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 172.16.31.188/32 brd 172.16.31.188 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
5. Check the ipvs rules on the MASTER node (proxy):
[root@proxy keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.31.188:0 rr persistent 50
-> 172.16.31.50:8080 Route 1 0 0
-> 172.16.31.51:8080 Route 1 0 0
The rules show the virtual IP and port, the rr scheduling algorithm with 50-second persistence, and the two Realservers behind the virtual service.
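The Realserver list can also be pulled out of the ipvsadm output with a little awk, which is handy in monitoring scripts. A minimal sketch, using the captured output above as a stand-in for a live `ipvsadm -L -n` call:

```shell
# Print each real server's address and forwarding method from
# `ipvsadm -L -n` output (here fed in via a here-doc for illustration).
awk '/->/ && $2 != "RemoteAddress:Port" { print $2, $3 }' <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.31.188:0 rr persistent 50
  -> 172.16.31.50:8080            Route   1      0          0
  -> 172.16.31.51:8080            Route   1      0          0
EOF
```

The filter keeps only the `->` detail lines and skips the column header, printing `172.16.31.50:8080 Route` and `172.16.31.51:8080 Route`.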
III. Access testing and failure simulation
1. Access test:
As shown above, the rr scheduling algorithm is in use, so during testing you may need to refresh several times or switch browsers to see responses from both Realservers.
2. Simulate a failure of the Master server: stop the Keepalived service on the Master host and check whether the Backup server takes over all services.
[root@proxy keepalived]# service keepalived stop
Stopping keepalived: [ OK ]
3. Observe the system logs after the failover. Logs on the MASTER node:
[root@proxy keepalived]# tail -f /var/log/messages
Jan 17 16:20:40 proxy Keepalived[15021]: Stopping Keepalived v1.2.13 (10/15,2014)
Jan 17 16:20:40 proxy Keepalived_vrrp[15025]: VRRP_Instance(VI_1) sending 0 priority
Jan 17 16:20:40 proxy Keepalived_vrrp[15025]: VRRP_Instance(VI_1) removing protocol VIPs.
Jan 17 16:20:40 proxy Keepalived_healthcheckers[15024]: Netlink reflector reports IP 172.16.31.188 removed
Jan 17 16:20:40 proxy Keepalived_healthcheckers[15024]: Removing service [172.16.31.50]:8080 from VS [172.16.31.188]:0
Jan 17 16:20:40 proxy Keepalived_healthcheckers[15024]: Removing service [172.16.31.51]:8080 from VS [172.16.31.188]:0
Jan 17 16:20:40 proxy kernel: IPVS: __ip_vs_del_service: enter
Logs on the BACKUP node:
[root@proxy2 keepalived]# tail -f /var/log/messages
Jan 17 16:20:41 proxy2 Keepalived_vrrp[15010]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 17 16:20:42 proxy2 Keepalived_vrrp[15010]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 17 16:20:42 proxy2 Keepalived_vrrp[15010]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 17 16:20:42 proxy2 Keepalived_healthcheckers[15009]: Netlink reflector reports IP 172.16.31.188 added
Jan 17 16:20:42 proxy2 Keepalived_vrrp[15010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.31.188
Jan 17 16:20:47 proxy2 Keepalived_vrrp[15010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.31.188
4. Check the VIP status and IPVS rules on each node. The original MASTER node:
[root@proxy keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:3b:23:60 brd ff:ff:ff:ff:ff:ff
inet 172.16.31.52/16 brd 172.16.255.255 scope global eth0
inet6 fe80::a00:27ff:fe3b:2360/64 scope link
valid_lft forever preferred_lft forever
[root@proxy keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
As shown above, the virtual IP and LVS rules have been removed from the Master server.
The BACKUP node:
[root@proxy2 keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:6e:bd:28 brd ff:ff:ff:ff:ff:ff
inet 172.16.31.53/16 brd 172.16.255.255 scope global eth0
inet 172.16.31.188/32 scope global eth0
inet6 fe80::a00:27ff:fe6e:bd28/64 scope link
valid_lft forever preferred_lft forever
As shown above, the virtual IP address has been successfully brought up on the Backup server.
[root@proxy2 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.31.188:0 rr persistent 50
-> 172.16.31.50:8080 Route 1 0 0
-> 172.16.31.51:8080 Route 1 0 0
The LVS rules are also in place on the Backup server.
5. Test access again to confirm the service is still being provided normally.
6. If the Master server is repaired and comes back online, the virtual IP address and LVS rules are reconfigured on the Master server and removed from the Backup server.
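Note that this fail-back means a second brief interruption when the Master returns. If that is undesirable, keepalived offers a nopreempt option: with both nodes configured as state BACKUP (priorities still decide the first election), a recovered node will not take the VIP back. A sketch of the relevant vrrp_instance fragment; the virtual_router_id and priority values here are assumptions, not taken from the author's file:

```
vrrp_instance VI_1 {
    state BACKUP           # nopreempt requires state BACKUP on both nodes
    nopreempt              # a recovered higher-priority node leaves the VIP alone
    interface eth0
    virtual_router_id 31   # assumed value; must match the peer node
    priority 100           # still decides the initial election
}
```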
[root@proxy keepalived]# service keepalived start
Starting keepalived: [ OK ]
Observe the startup logs. Logs on the MASTER node:
[root@proxy keepalived]# tail -f /var/log/messages
Jan 17 16:34:50 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 17 16:34:50 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
Jan 17 16:34:51 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 17 16:34:51 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 17 16:34:51 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.31.188
Jan 17 16:34:51 proxy Keepalived_healthcheckers[15365]: Netlink reflector reports IP 172.16.31.188 added
Jan 17 16:34:56 proxy Keepalived_vrrp[15366]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.31.188
It automatically transitioned back to the MASTER state.
Logs on the BACKUP node:
[root@proxy2 keepalived]# tail -f /var/log/messages
Jan 17 16:34:50 proxy2 Keepalived_vrrp[15258]: VRRP_Instance(VI_1) Received higher prio advert
Jan 17 16:34:50 proxy2 Keepalived_vrrp[15258]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jan 17 16:34:50 proxy2 Keepalived_vrrp[15258]: VRRP_Instance(VI_1) removing protocol VIPs.
Jan 17 16:34:50 proxy2 Keepalived_healthcheckers[15257]: Netlink reflector reports IP 172.16.31.188 removed
It automatically returned to the BACKUP state and removed the VIP.
Check the VIP and IPVS rules on each node. On the MASTER node the VIP has been restored and the IPVS rules regenerated:
[root@proxy keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:3b:23:60 brd ff:ff:ff:ff:ff:ff
inet 172.16.31.52/16 brd 172.16.255.255 scope global eth0
inet 172.16.31.188/32 scope global eth0
inet6 fe80::a00:27ff:fe3b:2360/64 scope link
valid_lft forever preferred_lft forever
[root@proxy keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.31.188:0 rr persistent 50
-> 172.16.31.50:8080 Route 1 0 0
-> 172.16.31.51:8080 Route 1 0 0
The BACKUP node has automatically stepped down and removed the VIP, but its ipvs rules remain. This is harmless: without the IP address, the rules alone have no effect.
[root@proxy2 keepalived]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:6e:bd:28 brd ff:ff:ff:ff:ff:ff
inet 172.16.31.53/16 brd 172.16.255.255 scope global eth0
inet6 fe80::a00:27ff:fe6e:bd28/64 scope link
valid_lft forever preferred_lft forever
[root@proxy2 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.31.188:0 rr persistent 50
-> 172.16.31.50:8080 Route 1 0 0
-> 172.16.31.51:8080 Route 1 0 0
7. If a backend Realserver fails, its entry is removed from the LVS rules. Stop the tomcat service on the backend node tom1:
[root@tom1 ~]# catalina.sh stop
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr/java/latest
Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
[root@tom1 ~]#
Observe the logs on the LVS nodes:
MASTER node: a connection error occurred and an alert mail was sent:
[root@proxy keepalived]# tail -f /var/log/messages
Jan 17 16:40:39 proxy Keepalived_healthcheckers[15365]: Error connecting server [172.16.31.50]:8080.
Jan 17 16:40:39 proxy Keepalived_healthcheckers[15365]: Removing service [172.16.31.50]:8080 from VS [172.16.31.188]:0
Jan 17 16:40:39 proxy Keepalived_healthcheckers[15365]: Remote SMTP server [127.0.0.1]:25 connected.
Jan 17 16:40:39 proxy Keepalived_healthcheckers[15365]: SMTP alert successfully sent.
BACKUP node: a connection error occurred and an alert mail was sent:
[root@proxy2 keepalived]# tail -f /var/log/messages
Jan 17 16:40:42 proxy2 Keepalived_healthcheckers[15257]: Error connecting server [172.16.31.50]:8080.
Jan 17 16:40:42 proxy2 Keepalived_healthcheckers[15257]: Removing service [172.16.31.50]:8080 from VS [172.16.31.188]:0
Jan 17 16:40:42 proxy2 Keepalived_healthcheckers[15257]: Remote SMTP server [127.0.0.1]:25 connected.
Jan 17 16:40:42 proxy2 Keepalived_healthcheckers[15257]: SMTP alert successfully sent.
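Keepalived removes the Realserver because its health check, a TCP connect to port 8080 here, fails. The probe amounts to roughly the following shell sketch; the host and port passed in are placeholders, and `/dev/tcp` is a bash feature:

```shell
# Minimal TCP health check: report UP if a connection to host:port
# opens within the timeout, the way a TCP_CHECK-style probe would.
check_tcp() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "UP"
  else
    echo "DOWN"
  fi
}

check_tcp 127.0.0.1 1   # port 1 is almost certainly closed, so this prints DOWN
```

In the real configuration the interval, timeout, and retry behavior are governed by the TCP_CHECK block for each real_server.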
Then check the ipvs rules on the MASTER node again:
[root@proxy keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.31.188:0 rr persistent 50
-> 172.16.31.51:8080 Route 1 0 0
You have new mail in /var/spool/mail/root
The failed backend tomcat server has been removed from the rules, and we also received an alert mail. Check the mail:
[root@proxy keepalived]# mail
Heirloom Mail version 12.4 7/29/08. Type ? for help.
"/var/spool/mail/root": 16 messages 12 new 14 unread
(earlier messages omitted)
N 12 admin@stu31.com Sat Jan 17 15:44 17/594 "[LVS_DEVEL] Realserver [172.16.31.51]:80 - DOWN"
N 13 admin@stu31.com Sat Jan 17 15:44 17/594 "[LVS_DEVEL] Realserver [172.16.31.50]:80 - DOWN"
N 14 admin@stu31.com Sat Jan 17 15:50 17/594 "[LVS_DEVEL] Realserver [172.16.31.51]:80 - DOWN"
N 15 admin@stu31.com Sat Jan 17 15:50 17/594 "[LVS_DEVEL] Realserver [172.16.31.50]:80 - DOWN"
N 16 admin@stu31.com Sat Jan 17 16:40 17/596 "[LVS_DEVEL] Realserver [172.16.31.50]:8080 - DOWN"
& 16
Message 16:
From admin@stu31.com Sat Jan 17 16:40:39 2015
Return-Path:
X-Original-To: root@localhost
Delivered-To: root@localhost.stu31.com
Date: Sat, 17 Jan 2015 08:40:39 +0000
From: admin@stu31.com
Subject: [LVS_DEVEL] Realserver [172.16.31.50]:8080 - DOWN
X-Mailer: Keepalived
To: root@localhost.stu31.com
Status: R
=> CHECK failed on service : connection error
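These alert mails come from Keepalived's built-in SMTP notification. Judging from the headers above (sender admin@stu31.com, local SMTP server on 127.0.0.1:25, subject tag LVS_DEVEL), the configuration likely contains something like this global_defs sketch, plus smtp_alert in the checker sections; this is a reconstruction, not the author's actual file:

```
global_defs {
    notification_email {
        root@localhost.stu31.com   # alert recipient, per the To: header
    }
    notification_email_from admin@stu31.com
    smtp_server 127.0.0.1          # local MTA, port 25
    smtp_connect_timeout 30
    router_id LVS_DEVEL            # appears in the mail subject
}
```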