LVS Load Balancing: A High-Availability LVS Deployment Example (Case Study)
In production environments, keepalived is often combined with LVS, Nginx, HAProxy, MySQL, and similar services to build highly available clusters, for example as a web front end. This article walks through a keepalived + LVS deployment example in LVS/DR mode.
The overall topology is shown in the figure below:
[Figure: overall deployment topology]
I. Pre-deployment notes:
(1) OS version: CentOS 6.6 (64-bit)
(2) Roles and IP addresses:
Role          | Network / IP
Client (CIP)  | 192.168.0.242/24
Lvs_Master_DR | eth0: 172.51.96.105/24, eth1: 192.168.0.105/24
Lvs_Backup_DR | eth0: 172.51.96.119/24, eth1: 192.168.0.119/24
RS_RIP1       | eth0: 172.51.96.235/24, eth1: 192.168.0.235/24
RS_RIP2       | eth0: 172.51.96.236/24, eth1: 192.168.0.236/24
LVS_VIP       | 192.168.0.88/32
(3) Middleware versions:
keepalived: keepalived-1.2.15
httpd: httpd-2.2 (provides the HTTP service)
ipvsadm: ipvsadm-1.2.1
II. Deployment steps:
Configuration on the load balancers
(1) On both Lvs_Master_DR and Lvs_Backup_DR, install the dependencies required by keepalived and ipvsadm:
# yum install -y openssl-devel popt-devel libnl-devel kernel-devel
(2) On both Lvs_Master_DR and Lvs_Backup_DR, install keepalived and ipvsadm, as follows:
1. Install ipvsadm
# yum install -y ipvsadm
2. Build and install keepalived
2.1 Obtaining the keepalived source
The keepalived source can be downloaded from the official site, http://www.keepalived.org/, which also hosts the documentation (usage and configuration reference). The version used here is 1.2.15.
# cd ~
# wget http://www.keepalived.org/software/keepalived-1.2.15.tar.gz
2.2 Build and install keepalived
# ln -s /usr/src/kernels/2.6.32-573.18.1.el6.x86_64/ /usr/src/linux
# tar zxvf keepalived-1.2.15.tar.gz -C /usr/local/src
# cd /usr/local/src/keepalived-1.2.15/
# ./configure \
--prefix=/usr/local/keepalived \
--with-kernel-dir=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
# make && make install
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# chkconfig --add keepalived
# chkconfig --level 2345 keepalived on
# mkdir -p /etc/keepalived
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived
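A quick way to confirm the build (not part of the original steps) is to ask the installed binary for its version:

```shell
# /usr/sbin/keepalived -v     # should report Keepalived v1.2.15
```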
Notes:
1. After installation, the install prefix is /usr/local/keepalived and the configuration directory is /etc/keepalived/.
2. After installation, the init script must be copied to /etc/init.d/ as shown above.
3. Be sure to perform all of the steps above; otherwise the keepalived service may fail to start.
2.3 Start the keepalived service
# service keepalived start
(3) Configure the keepalived instance on Lvs_Master_DR and Lvs_Backup_DR, as follows:
1. Lvs_Master_DR configuration (master scheduler)
# vim /etc/keepalived/keepalived.conf, with the following contents:
! Configuration File for keepalived
global_defs {
notification_email {
admin@bluemobi.cn
}
notification_email_from lvs_admin@bluemobi.cn
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id DR_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/scripts/check_nginx.sh"
interval 3
weight -5
}
############################################################################################
# vrrp_script check_nginx {                             # define a check script named check_nginx
#     script "/etc/keepalived/scripts/check_nginx.sh"   # path of the script to run
#     interval 3                                        # check interval: 3 seconds
#     weight -5                                         # on failure, lower the priority by 5
# }
############################################################################################
vrrp_instance http {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
############################################################################################
# lvs_sync_daemon_interface eth0: interface for the LVS sync daemon, similar to an HA heartbeat link
############################################################################################
dont_track_primary
nopreempt
############################################################################################
# nopreempt: do not preempt the current master
############################################################################################
track_interface {
eth0
eth1
}
############################################################################################
# track_interface {}: interfaces to monitor
############################################################################################
mcast_src_ip 172.51.96.105
############################################################################################
# mcast_src_ip: source address used for VRRP multicast
############################################################################################
garp_master_delay 6
virtual_router_id 60
priority 110
advert_int 1
authentication {
auth_type PASS
auth_pass 1234
}
virtual_ipaddress {
192.168.0.88/32 brd 192.168.0.88 dev eth0 label eth0:1
}
virtual_routes {
192.168.0.88/32 dev eth1
}
track_script {
check_nginx
}
notify_master /etc/keepalived/scripts/state_master.sh
notify_backup /etc/keepalived/scripts/state_backup.sh
notify_fault /etc/keepalived/scripts/state_fault.sh
}
############################################################################################
# notify_master: script invoked when this scheduler becomes the master server
# notify_backup: script invoked when this scheduler becomes the backup server
# notify_fault: script invoked when this scheduler enters the FAULT state (e.g. a NIC goes down)
# notify_stop: script invoked when the keepalived service on this scheduler is stopped
############################################################################################
virtual_server 192.168.0.88 80 {
delay_loop 1
lb_algo rr
lb_kind DR
persistence_timeout 30
nat_mask 255.255.255.0
protocol TCP
real_server 192.168.0.235 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
###########################################################################################
# notify_down: script invoked when the health check of this real server fails
# notify_up: script invoked when the health check of this real server succeeds
###########################################################################################
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.0.236 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
2. Lvs_Backup_DR configuration (backup scheduler)
# vim /etc/keepalived/keepalived.conf, with the following contents:
! Configuration File for keepalived
global_defs {
notification_email {
admin@bluemobi.cn
}
notification_email_from lvs_admin@bluemobi.cn
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id DR_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/scripts/check_nginx.sh"
interval 3
weight -5
}
vrrp_instance http {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
dont_track_primary
nopreempt
track_interface {
eth0
eth1
}
mcast_src_ip 172.51.96.119
garp_master_delay 6
virtual_router_id 60
priority 109
advert_int 1
authentication {
auth_type PASS
auth_pass 1234
}
virtual_ipaddress {
192.168.0.88/32 brd 192.168.0.88 dev eth0 label eth0:1
}
virtual_routes {
192.168.0.88/32 dev eth1
}
track_script {
check_nginx
}
notify_master /etc/keepalived/scripts/state_master.sh
notify_backup /etc/keepalived/scripts/state_backup.sh
notify_fault /etc/keepalived/scripts/state_fault.sh
}
virtual_server 192.168.0.88 80 {
delay_loop 1
lb_algo rr
lb_kind DR
persistence_timeout 30
nat_mask 255.255.255.0
protocol TCP
real_server 192.168.0.235 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.0.236 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
3. On both the master and the backup scheduler, create the following scripts:
i. When the scheduler switches to master, log the switchover time:
# vim /etc/keepalived/scripts/state_master.sh
The script:
#!/bin/bash
LOGFILE="/var/log/keepalived-state.log"
host=CN-SH-DR01    # hostname of this server
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} Starting to become master server ...." >> $LOGFILE 2>&1
echo "Please run \"ipvsadm -Ln\" to check the keepalived state ..." >> $LOGFILE
echo ".........................................................................!" >> $LOGFILE
echo >> $LOGFILE
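The three notify scripts share the same logging pattern, so the common part can be factored into a small helper. This is a sketch, not from the original article; the log path defaults to /tmp here so it can be tried without root:

```shell
#!/bin/bash
# Hypothetical shared logging helper for the keepalived notify scripts.
# Each notify script would set its own message and call log_state.
LOGFILE="${LOGFILE:-/tmp/keepalived-state.log}"
host="$(hostname)"

log_state() {
    # Append a timestamped state-change entry to $LOGFILE.
    {
        echo ""
        date
        echo "The ${host} ${1}"
        echo "Please run \"ipvsadm -Ln\" to check the keepalived state ..."
        echo "........................................................................!"
        echo ""
    } >> "$LOGFILE"
}

# Example: what state_master.sh would log.
log_state "Starting to become master server ...."
```

Each notify hook then reduces to a one-liner that sources this helper with its own message.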
ii. When the scheduler switches to backup, log the switchover time:
# vim /etc/keepalived/scripts/state_backup.sh
The script:
#!/bin/bash
LOGFILE="/var/log/keepalived-state.log"
host=CN-SH-DR01    # hostname of this server
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} Starting to become Backup server ...." >> $LOGFILE 2>&1
echo "Please run \"ipvsadm -Ln\" to check the state ..." >> $LOGFILE
echo "........................................................................!" >> $LOGFILE
echo >> $LOGFILE
iii. When the scheduler enters a fault state, log the failure time:
# vim /etc/keepalived/scripts/state_fault.sh
The script:
#!/bin/bash
LOGFILE="/var/log/keepalived-state.log"
host=CN-SH-DR01    # hostname of this server
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} is fault error ...." >> $LOGFILE 2>&1
echo "Please check the server state ..." >> $LOGFILE
echo "........................................................................!" >> $LOGFILE
echo >> $LOGFILE
iv. Service health-check script. For example, when nginx and keepalived run on the same server and nginx becomes unavailable, the script below first tries to restart nginx; if nginx still fails to start, it restarts keepalived so that the VIP can fail over:
# vim /etc/keepalived/scripts/check_nginx.sh
#!/bin/bash
# If nginx is not running, try to start it; if it still will not start,
# restart keepalived so that the VIP can fail over to the backup.
PID=`ps -C nginx --no-heading | wc -l`
if [ "${PID}" = "0" ]; then
    /etc/init.d/nginx start
    sleep 3
    LOCK=`ps -C nginx --no-heading | wc -l`
    if [ "${LOCK}" = "0" ]; then
        /etc/init.d/keepalived restart
    fi
fi
4. Restart the keepalived service
# service keepalived restart
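After the restart, a few quick checks (not from the original article; they require root) confirm that the master holds the VIP and that the virtual service is programmed:

```shell
# service keepalived status                # keepalived should be running
# ip addr show eth0 | grep 192.168.0.88   # the VIP should appear as eth0:1 on the master
# ipvsadm -Ln                             # virtual server 192.168.0.88:80 with both RIPs listed
```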
Configuration on the back-end real servers
(1) On each real server (RIP1 and RIP2), create the following shell script:
# vim /etc/init.d/lvs-dr
The script:
#!/bin/sh
#
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
# available server built on a cluster of real servers, with the load
# balancer running on Linux.
# description: start LVS of DR-RIP
LOCK=/var/lock/ipvsadm.lock
VIP=192.168.0.88
. /etc/rc.d/init.d/functions
start() {
PID=`ifconfig | grep lo:0 | wc -l`
if [ $PID -ne 0 ];
then
echo "The LVS-DR-RIP Server is already running !"
else
/sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/eth1/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/eth1/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
/bin/touch $LOCK
echo "starting LVS-DR-RIP server is ok !"
fi
}
stop() {
/sbin/route del -host $VIP dev lo:0
/sbin/ifconfig lo:0 down>/dev/null
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/eth1/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/eth1/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
rm -rf $LOCK
echo "stopping LVS-DR-RIP server is ok !"
}
status() {
if [ -e $LOCK ];
then
echo "The LVS-DR-RIP Server is already running !"
else
echo "The LVS-DR-RIP Server is not running !"
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
exit 1
esac
exit 0
Note: for ARP suppression, it is best to also apply the arp_ignore/arp_announce settings to the RIP's physical NIC that is directly connected to the DR, as done for eth1 above.
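The echo commands in the script set the ARP parameters only until the next reboot. To make them persistent, they can also be placed in /etc/sysctl.conf (a sketch; adjust the interface names to your environment) and loaded with sysctl -p:

```conf
# Append to /etc/sysctl.conf on each real server, then run: sysctl -p
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.eth1.arp_ignore = 1
net.ipv4.conf.eth1.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```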
Grant execute permission to the script and start it:
# chmod 777 /etc/init.d/lvs-dr
# service lvs-dr start
(2) Install the HTTP service on each RIP and create a test page. The test pages on each RIP are shown below:
Test page on RIP1 (192.168.0.235):
[Figure: RIP1 test page]
Test page on RIP2 (192.168.0.236):
[Figure: RIP2 test page]
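The test pages can be created along these lines. This is a sketch, not the article's exact pages; it assumes Apache's default DocumentRoot /var/www/html, and the file name must match the HTTP_GET health-check path /info.php in keepalived.conf:

```shell
# yum install -y httpd php
# echo "<?php echo 'Real server: ' . php_uname('n'); ?>" > /var/www/html/info.php
# service httpd start
# chkconfig httpd on
```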
III. Testing and verification:
The VIP election can be observed in /var/log/messages, as shown below:
[Figures: /var/log/messages output during the VIP election]
Since cn-sh-sq-web01's priority (110) is higher than cn-sh-sq-web02's (109), cn-sh-sq-web01 wins the election and becomes master. The VIP can then be seen on cn-sh-sq-web01, as shown below:
[Figure: VIP 192.168.0.88 bound as eth0:1 on cn-sh-sq-web01]
The logging scripts also record the transitions:
# tail -f /var/log/keepalived-state.log
Wed Mar 9 21:56:25 CST 2016
The CN-SH-DR01 Starting to become Backup server ....
Please run "ipvsadm -Ln" to check the state ...
...............................................................................!
Wed Mar 9 21:56:28 CST 2016
The CN-SH-DR01 Starting to become master server ....
Please run "ipvsadm -Ln" to check the keepalived state ...
...............................................................................!
Accessing the VIP (http://192.168.0.88) from the CIP now returns the two back-end real servers' pages in round-robin fashion, as shown below:
[Figure: client requests alternating between the RIP1 and RIP2 test pages]
Running "ipvsadm -Ln -c" on the master DR shows the connection entries:
[Figure: output of ipvsadm -Ln -c on the master DR]
Now suppose the master DR fails, for example its NIC goes down:
[Figures: /var/log/messages showing the failover after the master's NIC goes down]
The failure is also recorded in the state log, which can be followed with "tail -f /var/log/keepalived-state.log":
# tail -f /var/log/keepalived-state.log
The CN-SH-DR01 Starting to become master server ....
Please run "ipvsadm -Ln" to check the keepalived state ...
...............................................................................!
Wed Mar 9 22:11:48 CST 2016
The CN-SH-DR01 is fault error ....
Please check the server state ...
...............................................................................!
The VIP has now moved to the backup DR, and the client can still reach the service, as shown below:
Connections on the backup DR:
[Figure: ipvsadm connection entries on the backup DR]
Client access:
[Figure: the client still reaches the test page through the VIP]
This completes the keepalived + LVS deployment example.
Summary: a keepalived + LVS deployment can use one of three LVS modes: NAT (the simplest), DR (the most widely used), and TUN (suited to cross-region, cross-datacenter setups). This article covered only DR mode; for the other modes, see the LVS application articles.