LVS Load Balancing: Highly Available LVS Deployment, Part 2 (Case Study)
In day-to-day environments we often run into this LVS deployment scenario: all of the directors (DR) and real servers (RS) sit in a single LAN, only one public IP is available, and yet the application still has to be published to the Internet. The ideal LVS layout gives every server its own public IP, but public addresses are often scarce. So how do we publish a service through LVS when there is only one public IP? This article walks through a highly available LVS (lvs/dr mode) deployment with a single public IP.
The overall topology is shown in the figure below:
I. Pre-deployment notes:
(1) OS version: CentOS 6.6 (64-bit)
(2) Roles and IP information:
Role                          IP information
Client (CIP)                  192.168.1.128/24, Gateway: 192.168.1.254
Master director (master-dr)   eth0: 172.51.96.119/24, Gateway: 172.51.97.1
Backup director (backup-dr)   eth0: 172.51.96.105/24, Gateway: 172.51.97.1
Backend web1 (RS RIP1)        eth0: 172.51.96.235/24, Gateway: 172.51.96.1
Backend web2 (RS RIP2)        eth0: 172.51.96.236/24, Gateway: 172.51.96.1
Public IP                     172.51.97.175/24, Gateway: 172.51.97.1
VIP                           172.51.97.175/24
Route1                        eth0: 172.51.97.1/24, eth1: 10.10.10.254
Route2                        eth0: 172.51.96.1/24, eth1: 10.10.1.254
Route3                        eth0: 192.168.1.254/24, eth1: 10.10.100.254
Note: in a real environment all Internet links are interconnected; the elliptical area in the diagram represents the Internet as a whole.
(3) Middleware versions:
keepalived: keepalived-1.2.15
httpd: httpd-2.2 (provides the HTTP service)
ipvsadm: ipvsadm-1.2.1
II. Deployment steps
Configuration on the load balancers (directors)
(1) On both Master_DR and Backup_DR, install the dependencies required by keepalived and ipvsadm:
# yum install -y openssl-devel popt-devel libnl-devel kernel-devel
(2) Install keepalived and ipvsadm on both Master_DR and Backup_DR, as follows:
1. Install ipvsadm
# yum install -y ipvsadm
2. Build and install keepalived from source
2.1 Obtaining the keepalived source
The keepalived source tarball can be downloaded from the official site http://www.keepalived.org/, which also hosts the documentation (usage and configuration reference). The version used here is 1.2.15.
# cd ~
# wget http://www.keepalived.org/software/keepalived-1.2.15.tar.gz
2.2 Build and install keepalived
<-- Build and install keepalived -->
# ln -s /usr/src/kernels/2.6.32-573.18.1.el6.x86_64/ /usr/src/linux
# tar zxvf keepalived-1.2.15.tar.gz -C /usr/local/src
# cd /usr/local/src/keepalived-1.2.15/
# ./configure \
--prefix=/usr/local/keepalived \
--with-kernel-dir=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
# make && make install
<-- Adjust the keepalived paths -->
<--- Copy the keepalived binary and sysconfig file --->
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
<--- Register the keepalived init script as a system service --->
# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# chkconfig --add keepalived
# chkconfig --level 2345 keepalived on
<--- Create the keepalived configuration file --->
# mkdir -p /etc/keepalived
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived
Notes:
1. After installation the relevant paths are: install directory /usr/local/keepalived, configuration directory /etc/keepalived/.
2. After installation the init script must be copied to /etc/init.d/.
3. Be sure to perform all of the steps above, otherwise the keepalived service may fail to start.
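As a quick sanity check (an extra step, not part of the original procedure), you can confirm that the binary, init script and configuration file are in place and that the service is registered:
# /usr/sbin/keepalived -v
# ls /etc/init.d/keepalived /etc/keepalived/keepalived.conf
# chkconfig --list keepalived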
(3) Configure the keepalived instance on Lvs_Master_DR and on Lvs_Backup_DR, as shown below:
1. master_dr configuration example (master director)
vim /etc/keepalived/keepalived.conf
Contents:
! Configuration File for keepalived
global_defs {
notification_email {
admin@bluemobi.cn
}
notification_email_from lvs_admin@bluemobi.cn
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id DR_Master
}
#vrrp_script check_nginx {
# script "/etc/keepalived/scripts/check_nginx.sh"
# interval 3
# weight -5
#}
vrrp_instance http {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
dont_track_primary
nopreempt
track_interface {
eth0
eth1
}
mcast_src_ip 172.51.96.119
garp_master_delay 6
virtual_router_id 60
priority 110
advert_int 1
authentication {
auth_type PASS
auth_pass 1234
}
virtual_ipaddress {
172.51.97.175/24 brd 172.51.97.255 dev eth0 label eth0:1
}
virtual_routes {
172.51.97.0/24 dev eth0
}
# track_script {
# check_nginx
# }
notify_master /etc/keepalived/scripts/state_master.sh
notify_backup /etc/keepalived/scripts/state_backup.sh
notify_fault /etc/keepalived/scripts/state_fault.sh
}
virtual_server 172.51.97.175 80 {
delay_loop 1
lb_algo rr
lb_kind DR
persistence_timeout 30
nat_mask 255.255.255.0
protocol TCP
real_server 172.51.96.235 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.51.96.236 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
2. backup_dr configuration example (backup director)
vim /etc/keepalived/keepalived.conf
Contents:
! Configuration File for keepalived
global_defs {
notification_email {
admin@bluemobi.cn
}
notification_email_from lvs_admin@bluemobi.cn
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id DR_BACKUP
}
#vrrp_script check_nginx {
# script "/etc/keepalived/scripts/check_nginx.sh"
# interval 3
# weight -5
#}
vrrp_instance http {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
dont_track_primary
track_interface {
eth0
eth1
}
mcast_src_ip 172.51.96.105
garp_master_delay 6
virtual_router_id 60
priority 105
advert_int 1
authentication {
auth_type PASS
auth_pass 1234
}
virtual_ipaddress {
172.51.97.175/24 brd 172.51.97.255 dev eth0 label eth0:1
}
virtual_routes {
172.51.97.0/24 dev eth0
}
# track_script {
# check_nginx
# }
notify_master /etc/keepalived/scripts/state_master.sh
notify_backup /etc/keepalived/scripts/state_backup.sh
notify_fault /etc/keepalived/scripts/state_fault.sh
}
virtual_server 172.51.97.175 80 {
delay_loop 1
lb_algo rr
lb_kind DR
persistence_timeout 30
nat_mask 255.255.255.0
protocol TCP
real_server 172.51.96.235 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.51.96.236 80 {
weight 1
notify_down /etc/keepalived/scripts/rs_state.sh
HTTP_GET {
url {
path /info.php
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
Note: if the VIP is not in the same subnet as the DIP, the VIP must be configured on the NIC through which the DR directly connects to the backend RS servers; otherwise the service will be unreachable.
3. Create the following scripts on both the master and the backup director:
i. When the director becomes the master, log the switchover time
vim /etc/keepalived/scripts/state_master.sh
Script contents:
#!/bin/bash
host=CN-SH-DR01                                  # set to the current hostname
LOGFILE="/var/log/keepalived-state.log"
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} Starting to become master server ...." >> $LOGFILE 2>&1
echo "Please run \"ipvsadm -Ln\" to check the keepalived state ..." >> $LOGFILE
echo ".........................................................................!" >> $LOGFILE
echo >> $LOGFILE
ii. When the director switches to backup, log the switchover time
vim /etc/keepalived/scripts/state_backup.sh
Script contents:
#!/bin/bash
host=CN-SH-DR01                                  # set to the current hostname
LOGFILE="/var/log/keepalived-state.log"
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} Starting to become Backup server ...." >> $LOGFILE 2>&1
echo "Please run \"ipvsadm -Ln\" to check the state ..." >> $LOGFILE
echo "........................................................................!" >> $LOGFILE
echo >> $LOGFILE
iii. When the director enters the fault state, log the error time
vim /etc/keepalived/scripts/state_fault.sh
Script contents:
#!/bin/bash
host=CN-SH-DR01                                  # set to the current hostname
LOGFILE="/var/log/keepalived-state.log"
echo "" >> $LOGFILE
date >> $LOGFILE
echo "The ${host} is in fault state ...." >> $LOGFILE 2>&1
echo "Please check the server state ..." >> $LOGFILE
echo "........................................................................!" >> $LOGFILE
echo >> $LOGFILE
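The keepalived configuration above also references /etc/keepalived/scripts/rs_state.sh via notify_down, but that script is not shown in the original text. A minimal sketch in the same spirit as the scripts above (the log message is only an assumption) simply records when a real server is marked down:
vim /etc/keepalived/scripts/rs_state.sh
#!/bin/bash
# called by keepalived when a real server health check fails (notify_down)
LOGFILE="/var/log/keepalived-state.log"
echo "" >> $LOGFILE
date >> $LOGFILE
echo "A real server was detected DOWN, please check the backend web service ..." >> $LOGFILE
Remember to make all of these scripts executable, e.g. chmod +x /etc/keepalived/scripts/*.sh.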
4. Restart the keepalived service
# service keepalived restart
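After restarting keepalived on both directors, you can verify (sample checks, actual output will differ) that the VIP is bound on the master and that the IPVS rules were created:
# ip addr show eth0 | grep 172.51.97.175
# ipvsadm -Ln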
Configuration on the backend RS servers
(1) On each RS (RIP1, RIP2), create the following shell script:
# vim /etc/init.d/lvs-dr
Script contents:
#!/bin/sh
#
# Startup script to handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
# available server built on a cluster of real servers, with the load
# balancer running on Linux.
# description: start LVS of DR-RIP
LOCK=/var/lock/ipvsadm.lock
VIP=172.51.97.175
. /etc/rc.d/init.d/functions
start() {
PID=`ifconfig | grep lo:0 | wc -l`
if [ $PID -ne 0 ];
then
echo "The LVS-DR-RIP Server is already running !"
else
/sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/eth0/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/eth0/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
/bin/touch $LOCK
echo "starting LVS-DR-RIP server is ok !"
fi
}
stop() {
/sbin/route del -host $VIP dev lo:0
/sbin/ifconfig lo:0 down >/dev/null
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/eth0/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/eth0/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
rm -rf $LOCK
echo "stopping LVS-DR-RIP server is ok !"
}
status() {
if [ -e $LOCK ];
then
echo "The LVS-DR-RIP Server is already running !"
else
echo "The LVS-DR-RIP Server is not running !"
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status
;;
*)
echo "Usage: $1 {start|stop|restart|status}"
exit 1
esac
exit 0
Note: for ARP suppression, it is best to also apply the same arp_ignore/arp_announce settings to the physical NIC on the RS that directly connects to the DR (for example eth1, if that is the interface facing the DR; the script above covers eth0).
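The script above sets these kernel parameters only at runtime. If you also want them to survive a reboot independently of the init script (an optional extra, not part of the original procedure), they can be added to /etc/sysctl.conf:
# cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
# sysctl -p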
Now grant execute permission to the lvs-dr script and start it:
# chmod 777 /etc/init.d/lvs-dr
# service lvs-dr start
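To confirm the script took effect on each RS, a few sample checks (the VIP should appear on lo:0, the host route should exist, and arp_ignore should read 1):
# ifconfig lo:0
# route -n | grep 172.51.97.175
# cat /proc/sys/net/ipv4/conf/all/arp_ignore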
(2) Install the httpd service on each RS and create a test page on each:
Test page on RIP1 (172.51.96.235):
Test page on RIP2 (172.51.96.236):
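The keepalived health check above requests /info.php and expects HTTP 200, so a minimal setup for such a test page could look like the following (the page content is only an illustration; make it different on each RS so the round-robin is visible). On RIP1, for example:
# yum install -y httpd php
# echo "<?php echo 'web1: 172.51.96.235'; ?>" > /var/www/html/info.php
# service httpd start && chkconfig httpd on
On RIP2 use the corresponding address (web2: 172.51.96.236).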
III. Testing and verification:
The VIP takeover can be observed in /var/log/messages. There we can see that cn-sh-sq-web01, whose priority of 110 is higher than cn-sh-sq-web02's 105, wins the election and becomes master. On cn-sh-sq-web01 the VIP can then be seen bound to eth0:1, and the logging scripts record the transitions as well:
# tail -f /var/log/keepalived-state.log
Wed Mar 9 21:56:25 CST 2016
The CN-SH-DR01 Starting to become Backup server ....
Please run "ipvsadm -Ln" to check the state ...
...............................................................................!

Wed Mar 9 21:56:28 CST 2016
The CN-SH-DR01 Starting to become master server ....
Please run "ipvsadm -Ln" to check the keepalived state ...
...............................................................................!
At this point, visiting http://172.51.97.175 from the CIP reaches the service.
Here VMware is used to simulate the client; the client's IP information is as listed in the table above.
Visiting http://172.51.97.175, the pages of the two backend RS servers are returned in round-robin fashion:
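A simple way to exercise this from the client without a browser is to request the VIP repeatedly, for example:
# for i in $(seq 1 4); do curl -s http://172.51.97.175/info.php; echo; done
Note that with persistence_timeout 30 in the configuration above, requests from the same client IP are pinned to one real server for 30 seconds, so consecutive requests from a single client may not alternate between web1 and web2.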
Running "ipvsadm -Ln -c" on the master-dr shows the current connection entries. At this point the single-public-IP LVS high-availability deployment is essentially complete.
Summary: in LVS DR mode the VIP does not have to be in the same subnet as the DIP. For example, when the environment has only one public IP, that public IP can serve as the VIP and be bound to the NIC on the DR that directly connects to the upstream RS servers, which lets LVS publish the service externally. Keep in mind that LVS places fairly strict requirements on the surrounding network environment.