I) Overview
This article covers two components: LVS and keepalived. Together they are used to build a load-balanced, highly available server cluster.
LVS implements three IP load-balancing techniques and eight connection scheduling algorithms. An LVS cluster has a three-tier structure: the load balancer (director), the server pool, and shared storage.
1) Load balancer (director)
The director is the single entry point of the LVS cluster. It uses IP load balancing, content-based request distribution, or a combination of the two.
With IP load balancing, the servers in the pool must hold the same content and provide the same service. When a client request arrives, the director picks a server from the pool according to the server load and the configured scheduling algorithm, forwards the request to it, and records the decision; subsequent packets belonging to the same request are forwarded to the same server.
With content-based distribution, the servers can provide different services; when a client request arrives, the director chooses a server based on the content of the request.
2) Server pool
The server pool consists of the real servers, the machines that actually handle the application traffic.
3) Shared storage
Shared storage provides a common storage area for the server pool, which makes it easy for all real servers to hold the same content and provide the same service.
keepalived
Here keepalived is used for health checking of the real servers and for failover between the master and backup directors.
II) Test environment
Load-balancing director (master): 10.1.1.160
Load-balancing director (slave): 10.1.1.162
VIP: 10.1.1.166
real server 1: 10.1.1.163
real server 2: 10.1.1.164
Test machine: 10.1.1.165
All five machines above run Debian 5.0.
We first install LVS and keepalived on the directors 10.1.1.160 and 10.1.1.162,
and Apache 2 on the real servers.
III) Installing and configuring keepalived/LVS
1) Install keepalived and ipvsadm on the director 10.1.1.160:
Install keepalived:
apt-get install keepalived
Install ipvsadm:
apt-get install ipvsadm
Create the keepalived configuration file:
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo rr
lb_kind DR
# persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Note: the IP load-balancing technique used here is DR (direct routing).
2) Install keepalived and ipvsadm on the director 10.1.1.162:
Install keepalived:
apt-get install keepalived
Install ipvsadm:
apt-get install ipvsadm
Create the keepalived configuration file /etc/keepalived/keepalived.conf:
! Configuration File for keepalived
global_defs {
router_id LVS_2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo rr
lb_kind DR
# persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
3) Configure the real servers
3.1) Create an interface alias for the VIP on each real server, here lo:0 with address 10.1.1.166:
ifconfig lo:0 10.1.1.166 broadcast 10.1.1.166 netmask 255.255.255.255 up
3.2) Suppress ARP responses for the VIP:
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
3.3) Install Apache:
apt-get install apache2
echo "real server no1" >> /var/www/index.html
Note: perform the same steps on both real servers.
4) Testing
4.1) Start the keepalived service:
lvs1:
/etc/init.d/keepalived restart
lvs2:
/etc/init.d/keepalived restart
4.2) Test from the client machine:
ping 10.1.1.166
PING 10.1.1.166 (10.1.1.166) 56(84) bytes of data.
64 bytes from 10.1.1.166: icmp_req=1 ttl=64 time=0.225 ms
64 bytes from 10.1.1.166: icmp_req=2 ttl=64 time=0.179 ms
64 bytes from 10.1.1.166: icmp_req=3 ttl=64 time=0.163 ms
64 bytes from 10.1.1.166: icmp_req=4 ttl=64 time=0.226 ms
64 bytes from 10.1.1.166: icmp_req=5 ttl=64 time=0.218 ms
Capture ICMP packets on lvs1:
tcpdump -p icmp -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture>
09:45:12.572695 IP 10.1.1.165 > 10.1.1.166: ICMP echo request,>
09:45:12.572713 IP 10.1.1.166 > 10.1.1.165: ICMP echo reply,>
09:45:13.572693 IP 10.1.1.165 > 10.1.1.166: ICMP echo request,>
09:45:13.572708 IP 10.1.1.166 > 10.1.1.165: ICMP echo reply,>
09:45:14.572724 IP 10.1.1.165 > 10.1.1.166: ICMP echo request,>
09:45:14.572741 IP 10.1.1.166 > 10.1.1.165: ICMP echo reply,>
09:45:15.572738 IP 10.1.1.165 > 10.1.1.166: ICMP echo request,>
09:45:15.572756 IP 10.1.1.166 > 10.1.1.165: ICMP echo reply,>
09:45:16.572694 IP 10.1.1.165 > 10.1.1.166: ICMP echo request,>
09:45:16.572710 IP 10.1.1.166 > 10.1.1.165: ICMP echo reply,>
This shows that the VIP is currently being served by lvs1.
IV) Analysis of keepalived master/backup communication
1) The VRRP protocol and the master/backup switch-over mechanism
The keepalived master and backup communicate with the VRRPv2 protocol to determine their states and the ownership of the VIP. The MASTER sends multicast advertisements to the multicast address 224.0.0.18.
The capture looks like this:
tcpdump -X -n -vvv 'dst 224.0.0.18'
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture>
09:43:04.295639 IP (tos 0x0, ttl 255,> 10.1.1.160 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20, addrs: 10.1.1.166 auth "1111^@^@^@^@"
0x0000: 4500 0028 c934 0000 ff70 067e 0a01 01a0 E..(.4...p.~....
0x0010: e000 0012 2133 c801 0101 a7c0 0a01 01a6 ....!3..........
0x0020: 3131 3131 0000 0000 1111....
09:43:05.295686 IP (tos 0x0, ttl 255,> 10.1.1.162 > 224.0.0.18: VRRPv2, Advertisement, vrid 52, prio 100, authtype simple, intvl 1s, length 20, addrs: 10.1.1.166 auth "1111^@^@^@^@"
0x0000: 4500 0028 da17 0000 ff70 f598 0a01 01a2 E..(.....p......
0x0010: e000 0012 2134 6401 0101 0bc0 0a01 01a6 ....!4d.........
0x0020: 3131 3131 0000 0000 1111....
09:43:05.296837 IP (tos 0x0, ttl 255,> 10.1.1.160 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20, addrs: 10.1.1.166 auth "1111^@^@^@^@"
0x0000: 4500 0028 c935 0000 ff70 067d 0a01 01a0 E..(.5...p.}....
0x0010: e000 0012 2133 c801 0101 a7c0 0a01 01a6 ....!3..........
0x0020: 3131 3131 0000 0000 1111....
Take the advertisement sent by 10.1.1.160 as an example:
10.1.1.160 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20, addrs: 10.1.1.166 auth "1111^@^@^@^@"
0x0000: 4500 0028 c934 0000 ff70 067e 0a01 01a0 E..(.4...p.~....
0x0010: e000 0012 2133 c801 0101 a7c0 0a01 01a6 ....!3..........
0x0020: 3131 3131 0000 0000 1111....
The VRRPv2 message itself starts here:
0x0014: 2133 c801 0101 a7c0 0a01 01a6 ....!3..........
0x0020: 3131 3131 0000 0000
version: 4 bits; the RFC defines VRRP version 2, so the value here is 2.
type: 4 bits; only one type, Advertisement (value 1), is defined, so the value here is 1.
virtual router id: 8 bits; configured as virtual_router_id 51, which is 0x33 in hex.
priority: 8 bits; keepalived on lvs1 is configured with priority 200, which is 0xC8 in hex.
count ip addrs: the number of IP addresses carried in the packet, 8 bits; there is one VIP here, so the value is 01.
auth type: authentication type, 8 bits. Authentication was removed in RFC 3768, so the field is kept only for compatibility with older implementations; here it is 01 (simple password), and 00 would mean no authentication.
adver int: the advertisement interval; the default is 1 second, our configuration also uses 1 second, so the value is 01.
checksum: 16 bits; it covers only the VRRP payload, not the IP header.
ip address: the VIP, 32 bits; our VIP is 10.1.1.166, which is 0a01 01a6 in hex.
auth data: the authentication password, up to 8 characters (64 bits) padded with zeros, so here it is 3131 3131 0000 0000 ("1111").
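As a cross-check, the 20-byte VRRP payload from the dump above can be decoded field by field with a short script. This is a minimal Python sketch; the hex bytes are copied verbatim from the capture, and the field layout follows the description above:

import struct

# VRRPv2 payload taken from the tcpdump hex dump above (20 bytes)
payload = bytes.fromhex("2133c801" "0101a7c0" "0a0101a6" "31313131" "00000000")

ver_type, vrid, priority, count_ip = struct.unpack("!BBBB", payload[:4])
auth_type, adver_int, checksum = struct.unpack("!BBH", payload[4:8])
vip = ".".join(str(b) for b in payload[8:12])
auth_data = payload[12:20].rstrip(b"\x00").decode()

print("version  :", ver_type >> 4)    # 2
print("type     :", ver_type & 0x0F)  # 1 (Advertisement)
print("vrid     :", vrid)             # 51
print("priority :", priority)         # 200
print("count ip :", count_ip)         # 1
print("auth type:", auth_type)        # 1 (simple password)
print("adver int:", adver_int)        # 1 second
print("vip      :", vip)              # 10.1.1.166
print("auth data:", auth_data)        # 1111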
2) keepalived VRRP configuration
The master configuration is as follows:
! Configuration File for keepalived
global_defs {
router_id LVS1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
The backup configuration is as follows:
! Configuration File for keepalived
global_defs {
router_id LVS2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
Notes:
global_defs{} holds the global configuration.
router_id identifies this keepalived instance; it can be any string, and the local hostname is a common choice.
vrrp_instance <name> {} defines a VRRP instance; only the basic options are covered here.
state MASTER: the initial role of this keepalived node; use BACKUP for the standby machine.
Question 1:
If both machines are configured as MASTER, which one becomes master and which becomes backup?
The answer depends on the two machines' priorities (the priority option). The configured state is not carried in the VRRP protocol, so priority decides.
Below, keepalived on both machines is set to MASTER:
lvs1:
Sep 6 13:45:45 10 kernel: [ 7290.447277] IPVS: sync thread started: state = MASTER, mcast_ifn = eth0, syncid = 51
Sep 6 13:45:46 10 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 6 13:45:47 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Sep 6 14:44:57 10 Keepalived_vrrp: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
lvs2:
Sep 6 14:44:56 debian kernel: [536121.748395] IPVS: sync thread started: state = MASTER, mcast_ifn = eth0, syncid = 51
Sep 6 14:44:57 debian Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 6 14:44:57 debian Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Sep 6 14:44:57 debian Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Note:
The MASTER was on lvs1. After changing lvs2 to MASTER as well and restarting keepalived, there are two MASTERs sharing the same virtual_router_id, so priority has to decide which is master and which is backup,
which produces the log output below:
lvs1:
Sep 6 14:44:57 10 Keepalived_vrrp: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
lvs2:
Sep 6 14:44:57 debian Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Sep 6 14:44:57 debian Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
What if the priorities are also equal?
In that case both keepalived instances become MASTER and both configure the VIP, which causes an address conflict.
Question 2:
If keepalived on the MASTER is stopped, how does the BACKUP take over?
While running, the MASTER sends VRRPv2 multicast advertisements on the local network segment:
tcpdump -X -n -vvv 'dst 224.0.0.18'
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture>
16:54:47.816024 IP (tos 0x0, ttl 255,> 10.1.1.160 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20, addrs: 10.1.1.166 auth "1111^@^@^@^@"
0x0000: 4500 0028 08ca 0000 ff70 c6e8 0a01 01a0 E..(.....p......
0x0010: e000 0012 2133 6401 0101 0bc1 0a01 01a6 ....!3d.........
0x0020: 3131 3131 0000 0000 1111....
Note:
The multicast advertisements were analysed above; the point here is that the BACKUP does not send advertisements.
If the MASTER goes down, the BACKUP stops receiving its advertisements; once it confirms that they are gone it starts sending advertisements itself to announce its own state, brings up the VIP, and formally takes over.
Question 3:
After the MASTER goes down and later comes back up, what state does the BACKUP end up in, and how does keepalived handle it?
With the configuration above, if lvs1 goes down, lvs2 takes over the VIP and is promoted to MASTER. When lvs1 recovers, it reclaims the VIP and becomes MASTER again,
while lvs2 is demoted back to BACKUP.
Is there a way to keep the roles from switching back when lvs1 recovers?
Yes.
The nopreempt option solves this.
Modify the lvs1 configuration as follows:
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS1
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
Here state is changed to BACKUP, so both keepalived nodes are configured as BACKUP.
Modify the lvs2 configuration as follows:
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 150
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
Here the nopreempt option is added and the priority is raised to 150, higher than lvs1's priority of 100.
Now we simulate the switch-over.
The MASTER is currently on lvs1, whose log shows:
Sep 7 10:54:10 10 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 7 10:54:11 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Sep 7 10:54:11 10 kernel: [80003.605718] IPVS: stopping backup sync thread 5160 ...
Sep 7 10:54:11 10 kernel: [80003.606177] IPVS: sync thread started: state = MASTER, mcast_ifn = eth0, syncid = 51
Stop the keepalived service on lvs1:
/etc/init.d/keepalived stop
Watch the messages log on lvs2:
tail -f /var/log/messages
Sep 7 10:53:58 debian Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 7 10:53:59 debian Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Sep 7 10:54:06 debian Keepalived_vrrp: Terminating VRRP child process on signal
Sep 7 10:54:06 debian Keepalived_healthcheckers: Terminating Healthchecker child process on signal
Note: lvs2 changes from BACKUP to MASTER.
Now start keepalived on lvs1 again:
/etc/init.d/keepalived start
Check the lvs1 log:
Sep 7 11:08:52 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Sep 7 11:08:52 10 Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
Sep 7 11:08:52 10 kernel: [80885.206211] IPVS: sync thread started: state = BACKUP, mcast_ifn = eth0, syncid = 51
Note: lvs1 remains in the BACKUP state after keepalived is restarted; it does not preempt.
To recap the reasoning:
Why configure both nodes with state BACKUP? To guarantee that neither side preempts the other.
And why give one node a higher priority? Because nopreempt is configured on the higher-priority node, so even the higher priority does not preempt the lower one.
In other words, one keepalived only takes over when the other one fails.
interface eth0: the network interface used for VRRP communication.
virtual_router_id: the virtual router ID (VRID) carried in the advertisements.
Within a master/backup pair the virtual_router_id must be identical; if it differs, the MASTER and BACKUP each send their own advertisements,
which means the VIP becomes active on both machines and causes an address conflict.
priority 100: the priority; the node with the higher priority becomes MASTER.
If a node's state is MASTER but its priority is lower than that of the node configured as BACKUP, it is simply demoted to BACKUP.
The priorities must not be equal; if they are, both keepalived instances become active and both send advertisements.
advert_int 1: the VRRP advertisement interval in seconds.
Setting advert_int to 5 makes an advertisement go out every 5 seconds:
tcpdump vrrp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture> 14:14:51.683320 IP 10.1.1.160 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 5s, length 20
14:14:56.684241 IP 10.1.1.160 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 5s, length 20
14:15:01.685193 IP 10.1.1.160 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 5s, length 20
14:15:06.686163 IP 10.1.1.160 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 5s, length 20
14:15:11.687132 IP 10.1.1.160 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 150, authtype simple, intvl 5s, length 20
Note that if the master and backup advertisement intervals do not match, for example 5 seconds on the master and 1 second on the backup, the backup takes over while the master's keepalived rejects the mismatched advertisements; at that point only the backup is sending advertisements.
The master's log shows:
tail -f /var/log/messages
Sep 7 14:21:16 10 Keepalived_vrrp: advertissement interval mismatch mine=5000000 rcved=1
Sep 7 14:21:16 10 Keepalived_vrrp: Sync instance needed on eth0 !!!
Sep 7 14:21:16 10 Keepalived_vrrp: VRRP_Instance(VI_1) Dropping received VRRP packet...
authentication {
auth_type PASS
auth_pass 1111
}
This sets the authentication type and password, which must match on MASTER and BACKUP.
Note: if the MASTER/BACKUP passwords do not match, keepalived rejects the advertisements, as shown below:
Sep 7 14:34:43 debian Keepalived_vrrp: bogus VRRP packet received on eth0 !!!
Sep 7 14:34:43 debian Keepalived_vrrp: VRRP_Instance(VI_1) Dropping received VRRP packet...
Sep 7 14:34:44 debian Keepalived_vrrp: receive an invalid passwd!
virtual_ipaddress {
10.1.1.166
}
This is the VRRP HA virtual address, i.e. the VIP.
Note that more than one VIP can be defined in this block:
virtual_ipaddress {
10.1.1.166
10.1.1.167
}
Check the VIP addresses:
ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 6c:62:6d:4c:3a:5d brd ff:ff:ff:ff:ff:ff
inet 10.1.1.162/24 brd 10.1.1.255 scope global eth0
inet 10.1.1.166/32 scope global eth0
inet 10.1.1.167/32 scope global eth0
inet6 fe80::6e62:6dff:fe4c:3a5d/64 scope link
valid_lft forever preferred_lft forever
V) Health checks with custom scripts
vrrp_script <script name> {}
A script or command can be used to check the system; if it fails, a master/backup switch-over is triggered.
Below is the lvs1 configuration with the check script added:
! Configuration File for keepalived
global_defs {
router_id LVS1
}
vrrp_script chk_nfs {
script "/bin/pidof nfsd"
interval 10
weight -90
fall 3
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
preempt_delay 300
track_script {
chk_nfs
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
Below is the lvs2 configuration with the check script added:
! Configuration File for keepalived
global_defs {
router_id LVS1
}
vrrp_script chk_nfs {
script "/bin/pidof nfsd"
interval 10
weight -90
fall 3
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
track_script {
chk_nfs
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.166
}
}
Notes:
1) /bin/pidof nfsd checks whether the nfsd service is running; the check interval is 10 seconds.
2) If the script fails 3 times in a row on lvs1 (the master), keepalived subtracts 90 from its current priority; once the script succeeds again, the original priority is restored.
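The effect on the election can be illustrated with a minimal sketch of the priority arithmetic (assuming the usual keepalived behaviour for a negative weight, as described above: the configured priority is lowered by |weight| while the check is failing):

# lvs1 is configured with priority 150, lvs2 with priority 100, weight -90
def effective_priority(base, weight, script_ok):
    # negative weight: subtract |weight| while the script is failing
    return base if script_ok or weight >= 0 else base + weight

lvs1 = effective_priority(150, -90, script_ok=False)  # nfsd stopped -> 60
lvs2 = effective_priority(100, -90, script_ok=True)   # nfsd running -> 100
print("new MASTER:", "lvs2" if lvs2 > lvs1 else "lvs1")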
The test:
Stop the NFS service on lvs1:
/etc/init.d/nfs-kernel-server stop
Check the lvs1 log:
Sep 7 16:41:22 10 Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
Sep 7 16:41:23 10 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 7 16:41:24 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Sep 7 16:49:16 10 kernel: [ 5736.924654] nfsd: last server has exited, flushing export cache
Sep 7 16:49:42 10 Keepalived_vrrp: VRRP_Script(chk_nfs) failed
Sep 7 16:49:43 10 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Sep 7 16:49:43 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Now check the log on lvs2:
Sep 7 16:49:08 debian Keepalived_vrrp: VRRP_Script(chk_nfs) succeeded
Sep 7 16:49:43 debian Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
Sep 7 16:49:44 debian Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 7 16:49:45 debian Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Now start NFS on lvs1 again:
/etc/init.d/nfs-kernel-server start
Check the lvs1 log:
Sep 7 17:21:52 10 Keepalived_vrrp: VRRP_Script(chk_nfs) succeeded
Sep 7 17:21:52 10 Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
Sep 7 17:21:53 10 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 7 17:21:54 10 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Note: lvs1 raises its priority back up and is promoted to MASTER again.
VI) Virtual server configuration analysis
virtual_server <vip> <port> {} defines a virtual server.
The example below is taken from the lvs1 configuration:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind DR
persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
delay_loop 6 is the health-check interval in seconds; here the real servers are checked every 6 seconds.
Example:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.163:www Route 1 0 0
-> 10.1.1.164:www Route 1 0 0
Stop the Apache service on 10.1.1.163:
/etc/init.d/apache2 stop
Check the real servers on lvs1 again:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.164:www Route 1 0 0
10.1.1.163 has been removed from the LVS table.
lb_algo wrr selects the scheduling algorithm; internet applications commonly use wlc or rr. Here we use wrr; the scheduling algorithms are analysed later.
lb_kind selects the packet-forwarding method, one of DR, NAT or TUN; these are analysed later as well.
persistence_timeout is the session persistence time in seconds.
This option is useful for dynamic web sites: once a user has logged in, session persistence keeps forwarding that user's requests to the same application server.
As an example, suppose we have an LVS setup in DR mode with two real servers and persistence disabled on the director. On the user's first visit, the request is forwarded to one of the real servers and he sees a login page.
He then enters his username and password and submits the form, and at this point the login may fail: without session persistence, the load balancer may forward this second request to the other server.
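Conceptually, session persistence is just a per-client stickiness table with a timeout. The following minimal Python sketch illustrates the idea only (the real bookkeeping lives inside the IPVS kernel code); the random fallback is a stand-in for the configured scheduler:

import random
import time

PERSISTENCE_TIMEOUT = 60
real_servers = ["10.1.1.163", "10.1.1.164"]
table = {}  # client ip -> (real server, time of last request)

def pick_server(client_ip):
    now = time.time()
    entry = table.get(client_ip)
    if entry and now - entry[1] < PERSISTENCE_TIMEOUT:
        server = entry[0]                     # sticky: reuse the same server
    else:
        server = random.choice(real_servers)  # stand-in for the real scheduler
    table[client_ip] = (server, now)
    return server

print(pick_server("10.1.1.165"))   # first request picks a server
print(pick_server("10.1.1.165"))   # follow-up request stays on the same one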
The following virtual_server configuration enables session persistence (persistence_timeout 60):
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind TUN
persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
Check the LVS scheduling:
ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.164:www Tunnel 5 44 2131
-> 10.1.1.163:www Tunnel 5 0 0
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.164:www Tunnel 5 71 2667
-> 10.1.1.163:www Tunnel 5 0 0
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.164:www Tunnel 5 54 3563
-> 10.1.1.163:www Tunnel 5 0 0
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr persistent 60
-> 10.1.1.164:www Tunnel 5 40 4406
-> 10.1.1.163:www Tunnel 5 0 0
Note: all the requests were forwarded to 10.1.1.164, which demonstrates the effect of session persistence.
protocol is the forwarded protocol, either TCP or UDP; it is not analysed in detail here.
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
This block defines a real server in the pool. weight is the server's weight; it only matters for weighted scheduling algorithms such as wrr and wlc.
inhibit_on_failure makes a failed health check set the server's weight to 0 instead of removing it from the IPVS table.
Let's test this.
Modify the LVS configuration as follows:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind TUN
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
inhibit_on_failure
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
inhibit_on_failure
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Check the LVS weights:
ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Tunnel 0 0 0
-> 10.1.1.163:www Tunnel 5 0 0
Test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
Looking at the load distribution, the real server whose weight is 0 receives no traffic at all:
ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Tunnel 0 0 0
-> 10.1.1.163:www Tunnel 5 66 2842
TCP_CHECK {} defines a TCP health check.
connect_timeout 10 is the connection timeout: if connecting to the real server's port gets no answer within 10 seconds, the real server is removed from LVS (or its weight is set to 0).
nb_get_retry 3 is the number of retries: if all 3 attempts fail, the real server is removed from LVS (or its weight is set to 0).
delay_before_retry 3 is the delay between retries, here 3 seconds.
connect_port 80 is the port used for the check.
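Put together, a TCP_CHECK amounts to the following logic. This is a minimal Python sketch using the same parameter names as the configuration above, not keepalived's actual implementation:

import socket
import time

def tcp_check(host, connect_port=80, connect_timeout=10,
              nb_get_retry=3, delay_before_retry=3):
    for _ in range(nb_get_retry):
        try:
            with socket.create_connection((host, connect_port),
                                          timeout=connect_timeout):
                return True                    # port answered: server is healthy
        except OSError:
            time.sleep(delay_before_retry)     # wait before the next attempt
    return False                               # remove from LVS / set weight to 0

print(tcp_check("10.1.1.163"))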
VII) Scheduling algorithms
1) Round-robin scheduling (rr)
With round-robin, the director distributes incoming requests to the real servers strictly in turn, treating every server equally regardless of its actual connection count or load.
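A minimal sketch of the selection rule (the real scheduling happens inside the IPVS kernel module):

import itertools

real_servers = ["10.1.1.163", "10.1.1.164"]
rr = itertools.cycle(real_servers)   # servers are used strictly in turn

for _ in range(4):
    print(next(rr))                  # 163, 164, 163, 164, ...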
Let's look at the effect of round-robin scheduling:
while ((1)); do ipvsadm -l; sleep 1; done
Test from the client:
ab -n 1000 -c 100 http://10.1.1.166/
Note:
ab requests the home page of 10.1.1.166 1000 times with a concurrency of 100.
LVS shows the following:
while ((1)); do ipvsadm -l; sleep 1; done
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 0 0
-> 10.1.1.163:www Route 1 0 0
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 5 95
-> 10.1.1.163:www Route 1 6 94
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 6 138
-> 10.1.1.163:www Route 1 5 140
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 38 174
-> 10.1.1.163:www Route 1 37 176
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 5 290
-> 10.1.1.163:www Route 1 3 293
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 19 483
-> 10.1.1.163:www Route 1 19 483
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www rr
-> 10.1.1.164:www Route 1 0 502
-> 10.1.1.163:www Route 1 0 502
The 1000 connections are distributed evenly across the two machines.
Note:
ActiveConn is the number of active connections, i.e. TCP connections in the ESTABLISHED state.
InActConn counts the TCP connections in every state other than ESTABLISHED.
2) Weighted round-robin (wrr)
With weighted round-robin, the director distributes requests in proportion to the real servers' configured weights, so servers with higher weights receive more requests.
If no weight is defined, the weight defaults to 1 and wrr behaves exactly like rr.
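A minimal sketch of the idea behind wrr (the kernel uses a smoother interleaving, but the resulting ratio is the same):

import itertools

weights = {"10.1.1.163": 10, "10.1.1.164": 5}
rotation = [ip for ip, w in weights.items() for _ in range(w)]
wrr = itertools.cycle(rotation)

picks = [next(wrr) for _ in range(30)]
print(picks.count("10.1.1.163"), picks.count("10.1.1.164"))  # 20 10, a 2:1 split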
Now we modify the weights:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind DR
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 10
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Now we test again.
Client:
ab -n 1000 -c 100 http://10.1.1.166/
lvs:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 10 0 0
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Route 5 0 31
-> 10.1.1.163:www Route 10 1 60
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Route 5 0 39
-> 10.1.1.163:www Route 10 0 77
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Route 5 24 172
-> 10.1.1.163:www Route 10 49 344
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Route 5 3 251
-> 10.1.1.163:www Route 10 6 502
LVS distributes the requests according to the weights: 10.1.1.163 has weight 10 and 10.1.1.164 has weight 5, so 10.1.1.163 handles roughly twice as many requests as 10.1.1.164.
3) Least-connection scheduling (lc)
This algorithm assigns each new connection to the server that currently has the fewest connections. Least-connection scheduling is a dynamic algorithm: it estimates a server's load from the number of connections currently active on it.
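A minimal sketch of the selection rule:

connections = {"10.1.1.163": 0, "10.1.1.164": 0}  # current connection counts

def pick_lc():
    server = min(connections, key=connections.get)  # fewest connections wins
    connections[server] += 1                         # the new connection is assigned
    return server

for _ in range(4):
    print(pick_lc(), dict(connections))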
We change the LVS scheduling algorithm:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo lc
lb_kind DR
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Now test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
Watch the LVS connection state:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lc
-> 10.1.1.164:www Route 5 54 2717
-> 10.1.1.163:www Route 5 19 2730
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lc
-> 10.1.1.164:www Route 5 9 3038
-> 10.1.1.163:www Route 5 35 2981
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lc
-> 10.1.1.164:www Route 5 45 3533
-> 10.1.1.163:www Route 5 18 3579
We can see that once the connection count on one real server climbs, the director starts sending more of the new requests to the other one; this repeated re-balancing keeps the two servers roughly even.
4) Weighted least-connection scheduling (wlc)
This algorithm is a superset of least-connection scheduling in which each server's weight reflects its processing capacity, so it handles real servers of unequal capacity better.
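A minimal sketch of the selection rule: the server with the smallest connections/weight ratio is chosen, so a higher-weight server is allowed to hold proportionally more connections:

servers = {"10.1.1.163": {"weight": 10, "conns": 0},
           "10.1.1.164": {"weight": 5,  "conns": 0}}

def pick_wlc():
    ip = min(servers, key=lambda s: servers[s]["conns"] / servers[s]["weight"])
    servers[ip]["conns"] += 1
    return ip

picks = [pick_wlc() for _ in range(30)]
print(picks.count("10.1.1.163"), picks.count("10.1.1.164"))  # about a 2:1 split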
We change the LVS scheduling algorithm:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 10
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Now test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
Watch the LVS connection state:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wlc
-> 10.1.1.164:www Route 5 42 1914
-> 10.1.1.163:www Route 10 22 2489
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wlc
-> 10.1.1.164:www Route 5 9 2067
-> 10.1.1.163:www Route 10 35 2723
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wlc
-> 10.1.1.164:www Route 5 53 2342
-> 10.1.1.163:www Route 10 2 3159
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wlc
-> 10.1.1.164:www Route 5 58 2350
-> 10.1.1.163:www Route 10 0 3153
With weighted least-connection the higher-weight server is still assigned more connections, but since the algorithm is built on top of least connections the effect is less pronounced than with wrr.
5) Locality-based least-connection scheduling (lblc)
The algorithm assumes that any server can handle any request. Its goal is, while keeping the servers roughly balanced, to dispatch requests with the same destination IP address to the same server, improving locality of access and main-memory/cache hit rates and thereby the processing capacity of the whole cluster.
If the selected server is overloaded, a usable server is chosen by the least-connection rule and the request is sent there instead.
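A minimal sketch of the lblc idea: remember which server handled a given destination IP and keep using it, unless that server is overloaded, in which case fall back to a least-connection choice (the overload threshold here is an arbitrary illustration):

servers = {"10.1.1.163": 0, "10.1.1.164": 0}  # server -> active connections
assignment = {}                                # destination ip -> server
OVERLOAD = 100

def pick_lblc(dest_ip):
    server = assignment.get(dest_ip)
    if server is None or servers[server] > OVERLOAD:
        server = min(servers, key=servers.get)   # least-connection fallback
        assignment[dest_ip] = server
    servers[server] += 1
    return server

print(pick_lblc("10.1.1.166"))  # the first request fixes the mapping
print(pick_lblc("10.1.1.166"))  # the same destination keeps the same server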
Here we change the LVS scheduling algorithm:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo lblc
lb_kind DR
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Test:
ab -c 1 -n 10000 http://10.1.1.166/index.html
On the LVS director:
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lblc
-> 10.1.1.164:www Route 5 0 2182
-> 10.1.1.163:www Route 5 0 0
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lblc
-> 10.1.1.164:www Route 5 1 2436
-> 10.1.1.163:www Route 5 0 0
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www lblc
-> 10.1.1.164:www Route 5 1 2622
-> 10.1.1.163:www Route 5 0 0
Notes:
With lblc, all the requests for the VIP are forwarded to 10.1.1.164, following the rule that requests for the same destination IP address go to the same machine.
If the concurrency is raised to 100, the requests spread across both real servers, because once the chosen server is overloaded the least-connection rule selects another usable machine.
6) Locality-based least-connection with replication (lblcr)
It is essentially the same as lblc; the only difference is that it maintains a mapping from a destination IP address to a set of servers, whereas lblc maps a destination IP address to a single server.
The main weakness of lblc is that a single cache server may not be able to keep up with the requests for a "hot" site. In that case lblc picks another cache server by the least-connection rule and maps the hot site to it;
that server soon becomes overloaded as well and the process repeats, so the hot site may end up replicated on every cache server, which lowers the overall cache efficiency.
The test method is the same as for lblc and is not repeated here.
7) Destination hashing scheduling (dh)
This algorithm also balances by destination IP address, but it is a static mapping: a hash function maps each destination IP address to a server.
Destination hashing uses the request's destination IP address as the hash key to look up the server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned.
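A minimal sketch of the idea: a static hash of the destination IP always selects the same real server. The kernel uses its own hash function; a simple modulo is used here purely for illustration, so the concrete server chosen may differ from the test results below:

import ipaddress

real_servers = ["10.1.1.163", "10.1.1.164"]

def pick_dh(dest_ip):
    key = int(ipaddress.ip_address(dest_ip))        # hash key = destination IP
    return real_servers[key % len(real_servers)]    # static mapping, no load feedback

print(pick_dh("10.1.1.166"))  # every request for this VIP maps to one server
print(pick_dh("10.1.1.167"))  # another VIP may map to the other server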
Test:
ab -c 1 -n 10000 http://10.1.1.166/index.html
On the LVS director:
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 1924
-> 10.1.1.163:www Route 5 0 0
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 2086
-> 10.1.1.163:www Route 5 0 0
10.1.1.160:/etc/keepalived# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 2240
-> 10.1.1.163:www Route 5 0 0
Note: the director forwards all the requests to 10.1.1.164, because the hash of the destination address 10.1.1.166 (the VIP) maps to 10.1.1.164.
If 10.1.1.167 is used as the VIP instead, the requests end up on 10.1.1.163; this may be related to the IP address being odd or even.
Next we add 10.1.1.167 as a second VIP and test again.
Add 10.1.1.167 to the virtual_ipaddress block:
virtual_ipaddress {
10.1.1.166
10.1.1.167
}
}
Add a new virtual_server block:
virtual_server 10.1.1.167 80 {
delay_loop 6
lb_algo dh
lb_kind DR
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Test from the client:
ab -c 100 -n 10000 http://10.1.1.167/index.html
The result:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 0
TCP 10.1.1.167:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 65 2930
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 0
TCP 10.1.1.167:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 10054
This time the dh algorithm forwards all the connections to 10.1.1.163.
8) Source hashing scheduling (sh)
This algorithm is the exact opposite of destination hashing: it uses the request's source IP address as the hash key to look up the server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned.
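A minimal sketch, mirroring the dh sketch above but hashing the client's source IP, so each client always lands on the same real server (again with a toy hash, so the concrete result may differ from the test below):

import ipaddress

real_servers = ["10.1.1.163", "10.1.1.164"]

def pick_sh(src_ip):
    key = int(ipaddress.ip_address(src_ip))         # hash key = source IP
    return real_servers[key % len(real_servers)]

print(pick_sh("10.1.1.165"))  # the test machine always gets the same server
print(pick_sh("10.1.1.22"))   # another client may hash to the other server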
Test:
ab -c 100 -n 10000 http://10.1.1.166/index.html
The result:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 0
TCP 10.1.1.167:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 65 2930
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 0
TCP 10.1.1.167:www dh
-> 10.1.1.164:www Route 5 0 0
-> 10.1.1.163:www Route 5 0 10054
Note: with the sh algorithm the hash of the source IP address (10.1.1.165) sends the traffic to the 10.1.1.163 server.
What if the requests are sent from 10.1.1.22 instead?
The traffic is then forwarded to 10.1.1.164, as shown below.
Test from the client:
ab -c 100 -n 10000 http://10.1.1.166/index.html
The result:
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www sh
-> 10.1.1.164:www Route 5 2 3629
-> 10.1.1.163:www Route 5 0 0
10.1.1.160:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www sh
-> 10.1.1.164:www Route 5 0 10033
-> 10.1.1.163:www Route 5 0 0
The requests from 10.1.1.22 are forwarded to the 10.1.1.164 server this time.
VIII) IP load-balancing techniques
All of the tests above used DR mode, so the DR configuration steps are not repeated here.
DR mode
1) In VS/DR the director only schedules the requests; the real servers send their responses directly back to the clients.
2) The director and the real servers must be on the same network segment, not separated by a router.
3) The VIP is shared by the director and the real servers. The director's VIP is externally visible and receives the request traffic; on the real servers the VIP must be configured so that it is not visible externally and is only used to accept packets whose destination address is the VIP.
4) In VS/DR the director only handles the client-to-server half of the connection and moves through the TCP states of that half-connection.
The DR packet flow:
1) The client sends a request to the director's VIP.
2) The director dynamically selects a real server according to the scheduling algorithm. It does not modify the IP packet; it only rewrites the frame's destination MAC address to that of the selected server and sends the rewritten frame on the local network (see the sketch after this list).
3) The selected server receives the frame, extracts the IP packet, finds that the destination address (the VIP) is configured on a local network device, processes the request, and then returns the response directly to the client according to its routing table.
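A minimal conceptual sketch of the forwarding step in item 2: the IP packet is untouched and only the Ethernet destination MAC is rewritten (the dictionaries and MAC strings are purely illustrative):

frame = {
    "dst_mac": "director-mac", "src_mac": "gateway-mac",
    "ip": {"src": "10.1.1.165", "dst": "10.1.1.166", "payload": "GET /index.html"},
}

def dr_forward(frame, real_server_mac):
    out = dict(frame)
    out["dst_mac"] = real_server_mac   # only the layer-2 destination changes
    out["src_mac"] = "director-mac"
    return out                         # the IP header (VIP) is left untouched

print(dr_forward(frame, "realserver-163-mac")["ip"]["dst"])  # still 10.1.1.166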
Why do the real servers need the following configuration in DR mode?
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
To understand these settings, we first need to understand how ARP works.
ARP (Address Resolution Protocol) resolves an IP address to its physical (MAC) address. For example, when host A wants to send data to host B, it first looks in its ARP table for B's MAC address; if it finds it, it writes that MAC address into the frame as the destination and sends the frame.
If the target IP is not in the ARP cache, host A broadcasts an ARP request on the network, announcing its own IP and MAC address and asking for host B's MAC address.
The other hosts do not answer the ARP query; only host B, when it receives the frame, replies to host A.
Host A now knows B's MAC address and can send it data. At the same time both A and B update their ARP caches (A included its own IP and MAC in the request).
The next time A sends to B, or B to A, each simply looks the address up in its own ARP cache.
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
With arp_ignore set to 1, the system only answers ARP requests whose target IP address is configured on the interface that received the request, so a real server does not answer ARP requests for the VIP, which is configured only on lo.
The default is 0: as long as the IP exists on any interface of the machine, the machine responds to the ARP request with a MAC address.
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
When a machine on the internal network sends a packet to an outside address, it first sends an ARP request for the router's MAC address; that ARP request carries a source IP and a source MAC address.
By default Linux uses the IP packet's source address as the ARP source address rather than an address belonging to the sending interface. In an LVS setup like this, outgoing packets from the real servers carry the VIP as their source address, so the ARP request would advertise the VIP together with the real server's MAC address.
When the router receives such a request it updates its ARP cache, the VIP is effectively hijacked, and traffic is misdirected. Setting arp_announce to 2 makes the kernel always use the address of the actual outgoing interface in ARP requests instead of the VIP.
NAT mode
How NAT mode works:
A client accesses the service through the VIP, and the director selects a real server from the pool according to the connection scheduling algorithm.
The director rewrites the packet's destination address (the VIP) to the selected server's address and the destination port to the corresponding server port, then forwards the rewritten packet to the selected server.
When the real server's response passes back through the director, the director rewrites the source address and port back to the VIP and its port before sending the packet on to the client.
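A minimal sketch of the two rewrites the director performs in NAT mode (destination NAT on the way in, source NAT on the way out); the packet dictionaries are purely illustrative:

VIP = ("10.1.1.166", 80)

def nat_in(packet, real_server):
    packet = dict(packet)
    packet["dst"] = real_server        # VIP:80 -> real server:80
    return packet

def nat_out(packet):
    packet = dict(packet)
    packet["src"] = VIP                # real server:80 -> VIP:80
    return packet

request = {"src": ("10.1.1.165", 40000), "dst": VIP}
forwarded = nat_in(request, ("10.1.1.163", 80))
reply = nat_out({"src": ("10.1.1.163", 80), "dst": request["src"]})
print(forwarded["dst"], reply["src"])  # ('10.1.1.163', 80) ('10.1.1.166', 80)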
In NAT mode, IP forwarding must be enabled on the director:
echo 1 > /proc/sys/net/ipv4/ip_forward
Modify the keepalived configuration:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind NAT
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Note:
The only change is that lb_kind is set to NAT.
Restart keepalived and check the LVS configuration:
/etc/init.d/keepalived restart
ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Masq 5 0 0
-> 10.1.1.163:www Masq 5 0 0
Add a default route on both real servers pointing at the director:
auto eth0
iface eth0 inet static
address 10.1.1.163
netmask 255.255.255.0
gateway 10.1.1.166
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
0.0.0.0 10.1.1.166 0.0.0.0 UG 0 0 0 eth0
Test:
curl -v http://10.1.1.166/index.html
* About to connect() to 10.1.1.166 port 80 (#0)
* Trying 10.1.1.166... connected
* Connected to 10.1.1.166 (10.1.1.166) port 80 (#0)
> GET /index.html HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
> Host: 10.1.1.166
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 16 Sep 2011 17:54:32 GMT
< Server: Apache/2.2.9 (Debian)
< Last-Modified: Fri, 09 Sep 2011 06:17:13 GMT
< ETag: "cc2e1-2e-4ac7c20cbec40"
< Accept-Ranges: bytes
< Content-Length: 46
< Vary: Accept-Encoding
< Content-Type: text/html
<
10.1.1.164
* Connection #0 to host 10.1.1.166 left intact
* Closing connection #0
IP tunnelling (TUN mode)
In TUN mode an IP tunnel is set up between the LVS director and the real servers, and the requests are distributed to the real servers through this tunnel.
Tunnelling wraps one IP packet inside another, so that a packet destined for one IP address can be encapsulated and forwarded to a different IP address.
The director dynamically selects a real server according to the servers' load and encapsulates the request packet inside another IP packet.
The encapsulated packet is forwarded to the selected server; the server decapsulates it, recovers the original packet whose destination is the VIP, finds the VIP configured on its local IP tunnel device, and therefore processes the request.
It then returns the response directly to the client according to its routing table.
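A minimal sketch of the IP-in-IP idea behind TUN mode: the original packet (destined for the VIP) becomes the payload of an outer packet addressed to the chosen real server; the dictionaries are purely illustrative:

def encapsulate(inner, director_ip, real_server_ip):
    # outer header: director -> real server, inner packet carried as payload
    return {"src": director_ip, "dst": real_server_ip, "payload": inner}

def decapsulate(outer):
    return outer["payload"]            # the original packet, dst is still the VIP

inner = {"src": "10.1.1.165", "dst": "10.1.1.166", "data": "GET /index.html"}
outer = encapsulate(inner, "10.1.1.160", "10.1.1.163")
print(decapsulate(outer)["dst"])       # 10.1.1.166 -> handled on tunl0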
The keepalived virtual_server configuration for TUN mode simply sets lb_kind to TUN:
virtual_server 10.1.1.166 80 {
delay_loop 6
lb_algo wrr
lb_kind TUN
#persistence_timeout 60
protocol TCP
real_server 10.1.1.163 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 10.1.1.164 80 {
weight 5
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Check the LVS table on the director:
ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.166:www wrr
-> 10.1.1.164:www Tunnel 5 0 0
-> 10.1.1.163:www Tunnel 5 0 0
Configure the real servers as follows:
/sbin/ifconfig tunl0 up
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
ifconfig tunl0 10.1.1.166 broadcast 255.255.255.255 up
route add -host 10.1.1.166 dev tunl0
Note: tunnel mode is similar to direct-routing mode; both configure the VIP on the real servers, so both also have to suppress ARP responses for it.
Test from the client:
curl http://10.1.1.166/index.html
10.1.1.164