HAProxy + keepalived: a dual-master, highly available load-balancing cluster
Project description
1. Use HAProxy to load-balance user requests across the backend web servers, with health checks on the backends.
2. Use keepalived to make the HAProxy layer highly available and avoid a single point of failure.
3. Run one VIP on each of HK-01 and HK-02, so the two nodes form a dual-master pair.
4. Balance user access to the cluster with DNS round-robin across the two VIPs (not demonstrated; a sketch follows below).
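Though point 4 is not demonstrated, DNS round-robin needs nothing more than two A records for the same name; most DNS servers rotate the order of the answers. A minimal sketch, assuming a BIND zone file and the hypothetical name www.example.com:
; hypothetical BIND zone snippet: two A records for one name,
; so successive lookups alternate between the two VIPs
www    IN    A    172.16.4.1
www    IN    A    172.16.4.2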
Experimental topology
Environment:

Host     IP address      Role
HK-01    172.16.4.100    Schedules user requests to the backend web servers; mutual backup with HK-02
HK-02    172.16.4.101    Schedules user requests to the backend web servers; mutual backup with HK-01
WEB-01   172.16.4.102    Serves web content
WEB-02   172.16.4.103    Serves web content
DNS      172.16.4.10     Round-robin DNS resolution of the cluster domain name
VIP1     172.16.4.1      Cluster entry address for users; may live on HK-01 or HK-02
VIP2     172.16.4.2      Cluster entry address for users; may live on HK-01 or HK-02
Configuration examples
Backend web server configuration
The web servers are trivial to set up: provide a test page and start the web service.
Web-01 configuration:
# echo "web-01" >/var/www/html/index.html
# service httpd start
Web-02 configuration:
# echo "web-02" >/var/www/html/index.html
# service httpd start
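These steps assume Apache httpd is already installed on both web servers; if it is not, install it first (assuming CentOS/RHEL with yum, as on the HK nodes):
# yum -y install httpd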
From the HK nodes, access the backend web servers to verify that the web service is up:
# curl 172.16.4.102
web-01
# curl 172.16.4.103
web-02
If the configured pages come back, the web service is working correctly.
HAProxy + keepalived configuration
Install haproxy and keepalived on both HK nodes:
# yum -y install haproxy
# yum -y install haproxy
# yum -y install keepalived
# yum -y install keepalived
Adjust a kernel parameter so that haproxy can start and bind whether or not the VIP is currently on the node (run on both HK nodes):
# echo"net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
# sysctl –p
# echo "net.ipv4.ip_nonlocal_bind= 1" >> /etc/sysctl.conf
# sysctl -p
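To confirm the setting took effect on each node, read the parameter back (sysctl prints the live value):
# sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1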
Configure haproxy
The configuration file is identical on both haproxy nodes, so only one copy is shown:
# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
    stats enable
    stats uri               /admin?stats
    stats auth              proxy:proxy

listen www1
    bind 172.16.4.1:80
    mode http
    option forwardfor
    server www01 172.16.4.102:80 check
    server www02 172.16.4.103:80 check

listen www2
    bind 172.16.4.2:80
    mode tcp
    option forwardfor
    server www01 172.16.4.102:80 check
    server www02 172.16.4.103:80 check
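Before starting the service, the file can be syntax-checked with haproxy's check mode (-c validates the configuration passed with -f):
# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid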
Configure keepalived
HK-01 configuration:
# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_mt_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass asdfgh
    }
    virtual_ipaddress {
        172.16.4.1/32 brd 172.16.4.1 dev eth0 label eth0:0
    }
    track_script {
        chk_mt_down
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qwerty
    }
    virtual_ipaddress {
        172.16.4.2
    }
    track_script {
        chk_mt_down
    }
}
HK-02 configuration:
# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_mt_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass asdfgh
    }
    virtual_ipaddress {
        172.16.4.1/32 brd 172.16.4.1 dev eth0 label eth0:0
    }
    track_script {
        chk_mt_down
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qwerty
    }
    virtual_ipaddress {
        172.16.4.2
    }
    track_script {
        chk_mt_down
    }
}
Once both nodes are configured, start haproxy and keepalived on each of them and the cluster is complete:
# service haproxy start
# service keepalived start
# service haproxy start
# service keepalived start
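As an optional sanity check, the VRRP advertisements from both masters can be observed on the wire. VRRP is IP protocol 112, so a simple capture filter shows the announcements for both virtual routers, one per advert_int (1s here):
# tcpdump -i eth0 ip proto 112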
Verification
Open the stats status page and confirm that both the www1 and www2 proxies report their backends as up.
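The same check works from the command line; the stats URI and the proxy:proxy credentials come from the haproxy.cfg above (appending ;csv to the URI returns a machine-readable report):
# curl -u proxy:proxy 'http://172.16.4.1/admin?stats'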
The VIP addresses also come up correctly (VIP1 on HK-01, VIP2 on HK-02):
# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:22:c5:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.100/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.1/32 brd 172.16.4.1 scope global eth0:0
    inet6 fe80::20c:29ff:fe22:c5c2/64 scope link
       valid_lft forever preferred_lft forever
# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f1:dd:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.101/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.2/32 scope global eth0
    inet6 fe80::20c:29ff:fef1:ddb2/64 scope link
       valid_lft forever preferred_lft forever
Load-balancing test
Requests to either VIP are spread across both backends:
# curl 172.16.4.1
web-02
# curl 172.16.4.1
web-01
# curl 172.16.4.2
web-01
# curl 172.16.4.2
web-02
High-availability test
Manually take HK-02 out of service and verify that its VIP fails over to HK-01. Creating the down file makes chk_mt_down exit non-zero, so keepalived subtracts 5 from HK-02's priority for VI_2 (100 - 5 = 95, below HK-01's 99) and HK-01 takes over. On HK-02:
# touch /etc/keepalived/down
Both VIP addresses are now on HK-01 as expected:
# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:22:c5:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.100/16 brd 172.16.255.255 scope global eth0
    inet 172.16.4.1/32 brd 172.16.4.1 scope global eth0:0
    inet 172.16.4.2/32 scope global eth0
    inet6 fe80::20c:29ff:fe22:c5c2/64 scope link
       valid_lft forever preferred_lft forever
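To fail back, remove the down file on HK-02: the check script succeeds again, the priority returns to 100, and because keepalived preempts by default (no nopreempt is set here), VIP2 moves back to HK-02:
# rm -f /etc/keepalived/down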
Health-check test
Manually stop the web service on WEB-02 and check that haproxy takes it out of rotation automatically. On WEB-02:
# service httpd stop
On the web status page, WEB-02 is now marked down.
Access verification: requests are no longer forwarded to web-02:
# curl 172.16.4.1
web-01
# curl 172.16.4.1
web-01
# curl 172.16.4.2
web-01
# curl 172.16.4.2
web-01
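The check keyword on each server line performs a plain TCP connect check by default. For a more application-aware probe, an HTTP-level check can be enabled per proxy with option httpchk (a sketch; the method and URI are assumptions, not part of the original setup):
# added inside the listen www1 section of /etc/haproxy/haproxy.cfg
option httpchk GET /index.html
When httpd is started again on WEB-02, the check succeeds and haproxy puts the server back into rotation automatically.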