Building a highly available web cluster with HAProxy + Keepalived on CentOS 6
1) Lab topology
Topology diagram: http://s5.运维网.com/wyfs02/M01/89/A1/wKioL1gYoFjzQr9jAAEd09D24eo089.png
Notes:
a. Client requests for www.wanwan.com are handed by the load balancer to the cluster behind VIP1.
b. Client requests for img.wanwan.com are handed by the load balancer to the cluster behind VIP2.
c. The two load balancers, 10.10.10.129 and 10.10.10.130, back each other up; the failure of either one does not affect the operation of the system as a whole.
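For the tests in section 6, the client has to resolve the two hostnames to the VIPs. The original setup does not show how this is done; one simple assumption is an /etc/hosts entry on the test client (any DNS record pointing the names at the VIPs works just as well):
# cat >> /etc/hosts << EOF
10.10.10.188 www.wanwan.com
10.10.10.189 img.wanwan.com
EOF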
2) HAProxy installation and startup script
See my previous post: http://molewan.blog.运维网.com/287340/1866746
3) HAProxy configuration (required on both load balancers, 10.10.10.129 and 10.10.10.130)
# adduser haproxy -s /sbin/nologin -M
# cd /usr/local/haproxy/conf/
# cat haproxy.cfg
global
log 127.0.0.1 local0 info
maxconn 4096
user haproxy
group haproxy
daemon
nbproc 1
pidfile /usr/local/haproxy/logs/haproxy.pid
defaults
mode http
retries 3
timeout connect 10s
timeout client 20s
timeout server 30s
timeout check 5s
frontend www
bind *:80
mode http
option httplog
option forwardfor
option httpclose
log global
# Requests for www.wanwan.com are handled by the htmpool backend, requests for img.wanwan.com by the imgpool backend; anything that matches neither goes to htmpool by default.
acl host_www hdr_dom(host) -i www.wanwan.com
acl host_img hdr_dom(host) -i img.wanwan.com
use_backend htmpool if host_www
use_backend imgpool if host_img
default_backend htmpool
backend htmpool
mode http
option redispatch
option abortonclose
balance static-rr
cookie SERVERID
option httpchk GET /index.html
server web01 10.10.10.128:80 cookie server1 weight 6 check inter 2000 rise 2 fall 3
server web02 10.10.10.132:80 cookie server2 weight 6 check inter 2000 rise 2 fall 3
backend imgpool
mode http
option redispatch
option abortonclose
balance static-rr
cookie SERVERID
option httpchk GET /index.html
server img1 10.10.10.131:80 cookie server1 weight 6 check inter 2000 rise 2 fall 3
server img2 10.10.10.133:80 cookie server2 weight 6 check inter 2000 rise 2 fall 3
# HAProxy web-based stats page
listen admin_stats
bind 0.0.0.0:9188
mode http
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /haproxy-status
stats realm welcome\ login\ Haproxy
stats auth admin:admin~!@
stats hide-version
stats admin if TRUE
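Before moving on, the configuration can be syntax-checked and the service (re)started on both balancers. A quick sketch, assuming HAProxy is installed under /usr/local/haproxy and the startup script from section 2 is in place:
# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg -c   # check the configuration only
# service haproxy restart
# curl -u 'admin:admin~!@' http://10.10.10.129:9188/haproxy-status   # the stats page defined above
Note that the 'log 127.0.0.1 local0' lines only produce a log file if rsyslog is set up to listen on UDP 514 and to write the local0 facility to a file.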
4) Keepalived configuration
Two VIPs, 10.10.10.188 and 10.10.10.189, are configured on the load balancers (the two machines act as master and backup for each other).
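The configuration below references a health-check script, /opt/check_haproxy.sh, whose content is not shown in this post. A minimal sketch of what such a script could look like (an assumption, not necessarily the original script): try to restart HAProxy once and, if it is still down, stop keepalived so the VIP moves to the peer node.
#!/bin/bash
# /opt/check_haproxy.sh - hypothetical HAProxy health check for keepalived
# Assumes the haproxy init script from section 2 is installed.
if ! pgrep -x haproxy >/dev/null; then
    service haproxy start >/dev/null 2>&1
    sleep 2
    if ! pgrep -x haproxy >/dev/null; then
        # haproxy cannot be brought back: give up the VIP by stopping keepalived
        service keepalived stop
    fi
fi
Remember to make it executable with chmod +x /opt/check_haproxy.sh.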
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
314324506@qq.com
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server smtp.qq.com
smtp_connect_timeout 30
router_id LVS_7
}
# This check script keeps keepalived from holding on to the VIP after the haproxy service has stopped (see the sketch above)
vrrp_script chk_http_port {
script "/opt/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_188 {
state MASTER # master for this VIP
interface eth0
virtual_router_id 188
priority 150 # higher than on the backup; the higher the value, the higher the priority
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.10.10.188/24 # the VIP; visible on the system with 'ip add list'
}
track_script {
chk_http_port # reference the check script defined above; without this it is never run
}
}
vrrp_instance VI_189 {
state BACKUP # backup for this VIP
interface eth0
virtual_router_id 189
priority 100 # lower than on the master; the higher the value, the higher the priority
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.10.10.189/24 # the VIP; visible on the system with 'ip add list'
}
track_script {
chk_http_port
}
}
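The file above is the one used on 10.10.10.129. On 10.10.10.130 the roles are mirrored, so only the state and priority lines change; roughly like this (a sketch, the second file is not shown in the original post):
vrrp_instance VI_188 {
state BACKUP # 10.10.10.130 is the backup for 10.10.10.188
priority 100
... # everything else as above
}
vrrp_instance VI_189 {
state MASTER # 10.10.10.130 is the master for 10.10.10.189
priority 150
... # everything else as above
}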
5) Keepalived startup script
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
. /etc/rc.d/init.d/functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
RETVAL=0
prog="keepalived"
start() {
echo -n $"Starting $prog: "
daemon keepalived ${KEEPALIVED_OPTIONS}
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
stop() {
echo -n $"Stopping $prog: "
killproc keepalived
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}
reload() {
echo -n $"Reloading $prog: "
killproc keepalived -1
RETVAL=$?
echo
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
reload)
reload
;;
restart)
stop
start
;;
condrestart)
if [ -f /var/lock/subsys/$prog ]; then
stop
start
fi
;;
status)
status keepalived
;;
*)
echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
exit 1
esac
exit $RETVAL
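To register the script as a CentOS 6 service and start it on both balancers (the usual steps, not shown in the original post):
# chmod +x /etc/init.d/keepalived
# chkconfig --add keepalived
# chkconfig keepalived on
# service keepalived start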
6) Testing
a. Testing www.wanwan.com from the client
http://s3.运维网.com/wyfs02/M02/89/A6/wKiom1gZJxWwuWznAAA8Ah9SF5I775.png-wh_500x0-wm_3-wmp_4-s_1642477365.png
http://s3.运维网.com/wyfs02/M02/89/A4/wKioL1gZJxWB5XzGAABBmmVmU_o912.png-wh_500x0-wm_3-wmp_4-s_791272332.png
b. Testing img.wanwan.com
http://s5.运维网.com/wyfs02/M00/89/A6/wKiom1gZJxaS9BRsAAA-Tvd59Gw368.png-wh_500x0-wm_3-wmp_4-s_2656105157.png
http://s5.运维网.com/wyfs02/M02/89/A4/wKioL1gZJxbRSm5vAAA534ZI-Jg155.png-wh_500x0-wm_3-wmp_4-s_2744649661.png
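The same host-header routing can also be verified from the command line, directly against the VIPs; a small sketch:
# curl -H 'Host: www.wanwan.com' http://10.10.10.188/   # should be answered by web01/web02 (htmpool)
# curl -H 'Host: img.wanwan.com' http://10.10.10.189/   # should be answered by img1/img2 (imgpool)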
As shown above, load balancing and host-based routing are working; next, test the keepalived side.
c. Testing keepalived failover
First check where the VIPs sit. On the first load balancer, 10.10.10.129:
# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:67:b3:45 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.129/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.188/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe67:b345/64 scope link
valid_lft forever preferred_lft forever
On the second load balancer, 10.10.10.130:
# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:53:cf:52 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.130/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.189/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe53:cf52/64 scope link
valid_lft forever preferred_lft forever
As you can see, the two VIPs, 10.10.10.188 and 10.10.10.189, are held by different load balancers (a closer look at the keepalived configuration shows that the two balancers are configured as master and backup for each other).
Now simulate an outage of the master load balancer, 10.10.10.129, and check whether the VIP moves and load balancing keeps working; see the sketch below.
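How the outage was produced is not shown in the original post; one simple way is to stop keepalived on 10.10.10.129 (or power the VM off) and watch the VRRP traffic on the surviving node:
# service keepalived stop    # on 10.10.10.129: simulate the failure
# tcpdump -nn -i eth0 vrrp   # on 10.10.10.130: watch the backup take over
Then check the addresses on 10.10.10.130: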
# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:53:cf:52 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.130/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.189/24 scope global secondary eth0
inet 10.10.10.188/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe53:cf52/64 scope link
valid_lft forever preferred_lft forever
As shown above, the VIP has moved to the other load balancer. Now check that the load balancer is still distributing requests:
http://s5.运维网.com/wyfs02/M00/89/A6/wKiom1gZLVejz0BsAAA8v6e5pao217.png-wh_500x0-wm_3-wmp_4-s_2529628401.png
http://s5.运维网.com/wyfs02/M00/89/A4/wKioL1gZLVeBRc95AAA-x-HVzFI895.png-wh_500x0-wm_3-wmp_4-s_2174989764.png
Load balancing still works, so the keepalived side behaves as expected. Now bring the master load balancer back online, as sketched below.
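Matching the way the outage was simulated above (or simply power the VM back on):
# service keepalived start   # on 10.10.10.129
Then check the addresses on 10.10.10.129: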
# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:67:b3:45 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.129/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.188/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe67:b345/64 scope link
valid_lft forever preferred_lft forever
The VIP 10.10.10.188 has moved back to the master. On 10.10.10.130:
# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:53:cf:52 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.130/24 brd 10.10.10.255 scope global eth0
inet 10.10.10.189/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe53:cf52/64 scope link
valid_lft forever preferred_lft forever
That completes the basic keepalived + HAProxy setup. HAProxy has a lot of configuration options, so I will not go into detail here; a follow-up post will cover the commonly used HAProxy parameters.
Attachment: http://down.运维网.com/data/2368331