1. Introduction: HAProxy is a high-performance proxy server that can load balance at both layer 7 (HTTP) and layer 4 (TCP), with health checking, multiple balancing algorithms and other features, and excellent performance.
Keepalived is a high-availability solution built around a VIP (virtual IP) and heartbeat detection; clients reach the service through the VIP. It runs on a pair of servers: by default the MASTER binds the VIP to its own NIC and serves the traffic. When the BACKUP detects that the MASTER is down, it binds the VIP to its own NIC and sends gratuitous ARP so the network learns the VIP's new location, taking over the traffic and giving automatic failover; when the MASTER recovers, it takes the service back.
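To see this mechanism on the wire, you can watch the VRRP advertisements sent by the current MASTER and the gratuitous ARP that announces the VIP after a failover; a minimal sketch, assuming the nodes talk over eth0:

# VRRP advertisements from the current MASTER (keepalived uses IP protocol 112)
tcpdump -i eth0 -nn 'ip proto 112'
# Gratuitous ARP announcing the VIP's new location during a failover
tcpdump -i eth0 -nn arp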
2. Environment:
web1: 192.168.1.78
web2: 192.168.1.241
web3: 192.168.1.133
web4: 192.168.1.244
haproxy+keepalived1: 192.168.1.22
haproxy+keepalived2: 192.168.1.9
vip1: 192.168.1.189 (www.inbank.com)
vip2: 192.168.1.199 (image.inbank.com)
3. Requirements: under normal conditions the first load balancer mainly distributes requests for www.inbank.com and the second mainly distributes requests for image.inbank.com, so neither machine sits idle. If either one goes down, traffic distribution for both sites must continue unaffected.
4. Installation of keepalived and haproxy is skipped; we go straight to the haproxy and keepalived configuration. The haproxy configuration on 192.168.1.22:
[iyunv@master etc]# cat /usr/local/haproxy/etc/haproxy.cfg |grep -v "#"|sed '/^$/d'
global
    log 127.0.0.1 local1 notice
    maxconn 4096
    chroot /usr/share/haproxy
    uid 99
    gid 99
    daemon
    pidfile /usr/local/haproxy/haproxy.pid

defaults
    log global
    mode http
    retries 3
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    stats uri /haproxy-stats
    balance roundrobin

frontend WEB_SITE
    bind :80
    acl web hdr(host) -i www.inbank.com
    acl img hdr(host) -i image.inbank.com
    use_backend webserver if web
    use_backend imgserver if img

backend webserver
    mode http
    balance roundrobin
    server web_1 192.168.1.78:80 check inter 2000 fall 5 weight 1
    server web_2 192.168.1.241:80 check inter 2000 fall 5 weight 1

backend imgserver
    mode http
    option httpchk /index.php
    balance roundrobin
    server web_1 192.168.1.133:80 check inter 2000 fall 5 weight 1
    server web_2 192.168.1.244:80 check inter 2000 fall 5 weight 1
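Before starting haproxy it is worth letting it validate the file; a minimal sketch using the paths from the config above (-c only checks the configuration, it does not start the daemon):

# Check the configuration for errors
/usr/local/haproxy/sbin/haproxy -c -f /usr/local/haproxy/etc/haproxy.cfg
# Start haproxy with this configuration
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/etc/haproxy.cfg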
The keepalived configuration on 192.168.1.22:
[iyunv@master keepalived]# cat keepalived.conf|grep -v "#"|sed '/^$/d'
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
    script "/usr/local/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 5555
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.1.189
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 89    # must not use the same id as VI_1
    priority 99             # lower than the peer's 100, so this node is BACKUP for VI_2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.199
    }
}
Health-check script, so that keepalived fails over automatically when the haproxy service stops:
#vim /usr/local/keepalived/check_haproxy.sh
#!/bin/bash
# If haproxy is not running, try to start it
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/etc/haproxy.cfg
fi
sleep 2
# If haproxy still is not running, stop keepalived so the VIP moves to the backup
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /etc/init.d/keepalived stop
fi
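The script must be executable for track_script to run it; a quick manual sanity check (assuming the paths above) is to kill haproxy and confirm the script brings it back:

chmod +x /usr/local/keepalived/check_haproxy.sh
killall haproxy
/usr/local/keepalived/check_haproxy.sh
# haproxy should be running again
ps -C haproxy --no-header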
5. Start the keepalived and haproxy services, then check the log to see whether the two VIPs came up:
tail -n 30 /var/log/messages
Oct 23 13:49:13 master Keepalived_vrrp: VRRP_Instance(VI_2) Entering BACKUP STATE
Oct 23 13:49:13 master Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(7,8)]
Oct 23 13:49:13 master Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 23 13:49:14 master Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 23 13:49:14 master Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 23 13:49:14 master Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.189
Oct 23 13:49:14 master avahi-daemon[2879]: Registering new address record for 192.168.1.189 on eth0.
Oct 23 13:49:19 master Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.189
## On 192.168.1.22 you can see that VI_2 has entered the BACKUP state, VI_1 has entered the MASTER state, and 192.168.1.189 has been bound to eth0.
[iyunv@master keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:06:ed:78 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.189/32 scope global eth0
    inet6 fe80::20c:29ff:fe06:ed78/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
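You can also confirm that haproxy is listening on port 80 on all addresses, so it will answer on whichever VIP keepalived brings up on this node:

netstat -lntp | grep haproxy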
#### haproxy and keepalived configuration on 192.168.1.9: the haproxy configuration is unchanged; in the keepalived configuration only the roles and priorities are swapped relative to the master (the vrrp_script chk_haproxy block stays the same):
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 5555
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.1.189
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 89
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.199
    }
}
Start the haproxy and keepalived services and check whether the VIP is bound to eth0:
Oct 23 14:00:26 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 23 14:00:26 localhost Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Oct 23 14:00:27 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Transition to MASTER STATE
Oct 23 14:00:28 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Entering MASTER STATE
Oct 23 14:00:28 localhost Keepalived_vrrp: VRRP_Instance(VI_2) setting protocol VIPs.
Oct 23 14:00:28 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.199
[iyunv@localhost keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:2b:be:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.9/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.199/32 scope global eth0
    inet6 fe80::20c:29ff:fe2b:be1a/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
6. Web server configuration: add the corresponding virtual hosts (www.inbank.com, image.inbank.com) on the web servers. For convenience in this experiment, apache was installed via yum; the apache configuration looks like this:
##web1 web2
NameVirtualHost *:80
<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/html/inbank
    ServerName www.inbank.com
    ErrorLog logs/dummy-host.example.com-error_log
    CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>
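The curl test below assumes each backend serves a small index page that identifies the host; a minimal sketch for web1 (the other nodes are analogous; web3 and web4 also need an /index.php, since that is what the imgserver health check requests):

mkdir -p /var/www/html/inbank
echo "inbank_78" > /var/www/html/inbank/index.html
/etc/init.d/httpd restart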
###web3 and web4 are configured similarly. On the client, add the following two records to the hosts file:
192.168.1.189 www.inbank.com
192.168.1.199 image.inbank.com
Then access the two sites:
[iyunv@master keepalived]# for i in `seq 1 4`;do curl http://image.inbank.com;done
img_133
img_244
img_133
img_244
[iyunv@master keepalived]# for i in `seq 1 4`;do curl http://www.inbank.com;done
inbank_78
inbank_241
inbank_78
inbank_241
-- If you get output like the above, everything is working.
Next, test the failover: stop the keepalived service on the master.
[iyunv@master keepalived]# /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]
Then check the state on the backup:
Oct 23 14:00:33 localhost Keepalived_vrrp: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.199
Oct 23 14:08:04 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 23 14:08:05 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 23 14:08:05 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 23 14:08:05 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.189
This shows the BACKUP has taken over from the MASTER. Access the two sites again; if they still respond, the setup is complete.
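A quick way to confirm the takeover: both VIPs should now be bound on the backup's eth0, and both sites should still answer from the client.

# on the backup (192.168.1.9)
ip addr show eth0 | grep -E '192\.168\.1\.(189|199)'
# on the client
for i in `seq 1 4`;do curl http://www.inbank.com;done
for i in `seq 1 4`;do curl http://image.inbank.com;done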
## The haproxy web monitoring page is available at http://192.168.1.189/haproxy-stats or http://192.168.1.199/haproxy-stats
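The stats page is reachable by anyone who can hit the VIPs; if you keep it enabled, it is worth adding at least basic authentication. A sketch of the relevant directives (the user name and password are placeholders):

# in the defaults section of haproxy.cfg
stats uri /haproxy-stats
stats realm HAProxy\ Statistics
stats auth admin:secret_password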
# In addition, kernel tuning: with haproxy+keepalived in front, the tuning is mostly TCP-related. The following settings come from a friend's production environment and also apply to LVS:
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.ip_local_port_range = 4000 65000
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.ip_conntrack_max = 25000000
net.ipv4.netfilter.ip_conntrack_max = 25000000
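These would typically be appended to /etc/sysctl.conf and loaded with sysctl, for example:

# after adding the settings above to /etc/sysctl.conf
sysctl -p
# spot-check that a value took effect
sysctl net.ipv4.tcp_max_syn_backlog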
##### Additional notes:
1. How is the session problem handled in this HAProxy+Keepalived load-balancing HA architecture? HAProxy's own balance source mechanism can be used; similar in principle to Nginx's ip_hash, it always sends a given client to the same backend web server, so the session stays on one machine.
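Note that the configuration shown earlier uses balance roundrobin; to get source-IP stickiness you would switch the backend to balance source, for example (a sketch based on the webserver backend above):

backend webserver
    mode http
    balance source
    server web_1 192.168.1.78:80 check inter 2000 fall 5 weight 1
    server web_2 192.168.1.241:80 check inter 2000 fall 5 weight 1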
2. option httpchk HEAD /index.jsp HTTP/1.0 is a page-level health check: if HAProxy cannot fetch index.jsp from the web root, the server is marked down and requests to it produce a 503 error.
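In the imgserver backend earlier the short form option httpchk /index.php was used; the full form with an explicit method and HTTP version would look like this (a sketch; /index.php stands in for whatever page the backends are expected to serve):

backend imgserver
    mode http
    option httpchk HEAD /index.php HTTP/1.0
    balance roundrobin
    server web_1 192.168.1.133:80 check inter 2000 fall 5 weight 1
    server web_2 192.168.1.244:80 check inter 2000 fall 5 weight 1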
3. Some people like to configure HAProxy with listen IP:80. This is not a good idea in a load-balanced HA setup: the standby machine does not hold the VIP, so haproxy cannot bind that address and fails to start on it. I recommend using bind *:80 instead.
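If you really do need to bind a specific VIP that the node does not currently hold, an alternative (not used in this setup) is to allow non-local binds via sysctl:

# allow binding to an IP that is not (yet) configured on the machine
net.ipv4.ip_nonlocal_bind = 1
# add the line to /etc/sysctl.conf and apply with: sysctl -p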