Posted by della0887 on 2015-9-4 08:51:54

Setting up load balancing with LVS + keepalived

Environment: CentOS 4.4
LB: 192.168.2.158 (VIP: 192.168.2.188)
real-server1: 192.168.2.187
real-server2: 192.168.2.189
Key point: the whole keepalived HA scheme for LVS is driven by a single file, keepalived.conf. keepalived relies on the VRRP protocol, explained below:
VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol that dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router controlling the virtual router's IP addresses is called the master, and it forwards packets sent to those virtual IP addresses. When the master becomes unavailable, the election process provides dynamic failover, so the virtual router's IP address can serve as the default first-hop router for end hosts. The benefit of VRRP is higher availability of the default path without configuring dynamic routing or router-discovery protocols on every end host. VRRP packets are sent encapsulated in IP packets.
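The election described above can be sketched in a few lines. This is an illustrative model only (the function name and the simplified tie-break are mine, not keepalived code): the router with the highest priority becomes master, and RFC 3768 breaks ties in favour of the higher primary IP address.

```python
def elect_master(routers):
    """routers: list of (ip, priority) pairs competing for one virtual router ID."""
    # Highest priority wins; on a tie, the numerically higher IP wins.
    return max(routers,
               key=lambda r: (r[1], tuple(int(o) for o in r[0].split("."))))

peers = [("192.168.2.158", 100),   # this box advertises priority 100
         ("192.168.2.159", 70)]    # a backup advertising priority 70

master = elect_master(peers)        # the priority-100 router owns the VIP
failover = elect_master(peers[1:])  # when it dies, the backup takes over
```

When the failed master returns, its higher-priority advertisements win the election again, which is exactly the "Received higher prio advert" behaviour shown in the backup logs later in this post.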
Now for the installation:
1. Install ipvsadm on the director (the machine that will hold the VIP)
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
Before building, create a symlink or the build will fail:
ln -s /usr/src/kernels/2.6.9-42.EL-i686/ /usr/src/linux    # must match the running kernel
tar -zxvf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make && make install
That completes the ipvsadm installation.
A quick check of the installation:
1. Run the ipvsadm command once.
2. lsmod | grep ip_vs
ip_vs_rr                5953   1
ip_vs                   83137  3 ip_vs_rr
Verified; ipvsadm on the director is working.

2. Next comes the important part: installing keepalived:
wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
tar -zxvf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure --prefix=/ --mandir=/usr/local/share/man/ --with-kernel-dir=/usr/src/kernels/2.6.9-42.EL-smp-i686/
If configure succeeds it prints:
Keepalived configuration
------------------------
Keepalived version       : 1.1.17
Compiler               : gcc
Compiler flags         : -g -O2
Extra Lib                : -lpopt -lssl -lcrypto
Use IPVS Framework       : Yes    # LVS support
IPVS sync daemon support : Yes
Use VRRP Framework       : Yes
Use LinkWatch            : No
Use Debug flags          : No

make && make install

That completes the LVS + keepalived installation.

3. Next, configure keepalived.conf:
vi /etc/keepalived/keepalived.conf
Here is my configuration:

! Configuration File for keepalived
#Global configuration:
global_defs {
   notification_email {
   admin@xx.com                         #alert address; requires a local SMTP service
   }
   notification_email_from root@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL                  #director identifier; must be unique within the LAN
}
#VRRP configuration:
vrrp_sync_group VGM {      
   group {
      VI_1
   }
}
#VRRP instance configuration
vrrp_instance VI_1 {                     #define an instance
    state MASTER                         #set to MASTER
    interface eth0
    virtual_router_id 51                 #must be identical on master and backup
    priority 100                         #higher on the master, lower on the backup
    advert_int 5                         #VRRP multicast advertisement interval in seconds (check interval)
    authentication {
      auth_type PASS                     #VRRP authentication method
      auth_pass 1111                     #VRRP password
    }
    virtual_ipaddress {
      192.168.2.188
      .............                      #(for multiple VIPs, add one per line)
    }
}
#LVS configuration:
virtual_server 192.168.2.188 80 {
    delay_loop 6                         #(poll real server state every 6 seconds)
    lb_algo rr                           #(scheduling algorithm; wlc and rr are common)
    lb_kind DR                           #(forwarding method: DR, NAT or TUN)
    persistence_timeout 50               #(session persistence: connections from the same IP go to the same real server for 50 seconds)
    protocol TCP                         #(service protocol: TCP)
    sorry_server 127.0.0.1 80            #if every real server fails, the VIP falls back to port 80 on this machine
real_server 192.168.2.187 80 {
    weight 3                             #(weight)
    TCP_CHECK {                          #judge real server health with a TCP check
    nb_get_retry 3                       #number of retries
    delay_before_retry 3                 #interval between retries
    connect_port 80                      #health check port
    connect_timeout 3                    #connection timeout
      }
    }
real_server 192.168.2.189 80 {
   weight 1
   TCP_CHECK {
   nb_get_retry 3
   delay_before_retry 3
   connect_port 80
   connect_timeout 3

      }
    }
}
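To see how `lb_algo rr` and `persistence_timeout 50` interact, here is a toy model (my own sketch of the behaviour, not the kernel IPVS implementation): requests rotate across the real servers, except that a client IP seen again within the timeout window sticks to the server it was given last time.

```python
import time

class RoundRobinPersist:
    """Toy model of IPVS round-robin scheduling with persistence."""

    def __init__(self, servers, persistence_timeout=50):
        self.servers = servers
        self.timeout = persistence_timeout
        self.i = 0          # round-robin cursor
        self.sticky = {}    # client_ip -> (server, last_seen)

    def pick(self, client_ip, now=None):
        now = time.time() if now is None else now
        entry = self.sticky.get(client_ip)
        if entry and now - entry[1] < self.timeout:
            self.sticky[client_ip] = (entry[0], now)  # refresh the window
            return entry[0]
        server = self.servers[self.i % len(self.servers)]  # round-robin
        self.i += 1
        self.sticky[client_ip] = (server, now)
        return server

lb = RoundRobinPersist(["192.168.2.187", "192.168.2.189"])
a = lb.pick("10.0.0.1", now=0)    # fresh client: round-robin pick (.187)
b = lb.pick("10.0.0.1", now=10)   # within 50s: sticks to .187
c = lb.pick("10.0.0.1", now=100)  # window expired: rotates on to .189
```

With `wlc` instead of `rr` the cursor would be replaced by a weighted-least-connections choice, but the persistence logic stays the same.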
That completes the keepalived configuration.
Start it with /etc/init.d/keepalived start

4. Next, configure the real servers. Run the same script on both; here it is:
# more /usr/local/bin/lvs_real
#!/bin/sh
# In DR mode the VIP must be on the same subnet as the servers' public service IP
VIP=192.168.2.188
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo " start tunl port"
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
    # (for multiple VIPs, add one ifconfig line per VIP here)
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ;;
stop)
    echo " stop tunl port"
    ifconfig lo:0 down
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
Below are Mr. Tian's notes on this script:

  1. VIP (virtual IP). In direct-routing mode the VIP must be on the same subnet as the IP the servers use to provide service, and the LVS director and every server providing the same service all use this VIP.
  2. The VIP is bound to the loopback alias lo:0, with itself as the broadcast address and a 255.255.255.255 netmask. This differs greatly from a standard network setup; the variable-length mask carves the subnet down to a single host address, which avoids IP address conflicts.
  3. The echo "1" / echo "2" lines suppress ARP broadcasts. Without ARP suppression, a crowd of machines would all announce to everyone else, "Hey! I'm Obama, I'm over here!", and everything would fall apart.
  
Explanation:
Setting arp_filter to 1 allows multiple network interfaces to sit on the same subnet; each interface answers an ARP query only if the kernel would route that packet out through it (the decision comes from source-based routing). In other words, it lets you make only one NIC (usually the first) answer ARP queries. For load balancing you can therefore use
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
although the pair
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
works even better, since arp_announce and arp_ignore appear to be a finer-grained version of what arp_filter controls.

Start and stop with /usr/local/bin/lvs_real start|stop
After starting:
# ip add
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet 192.168.2.188/32 brd 192.168.2.188 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:72:73:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.187/24 brd 192.168.2.255 scope global eth0
    inet6 fe80::20c:29ff:fe72:73b5/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
After stopping:
# ip add
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:72:73:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.187/24 brd 192.168.2.255 scope global eth0
    inet6 fe80::20c:29ff:fe72:73b5/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

5. Verifying lvs+keepalived
On the director: /etc/init.d/keepalived start
On both real servers: run lvs_real start as above

Check on the director:
# ipvsadm
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port         Forward Weight ActiveConn InActConn
TCP  192.168.2.188:http rr persistent 50
-> 192.168.2.187:http         Route   3      2          0      
-> 192.168.2.189:http         Route   1      0          0   
Accessing the web service works normally at this point. ActiveConn is the number of active connections (ESTABLISHED state); InActConn counts inactive connections, i.e. every state other than ESTABLISHED (SYN_RECV, TIME_WAIT, FIN_WAIT1, etc.).

Then I shut down Apache on one real server and checked again:
# ipvsadm
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port         Forward Weight ActiveConn InActConn
TCP  192.168.2.188:http rr persistent 50
-> 192.168.2.189:http         Route   1      0          0      
keepalived detected that one web server was down and removed it from the load-balancing pool.
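What the TCP_CHECK health check does under the hood is simple: attempt a TCP handshake to connect_port within connect_timeout seconds, retry up to nb_get_retry times with delay_before_retry between attempts, and evict the real server if every attempt fails. A rough Python equivalent (check_alive is a hypothetical helper of mine, not a keepalived API):

```python
import socket
import time

def check_alive(host, port, retries=3, timeout=3, delay=3):
    """Mirrors TCP_CHECK: retries=nb_get_retry, timeout=connect_timeout,
    delay=delay_before_retry."""
    for attempt in range(retries):
        try:
            # A completed TCP handshake counts as healthy.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            if attempt < retries - 1:
                time.sleep(delay)   # wait before probing again
    return False                    # all probes failed: evict from the pool

# On the director this would run per real server, e.g.:
# check_alive("192.168.2.187", 80)
```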

6. Some log excerpts:
keepalived startup log:
Oct 22 17:07:39 YuHao-linux Keepalived: Starting Keepalived v1.1.17 (10/22,2009)
Oct 22 17:07:39 YuHao-linux Keepalived: Remove a zombie pid file /var/run/vrrp.pid
Oct 22 17:07:39 YuHao-linux Keepalived: Remove a zombie pid file /var/run/checkers.pid
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.158 added
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 added
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Registering Kernel netlink reflector
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Registering Kernel netlink command channel
Oct 22 17:07:39 YuHao-linux Keepalived: Starting Healthcheck child process, pid=5972
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Netlink reflector reports IP 192.168.2.158 added
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 added
Oct 22 17:07:39 YuHao-linux Keepalived: Starting VRRP child process, pid=5973
Oct 22 17:07:39 YuHao-linux keepalived: keepalived startup succeeded
Oct 22 17:07:39 YuHao-linux Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 17:07:39 YuHao-linux Keepalived_vrrp: Registering Kernel netlink reflector
Oct 22 17:07:40 YuHao-linux Keepalived_healthcheckers: Configuration is using : 7482 Bytes
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Registering Kernel netlink command channel
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Registering gratutious ARP shared channel
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: Configuration is using : 37230 Bytes
Oct 22 17:07:40 YuHao-linux Keepalived_vrrp: VRRP sockpool:
Oct 22 17:07:45 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: Netlink: error: File exists, type=(20), seq=1256202461, pid=0
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
Oct 22 17:07:50 YuHao-linux Keepalived_vrrp: VRRP_Group(VGM) Syncing instances to MASTER state
Oct 22 17:07:55 YuHao-linux Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188

After stopping one real server:
Oct 22 17:12:57 YuHao-linux Keepalived_healthcheckers: TCP connection to failed !!!
Oct 22 17:12:57 YuHao-linux Keepalived_healthcheckers: Removing service from VS

After bringing the stopped real server back up:
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: TCP connection to success.
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: Adding service to VS
Oct 22 17:16:01 YuHao-linux Keepalived_healthcheckers: Gained quorum 1+0=1 <= 4 for VS


I'm new to LVS, so I didn't set up a master/backup pair here. My sincere thanks to Mr. Tian Yi for his guidance; everything above I learned from his blog and the PDF he gave me. If you repost, please credit the original author, sery (Mr. Tian)!
His blog: http://sery.blog.iyunv.com/all/10037/page/1

  Last time I built LVS+keepalived load balancing and it worked well, but without a standby director. This time I'm adding one:
Environment: CentOS 4.4
Four machines in total:
Directors:
master: 192.168.2.158         # LVS + keepalived installed
backup: 192.168.2.159         # LVS + keepalived installed
VIP: 192.168.2.188
real-server1: 192.168.2.187   # runs the script only
real-server2: 192.168.2.189   # runs the script only

Installation:
Install ipvsadm and keepalived on 192.168.2.159 the same way as before; the interesting part is the keepalived configuration:


! Configuration File for keepalived

global_defs {
   notification_email {
   admin@xx.com                        #alert address; requires a local SMTP service
   }
   notification_email_from root@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL2                  #director identifier; must be unique within the LAN
}


vrrp_sync_group VGM {      
   group {
      VI_1
   }
}


vrrp_instance VI_1 {
    state BACKUP                     #set to BACKUP; priority decides which node becomes master
    interface eth0
    virtual_router_id 51
    priority 70
    advert_int 5
    authentication {
      auth_type PASS
      auth_pass 1111
    }
    virtual_ipaddress {
      192.168.2.188
    }
}

virtual_server 192.168.2.188 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

real_server 192.168.2.187 80 {
      weight 3
      TCP_CHECK {
      nb_get_retry 3
      delay_before_retry 3
      connect_port 80
      connect_timeout 3
      }
    }
    real_server 192.168.2.189 80 {
      weight 1
      TCP_CHECK {
      nb_get_retry 3
      delay_before_retry 3
      connect_port 80
      connect_timeout 3

      }
    }
}
The settings that must differ from the master are router_id (LVS_DEVEL2), state (BACKUP) and priority (70).
Start it with /etc/init.d/keepalived start

The backup director's keepalived startup log:
Oct 22 23:40:35 test1 Keepalived: Starting Keepalived v1.1.17 (10/22,2009)
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.187 added
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Registering Kernel netlink reflector
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Registering Kernel netlink command channel
Oct 22 23:40:35 test1 keepalived: keepalived startup succeeded
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Configuration is using : 10675 Bytes
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Activating healtchecker for service
Oct 22 23:40:35 test1 Keepalived_healthcheckers: Activating healtchecker for service
Oct 22 23:40:35 test1 Keepalived: Starting Healthcheck child process, pid=5387
Oct 22 23:40:35 test1 Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Oct 22 23:40:35 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.187 added
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering Kernel netlink reflector
Oct 22 23:40:35 test1 Keepalived: Starting VRRP child process, pid=5389
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering Kernel netlink command channel
Oct 22 23:40:35 test1 Keepalived_vrrp: Registering gratutious ARP shared channel
Oct 22 23:40:35 test1 Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Oct 22 23:40:35 test1 Keepalived_vrrp: Configuration is using : 37155 Bytes
Oct 22 23:40:35 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 22 23:40:35 test1 Keepalived_vrrp: VRRP sockpool:

When the master director is stopped, the backup's takeover log:
Oct 22 23:42:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 22 23:42:18 test1 Keepalived_vrrp: VRRP_Group(VGB) Syncing instances to MASTER state
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 22 23:42:23 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188
Oct 22 23:42:23 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 added
Oct 22 23:42:23 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 added
Oct 22 23:42:28 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.2.188  
When the master comes back up, the backup's log:
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Oct 22 23:43:18 test1 Keepalived_vrrp: VRRP_Group(VGB) Syncing instances to BACKUP state
Oct 22 23:43:18 test1 Keepalived_vrrp: Netlink reflector reports IP 192.168.2.188 removed
Oct 22 23:43:18 test1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.2.188 removed
  
  That completes the hot-standby LVS+keepalived load balancer.
The real servers run the same /usr/local/bin/lvs_real script shown earlier in this post.
  Summary: on the directors you only need to install ipvsadm and keepalived and start keepalived. On the real servers you only run the loopback script; no software needs to be installed.
  Lesson learned: never run the director and a web server on the same machine. Even though the ports don't conflict, you will hit inexplicable problems..


1. On a real server, before running the script:
# ifconfig lo:0
lo:0      Link encap:Local Loopback
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

2. On a real server, after running the script:
# /usr/local/bin/lvs_real start
start tunl port
# ifconfig lo:0
lo:0      Link encap:Local Loopback
          inet addr:192.168.2.188  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
So:

the keepalived machines do not need this script, because the VIP is already defined in keepalived.conf,
while the real servers must run it to bind the virtual IP to the local loopback interface.
One more thing to watch out for here: MAC addresses (credit: netseek):
If two directors back each other up, the network may go dead when one of them takes over the LVS service, because the router's MAC cache is not refreshed in time and still maps the VIP to the replaced director's MAC. There are two fixes: change the new director's MAC address, or use the send_arp / arping command.
Taking arping as the example:
/sbin/arping -I eth0 -c 3 -s ${vip} ${gateway_ip} > /dev/null 2>&1
E.g.:
/sbin/arping -I eth0 -c 3 -s 192.168.1.6 192.168.1.1
Piranha/keepalived solutions send the gratuitous ARP (send_arp) automatically on failover, and testing shows UltraMonkey does too. With heartbeat you have to write a send_arp or arping script and attach it as a resource so it runs automatically whenever heartbeat switches the service.

A related lvs+keepalived configuration guide:
http://bbs.linuxtone.org/thread-1077-1-1.html