[Experience Sharing] Linux HA Clusters with Keepalived

  Outline

  I. What is Keepalived
  II. How Keepalived Works
  III. Keepalived Configuration Walkthrough

  I. What is Keepalived
  Keepalived is a piece of routing software written in C. The main goal of the project is to provide simple yet robust load balancing and high availability for Linux systems and Linux-based infrastructures. The load-balancing side relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides Layer-4 load balancing; high availability, on the other hand, is achieved through the VRRP protocol.
  Here Keepalived's job is to monitor the state of the web servers: if a web server crashes or starts misbehaving, Keepalived detects this and removes the faulty server from the pool, and once the server is healthy again it is automatically added back. All of this happens without manual intervention; the only manual work left is repairing the failed web server.
  

  

  II. How Keepalived Works
  Keepalived can perform health checks at Layer 3, Layer 4, and Layer 7 of the IP/TCP stack, i.e. at the IP layer, the TCP layer, and the application layer (a shell-level sketch of the three check levels follows this list):

  •   Layer 3: In Layer-3 mode, Keepalived periodically sends an ICMP packet (the same mechanism as the familiar ping) to every server in the pool. If a server's IP address does not respond, Keepalived declares that server failed and removes it from the pool; a typical case is a server that has been powered off unexpectedly. In this mode, whether the server's IP address is reachable is the sole criterion for server health.
  •   Layer 4: Once Layer 3 is understood, Layer 4 is straightforward. Here the state of a TCP port decides whether a server is healthy. A web server usually listens on port 80, so if Keepalived finds that port 80 is not open, it removes that server from the pool.
  •   Layer 7: Layer 7 works at the application layer. It is a bit more complex than Layers 3 and 4 and consumes slightly more bandwidth. Keepalived checks, according to user-defined criteria, whether the server application is responding correctly; if the response does not match what was configured, the server is removed from the pool.
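  Purely as an illustration (this is not how Keepalived itself is configured, just what the three levels amount to at the shell; the IP and page content are taken from the lab setup later in this post):

# Hedged sketch: manual equivalents of the three check levels for one real server
RS=172.16.1.103                                  # real server IP from the topology below

# Layer 3: is the IP reachable at all? (ICMP echo, like ping)
ping -c 1 -W 1 $RS >/dev/null && echo "L3 ok" || echo "L3 failed"

# Layer 4: does TCP port 80 accept connections? (bash /dev/tcp pseudo-device)
(echo > /dev/tcp/$RS/80) 2>/dev/null && echo "L4 ok" || echo "L4 failed"

# Layer 7: does the application return the expected content?
curl -s http://$RS/ | grep -q "network.com" && echo "L7 ok" || echo "L7 failed"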
  

              [Figure: Keepalived software design (image omitted)]
  
  

  

  III. Keepalived Configuration Walkthrough

System environment
CentOS 5.8 x86_64
Director
    Master     172.16.1.101

    Slave    172.16.1.105

RealServer
    node1.network.com    node1    172.16.1.103
    node2.network.com    node2    172.16.1.104
Packages

  •   ipvsadm-1.24-13.el5.x86_64.rpm
  •   keepalived-1.2.1-5.el5.x86_64.rpm
  •   httpd-2.2.15-47.el6.centos.1.x86_64.rpm
      

Topology diagram (image omitted)



1. Time synchronization
[root@Master ~]# ntpdate s2c.time.edu.cn
[root@Slave ~]# ntpdate s2c.time.edu.cn
[root@node1 ~]# ntpdate s2c.time.edu.cn
[root@node2 ~]# ntpdate s2c.time.edu.cn
A crontab job can be defined on each node if needed:
[root@Master ~]# which ntpdate
/sbin/ntpdate
[root@Master ~]# echo "*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null" >> /var/spool/cron/root
[root@Master ~]# crontab -l
*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null

2. Make each hostname match the output of uname -n and resolve it via /etc/hosts
Master
[root@Master ~]# hostname Master
[root@Master ~]# uname -n
Master
[root@Master ~]# sed -i 's@\(HOSTNAME=\).*@\1Master@g'  /etc/sysconfig/network
Slave
[root@Slave ~]# hostname Slave
[root@Slave ~]# uname -n
Slave
[root@Slave ~]# sed -i 's@\(HOSTNAME=\).*@\1Slave@g'  /etc/sysconfig/network
node1
[root@node1 ~]# hostname node1.network.com
[root@node1 ~]# uname -n
node1.network.com
[root@node1 ~]# sed -i 's@\(HOSTNAME=\).*@\1node1.network.com@g'  /etc/sysconfig/network
node2
[root@node2 ~]# hostname node2.network.com
[root@node2 ~]# uname -n
node2.network.com
[root@node2 ~]# sed -i 's@\(HOSTNAME=\).*@\1node2.network.com@g'  /etc/sysconfig/network
Add the hosts entries on the Master
[root@Master ~]# vim /etc/hosts
[root@Master ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       CentOS5.8 CentOS5 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
172.16.1.101    Master
172.16.1.105    Slave
172.16.1.103    node1.network.com node1
172.16.1.104    node2.network.com node2
Copy this hosts file to the Slave
[root@Master ~]# scp /etc/hosts Slave:/etc/
The authenticity of host 'Slave (172.16.1.105)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'Slave' (RSA) to the list of known hosts.
hosts                                               100%  328     0.3KB/s   00:00   
Copy the hosts file to node1
[root@Master ~]# scp /etc/hosts node1:/etc/
The authenticity of host 'node1 (172.16.1.103)' can't be established.
RSA key fingerprint is 1e:87:cd:f0:95:ff:a8:ef:19:bc:c6:e7:0a:87:6b:fa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,172.16.1.103' (RSA) to the list of known hosts.
root@node1's password:
hosts                                            100%  328     0.3KB/s   00:00
Copy the hosts file to node2
[root@Master ~]# scp /etc/hosts node2:/etc/
The authenticity of host 'node2 (172.16.1.104)' can't be established.
RSA key fingerprint is 1e:87:cd:f0:95:ff:a8:ef:19:bc:c6:e7:0a:87:6b:fa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.1.104' (RSA) to the list of known hosts.
root@node2's password:
hosts                                                     100%  328     0.3KB/s   00:00

3. Disable iptables and SELinux
Master
[root@Master ~]# service iptables stop
[root@Master ~]# vim /etc/sysconfig/selinux
[root@Master ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#enforcing - SELinux security policy is enforced.
#permissive - SELinux prints warnings instead of enforcing.
#disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#targeted - Only targeted network daemons are protected.
#strict - Full SELinux protection.
SELINUXTYPE=targeted
Slave
[root@Slave ~]# service iptables stop
[root@Slave ~]# vim /etc/sysconfig/selinux
[root@Slave ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#enforcing - SELinux security policy is enforced.
#permissive - SELinux prints warnings instead of enforcing.
#disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#targeted - Only targeted network daemons are protected.
#strict - Full SELinux protection.
SELINUXTYPE=targeted
node1
[root@node1 ~]# service iptables stop
[root@node1 ~]# vim /etc/sysconfig/selinux
[root@node1 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#enforcing - SELinux security policy is enforced.
#permissive - SELinux prints warnings instead of enforcing.
#disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#targeted - Only targeted network daemons are protected.
#strict - Full SELinux protection.
SELINUXTYPE=targeted
node2
[root@node2 ~]# service iptables stop
[root@node2 ~]# vim /etc/sysconfig/selinux
[root@node2 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#enforcing - SELinux security policy is enforced.
#permissive - SELinux prints warnings instead of enforcing.
#disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#targeted - Only targeted network daemons are protected.
#strict - Full SELinux protection.
SELINUXTYPE=targeted

4. Configure node1 and node2
First, install the httpd service
[root@node1 ~]# yum install -y httpd
Provide a test page
[root@node1 ~]# echo "node1.network.com" > /var/www/html/index.html
[root@node1 ~]# cat /var/www/html/index.html
node1.network.com
Start the httpd service
[root@node1 ~]# service httpd start
Starting httpd:                                            [  OK  ]
Create a script to configure node1 as an LVS-DR real server
[root@node1 ~]# vim RealServer.sh
[root@node1 ~]# cat RealServer.sh
#!/bin/bash
#
# Script to start LVS DR real server.
# chkconfig: - 90 10
# description: LVS DR real server
#
.  /etc/rc.d/init.d/functions
VIP=172.16.1.110
host=`/bin/hostname`
case "$1" in
start)
       # Start LVS-DR real server on this machine.
        /sbin/ifconfig lo down
        /sbin/ifconfig lo up
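        # arp_ignore=1: reply to ARP requests only for addresses configured on the
        #               interface the request arrived on (the VIP lives on lo, so it is not answered)
        # arp_announce=2: always use the best local address, never the VIP, as the ARP source
        # Together these keep the real server quiet about the VIP, so only the Director
        # answers ARP for 172.16.1.110 on the LAN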
        echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev lo:0
;;
stop)
        # Stop LVS-DR real server loopback device(s).
        /sbin/ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
;;
status)
        # Status of LVS-DR real server.
        islothere=`/sbin/ifconfig lo:0 | grep $VIP`
        isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
        if [ ! "$islothere" -o ! "isrothere" ];then
            # Either the route or the lo:0 device
            # not found.
            echo "LVS-DR real server Stopped."
        else
            echo "LVS-DR real server Running."
        fi
;;
*)
            # Invalid entry.
            echo "$0: Usage: $0 {start|status|stop}"
            exit 1
;;
esac
Make the script executable and run it
[root@node1 ~]# chmod +x RealServer.sh
[root@node1 ~]# ./RealServer.sh start
Check the four kernel parameters, the route entry, and whether the VIP has been configured
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/eth0/arp_ignore
1
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/eth0/arp_announce
2
[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.1.110    0.0.0.0         255.255.255.255 UH    0      0        0 lo
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         172.16.1.1      0.0.0.0         UG    0      0        0 eth0
[root@node1 ~]# ifconfig lo:0
lo:0      Link encap:Local Loopback  
          inet addr:172.16.1.110  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
That completes node1; node2 is configured exactly the same way, so it is not shown here.
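Before configuring the directors, it may help to confirm that both real servers answer on port 80. A minimal check, run from the Master (assumes curl is installed):
# Each request should return the corresponding test page
for rs in 172.16.1.103 172.16.1.104; do
    echo -n "$rs: "
    curl -s http://$rs/
done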
  5. Install keepalived and ipvsadm on Master and Slave
Master
[root@Master ~]# wget http://techdata.mirror.gtcomm.net/sysadmin/keepalived/keepalived-1.2.1-5.el5.x86_64.rpm
[root@Master ~]# yum install --nogpgcheck -y keepalived-1.2.1-5.el5.x86_64.rpm ipvsadm
[root@Master ~]# scp  keepalived-1.2.1-5.el5.x86_64.rpm Slave:~
keepalived-1.2.1-5.el5.x86_64.rpm                        100%  163KB 163.4KB/s   00:00
Slave
[root@Slave ~]# yum install --nogpgcheck -y keepalived-1.2.1-5.el5.x86_64.rpm ipvsadm

6. Edit the main keepalived configuration file on the Master
[root@Master ~]# cd /etc/keepalived/
[root@Master keepalived]# ls
keepalived.conf
[root@Master keepalived]# cp keepalived.conf{,.back}
[root@master ~]# cat /etc/keepalived/keepalived.conf  
! Configuration File for keepalived
global_defs {  
   notification_email {   
    root@localhost                   # address to send notifications to
   }
   notification_email_from root      # sender address for notification mail
   smtp_server 127.0.0.1             # SMTP server address
   smtp_connect_timeout 30   
   router_id LVS_DEVEL   
}
vrrp_instance VI_1 {  
    state MASTER                     # this node starts as the master
    interface eth0                   # interface VRRP runs on
    virtual_router_id 51             # virtual router ID, must match on both nodes
    priority 100                     # priority; the master must be higher than the backup
    advert_int 1                     # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass soysauce           # authentication password
    }
    virtual_ipaddress {
        172.16.1.110/16 dev eth0 label eth0:0     # the VIP
    }   
}
virtual_server 172.16.1.110 80 {  
    delay_loop 6                       
    lb_algo rr                         # scheduling algorithm
    lb_kind DR                         # LVS forwarding type (direct routing)
    nat_mask 255.255.0.0               # netmask for the VIP
    #persistence_timeout 50      
    protocol TCP
    real_server 172.16.1.103 80 {  
        weight 1   
        HTTP_GET {                  # health-check method
            url {   
              path /   
          status_code 200   
            }   
            connect_timeout 2   
            nb_get_retry 3   
            delay_before_retry 1   
        }   
    }   
    real_server 172.16.1.104 80 {   
        weight 1   
        HTTP_GET {   
            url {   
              path /   
              status_code 200   
            }   
            connect_timeout 2   
            nb_get_retry 3   
            delay_before_retry 1   
        }   
    }   
}
Copy this configuration file to the Slave
[root@Master keepalived]# scp keepalived.conf Slave:/etc/keepalived/
keepalived.conf                                    100% 1166     1.1KB/s   00:00

7. Adjust the configuration file on the backup node
[root@Slave ~]# vim /etc/keepalived/keepalived.conf
[root@Slave ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                        # changed to BACKUP on this node
    interface eth0
    virtual_router_id 51
    priority 99                         # priority lowered; must be smaller than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
    }
}
virtual_server 172.16.1.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    #persistence_timeout 50
    protocol TCP
    real_server 172.16.1.103 80 {
        weight 1
        HTTP_GET {
            url {
              path /
      status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
}
    }
    real_server 172.16.1.104 80 {   
        weight 1   
        HTTP_GET {   
            url {   
              path /   
              status_code 200   
            }   
            connect_timeout 2   
            nb_get_retry 3   
            delay_before_retry 1   
        }
    }
}

8. Start the keepalived service to provide a highly available ipvs director

Master
[root@Master keepalived]# service keepalived start
Starting keepalived:                                       [  OK  ]
Slave
[root@Slave keepalived]# service keepalived start
Starting keepalived:                                       [  OK  ]
Check on the Master node whether the ipvs rules are in effect
[root@Master keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.1.110:80 rr
  -> 172.16.1.104:80              Route   1      0          0         
  -> 172.16.1.103:80              Route   1      0          0         
Check whether the VIP has been brought up
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.110/16 scope global eth0:0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
Now open the VIP (http://172.16.1.110) in a browser:
  

  

  Refresh the page:


  As the two test pages alternate, the rr scheduling algorithm is clearly working (screenshots omitted; a command-line check is sketched below).
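The same check can be done from the command line; a sketch, run from any host on the 172.16.0.0/16 network other than the real servers:
# With the rr scheduler, consecutive responses alternate between
# node1.network.com and node2.network.com
for i in 1 2 3 4; do
    curl -s http://172.16.1.110/
done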
  

  9. Add a maintenance-mode switch to Keepalived

To enable a maintenance-mode switch in keepalived, all that is needed is a vrrp_script plus a track_script in the configuration file:

vrrp_script chk_maintainace {
    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
track_script {
    chk_maintainace
}
vrrp_script is defined outside the vrrp_instance block
track_script is placed inside the vrrp_instance block
Sample configuration file
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_maintainace {                        # define the check script
    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
    }
    track_script {                                    # track the check script defined above
        chk_maintainace
    }
}
virtual_server 172.16.1.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    #persistence_timeout 50
    protocol TCP
    real_server 172.16.1.103 80 {
        weight 1
        HTTP_GET {
            url {
              path /
      status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 172.16.1.104 80 {
        weight 1
        HTTP_GET {
            url {
              path /
      status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
After editing the configuration file, sync it manually to the backup node and change state to BACKUP and priority to 99 in that copy (one possible way to do this is sketched below).
Once that is done, restart the keepalived service on both nodes.
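One possible way to do the sync and the two per-node edits from the Master (a sketch that assumes the values used in this post and working root SSH to the Slave):
# Push the updated config to the backup node and flip the per-node values there
scp /etc/keepalived/keepalived.conf Slave:/etc/keepalived/
ssh Slave "sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 99/' /etc/keepalived/keepalived.conf"
# Restart keepalived on both nodes
ssh Slave "service keepalived restart"
service keepalived restart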
First confirm that the VIP is active on the Master
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.110/16 scope global eth0:0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
Now create an empty file named down
[root@Master keepalived]# touch down
Check the addresses again: the VIP has moved to the backup node (with the down file present, chk_maintainace exits 1, so the weight of -2 drops the Master's effective priority from 100 to 98, below the Slave's 99)
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
Delete the down file and check whether the VIP moves back
[root@Master keepalived]# rm -f down
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.110/16 scope global eth0:0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
Because the Master's priority is 100 and the Slave's is 99, the VIP automatically moves back as soon as the Master is healthy again.

10. Add mail notification for Keepalived master/backup transitions
This is implemented with a notification script
[root@Master ~]# cd /etc/keepalived/
[root@Master keepalived]# vim notify.sh
[root@Master keepalived]# cat notify.sh
#!/bin/bash
# Author: MageEdu
# description: An example of notify script
#
vip=172.16.1.110
contact='root@localhost'
notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
    master)
        notify master
        exit 0
    ;;
    backup)
        notify backup
        exit 0
    ;;
    fault)
        notify fault
        exit 0
    ;;
    *)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
    ;;
esac
[root@Master keepalived]# chmod +x notify.sh
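Before wiring the script into keepalived it can be tested by hand (assumes the mail command and a local MTA are working):
# Send a test notification, then check root's local mailbox for it
/etc/keepalived/notify.sh master
tail -n 20 /var/spool/mail/root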
Reference the script in the configuration file
[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_maintainace {
    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
    }
    track_script {
        chk_maintainace
    }
    notify_master "/etc/keepalived/notify.sh master"        # add these three lines
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 172.16.1.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    #persistence_timeout 50
    protocol TCP
    real_server 172.16.1.103 80 {
        weight 1
        HTTP_GET {
            url {
              path /
      status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 172.16.1.104 80 {
        weight 1
        HTTP_GET {
            url {
              path /
      status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Then copy the notify.sh script to /etc/keepalived on the Slave and make it executable.
Add the same three notify lines to the Slave's configuration file, and restart the keepalived service on both nodes.

11. Add high availability for nginx
This is again done with a vrrp_script and a track_script; this time the check tests whether an nginx process is alive (see the note on killall -0 after the configuration listing below).

[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
#    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
#    interval 1
#    weight -2
#}
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 1
    weight -2
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
    172.16.1.110/16 dev eth0 label eth0:0
    }
    track_script {
#    chk_maintainace
    chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
#    delay_loop 6
#    lb_algo rr
#    lb_kind DR
#    nat_mask 255.255.0.0
#    #persistence_timeout 50
#    protocol TCP
#    real_server 172.16.1.103 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#    real_server 172.16.1.104 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#}
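A note on the check used above: killall -0 does not terminate anything; signal 0 only tests whether a process with the given name exists, and its exit status is what chk_nginx keys on. A quick illustration:
# Exit status 0 if at least one nginx process exists, non-zero otherwise
killall -0 nginx && echo "nginx is running" || echo "nginx is not running"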
Then modify the notify.sh script so it also manages nginx
[root@Master keepalived]# vim notify.sh
[root@Master keepalived]# cat notify.sh
#!/bin/bash
# Author: MageEdu
# description: An example of notify script
#
vip=172.16.1.110
contact='root@localhost'
notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
    master)
        notify master
        /etc/rc.d/init.d/nginx start
        exit 0
    ;;
    backup)
        notify backup
        /etc/rc.d/init.d/nginx restart
        exit 0
    ;;
    fault)
        notify fault
        /etc/rc.d/init.d/nginx stop
        exit 0
    ;;
    *)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
    ;;
esac
After these changes, manually sync the configuration file and notify.sh to the backup node, and adjust state and priority in the backup node's copy.
Then restart the keepalived service on both nodes.

12. A master/master model based on multiple virtual routers

Simply configure two virtual routers (VRRP instances); the two nodes act as master and backup for each other. The resulting layout is summarized below.
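Summary of the two instances as configured in this section:
#            VI_1 (virtual_router_id 51, VIP 172.16.1.110)    VI_2 (virtual_router_id 52, VIP 172.16.1.120)
# Master     state MASTER, priority 100                       state BACKUP, priority 99
# Slave      state BACKUP, priority 99                        state MASTER, priority 100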
[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
#    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
#    interval 1
#    weight -2
#}
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 1
    weight -3
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
    172.16.1.110/16 dev eth0 label eth0:0
    }
    track_script {
#   chk_maintainace
    chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass network
    }
    virtual_ipaddress {
    172.16.1.120/16 dev eth0 label eth0:1
    }
    track_script {
#   chk_maintainace
    chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
#    delay_loop 6
#    lb_algo rr
#    lb_kind DR
#    nat_mask 255.255.0.0
#    #persistence_timeout 50
#    protocol TCP
#    real_server 172.16.1.103 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#    real_server 172.16.1.104 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#}
Now modify the configuration file on the backup node
[root@Slave keepalived]# vim keepalived.conf
You have new mail in /var/spool/mail/root
[root@Slave keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
#    script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
#    interval 1
#    weight -2
#}
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 1
    weight -3
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass soysauce
    }
    virtual_ipaddress {
    172.16.1.110/16 dev eth0 label eth0:0
    }
    track_script {
#   chk_maintainace
    chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass network
    }
    virtual_ipaddress {
    172.16.1.120/16 dev eth0 label eth0:1
    }
    track_script {
#   chk_maintainace
    chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
#    delay_loop 6
#    lb_algo rr
#    lb_kind DR
#    nat_mask 255.255.0.0
#    #persistence_timeout 50
#    protocol TCP
#    real_server 172.16.1.103 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#    real_server 172.16.1.104 80 {
#        weight 1
#        HTTP_GET {
#            url {
#              path /
#          status_code 200
#            }
#            connect_timeout 2
#            nb_get_retry 3
#            delay_before_retry 1
#        }
#    }
#}
Restart the keepalived service on both nodes
[root@Master keepalived]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@Slave keepalived]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
Now check that the VIP on each node is active
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.110/16 scope global eth0:0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
[root@Slave ~]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.120/16 scope global eth0:1
    inet6 fe80::20c:29ff:fe66:34d1/64 scope link
       valid_lft forever preferred_lft forever
Now take one of the nodes down
[root@Master keepalived]# service keepalived stop
Stopping keepalived:                                       [  OK  ]
The VIP of the stopped node moves to the other node, which now holds both VIPs
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever
[root@Slave ~]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.120/16 scope global eth0:1
    inet 172.16.1.110/16 scope global secondary eth0:0
    inet6 fe80::20c:29ff:fe66:34d1/64 scope link
       valid_lft forever preferred_lft forever
Bring the stopped node back online
[root@Master keepalived]# service keepalived start
Starting keepalived:                                       [  OK  ]
[root@Slave keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.120/16 scope global eth0:1
    inet6 fe80::20c:29ff:fe66:34d1/64 scope link
       valid_lft forever preferred_lft forever
As you can see, the address has moved back
[root@Master keepalived]# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
    inet 172.16.1.110/16 scope global eth0:0
    inet6 fe80::20c:29ff:fefe:8238/64 scope link
       valid_lft forever preferred_lft forever  
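With both nodes up, each VIP is served by its own master instance. A final hedged check from any client on the 172.16.0.0/16 network (assumes nginx is listening on both directors):
# Both VIPs should answer; under normal operation .110 is held by Master and .120 by Slave
for vip in 172.16.1.110 172.16.1.120; do
    echo -n "$vip: "
    curl -sI http://$vip/ | head -n 1
done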