[Experience Sharing] High Availability for LVS Clusters

  Platform: RedHat 5.8
  IP address plan:
  LVS-DR-master HA1: 172.16.66.6

  LVS-DR-backup HA2: 172.16.66.7

  LVS-DR-vip: 172.16.66.1
  LVS-DR-rs1: 172.16.66.4

  LVS-DR-rs2: 172.16.66.5

  Software download references:
  http://www.linuxvirtualserver.org/software/kernel-2.6/

  http://www.keepalived.org/software/
  

  Preparation on every machine
  Disable SELinux:
  # getenforce    # check the SELinux status; if it reports "enforcing", apply the changes below
  # setenforce 0

  # vim /etc/sysconfig/selinux    (only takes permanent effect after a reboot)

  Change SELINUX=enforcing to SELINUX=disabled
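  The permanent change can also be scripted; a minimal sketch (it edits /etc/selinux/config directly, since /etc/sysconfig/selinux is only a symlink to it and sed -i would replace the symlink with a plain file):
  # setenforce 0
  # sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config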

  

  I. Configuring the real servers (RS)
  1. RS1 configuration
  1) Configure the local IP (switch the NIC to bridged mode):
  setup --> Network configuration --> Edit Devices --> eth0(eth0) - Advanced Micro Devices [AMD] --> set the IP to 172.16.66.4
  (or edit the IP in vim /etc/sysconfig/network-scripts/ifcfg-eth0)

  # service network restart    # restart networking (do this after every configuration change)
  2) Create the lvs.sh init script, make it executable, and start it
  # vim /etc/init.d/lvs.sh

  #!/bin/bash
  #
  # Script to start LVS DR real server.
  # chkconfig: - 90 10
  # description: LVS DR real server
  #
  . /etc/rc.d/init.d/functions
  VIP=172.16.66.1
  host=`/bin/hostname`
  case "$1" in
  start)
        # Start LVS-DR real server on this machine.
        /sbin/ifconfig lo down
        /sbin/ifconfig lo up
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        # (broadcast set to the VIP itself and a 255.255.255.255 netmask confine
        #  the VIP to the lo:0 alias, so the real server never answers ARP for it)
        /sbin/route add -host $VIP dev lo:0
        ;;
  stop)
        # Stop LVS-DR real server loopback device(s).
        /sbin/ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
  status)
        # Status of LVS-DR real server.
        islothere=`/sbin/ifconfig lo:0 | grep $VIP`
        isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
        if [ ! "$islothere" -o ! "$isrothere" ]; then
              # Either the route or the lo:0 device was not found.
              echo "LVS-DR real server Stopped."
        else
              echo "LVS-DR real server Running."
        fi
        ;;
  *)
        # Invalid entry.
        echo "$0: Usage: $0 {start|status|stop}"
        exit 1
        ;;
  esac
  # chmod +x /etc/init.d/lvs.sh    # make it executable
  # cd /etc/init.d/
  # ./lvs.sh start    # start the service
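  To confirm the script took effect, the kernel ARP settings and the lo:0 alias can be inspected directly (a quick check, not part of the original post):
  # sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
  # /sbin/ifconfig lo:0          # should show 172.16.66.1 with netmask 255.255.255.255
  # netstat -rn | grep lo:0      # the host route for the VIP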
  3) Install httpd, provide a test page, and start the service
  # yum install httpd -y
  # echo "RS1.magedu.com" > /var/www/html/index.html
  # service httpd start
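  A quick sanity check, not in the original post (it hits the real server's own IP directly from any host on the segment):
  # curl http://172.16.66.4    # should print RS1.magedu.com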


  4) Verify the environment

  From the physical host, ping 172.16.66.1 to check reachability.

  Once it answers, run arp -a to see which machine responded.
  Verify with ifconfig (the virtual IP is 172.16.66.1)

  2. RS2 configuration (same as RS1)
  1) Configure the IP (switch the NIC to bridged mode):
  IP: 172.16.66.5
  # vim /etc/sysconfig/network-scripts/ifcfg-eth0    # set the IP

  # service network restart    # restart networking
  2) Create the init script, make it executable, and start the service

  # vim /etc/init.d/lvs.sh    # same script contents as on RS1
  # cd /etc/init.d/

  # chmod +x lvs.sh    # make it executable
  # ./lvs.sh start    # start the service
  3) Install httpd, provide its web page, and start the service

  # yum install httpd -y

  # echo "RS2.magedu.com" > /var/www/html/index.html
  # service httpd start


  4) Verify the environment

  From the physical host, ping 172.16.66.1 to check reachability, then run arp -a to see which host responded
  # ifconfig    # verify
  II. Configuring nodes HA1 and HA2
  Each of the two nodes will also serve its own local page, provided read-only.
  HA1: 172.16.66.6
  HA2: 172.16.66.7
  vip: 172.16.66.1 (virtual IP)
  The two nodes are referred to as node1 and node2 below.
  Building the node cluster has a few prerequisites:
  1) Node names: name resolution must never depend on DNS; it must rely on the local /etc/hosts file, and each node's name must match the output of uname -n
  2) SSH mutual trust: the nodes must be able to reach each other's accounts over key-based authentication, without being prompted for a password
  3) Time synchronization: the clocks of all cluster nodes must be in sync; this is a basic premise, because high-availability nodes constantly monitor each other's heartbeats
  1. Change the hostnames of HA1 and HA2
  1) Change the hostname on HA1

  # vim /etc/sysconfig/network    # edit as follows
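  (The file contents are not shown in the original post; given the /etc/hosts entries added in step 3, they would presumably be:)
  NETWORKING=yes
  HOSTNAME=node1.magedu.com
  # hostname node1.magedu.com    # apply the new name immediately, without a reboot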

  2) Change the hostname on HA2

  # vim /etc/sysconfig/network    # edit as follows:
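  (Again presumably, matching the /etc/hosts entries below:)
  NETWORKING=yes
  HOSTNAME=node2.magedu.com
  # hostname node2.magedu.com    # apply the new name immediately, without a reboot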

  

  2. Set up mutual SSH trust between HA1 and HA2
  1) On HA1
  # ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''    # generate an RSA key pair with an empty passphrase
  # ssh-copy-id -i .ssh/id_rsa.pub root@172.16.66.7    # push the public key to HA2


  # ssh 172.16.66.7    # test password-less login to HA2

  2) On HA2
  # ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

  # ssh-copy-id -i .ssh/id_rsa.pub root@172.16.66.6

  # ssh 172.16.66.6 'ifconfig'    # test communication with HA1
  

  3. Configure host resolution and time synchronization
  On HA1:
  1) Host resolution

  # vim /etc/hosts    # add the following

  172.16.66.6 node1.magedu.com node1
  172.16.66.7 node2.magedu.com node2

  # ping node2    # test that node2 resolves and answers

  # scp /etc/hosts node2:/etc/    # copy the hosts file to HA2 so both sides match
  # iptables -L    # make sure iptables is not blocking anything

  2) Time synchronization

  # date
  # ntpdate 172.16.0.1    # sync the clock from a time server

  # service ntpd stop    # stop the ntpd service
  # chkconfig ntpd off    # make sure ntpd does not start at boot
  # crontab -e    # keep the clocks in sync going forward; add the line below
  */5 * * * * /sbin/ntpdate 172.16.0.1 &> /dev/null
  # scp /var/spool/cron/root node2:/var/spool/cron/    # copy the time-sync cron entry to HA2
  On HA2:
  # ping node1    # check that node1 answers
  # ping node1.magedu.com
  # date
  # crontab -l    # confirm the time-sync cron entry was copied over from node1

  III. Making LVS highly available with keepalived
  1. Install keepalived and ipvsadm on both HA1 and HA2 (ipvsadm ships with the distribution, so it installs directly)

  # yum -y --nogpgcheck localinstall keepalived-1.2.7-5.el5.i386.rpm    # install the keepalived package
  # scp keepalived-1.2.7-5.el5.i386.rpm node2:/root/    # copy the package to node2
  # yum -y install ipvsadm    # handy for watching the LVS rules
  2. Configure the failover of our service
  On HA1, the master node:
  [root@node1 ~]# cd /etc/keepalived/
  [root@node1 keepalived]# ls    # list the configuration files
  keepalived.conf keepalived.conf.haproxy_example notify.sh
  [root@node1 keepalived]# cp keepalived.conf keepalived.conf.bak    # back up the main config
  [root@node1 keepalived]# vim keepalived.conf    # edit as follows
  ! Configuration File for keepalived
  global_defs {
      notification_email {
          root@localhost
      }
      notification_email_from keepalived@localhost
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL
  }
  vrrp_instance VI_1 {
      state MASTER
      interface eth0    # physical interface the VRRP advertisements go out on
      virtual_router_id 79
      priority 101
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass keepalivedpass    # simple password authentication
      }
      virtual_ipaddress {
          172.16.66.1/16 dev eth0 label eth0:0    # our virtual address, configured on the NIC alias
      }
  }
  virtual_server 172.16.66.1 80 {
      delay_loop 6
      lb_algo rr
      lb_kind DR
      nat_mask 255.255.0.0
      # persistence_timeout 50    # persistent connections are not needed here
      protocol TCP
      real_server 172.16.66.4 80 {
          weight 1
          HTTP_GET {
              url {
                  path /
                  status_code 200
              }
              connect_timeout 2
              nb_get_retry 3
              delay_before_retry 1
          }
      }
      real_server 172.16.66.5 80 {
          weight 1
          HTTP_GET {
              url {
                  path /
                  status_code 200
              }
              connect_timeout 2
              nb_get_retry 3
              delay_before_retry 1
          }
      }
  }
  (vrrp_instance VI_1 defines the VRRP virtual router. In the initial state one end is MASTER and the other BACKUP, and the MASTER's priority must be somewhat higher than the BACKUP's. When a monitored service fails, keepalived lowers the MASTER's priority; the reduction must be large enough that the resulting priority falls below the BACKUP's configured priority, so the BACKUP can take over.)
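  To make the arithmetic concrete with the priorities used here: the MASTER runs at 101 and the BACKUP at 100, so a check script that subtracts 2 is enough, since 101 - 2 = 99 < 100. A minimal sketch of such a block (the same pattern is used in section IV below):
  vrrp_script chk_schedown {
      script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"    # fail when the flag file exists
      interval 2    # run the check every 2 seconds
      weight -2     # on failure: 101 - 2 = 99, below the BACKUP's 100
  }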
  3. Copy the configuration file to the other node, HA2
  [root@node1 keepalived]# scp keepalived.conf node2:/etc/keepalived/

  On HA2:
  [root@node2 keepalived]# vim keepalived.conf    # change only the two settings below
  vrrp_instance VI_1 {
      state BACKUP
      interface eth0
      virtual_router_id 79
      priority 100    # must be lower than the MASTER's
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass keepalivedpass
      }
  4. Start the service on both nodes
  [root@node1 keepalived]# service keepalived start
  Starting keepalived: [ OK ]
  [root@node2 keepalived]# service keepalived start
  Starting keepalived: [ OK ]
  5. Check the IP and the ipvsadm rules, then access the VIP from the physical host

  Check the ipvsadm rules.

  Access 172.16.66.1 from the physical host.

  Refresh the page; under the rr scheduler the two real-server pages should alternate.

  Check the ipvsadm rules again.
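  The checks above boil down to a few commands (the original screenshots are not reproduced here):
  [root@node1 ~]# ifconfig eth0:0          # the VIP 172.16.66.1 should be bound on the MASTER
  [root@node1 ~]# ipvsadm -L -n            # both real servers listed under 172.16.66.1:80
  [root@node1 ~]# ipvsadm -L -n --stats    # per-real-server counters grow as you refresh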

  IV. Making a web service highly available with keepalived
  Only the two virtual machines HA1 and HA2 are needed for this part.
  1. HA1 configuration
  [root@node1 ~]# service keepalived stop
  [root@node1 ~]# yum -y install httpd
  [root@node1 ~]# vim /var/www/html/index.html

  [root@node1 keepalived]# service httpd start
  Starting httpd: [ OK ]
  Browsing to 172.16.66.6 from the physical host should return the node1 page (or check locally with curl http://172.16.66.6)
  

  2. HA2 configuration
  [root@node2 ~]# service keepalived stop    # stop the keepalived service
  [root@node2 ~]# yum -y install httpd    # install httpd
  [root@node2 keepalived]# vim /var/www/html/index.html

  [root@node2 keepalived]# service httpd start
  Starting httpd: [ OK ]
  Browse to 172.16.66.7 from the physical host (or locally: curl http://172.16.66.7)


  3. Edit node1's keepalived configuration, provide the matching notification script, and start the service
  [root@node1 ~]# cd /etc/keepalived/
  [root@node1 keepalived]# cp keepalived.conf.haproxy_example keepalived.conf
  cp: overwrite `keepalived.conf'? yes
  Edit the configuration on both nodes, then restart the service on each.
  On HA1:
  1) Edit the keepalived configuration

  [root@node1 keepalived]# vim keepalived.conf    # full contents below
  ! Configuration File for keepalived
  global_defs {
      notification_email {
          linuxedu@foxmail.com
          mageedu@126.com
      }
      notification_email_from kanotify@magedu.com
      smtp_connect_timeout 3
      smtp_server 127.0.0.1
      router_id LVS_DEVEL
  }
  vrrp_script chk_httpd {
      script "killall -0 httpd"    # signal 0 only tests that an httpd process is alive
      interval 2
      # check every 2 seconds
      weight -2
      # if failed, decrease the priority by 2
      fall 2
      # require 2 failures to mark the check as down
      rise 1
      # require 1 success to mark it up again
  }
  vrrp_script chk_schedown {
      script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
      interval 2
      weight -2
  }
  vrrp_instance VI_1 {
      interface eth0
      # interface for inside_network, bound by vrrp
      state MASTER
      # Initial state, MASTER|BACKUP
      # As soon as the other machine(s) come up,
      # an election will be held and the machine
      # with the highest "priority" will become MASTER.
      # So the entry here doesn't matter a whole lot.
      priority 101
      # for electing MASTER, highest priority wins.
      # make it higher than the other machines' (here 101 vs 100).
      virtual_router_id 51
      # arbitrary unique number 0..255
      # used to differentiate multiple instances of vrrpd
      # running on the same NIC (and hence same socket).
      garp_master_delay 1
      authentication {
          auth_type PASS
          auth_pass password
      }
      track_interface {
          eth0
      }
      # optional, monitor these as well.
      # go to FAULT state if any of these go down.
      virtual_ipaddress {
          172.16.66.1/16 dev eth0 label eth0:0
      }
      # addresses add|del on change to MASTER, to BACKUP.
      # With the same entries on other machines,
      # the opposite transition will be occurring.
      # <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
      track_script {
          chk_httpd       # must match the vrrp_script names defined above
          chk_schedown
      }
      notify_master "/etc/keepalived/notify.sh master"
      notify_backup "/etc/keepalived/notify.sh backup"
      notify_fault "/etc/keepalived/notify.sh fault"
  }
  #vrrp_instance VI_2 {
  #    interface eth0
  #    state MASTER # BACKUP for slave routers
  #    priority 101 # 100 for BACKUP
  #    virtual_router_id 79
  #    garp_master_delay 1
  #
  #    authentication {
  #        auth_type PASS
  #        auth_pass password
  #    }
  #    track_interface {
  #        eth0
  #    }
  #    virtual_ipaddress {
  #        172.16.66.2/16 dev eth0 label eth0:1
  #    }
  #    track_script {
  #        chk_httpd
  #        chk_schedown
  #    }
  #
  #    notify_master "/etc/keepalived/notify.sh master eth0:1"
  #    notify_backup "/etc/keepalived/notify.sh backup eth0:1"
  #    notify_fault "/etc/keepalived/notify.sh fault eth0:1"
  #}
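  The configuration calls /etc/keepalived/notify.sh on every state change, but the original post never shows that file. A minimal sketch of what it presumably looks like (it mails state transitions to root; the exact contents on the course VM may differ):
  #!/bin/bash
  # notify.sh -- mail a notice on VRRP state transitions (assumed implementation)
  contact='root@localhost'
  notify() {
      local mailsubject="`hostname` to be $1: vip floating"
      local mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
      echo "$mailbody" | mail -s "$mailsubject" $contact
  }
  case "$1" in
  master)
      notify master
      ;;
  backup)
      notify backup
      ;;
  fault)
      notify fault
      ;;
  *)
      echo "Usage: `basename $0` {master|backup|fault}"
      exit 1
      ;;
  esac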
  2) Copy the configuration and the notification script to node2
  # scp keepalived.conf notify.sh node2:/etc/keepalived/


  [root@node1 keepalived]# service keepalived restart
  Stopping keepalived: [ OK ]
  Starting keepalived: [ OK ]
  HA2 configuration:
  [root@node2 keepalived]# vim keepalived.conf    # change only the following
  vrrp_instance VI_1 {
      interface eth0
      # interface for inside_network, bound by vrrp
      state BACKUP
      # Initial state, MASTER|BACKUP
      # As soon as the other machine(s) come up,
      # an election will be held and the machine
      # with the highest "priority" will become MASTER.
      # So the entry here doesn't matter a whole lot.
      priority 100
      # for electing MASTER, highest priority wins.
  [root@node2 keepalived]# service keepalived restart
  Stopping keepalived: [ OK ]
  Starting keepalived: [ OK ]
  4. Simulate a failure on the master
  Stop the web service on the master, then check whether the VIP has floated to the backup.
  [root@node1 keepalived]# service httpd stop
  Stopping httpd: [ OK ]
  The IP addresses on the master now show:

  The IP addresses on the backup now show:

  Test: access 172.16.66.1 from the physical host.
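  Since the screenshots are missing, the same check can be made from the command line (after chk_httpd fails twice, node1's priority drops to 99 and node2 takes over):
  [root@node1 ~]# ifconfig eth0:0    # the eth0:0 alias with 172.16.66.1 should be gone
  [root@node2 ~]# ifconfig eth0:0    # the VIP should now be bound here
  # curl http://172.16.66.1          # from the physical host: the page is now served by node2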

  V. A dual-master model for the web service with keepalived
  The dual-master model builds on the master/backup setup above: each node is MASTER for one VRRP instance and BACKUP for the other, so both VIPs are active at the same time.
  1. Edit the configuration on both nodes
  HA1:
  [root@node1 keepalived]# vim keepalived.conf    # add a second instance as follows
  vrrp_instance VI_2 {
      interface eth0
      state BACKUP    # this node is the backup for the second VIP
      priority 100
      virtual_router_id 79
      garp_master_delay 1
      authentication {
          auth_type PASS
          auth_pass password
      }
      track_interface {
          eth0
      }
      virtual_ipaddress {
          172.16.66.2/16 dev eth0 label eth0:1
      }
      track_script {
          chk_httpd
          chk_schedown
      }
      notify_master "/etc/keepalived/notify.sh master eth0:1"
      notify_backup "/etc/keepalived/notify.sh backup eth0:1"
      notify_fault "/etc/keepalived/notify.sh fault eth0:1"
  }
  HA2:
  [root@node2 keepalived]# vim keepalived.conf
  vrrp_instance VI_2 {
      interface eth0
      state MASTER    # this node is the master for the second VIP
      priority 101
      virtual_router_id 79    # must match the VI_2 id on node1
      garp_master_delay 1
      authentication {
          auth_type PASS
          auth_pass password
      }
      track_interface {
          eth0
      }
      virtual_ipaddress {
          172.16.66.2/16 dev eth0 label eth0:1
      }
      track_script {
          chk_httpd
          chk_schedown
      }
      notify_master "/etc/keepalived/notify.sh master eth0:1"
      notify_backup "/etc/keepalived/notify.sh backup eth0:1"
      notify_fault "/etc/keepalived/notify.sh fault eth0:1"
  }
  2. Restart the keepalived service on both nodes
  [root@node1 keepalived]# service keepalived restart
  Stopping keepalived: [ OK ]
  Starting keepalived: [ OK ]
  [root@node2 keepalived]# service keepalived restart
  Stopping keepalived: [ OK ]
  Starting keepalived: [ OK ]
  3. Verify: check the IPs on both nodes, then access both VIPs from the physical host
  Node1's addresses show:

  Node2's addresses show:

  Access 172.16.66.1 from the physical host.

  Access 172.16.66.2 from the physical host.

  4. Simulate a failure of node1
  [root@node1 keepalived]# touch down    # the chk_schedown script sees this file, fails, and lowers node1's priority
  Check node1's addresses with ifconfig (the VIP has moved away).

  Check node2's addresses.

  Verify from the physical host:
  requests to both 172.16.66.1 and 172.16.66.2 are now answered by node2.
  Access 172.16.66.1.

  Access 172.16.66.2.

  [root@node1 keepalived]# rm -f down    # remove the down flag file
  After removing the down file, check whether node1 has reclaimed its VIP.
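  The transitions can also be followed live in syslog while the down file is created and removed (keepalived logs via syslog; /var/log/messages is the standard RHEL 5 location):
  # tail -f /var/log/messages | grep -i keepalived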


  These are some of the high-availability features that can be built with keepalived; I hope you find them helpful.



