LVS(DR)+keepalived+nfs+raid+LVM
LVS Theory
https://s3.运维网.com/wyfs02/M00/8D/A4/wKiom1ikWLezxsAWAAJSHJoFJTA731.png-wh_500x0-wm_3-wmp_4-s_2162435039.png
1. The client sends a request to the target VIP; the Director (load balancer) receives it. At this point the IP header and frame header contain:
https://s3.运维网.com/wyfs02/M00/8D/A1/wKioL1ikWLigeX7SAAD5Eudl6Pg871.png-wh_500x0-wm_3-wmp_4-s_1584703440.png
2. The Director selects RealServer_1 according to the scheduling algorithm. It neither modifies nor re-encapsulates the IP packet; it only rewrites the frame's destination MAC address to RealServer_1's MAC address and sends the frame on the LAN. The IP header and frame header now contain:
https://s3.运维网.com/wyfs02/M02/8D/A1/wKioL1ikWLjzpEXWAAD3mZVlBAs693.png-wh_500x0-wm_3-wmp_4-s_62671225.png
3. RealServer_1 receives the frame and, after decapsulation, finds that the destination IP matches a local address (the RealServer must have the VIP bound in advance!), so it processes the request, re-encapsulates the reply, and sends it onto the LAN. The IP header and frame header now contain:
https://s1.运维网.com/wyfs02/M00/8D/A4/wKiom1ikWLnDFwP-AADig0uMLos538.png-wh_500x0-wm_3-wmp_4-s_3619353588.png
4. The client receives the reply and gets normal service without ever knowing which server handled the request. (Note: if the client is on another network segment, the reply returns to it through the router and the Internet.)
The ARP Problem in LVS-DR
https://s1.运维网.com/wyfs02/M00/8D/A1/wKioL1ikWLnwWU59AAG9m9nQr3E243.png-wh_500x0-wm_3-wmp_4-s_2604318195.png
In an LVS-DR cluster, the load balancer and the node servers are all configured with the same VIP. Having the same IP address on one LAN inevitably disturbs ARP communication between the servers: when an ARP broadcast reaches the cluster, the load balancer and the node servers, all attached to the same network, receive it. Only the front-end load balancer should respond; the node servers must not answer the ARP broadcast.
1. Configure the node servers so they do not answer ARP requests for the VIP:
use the virtual interface lo:0 to carry the VIP address;
set the kernel parameter arp_ignore=1 so the system only answers ARP requests whose target IP is a local address configured on the receiving interface.
2. When a RealServer sends a reply (source IP is the VIP) back through the router, it must first obtain the router's MAC address. When issuing that ARP request, Linux by default uses the IP packet's source address (the VIP) as the source IP of the ARP request, rather than the address of the sending interface (e.g. eth0).
https://s1.运维网.com/wyfs02/M01/8D/A4/wKiom1ikWLqCwjBqAACKy7szjcg162.png-wh_500x0-wm_3-wmp_4-s_2552914037.png
3. When the router receives this ARP request, it updates its ARP table: the Director's MAC address originally mapped to the VIP is replaced by the RealServer's MAC address.
https://s1.运维网.com/wyfs02/M01/8D/A1/wKioL1ikWLqg_cMuAACHQWfUAVY403.png-wh_500x0-wm_3-wmp_4-s_982463742.png
From then on, the router forwards new request packets to that RealServer according to its ARP table, and the VIP on the Director stops working!
https://s1.运维网.com/wyfs02/M01/8D/A4/wKiom1ikWLrxDbPiAAE0MySlh-w011.png-wh_500x0-wm_3-wmp_4-s_3957875582.png
Solution:
Configure the node servers with the kernel parameter arp_announce=2 so the system does not use the IP packet's source address as the source of its ARP requests, but instead uses the address of the sending interface. To fix both ARP problems, add the following to /etc/sysctl.conf:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
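The new settings can be applied without a reboot; a minimal sketch using standard sysctl usage:
# sysctl -p                                  # reload /etc/sysctl.conf and print the applied values
# sysctl net.ipv4.conf.all.arp_ignore        # spot-check a single key
net.ipv4.conf.all.arp_ignore = 1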
LVS-DR Hands-On:
Lab environment:
VIP:192.168.18.41
BL: 192.168.18.31
web1:192.168.18.32
web2:192.168.18.33
nfs: 192.168.18.34
NFS shared storage:
# mount /dev/cdrom /media/cdrom/
mount: block device /dev/sr0 is write-protected, mounting read-only
# yum -y install nfs-utils rpcbind
# rpm -q nfs-utils rpcbind
# mkdir /www
# vi /etc/exports
/www 192.168.18.0/24(ro,sync,no_root_squash)
# service rpcbind start
Starting rpcbind:                                          [ OK ]
# service nfs start
Starting NFS services:                                     [ OK ]
Starting NFS mountd:                                       [ OK ]
Starting NFS daemon:                                       [ OK ]
Starting RPC idmapd:                                       [ OK ]
# showmount -e 192.168.18.34
Export list for 192.168.18.34:
/www 192.168.18.0/24
# chkconfig rpcbind on
# chkconfig nfs on
# echo "ce shi ye">/www/index.html
Web server configuration:
web1
# rpm -q httpd
httpd-2.2.15-29.el6.centos.x86_64
# vi /etc/httpd/conf/httpd.conf
# yum -y install nfs-utils
# service httpd start
Starting httpd:                                            [ OK ]
# mount 192.168.18.34:/www /var/www/html/
# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_doo-lv_root   18G  3.9G   13G  24% /
tmpfs                       383M     0  383M   0% /dev/shm
/dev/sda1                   485M   35M  426M   8% /boot
192.168.18.34:/www           18G  1.3G   16G   8% /var/www/html
# vi /etc/fstab
192.168.18.34:/www /var/www/html nfs defaults,_netdev 1 2
(the two trailing fields are the dump flag and the fsck pass order)
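To confirm the fstab entry works before the next reboot, one possible check (assuming nothing else is using the mount point):
# umount /var/www/html
# mount -a                          # mount everything in /etc/fstab; an error here means a bad entry
# df -h /var/www/html | tail -1
192.168.18.34:/www           18G  1.3G   16G   8% /var/www/html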
web2: (same as web1)
https://s1.运维网.com/wyfs02/M02/8D/A4/wKiom1ikWbewsWwDAAA_pEnj2WA009.png-wh_500x0-wm_3-wmp_4-s_1584082880.png
https://s1.运维网.com/wyfs02/M02/8D/A1/wKioL1ikWbfSQVLHAAA25RIwol8536.png-wh_500x0-wm_3-wmp_4-s_1428875385.png
LVS-DR deployment:
ipvsadm option reference:
-A  add a virtual server
-D  delete a virtual server
-C  clear all configuration entries
-E  edit a virtual server
-L or -l  list the table
-n  numeric output, no name resolution
-c  display current IPVS connections
-a  add a real server
-d  delete a real server (node)
-t  specify the VIP address and TCP port
-s  specify the scheduling algorithm: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq, default wlc
-m  NAT mode (masquerading)
-g  DR mode (direct routing)
-i  TUN mode (IP tunneling)
-w  set the weight (weight 0 pauses a node)
--help  show help
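Combining -e (edit a real server, the lowercase counterpart of -E) with -w 0 lets you drain a node without removing it; a hedged sketch against this lab's addresses:
# ipvsadm -e -t 192.168.18.41:80 -r 192.168.18.32:80 -g -w 0   # pause web1: no new connections are scheduled to it
# ipvsadm -e -t 192.168.18.41:80 -r 192.168.18.32:80 -g -w 1   # put it back in rotation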
On the BL:
# modprobe ip_vs          # load the ip_vs kernel module
# yum -y install ipvsadm
# service ipvsadm stop
ipvsadm: Clearing the current IPVS table:                  [ OK ]
ipvsadm: Unloading modules:                                [ OK ]
# ipvsadm -C
# vi /opt/vip.sh
#!/bin/bash
# Bind the VIP to an eth1 alias on the Director and add a host route for it
VIP="192.168.18.41"
/sbin/ifconfig eth1:vip $VIP broadcast $VIP netmask 255.255.255.255
/sbin/route add -host $VIP dev eth1:vip
# chmod +x /opt/vip.sh
# /opt/vip.sh
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.18.41 0.0.0.0 255.255.255.255 UH 0 0 0 eth1
192.168.18.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth1
0.0.0.0 192.168.18.2 0.0.0.0 UG 0 0 0 eth1
# ipvsadm -A -t 192.168.18.41:80 -s rr
# ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.32:80 -g
# ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.33:80 -g
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.32:80             Route   1      0          0
  -> 192.168.18.33:80             Route   1      0          0
# ipvsadm-save >/etc/sysconfig/ipvsadm
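The saved rules are not reloaded automatically unless the ipvsadm service is enabled; a hedged sketch for restoring them by hand or at boot (standard ipvsadm-restore and chkconfig usage on CentOS 6):
# ipvsadm-restore < /etc/sysconfig/ipvsadm   # reload the saved table by hand
# chkconfig ipvsadm on                       # or let the init script load it at boot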
On web1:
# vi /opt/lvs-dr
#!/bin/bash
# Bind the VIP on a loopback alias and suppress ARP replies for it
VIP="192.168.18.41"
/sbin/ifconfig lo:vip $VIP broadcast $VIP netmask 255.255.255.255
/sbin/route add -host $VIP dev lo:vip
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore      # answer ARP only for addresses on the receiving interface
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce    # use the sending interface's address in ARP requests
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# chmod +x /opt/lvs-dr
# /opt/lvs-dr
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.18.41/32 brd 192.168.18.41 scope global lo:vip
[root@web1 ~]# scp /opt/lvs-dr 192.168.18.33:/opt
Do the same on web2.
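Note that the lo:vip address and the ARP sysctls set by this script do not survive a reboot; one hedged way to persist them is to call the script from /etc/rc.local:
# echo '/opt/lvs-dr' >> /etc/rc.local   # re-apply the VIP and ARP settings at boot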
https://s1.运维网.com/wyfs02/M02/8D/A1/wKioL1ikWbiDk3hPAAA5vjZkIWg076.png-wh_500x0-wm_3-wmp_4-s_1934508333.png
On the BL you can see how the connections (SYN requests) are distributed:
# ipvsadm -lnc
IPVS connection entries
pro expire   state        source                virtual            destination
TCP 14:59  ESTABLISHED  192.168.18.140:53385  192.168.18.41:80   192.168.18.33:80
TCP 01:24  FIN_WAIT     192.168.18.140:53380  192.168.18.41:80   192.168.18.32:80
TCP 01:58  FIN_WAIT     192.168.18.140:53387  192.168.18.41:80   192.168.18.33:80
TCP 01:24  FIN_WAIT     192.168.18.140:53379  192.168.18.41:80   192.168.18.33:80
TCP 01:25  FIN_WAIT     192.168.18.140:53382  192.168.18.41:80   192.168.18.32:80
Adding keepalived:
Lab environment:
VIP: 192.168.18.41
BL1: 192.168.18.30 (backup)
BL2: 192.168.18.31 (master)
web1: 192.168.18.32
web2: 192.168.18.33
nfs: 192.168.18.34 (RAID + LVM)
On the master BL (192.168.18.31):
# yum -y install keepalived
# cd /etc/keepalived/
# cp keepalived.conf keepalived.conf.bak
# vi keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      doo@163.com                        # alert recipient mailbox
   }
   notification_email_from ya@163.com    # sender address
   smtp_server ping.com.cn               # SMTP server address
   smtp_connect_timeout 30               # SMTP connect timeout
   router_id LVS_DEVEL_BLM               # identifier used in alert mails; any string
}
vrrp_instance VI_1 {
   state MASTER
   interface eth1
   virtual_router_id 51                  # VRRP router ID; must match on master and backup
   priority 100                          # 1-254; the higher number wins the election
   advert_int 2                          # advertisement interval between MASTER and BACKUP
   authentication {
      auth_type PASS                     # auth type: PASS or AH
      auth_pass 1111                     # must be identical on master and backup
   }
   virtual_ipaddress {
      192.168.18.41                      # the VIP; several addresses may be listed
   }
}
virtual_server 192.168.18.41 80 {        # the virtual server (VIP and port)
   delay_loop 2                          # health-check interval
   lb_algo rr                            # round-robin scheduling
   lb_kind DR                            # forwarding method: NAT, DR, or TUN
   ! nat_mask 255.255.255.0              # not NAT, so commented out (same below)
   ! persistence_timeout 300             # persistence timeout
   protocol TCP
   real_server 192.168.18.32 80 {
      weight 1                           # weight
      TCP_CHECK {                        # real-server health check
         connect_timeout 10              # fail after 10 s with no response
         nb_get_retry 3                  # number of retries
         delay_before_retry 3            # 3 s between retries
         connect_port 80                 # port to probe
      }
   }
   real_server 192.168.18.33 80 {
      weight 1
      TCP_CHECK {
         connect_timeout 10
         nb_get_retry 3
         delay_before_retry 3
         connect_port 80
      }
   }
}
# service keepalived start
Starting keepalived:                                       [ OK ]
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.32:80             Route   1      0          0
  -> 192.168.18.33:80             Route   1      0          0
On the backup BL (192.168.18.30):
# modprobe ip_vs
# yum -y install keepalived ipvsadm
# cd /etc/keepalived/
# cp keepalived.conf keepalived.conf.bak
# vi keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      doo@163.com
   }
   notification_email_from ya@163.com
   smtp_server ping.com.cn
   smtp_connect_timeout 30
   router_id LVS_DEVEL_BLM
}
vrrp_instance VI_1 {
   state BACKUP
   interface eth1
   virtual_router_id 51
   priority 99
   advert_int 2
   authentication {
      auth_type PASS
      auth_pass 1111
   }
   virtual_ipaddress {
      192.168.18.41
   }
}
virtual_server 192.168.18.41 80 {
   delay_loop 2
   lb_algo rr
   lb_kind DR
   ! nat_mask 255.255.255.0
   ! persistence_timeout 300
   protocol TCP
   real_server 192.168.18.32 80 {
      weight 1
      TCP_CHECK {
         connect_timeout 10
         nb_get_retry 3
         delay_before_retry 3
         connect_port 80
      }
   }
   real_server 192.168.18.33 80 {
      weight 1
      TCP_CHECK {
         connect_timeout 10
         nb_get_retry 3
         delay_before_retry 3
         connect_port 80
      }
   }
}
# service keepalived start
Starting keepalived:                                       [ OK ]
# ip a          # no VIP here — that's correct for the backup while the master is alive
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.32:80             Route   1      0          0
  -> 192.168.18.33:80             Route   1      0          0
On web1:
# service httpd stop
Stopping httpd:                                            [ OK ]
On the BL (web1 drops out of the pool once its health check fails):
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.33:80             Route   1      0          0
On web1:
# service httpd start
Starting httpd:                                            [ OK ]
On the BL (web1 rejoins the pool automatically):
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.32:80             Route   1      0          0
  -> 192.168.18.33:80             Route   1      0          0
High-availability test:
On the master BL:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:cf:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.31/24 brd 192.168.18.255 scope global eth1
    inet 192.168.18.41/32 scope global eth1
    inet6 fe80::20c:29ff:fe9d:cff3/64 scope link
       valid_lft forever preferred_lft forever
# service keepalived stop
Stopping keepalived:                                       [ OK ]
On the backup BL (it has taken over the VIP):
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:34:d3:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.30/24 brd 192.168.18.255 scope global eth1
    inet 192.168.18.41/32 scope global eth1
    inet6 fe80::20c:29ff:fe34:d3f8/64 scope link
       valid_lft forever preferred_lft forever
# ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.18.41:80 rr
  -> 192.168.18.32:80             Route   1      0          0
  -> 192.168.18.33:80             Route   1      0          0
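Failback can be tested the same way: the master's priority (100) is higher and preemption is keepalived's default behavior, so restarting it should reclaim the VIP. A hedged sketch, run on the master:
# service keepalived start
# ip a | grep 192.168.18.41          # the VIP should reappear here within a few advert intervals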
NFS + RAID 5 + LVM (192.168.18.34)
On the NFS server:
# fdisk -l | grep dev
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *          1          64      512000   83  Linux
/dev/sda2             64        2611    20458496   8e  Linux LVM
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk /dev/mapper/vg_doo2-lv_root: 18.9 GB, 18865979392 bytes
Disk /dev/mapper/vg_doo2-lv_swap: 2080 MB, 2080374784 bytes
# yum -y install parted
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) mkpart
Partition name? []? a
File system type? [ext2]? ext3
Start? 1
End? -1
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2146MB  2145MB               a
(parted) q
Information: You may need to update /etc/fstab.
Repeat the same steps for /dev/sdc, /dev/sdd, and /dev/sde (this can also be scripted; see the sketch below).
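A minimal non-interactive sketch for the remaining disks, assuming parted -s with a GPT label and the same partition name as above:
# for d in sdc sdd sde; do
>   parted -s /dev/$d mklabel gpt mkpart a ext3 1MiB 100%   # GPT label plus one full-size partition
> done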
# yum -y install mdadm
# mdadm -Cv /dev/md5 -a yes -n3 -x1 -l5 /dev/sd[b-e]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sdb1: Device or resource busy
mdadm: Cannot use /dev/sdb1: It is busy
mdadm: cannot open /dev/sdb1: Device or resource busy
(The "Device or resource busy" errors above mean /dev/sdb1 is still held by a previously assembled md array; stop it with mdadm -S and re-run the create command until it succeeds.)
# mkfs.ext3 /dev/md5
# mdadm -D -s >/etc/mdadm.conf
# sed -i '1 s/$/ auto=yes/' /etc/mdadm.conf
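Before layering LVM on top, it is worth confirming the array state and letting the initial sync finish; standard checks:
# cat /proc/mdstat                              # md5 should show raid5 with 3 active members and 1 spare
# mdadm -D /dev/md5 | grep -E 'State|Devices'   # summary of array state and device counts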
LVM:
# pvcreate /dev/md5
Physical volume "/dev/md5" successfully created
# vgcreate vg0 /dev/md5
Volume group "vg0" successfully created
# lvcreate -L 2G -n web vg0
Logical volume "web" created
# mkdir /web
# mkfs.ext4 /dev/vg0/web
# mount /dev/vg0/web /web/
# echo "doo">/web/index.html
# vim /etc/fstab
……
/dev/vg0/web /web ext4 defaults 1 2
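One reason for putting LVM on top of the array is online growth. A hedged sketch (vg0 should have roughly 2 GB free here, since the three-member RAID 5 set yields about 4 GB usable):
# lvextend -L +1G /dev/vg0/web     # grow the logical volume by 1 GB
# resize2fs /dev/vg0/web           # grow the mounted ext4 filesystem online to match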
# vim /etc/exports
/web 192.168.18.0/24(rw,sync,no_root_squash)
# /etc/init.d/rpcbind start
# /etc/init.d/nfs start
# showmount -e 192.168.18.34
Export list for 192.168.18.34:
/web 192.168.18.0/24
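Since rpcbind and nfs were already running for the earlier /www export, re-exporting without a restart also works; a standard alternative:
# exportfs -rv      # re-read /etc/exports and list what is exported
exporting 192.168.18.0/24:/web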
Mount on web1 and web2 (this replaces the earlier /www entry in /etc/fstab):
# vim /etc/fstab
……
192.168.18.34:/web /var/www/html nfs defaults,_netdev 1 2
# yum -y install nfs-utils
# mount 192.168.18.34:/web /var/www/html/
Client test:
https://s2.运维网.com/wyfs02/M00/8D/A4/wKiom1ikWbixCR5cAAA83L6JVJ8607.png-wh_500x0-wm_3-wmp_4-s_161578212.png
Refresh several times (or drive requests from a shell; see the sketch below), then check the connection table on the BL:
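A quick way to generate traffic from the client, assuming curl is installed:
# for i in $(seq 1 6); do curl -s http://192.168.18.41/; done   # six requests; rr spreads them across web1 and web2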
# ipvsadm -L -n -c
IPVS connection entries
pro expire   state      source                virtual            destination
TCP 01:58  FIN_WAIT   192.168.18.140:53988  192.168.18.41:80   192.168.18.32:80
TCP 01:57  FIN_WAIT   192.168.18.140:53977  192.168.18.41:80   192.168.18.33:80
TCP 00:58  SYN_RECV   192.168.18.140:53998  192.168.18.41:80   192.168.18.33:80
TCP 01:57  FIN_WAIT   192.168.18.140:53981  192.168.18.41:80   192.168.18.32:80
TCP 01:57  FIN_WAIT   192.168.18.140:53974  192.168.18.41:80   192.168.18.32:80
TCP 01:57  FIN_WAIT   192.168.18.140:53976  192.168.18.41:80   192.168.18.32:80