①: Restart the keepalived service on both nodes: service keepalived restart
②: On the master node, create a file named down in /etc/keepalived/ (touch down); the virtual IP then moves to the backup node.
Verification:
Master node:
[root@node1 keepalived]# touch down
[root@node1 keepalived]# ip addr show
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:af:c6 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.208/16 brd 172.16.255.255 scope global eth0
inet6 fe80::20c:29ff:fe40:afc6/64 scope link
valid_lft forever preferred_lft forever
Backup node:
[root@node2 ~]# ip addr show
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:01:46:91 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.165/16 brd 172.16.255.255 scope global eth0
inet 172.16.39.100/32 scope global eth0
inet6 fe80::20c:29ff:fe01:4691/64 scope link
valid_lft forever preferred_lft forever
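The down-file trick works because of a vrrp_script block in keepalived.conf that watches for the file and docks this node's priority while it exists. A minimal sketch (the script name chk_down is an assumption; the weight of -2 matches the 100 → 98 drop used later in this walkthrough):

```
vrrp_script chk_down {
    # Exit non-zero when /etc/keepalived/down exists; keepalived then
    # applies the (negative) weight to this instance's priority.
    script "[ -f /etc/keepalived/down ] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    ...
    track_script {
        chk_down
    }
}
```

Removing the file (rm -f /etc/keepalived/down) restores the original priority, and with preemption (keepalived's default) the VIP moves back.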
1. How do we send a notification on a state transition?
2. How do we configure ipvs?
virtual server
real server
health check
3. How do we make a specific service highly available?
4. How do we build a master/master model with multiple virtual routers?
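Question 1 is normally answered with a notify script. Below is a minimal sketch of a notify.sh whose mail subject mirrors the messages shown in the mail spool later in this walkthrough; the recipient address and the VIP are assumptions taken from this setup:

```shell
#!/bin/bash
# notify.sh -- called by keepalived via notify_master/notify_backup/notify_fault
# with the new state as its argument; mails a one-line notice to the contact.
vip=172.16.39.100
contact=root@localhost

# Compose the mail subject for a given state (master/backup/fault).
subject_for() {
    echo "$(hostname) to be $1: ${vip} floating"
}

# Send the actual notification mail for a state transition.
notify() {
    echo "$(date '+%F %T'): vrrp transition, $(hostname) changed to be $1" \
        | mail -s "$(subject_for "$1")" "$contact"
}

case "${1:-}" in
    master|backup|fault) notify "$1" ;;
esac
```

It is hooked into the vrrp_instance block with notify_master "/etc/keepalived/notify.sh master", and likewise notify_backup and notify_fault.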
④: As before, finish the configuration on the master node, copy keepalived.conf and notify.sh to the backup node, then adjust the state and priority there.
Verification:
On the master node, touch down lowers its priority by 2, from 100 to 98; the backup node's priority of 99 is now higher, so the IP address moves to the backup node.
Master node:
[root@node1 keepalived]# touch down
Check the mail:
[root@node1 keepalived]# mail
"/var/spool/mail/root": 4 messages 4 new
N 1 root Mon Apr 28 16:13 18/721 "node1.corosync.com to be master: 172.16.39.100 floating"
N 2 root Mon Apr 28 16:15 18/721 "node1.corosync.com to be backup: 172.16.39.100 floating"
The mail shows the master node changed from master to backup.
On the backup node:
Check the IP addresses:
[root@node2 ~]# ip addr show
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:01:46:91 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.165/16 brd 172.16.255.255 scope global eth0
inet 172.16.39.100/32 scope global eth0
inet6 fe80::20c:29ff:fe01:4691/64 scope link
valid_lft forever preferred_lft forever
Check the mail on the backup node:
[root@node2 ~]# mail
N 1 root Mon Apr 28 17:27 18/721 "node2.corosync.com to be backup: 172.16.39.100 floating"
N 2 root Mon Apr 28 17:29 18/721 "node2.corosync.com to be master: 172.16.39.100 floating"
The mail shows it has switched from backup to master.
2. Configure ipvs to build a cluster service
virtual server
real server
health check
Define the virtual server:
[root@node ~]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: httpd: apr_sockaddr_info_get() failed for node.corosync.com
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
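Once httpd is running on the real servers, the virtual server, real servers, and health check are all declared in keepalived.conf. A sketch under assumed values (the real-server addresses 172.16.100.11 and 172.16.100.12, the rr scheduler, and the timeouts are placeholders, not taken from this walkthrough; only the VIP 172.16.39.100 is):

```
virtual_server 172.16.39.100 80 {
    delay_loop 6              # seconds between health-check runs
    lb_algo rr                # round-robin scheduling
    lb_kind DR                # direct-routing forwarding
    protocol TCP

    real_server 172.16.100.11 80 {
        weight 1
        HTTP_GET {            # health check: fetch / and expect 200
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.100.12 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
```

A real server that fails the HTTP_GET check is removed from the ipvs table until it passes again; the resulting rules can be inspected with ipvsadm -L -n.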
vrrp_instance VI_1 {
interface eth0
state MASTER # BACKUP for slave routers
priority 101 # 100 for BACKUP
virtual_router_id 51
garp_master_delay 1
}
vrrp_instance VI_2 {
interface eth0
state BACKUP # the second virtual router is BACKUP on this node
priority 100 # 100 for BACKUP
virtual_router_id 59 # change this id; it must differ from the first virtual router's
garp_master_delay 1
}
Backup node:
vrrp_instance VI_1 {
interface eth0
state BACKUP # BACKUP for slave routers
priority 100 # 100 for BACKUP
virtual_router_id 51
garp_master_delay 1
}
vrrp_instance VI_2 {
interface eth0
state MASTER # on the backup node, the second virtual router is MASTER
priority 101 # higher than the other node's 100
virtual_router_id 59 # same id as the second virtual router on the other node
garp_master_delay 1
}
Test:
Master node 1:
[root@node1 keepalived]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:af:c6 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.208/16 brd 172.16.255.255 scope global eth0
inet 172.16.39.100/32 scope global eth0
inet6 fe80::20c:29ff:fe40:afc6/64 scope link
valid_lft forever preferred_lft forever
The virtual IP 172.16.39.100 is on this node.
Master node 2:
[root@node2 keepalived]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:01:46:91 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.165/16 brd 172.16.255.255 scope global eth0
inet 172.16.39.200/32 scope global eth0
inet6 fe80::20c:29ff:fe01:4691/64 scope link
valid_lft forever preferred_lft forever
The virtual IP 172.16.39.200 is on this node.
Stop keepalived on master node 2:
service keepalived stop
Check the IPs on master node 1:
[root@node1 keepalived]# ip a
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:40:af:c6 brd ff:ff:ff:ff:ff:ff
inet 172.16.249.208/16 brd 172.16.255.255 scope global eth0
inet 172.16.39.100/32 scope global eth0
inet 172.16.39.200/32 scope global eth0
inet6 fe80::20c:29ff:fe40:afc6/64 scope link
valid_lft forever preferred_lft forever
Both virtual IPs are now on master node 1.
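When scripting checks like the ones above, a small helper that tests whether a VIP is bound to an interface is handy. A sketch; the device name eth0 and the VIP follow this walkthrough:

```shell
#!/bin/bash
# has_vip DEV ADDR -- succeed (exit 0) if ADDR is configured on DEV.
has_vip() {
    ip -o addr show dev "$1" 2>/dev/null | grep -qw "$2"
}

# Example: after restarting keepalived on node2, wait up to 10 seconds for
# VI_2's VIP to be claimed back (preemption is keepalived's default):
#   service keepalived start
#   for i in $(seq 1 10); do has_vip eth0 172.16.39.200 && break; sleep 1; done
```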