Environment:
OS: Red Hat 6.5 x64. This article installs haproxy and heartbeat from RPM packages and only exercises heartbeat's basic high-availability features; the end result is the same as the haproxy+keepalived cluster example in the previous article.
app1: 192.168.0.24
app2: 192.168.0.25
VIP : 192.168.0.26
http1: 192.168.0.24:8080, with a PHP environment configured.
http2: 192.168.0.25:8080, with a PHP environment configured.
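The layout can be summarized in a small diagram (heartbeat traffic runs over a dedicated 10.10.10.x link on eth1, as configured below):

```
              clients --> VIP 192.168.0.26 (held by one node)
                                 |
             +-------------------+-------------------+
             |                                       |
    app1  192.168.0.24                      app2  192.168.0.25
    haproxy :80, stats :91                  haproxy :80, stats :91
    heartbeat 10.10.10.24  <--- eth1 --->   heartbeat 10.10.10.25
             |                                       |
    http1 192.168.0.24:8080 (PHP)           http2 192.168.0.25:8080 (PHP)
```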
I. Dual-node Heartbeat configuration
1. Configure the hosts file on app1 and app2
| [iyunv@app1 soft]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.24 app1
192.168.0.25 app2
10.10.10.24 app1-priv
10.10.10.25 app2-priv
|
Note: the 10.x addresses are the heartbeat (private) IPs, the 192.168.x addresses are the service IPs, and the VIP is 192.168.0.26.
2. Install Heartbeat
Note: on RHEL/CentOS 6.x the heartbeat and haproxy packages install cleanly straight from the EPEL repository.
Install from RPM packages directly on both app1 and app2.
| [iyunv@app1 soft]# ll
total 1924
-rw-r--r--. 1 root root 72744 Jun 25 2012 cluster-glue-1.0.5-6.el6.x86_64.rpm
-rw-r--r--. 1 root root 119096 Jun 25 2012 cluster-glue-libs-1.0.5-6.el6.x86_64.rpm
-rw-r--r--. 1 root root 165292 Dec 3 2013 heartbeat-3.0.4-2.el6.x86_64.rpm
-rw-r--r--. 1 root root 269468 Dec 3 2013 heartbeat-libs-3.0.4-2.el6.x86_64.rpm
-rw-r--r--. 1 root root 38264 Oct 18 2014 perl-TimeDate-1.16-13.el6.noarch.rpm
-rw-r--r--. 1 root root 913840 Jul 3 2011 PyXML-0.8.4-19.el6.x86_64.rpm
-rw-r--r--. 1 root root 374068 Nov 10 20:45 resource-agents-3.9.5-24.el6_7.1.x86_64.rpm
[iyunv@app1 soft]#
[iyunv@app1 soft]# rpm -ivh *.rpm
warning: cluster-glue-1.0.5-6.el6.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
warning: heartbeat-3.0.4-2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing... ########################################### [100%]
1:cluster-glue-libs ########################################### [ 14%]
2:resource-agents ########################################### [ 29%]
3:PyXML ########################################### [ 43%]
4:perl-TimeDate ########################################### [ 57%]
5:cluster-glue ########################################### [ 71%]
6:heartbeat-libs ########################################### [ 86%]
7:heartbeat ########################################### [100%]
[iyunv@app1 soft]#
|
Copy the sample files into the ha.d directory:
| # cp /usr/share/doc/heartbeat-3.0.4/{authkeys,ha.cf,haresources} /etc/ha.d/
|
3. Set up the authentication key
| # vi /etc/ha.d/authkeys
auth 1
1 sha1 47e9336850f1db6fa58bc470bc9b7810eb397f04
# chmod 600 /etc/ha.d/authkeys
|
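The third line of authkeys carries an arbitrary shared secret; the 40-hex-digit value above is simply a SHA1 digest of random data. One way to generate your own (any hard-to-guess string works, as long as both nodes share the same file):

```shell
# Generate a random sha1-style key for /etc/ha.d/authkeys.
# Copy the same authkeys file to both app1 and app2, mode 600.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY"
```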
4. Add the HA resources file
| # vi /etc/ha.d/haresources
# Specifies which server and NIC initially holds the VIP. The trailing service name may be omitted, leaving only the VIP function, similar to keepalived.
app1 IPaddr::192.168.0.26/24/eth0
|
Heartbeat can also be configured to start and stop a resource during master/standby switchover, as below; however, if the service is stopped manually or crashes on its own, heartbeat does not monitor it, so the VIP will not fail over:
app1 IPaddr::192.168.0.26/24/eth0 httpd
Notes:
(1) httpd can be replaced with nginx, haproxy, or any other service that can be managed via `service xxx start / stop` and whose init script exists under /etc/init.d/.
(2) With `app1 IPaddr::192.168.0.26/24/eth0:1` the VIP shows up as eth0:1 in ifconfig output.
(3) Multiple VIPs can be configured here, each bound to a chosen host.
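As an example of the multi-VIP option, a mutual active/standby haresources with one VIP preferred on each node might look like this (the second VIP 192.168.0.27 is hypothetical; the file must be identical on both nodes):

```
app1 IPaddr::192.168.0.26/24/eth0
app2 IPaddr::192.168.0.27/24/eth0
```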
5. Configure the main heartbeat configuration file on app1
| # vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth1
ucast eth1 10.10.10.25
#mcast eth1 225.0.0.24 694 1 0
auto_failback on
node app1
node app2
respawn hacluster /usr/lib64/heartbeat/ipfail
ping 192.168.0.253
|
On app2:
| # vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth1
ucast eth1 10.10.10.24
#mcast eth1 225.0.0.25 694 1 0
auto_failback on
node app1
node app2
respawn hacluster /usr/lib64/heartbeat/ipfail
ping 192.168.0.253
|
6. Copy the three files just configured to app2
| # scp authkeys ha.cf haresources root@app2:/etc/ha.d/
root@app2's password:
authkeys 100% 56 0.1KB/s 00:00
ha.cf 100% 256 0.3KB/s 00:00
haresources 100% 78 0.1KB/s 00:00
|
7. Start the heartbeat service and verify it works
| Node 1:
[iyunv@app1 ha.d]# service heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
Node 2:
[iyunv@app2 ha.d]# service heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
|
8. Manually test VIP failover
(1) Manually switch to standby
| [iyunv@app1 ha.d]# /usr/share/heartbeat/hb_standby
Going standby [all].
|
Alternatively, running `service heartbeat stop` on the master also moves the VIP to the standby node.
(2) Manually switch back to master
| [iyunv@app1 ha.d]# /usr/share/heartbeat/hb_takeover
|
Alternatively, running `service heartbeat start` on the master also brings the VIP back.
(3) Watch the VIP takeover in the logs
Node 1:
# tail -f /var/log/messages
Jan 12 12:46:30 app1 heartbeat: [4519]: info: app1 wants to go standby [all]
Jan 12 12:46:30 app1 heartbeat: [4519]: info: standby: app2 can take our all resources
Jan 12 12:46:30 app1 heartbeat: [6043]: info: give up all HA resources (standby).
Jan 12 12:46:30 app1 ResourceManager(default)[6056]: info: Releasing resource group: app1 IPaddr::192.168.0.26/24/eth0
Jan 12 12:46:30 app1 ResourceManager(default)[6056]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.0.26/24/eth0 stop
Jan 12 12:46:30 app1 IPaddr(IPaddr_192.168.0.26)[6119]: INFO: IP status = ok, IP_CIP=
Jan 12 12:46:30 app1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.0.26)[6093]: INFO: Success
Jan 12 12:46:30 app1 heartbeat: [6043]: info: all HA resource release completed (standby).
Jan 12 12:46:30 app1 heartbeat: [4519]: info: Local standby process completed [all].
Jan 12 12:46:31 app1 heartbeat: [4519]: WARN: 1 lost packet(s) for [app2] [1036:1038]
Jan 12 12:46:31 app1 heartbeat: [4519]: info: remote resource transition completed.
Jan 12 12:46:31 app1 heartbeat: [4519]: info: No pkts missing from app2!
Jan 12 12:46:31 app1 heartbeat: [4519]: info: Other node completed standby takeover of all resources.
Node 2:
[iyunv@app2 ha.d]# tail -f /var/log/messages
Jan 12 12:46:30 app2 heartbeat: [4325]: info: app1 wants to go standby [all]
Jan 12 12:46:30 app2 heartbeat: [4325]: info: standby: acquire [all] resources from app1
Jan 12 12:46:30 app2 heartbeat: [5459]: info: acquire all HA resources (standby).
Jan 12 12:46:30 app2 ResourceManager(default)[5472]: info: Acquiring resource group: app1 IPaddr::192.168.0.26/24/eth0
Jan 12 12:46:30 app2 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.0.26)[5500]: INFO: Resource is stopped
Jan 12 12:46:30 app2 ResourceManager(default)[5472]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.0.26/24/eth0 start
Jan 12 12:46:31 app2 IPaddr(IPaddr_192.168.0.26)[5625]: INFO: Adding inet address 192.168.0.26/24 with broadcast address 192.168.0.255 to device eth0
Jan 12 12:46:31 app2 IPaddr(IPaddr_192.168.0.26)[5625]: INFO: Bringing device eth0 up
Jan 12 12:46:31 app2 IPaddr(IPaddr_192.168.0.26)[5625]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.0.26 eth0 192.168.0.26 auto not_used not_used
Jan 12 12:46:31 app2 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.0.26)[5599]: INFO: Success
Jan 12 12:46:31 app2 heartbeat: [5459]: info: all HA resource acquisition completed (standby).
Jan 12 12:46:31 app2 heartbeat: [4325]: info: Standby resource acquisition done [all].
Jan 12 12:46:31 app2 heartbeat: [4325]: info: remote resource transition completed.
To add a VIP address manually:
/etc/ha.d/resource.d/IPaddr 192.168.0.27/24/eth0:2 start
(4) Check the VIP address assignments
| Node 1:
[iyunv@app1 ha.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4c:39:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.24/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.26/24 brd 192.168.0.255 scope global secondary eth0
inet6 fe80::20c:29ff:fe4c:3943/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4c:39:4d brd ff:ff:ff:ff:ff:ff
inet 10.10.10.24/24 brd 10.10.10.255 scope global eth1
inet6 fe80::20c:29ff:fe4c:394d/64 scope link
valid_lft forever preferred_lft forever
[iyunv@app1 ha.d]#
Node 2:
[iyunv@app2 ha.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:cf:05:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.25/24 brd 192.168.0.255 scope global eth0
inet6 fe80::20c:29ff:fecf:599/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:cf:05:a3 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.25/24 brd 10.10.10.255 scope global eth1
inet6 fe80::20c:29ff:fecf:5a3/64 scope link
valid_lft forever preferred_lft forever
[iyunv@app2 ha.d]#
|
II. HAProxy reverse-proxy configuration
Perform the following on both app1 and app2.
1. Allow binding to non-local IPs
| # vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
# sysctl -p
|
2. Install the haproxy package
| # rpm -ivh haproxy-1.4.24-2.el6.x86_64.rpm
|
3. Create the configuration file
1) On app1:
| # vi /usr/local/haproxy/conf/haproxy.cfg
global
log 127.0.0.1 local0
maxconn 4000
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
nbproc 1
stats socket /var/lib/haproxy/stats
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
option httpclose
option forwardfor
retries 3
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
timeout check 1s
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
listen stats
mode http
bind 0.0.0.0:91
stats enable
stats uri /admin
stats realm "Admin console"
stats auth admin:123456
stats hide-version
stats refresh 10s
stats admin if TRUE
frontend web_proxy
bind *:80
mode http
acl url_dynamic path_end -i .php
use_backend phpserver if url_dynamic
default_backend webservers
backend webservers
balance roundrobin
option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.0.26
server node01 192.168.0.24:8080 weight 3 check inter 2000 rise 2 fall 1
server node02 192.168.0.25:8080 weight 3 check inter 2000 rise 2 fall 1
backend phpserver
balance roundrobin
option httpchk GET /test.php
server node01 192.168.0.24:8080 weight 3 check inter 2000 rise 2 fall 1
|
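The frontend above routes by file extension: `acl url_dynamic path_end -i .php` matches any request path ending in `.php` (case-insensitively) and sends it to the phpserver backend, while everything else falls through to webservers. The decision can be mimicked in shell to check which backend a given path would hit (an illustration only, not part of HAProxy):

```shell
# Toy model of the frontend's backend selection.
route() {
    p=$(printf '%s' "$1" | tr 'A-Z' 'a-z')   # -i makes the ACL case-insensitive
    case "$p" in
        *.php) echo phpserver ;;   # acl url_dynamic path_end -i .php
        *)     echo webservers ;;  # default_backend webservers
    esac
}

route /index.php   # -> phpserver
route /test.html   # -> webservers
```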
2) On app2:
| # vi /usr/local/haproxy/conf/haproxy.cfg
global
log 127.0.0.1 local0
maxconn 4000
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
nbproc 1
stats socket /var/lib/haproxy/stats
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
option httpclose
option forwardfor
retries 3
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
timeout check 1s
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
listen stats
mode http
bind 0.0.0.0:91
stats enable
stats uri /admin
stats realm "Admin console"
stats auth admin:123456
stats hide-version
stats refresh 10s
stats admin if TRUE
frontend web_proxy
bind *:80
mode http
acl url_dynamic path_end -i .php
use_backend phpserver if url_dynamic
default_backend webservers
backend webservers
balance roundrobin
option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.0.26
server node01 192.168.0.24:8080 weight 3 check inter 2000 rise 2 fall 1
server node02 192.168.0.25:8080 weight 3 check inter 2000 rise 2 fall 1
backend phpserver
balance roundrobin
option httpchk GET /test.php
server node01 192.168.0.24:8080 weight 3 check inter 2000 rise 2 fall 1
|
Note: the two nodes back each other up in active/standby fashion, each tuned to prefer its own local application as the primary; a load-balancing (active/active) layout is also possible.
4. Configure the HAProxy log file on app1 and app2
1) Configure haproxy logging; without this, no haproxy log is written by default. Note the difference from RHEL/CentOS 5.x.
| # vi /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 127.0.0.1
local0.* /var/log/haproxy.log
*.info;mail.none;authpriv.none;cron.none;local0.none /var/log/messages
|
Note: the fifth line stops haproxy entries from also being written to /var/log/messages.
Then restart rsyslog manually:
service rsyslog restart
Note: on RHEL/CentOS 6.x the haproxy service runs as the haproxy user by default, and when haproxy is installed from the RPM package the system already sets up log rotation for the log file.
III. Heartbeat configuration summary
1. Coming from keepalived, heartbeat's VIP handling is quite similar: it supports multiple VIPs and mutual active/standby setups, and either tool can provide high availability for the load balancer. Heartbeat's drawback is that it cannot monitor resource health directly; a separate script is needed for that.
2. If only load balancing is needed, I would rather keep haproxy and httpd running on both nodes at all times, so that whichever node holds the VIP serves traffic, instead of having heartbeat start the services during VIP failover.
3. Resource management is heartbeat's real strength; heartbeat+drbd+nfs/mysql is its classic combination and deserves deeper study.
4. The new features of heartbeat v3 also deserve deeper study.
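Since heartbeat in this haresources mode does not monitor resource health, a small watchdog run from cron is one common workaround. A rough sketch, assuming the VIP sits on eth0 and haproxy is the service being watched (paths and names as used earlier in this article):

```shell
#!/bin/sh
# Hypothetical watchdog: if this node holds the VIP but haproxy has died,
# yield all resources to the peer with hb_standby. Run from cron, e.g. every minute.
VIP=192.168.0.26

# Pure decision helper, kept separate so the logic is easy to test:
# $1 = "yes" if this node holds the VIP, $2 = "yes" if haproxy is running.
decide() {
    if [ "$1" = yes ] && [ "$2" != yes ]; then
        echo failover
    else
        echo ok
    fi
}

holds_vip()  { ip addr show eth0 2>/dev/null | grep -q "inet $VIP/" && echo yes || echo no; }
haproxy_ok() { pidof haproxy >/dev/null 2>&1 && echo yes || echo no; }

if [ "$(decide "$(holds_vip)" "$(haproxy_ok)")" = failover ]; then
    /usr/share/heartbeat/hb_standby
fi
```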