Objective:
Use corosync/openais + ldirectord to provide high availability for the Director in an LVS (DR) setup.
Environment:
RedHat 5.8
VIP 172.16.45.2
Real Server:
RS1 172.16.45.5
RS2 172.16.45.6
Director:
node1.yue.com 172.16.45.11
node2.yue.com 172.16.45.12
Required RPM packages:
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
ldirectord-1.0.1-1.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
openais-1.1.3-1.6.el5.i386.rpm
openaislib-1.1.3-1.6.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
perl-MailTools-2.08-1.el5.rf.noarch.rpm
perl-Pod-Escapes-1.04-1.2.el5.rf.noarch.rpm
perl-Pod-Simple-3.07-1.el5.rf.noarch.rpm
perl-Test-Pod-1.42-1.el5.rf.noarch.rpm
perl-TimeDate-1.16-5.el5.noarch.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
In addition, have the system installation disc ready to serve as a yum repository.
I. Configure the Real Servers first
1. Synchronize the time on the two Real Servers
# hwclock -s
2. Install Apache
# yum -y install httpd
Provide a web page on each Real Server:
[root@RS1 ~]# echo "Real Server 1" > /var/www/html/index.html
[root@RS2 ~]# echo "Real Server 2" > /var/www/html/index.html
[root@RS1 ~]# vi /etc/httpd/conf/httpd.conf
Change: ServerName RS1.yue.com
[root@RS2 ~]# vi /etc/httpd/conf/httpd.conf
Change: ServerName RS2.yue.com
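Before starting Apache, the edited configuration can be syntax-checked first (a small sanity step, not shown in the original):
# httpd -t    should report "Syntax OK"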
# /etc/init.d/httpd start
3. Edit the relevant kernel parameters on RS1 (the kernel parameters and interface address configured here take effect immediately but are not persistent; to make them permanent they must be written to the corresponding configuration files, as sketched at the end of this step)
[root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
[root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
[root@RS1 ~]# ifconfig lo:0 172.16.45.2 broadcast 172.16.45.255 netmask 255.255.255.255 up    configure the VIP
[root@RS1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:7E:8B:C6
inet addr:172.16.45.5 Bcast:172.16.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:144986 errors:0 dropped:0 overruns:0 frame:0
TX packets:39438 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:29527500 (28.1 MiB) TX bytes:5000577 (4.7 MiB)
Interrupt:67 Base address:0x2000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:140 errors:0 dropped:0 overruns:0 frame:0
TX packets:140 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:17628 (17.2 KiB) TX bytes:17628 (17.2 KiB)
lo:0 Link encap:Local Loopback
inet addr:172.16.45.2 Mask:255.255.255.255
UP LOOPBACK RUNNING MTU:16436 Metric:1
[root@RS1 ~]# elinks -dump http://172.16.45.2    test that it responds
Real Server 1
[root@RS1 ~]# elinks -dump http://172.16.45.5
Real Server 1
Set the service to start automatically at boot:
[root@RS1 ~]# chkconfig --add httpd
[root@RS1 ~]# chkconfig httpd on
[root@RS1 ~]# chkconfig --list httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
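As noted at the start of this step, the ARP parameters and the lo:0 address do not survive a reboot. A minimal persistence sketch, assuming the stock /etc/sysctl.conf and /etc/rc.d/rc.local of RHEL 5:
[root@RS1 ~]# vi /etc/sysctl.conf    append the ARP settings
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
[root@RS1 ~]# sysctl -p    apply and verify
[root@RS1 ~]# vi /etc/rc.d/rc.local    re-create the VIP on lo:0 at boot
ifconfig lo:0 172.16.45.2 broadcast 172.16.45.255 netmask 255.255.255.255 up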
4. Apply the same settings on RS2
[root@RS2 ~]# elinks -dump http://172.16.45.2    test that it responds
Real Server 2
[root@RS2 ~]# elinks -dump http://172.16.45.6
Real Server 2
II. Configure the Directors
1. Set up SSH key-based mutual trust between the two nodes
# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
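For trust in both directions, the mirror-image commands presumably run on node2 as well (a sketch; like the commands above, it assumes the node names resolve, e.g. via the /etc/hosts entries of step 2):
[root@node2 ~]# ssh-keygen -t rsa
[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1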
2. Host names
# vi /etc/hosts
172.16.45.11 node1.yue.com node1
172.16.45.12 node2.yue.com node2
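node2 needs the same entries; since trust is already in place, one way (a sketch) is to copy the file over and verify resolution:
[root@node1 ~]# scp /etc/hosts node2:/etc/hosts
[root@node1 ~]# ping -c 1 node2.yue.com    verify that the name resolves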
3. Time synchronization
# hwclock -s
4. Install the RPM packages listed above
# yum -y --nogpgcheck localinstall *.rpm
5. Copy the RPM packages to node2 and install them there
[root@node1 tmp]# scp *.rpm node2:/tmp
[root@node1 tmp]# ssh node2 'yum -y --nogpgcheck localinstall /tmp/*.rpm'
6. Disable the heartbeat service
[root@node1 ~]# chkconfig --list heartbeat
heartbeat 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@node1 ~]# chkconfig heartbeat off
[root@node1 ~]# ssh node2 'chkconfig heartbeat off'
7. Provide the corosync configuration file
[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# vi /etc/corosync/corosync.conf
compatibility: whitetank    compatibility with earlier (whitetank) releases
totem {    how heartbeat messages are passed among the corosync nodes
version: 2
secauth: off    secure authentication
threads: 0    number of threads to start
interface {    which network interface carries the heartbeat traffic; with multiple interfaces, each ringnumber must differ
ringnumber: 0
bindnetaddr: 172.16.45.0    network address to bind to
mcastaddr: 226.94.100.1    multicast address
mcastport: 5405
}
}
logging {
fileline: off
to_stderr: no    send to standard error output
to_logfile: yes
# to_syslog: yes
logfile: /var/log/corosync.log
debug: off
timestamp: on    whether to record timestamps
logger_subsys {
subsys: AMF    enabling AMF requires the openais and openaislib packages
debug: off
}
}
amf {
mode: disabled
}
# In addition, append the following:
service {
ver: 0
name: pacemaker
use_mgmtd: yes
}
aisexec {
user: root
group: root
}
8. Node authentication key
[root@node1 corosync]# corosync-keygen    generate the node authentication key
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
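If the host is short on entropy, corosync-keygen pauses at the "Press keys" prompt until enough keyboard input arrives. The result can then be checked (the 128-byte size matches the scp transfer shown in step 10 below):
[root@node1 corosync]# ls -l /etc/corosync/authkey    expect a root-owned, mode 0400 file of 128 bytes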
9. Provide the ldirectord configuration file
[root@node1 corosync]# cp /usr/share/doc/ldirectord-1.0.1/ldirectord.cf /etc/ha.d/
[root@node1 corosync]# vi /etc/ha.d/ldirectord.cf
checktimeout=3
checkinterval=1
autoreload=yes
quiescent=no
virtual=172.16.45.2:80
        real=172.16.45.5:80 gate
        real=172.16.45.6:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=rr
        # persistent=600
        # netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="test.html"
        receive="Real Server OK"
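The negotiate check above fetches test.html and expects the string "Real Server OK", but the walkthrough never creates that page. Each Real Server needs one (a sketch, assuming the default DocumentRoot used earlier):
[root@RS1 ~]# echo "Real Server OK" > /var/www/html/test.html
[root@RS2 ~]# echo "Real Server OK" > /var/www/html/test.html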
10. Copy the configuration files to node2
[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
authkey 100% 128 0.1KB/s 00:00
corosync.conf 100% 526 0.5KB/s 00:00
[root@node1 corosync]# scp /etc/ha.d/ldirectord.cf node2:/etc/ha.d/
ldirectord.cf 100% 7593 7.4KB/s 00:00
11. Start the corosync service
[root@node1 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@node1 corosync]# netstat -unlp
udp 0 0 172.16.45.11:5405 0.0.0.0:* 4019/corosync
udp 0 0 226.94.100.1:5405 0.0.0.0:* 4019/corosync
Check whether the corosync engine started properly:
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/corosync.log
Aug 05 17:32:43 corosync [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Aug 05 17:32:43 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Aug 05 17:33:48 corosync [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:170.
Aug 05 17:34:17 corosync [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Aug 05 17:34:17 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check whether the initial membership notifications were sent out properly:
[root@node1 corosync]# grep "TOTEM" /var/log/corosync.log
Aug 05 17:32:43 corosync [TOTEM ] Initializing transport (UDP/IP).
Aug 05 17:32:43 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Aug 05 17:32:44 corosync [TOTEM ] The network interface [172.16.45.11] is now up.
Aug 05 17:32:44 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Aug 05 17:34:17 corosync [TOTEM ] Initializing transport (UDP/IP).
Aug 05 17:34:17 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Aug 05 17:34:17 corosync [TOTEM ] The network interface [172.16.45.11] is now up.
Aug 05 17:34:18 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether pacemaker started properly:
[root@node1 corosync]# grep pcmk_startup /var/log/corosync.log
Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Aug 05 17:32:44 corosync [pcmk ] Logging: Initialized pcmk_startup
Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Service: 9
Aug 05 17:32:44 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.yue.com
Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Aug 05 17:34:18 corosync [pcmk ] Logging: Initialized pcmk_startup
Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Service: 9
Aug 05 17:34:18 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.yue.com
Check whether any errors occurred during startup:
[root@node1 corosync]# grep ERROR: /var/log/corosync.log | grep -v unpack_resources
Aug 05 17:32:45 corosync [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=4764, rc=100)
Aug 05 17:34:19 corosync [pcmk ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=4865, rc=100)
The mgmtd errors above are expected in this setup: corosync.conf sets use_mgmtd: yes, but the mgmtd daemon ships in the separate pacemaker-mgmt package, which is not among the packages installed here, so these two lines can be ignored. If none of the commands above revealed real problems, corosync can next be started on node2 with the following command.
Note: start node2 from node1 using the command below; do not start it directly on node2.
[root@node1 corosync]# ssh node2 '/etc/init.d/corosync start'
Starting Corosync Cluster Engine (corosync): [ OK ]
Check the startup state of the cluster nodes with:
[root@node1 ~]# crm status
============
Last updated: Sun Aug 5 17:44:02 2012
Stack: openais
Current DC: node1.yue.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ node1.yue.com node2.yue.com ]
Configure cluster properties: disable STONITH
corosync/pacemaker enables STONITH by default, but this cluster has no STONITH device, so the default configuration is not yet usable; this can be verified with the following command:
[root@node1 ~]# crm_verify -L
crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_verify[4928]: 2012/08/05_17:44:59 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
-V may provide more details
We can first disable STONITH as follows
(alternatively, the single command # crm configure property stonith-enabled=false achieves the same):
[root@node1 ~]# crm    enter the interactive crm shell; at every level, help lists the commands available at that position
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify    check syntax
crm(live)configure# commit    commit the change
crm(live)configure# show
node node1.yue.com
node node2.yue.com
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false"    STONITH is now disabled
The crm and crm_verify commands used above are the command-line cluster management tools shipped with pacemaker 1.0 and later; they can be run on any node in the cluster.
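One property this walkthrough leaves at its default is no-quorum-policy. When one node of a two-node cluster actually fails (as opposed to standing by, which still contributes a vote), the survivor loses quorum and, by default, stops all resources. A setting commonly recommended for two-node clusters, as a sketch:
[root@node1 ~]# crm configure property no-quorum-policy=ignore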
III. Add resources to the cluster
corosync supports resource agents of the heartbeat, LSB, and OCF classes, among others; LSB and OCF are the most commonly used, and the stonith class exists solely for configuring STONITH devices.
The classes supported by the current cluster can be listed with:
# crm ra classes
heartbeat
lsb
ocf / heartbeat pacemaker
stonith
To list all the resource agents in a given class, use commands like the following:
# crm ra list lsb
# crm ra list ocf heartbeat
# crm ra list ocf pacemaker
# crm ra list stonith
# crm ra info [<class>:[<provider>:]]<resource_agent>
For example:
# crm ra info ocf:heartbeat:IPaddr
As a quick illustration of managing resources from the crm shell (the web group shown here comes from a different configuration and is for demonstration only):
[root@node1 ~]# crm
crm(live)# resource
crm(live)resource# status    show resource status
Resource Group: web
Web_server (lsb:httpd) Started
WebIP (ocf::heartbeat:IPaddr) Started
crm(live)resource# stop web    stop a resource
crm(live)resource# status
Resource Group: web
Web_server (lsb:httpd) Stopped
WebIP (ocf::heartbeat:IPaddr) Stopped
crm(live)resource#
crm(live)configure# delete web    delete a group
1. Next, create an IP address resource for the web cluster, to be used when serving web requests through the cluster. This can be done as follows.
Syntax for defining a resource:
primitive <rsc> [<class>:[<provider>:]]<type>
    [params attr_list]
    [operations id_spec]
    [op op_type [<attribute>=<value>...] ...]
op_type :: start | stop | monitor
Example:
primitive apcfence stonith:apcsmart \
params ttydev=/dev/ttyS0 hostlist="node1 node2" \
op start timeout=60s \
op monitor interval=30m timeout=60s
Some of the parameters available when defining an IP resource:
Parameters (* denotes required, [] the default):
ip* (string) : IPv4 address
The IPv4 address to be configured in dotted quad notation, for example "192.168.1.1".
nic (string, [eth0]) : Network interface
The base network interface on which the IP address will be brought online.
cidr_netmask (string) : Netmask
The netmask for the interface, in CIDR format (i.e. 24) or in dotted-quad notation (255.255.255.0).
broadcast (string) : Broadcast address
lvs_support (boolean, [false]) : Enable support for LVS DR
Operations' defaults (advisory minimum):
start timeout=20s
stop timeout=20s
monitor interval=5s timeout=20s
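The Web_IP definition below specifies no op lines and therefore relies on these advisory defaults; a variant with an explicit monitor operation (a sketch, not what this walkthrough runs) would be:
crm(live)configure# primitive Web_IP ocf:heartbeat:IPaddr2 params ip=172.16.45.2 nic=eth0 cidr_netmask=32 broadcast=172.16.45.255 lvs_support=true op monitor interval=10s timeout=20s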
Define the IP resource:
# crm
crm(live)# configure
crm(live)configure# primitive Web_IP ocf:heartbeat:IPaddr2 params ip=172.16.45.2 nic=eth0 cidr_netmask=32 broadcast=172.16.45.255 lvs_support=true
crm(live)configure# show
node node1.yue.com
node node2.yue.com
primitive Web_IP ocf:heartbeat:IPaddr2 \
params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
primitive Web_ldirectord ocf:heartbeat:ldirectord
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# verify
crm(live)configure# commit    commit the change
crm(live)configure# cd
crm(live)# status    check the cluster status
============
Last updated: Sun Aug 5 19:45:08 2012
Stack: openais
Current DC: node1.yue.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node1.yue.com node2.yue.com ]
Web_IP (ocf::heartbeat:IPaddr2): Started node2.yue.com
crm(live)# bye
Some of the parameters available when defining an ldirectord resource:
Parameters (* denotes required, [] the default):
configfile (string, [/etc/ha.d/ldirectord.cf]): configuration file path
The full pathname of the ldirectord configuration file.
ldirectord (string, [/usr/sbin/ldirectord]): ldirectord binary path
The full pathname of the ldirectord.
Operations' defaults (advisory minimum):
start timeout=15
stop timeout=15
monitor interval=20 timeout=10
Define the ldirectord resource:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive Web_ldirectord ocf:heartbeat:ldirectord
crm(live)configure# show
node node1.yue.com
node node2.yue.com
primitive Web_ldirectord ocf:heartbeat:ldirectord
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# commit    commit the change
crm(live)configure# cd
crm(live)# status
============
Last updated: Sun Aug 5 19:44:05 2012
Stack: openais
Current DC: node1.yue.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node1.yue.com node2.yue.com ]
Web_IP (ocf::heartbeat:IPaddr2): Started node2.yue.com
Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
Check where Web_IP is active:
[root@node2 tmp]# ip addr show    note: this is run on node2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:d9:75:df brd ff:ff:ff:ff:ff:ff
inet 172.16.45.12/16 brd 172.16.255.255 scope global eth0
inet 172.16.45.2/32 brd 172.16.45.255 scope global eth0
Note in the output above that Web_IP and Web_ldirectord were started on different nodes; by default pacemaker spreads resources across nodes, so the two must be tied together.
Define a resource colocation constraint:
Usage:
colocation <id> <score>: <rsc>[:<role>] <rsc>[:<role>] ...
           (constraint name, score, resource 1, resource 2, ...)
Example:
colocation dummy_and_apache -inf: apache dummy
colocation c1 inf: A ( B C )
colocation webip_with_webserver inf: WebIP Web_server
Define a resource order constraint (resources listed first are started first and stopped last; use show xml to inspect the details):
Usage:
order <id> score-type: <rsc>[:<action>] <rsc>[:<action>] ...
    [symmetrical=<bool>]
score-type :: advisory | mandatory | <score>
(advisory = a recommendation; mandatory = must be enforced)
Example:
order c_apache_1 mandatory: apache:start ip_1 --> start apache first, then ip_1
order o1 inf: A ( B C ) --> start A first, then B and C (B and C in no fixed order)
order webserver_after_webip mandatory: WebIP Web_server --> start WebIP first, then Web_server
Define a resource location constraint:
Usage:
location <id> <rsc> {node_pref|rules}
         (name, resource, score: node name)
node_pref :: <score>: <node>
rules ::
rule [id_spec] [$role=<role>] <score>: <expression>
[rule [id_spec] [$role=<role>] <score>: <expression> ...]
Examples:
location conn_1 internal_www 100: node1
location webserver_on_node1 Web_server inf: node1.yue.com
location conn_1 internal_www \
rule 50: #uname eq node1 \
rule pingd: defined pingd
location conn_2 dummy_float \
rule -inf: not_defined pingd or pingd number:lte 0
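No location constraint is used in this walkthrough, but if node1 were the preferred Director, a sketch following the syntax above would be:
crm(live)configure# location Web_IP_prefer_node1 Web_IP 100: node1.yue.com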
Define the colocation constraint:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
crm(live)configure# verify
crm(live)configure# show
node node1.yue.com
node node2.yue.com
primitive Web_IP ocf:heartbeat:IPaddr2 \
params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
primitive Web_ldirectord ocf:heartbeat:ldirectord
colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
============
Last updated: Sun Aug 5 19:50:51 2012
Stack: openais
Current DC: node1.yue.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node1.yue.com node2.yue.com ]
Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
Web_IP (ocf::heartbeat:IPaddr2): Started node1.yue.com
crm(live)# exit
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.45.2:80 rr
-> 172.16.45.6:80 Route 1 0 1
-> 172.16.45.5:80 Route 1 0 0
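With both resources on node1 and the IPVS table populated, the service can now be exercised from a client in the 172.16.0.0/16 network (a sketch; with scheduler=rr, repeated requests should alternate between the two pages):
# elinks -dump http://172.16.45.2    expect "Real Server 1"
# elinks -dump http://172.16.45.2    expect "Real Server 2"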
Define the order constraint:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# order ld_after_ip mandatory: Web_IP Web_ldirectord
crm(live)configure# verify
crm(live)configure# show
node node1.yue.com
node node2.yue.com
primitive Web_IP ocf:heartbeat:IPaddr2 \
params ip="172.16.45.2" nic="eth0" cidr_netmask="32" broadcast="172.16.45.255" lvs_support="true"
primitive Web_ldirectord ocf:heartbeat:ldirectord
colocation Web_IP_with_Web_ld inf: Web_IP Web_ldirectord
order ld_after_ip inf: Web_IP Web_ldirectord
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# commit
crm(live)configure# bye
[root@node1 ~]# ssh node2 'crm node standby'    puts the node on which the command runs (here node2) into standby
[root@node1 ~]# crm status
============
Last updated: Sun Aug 5 20:08:19 2012
Stack: openais
Current DC: node1.yue.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Node node2.yue.com: standby
Online: [ node1.yue.com ]
Web_ldirectord (ocf::heartbeat:ldirectord): Started node1.yue.com
Web_IP (ocf::heartbeat:IPaddr2): Started node1.yue.com
[root@node1 tmp]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.45.2:80 rr
-> 172.16.45.6:80 Route 1 0 2
-> 172.16.45.5:80 Route 1 0 1
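After the test, the standby node can be brought back online (a sketch, following the same remote-invocation pattern used above), after which crm status should again show both nodes online:
[root@node1 ~]# ssh node2 'crm node online'    bring the standby node back
[root@node1 ~]# crm status    both nodes should show Online again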