
[Experience Share] Automatic failover for PostgreSQL streaming replication with corosync + pacemaker (Part 2)

5. Testing

5.1 Standby node failure

Kill the postgres processes on node2 to simulate a database crash on the standby node:
[iyunv@node2 ~]# killall -9 postgres
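
Before looking at the cluster view, you can confirm from the primary that the standby's replication connection has dropped (a minimal check, assuming psql is in the postgres user's PATH on node1; pg_stat_replication lists active standby connections):

[postgres@node1 ~]$ psql -c "select client_addr, state, sync_state from pg_stat_replication;"

An empty result set means node2 is no longer streaming.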




Check the cluster status at this point:
[iyunv@node1 ~]# crm_mon -Afr -1
Last updated: Wed Jan 22 02:15:06 2014
Last change: Wed Jan 22 02:15:33 2014 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node1
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Stopped: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                 : 100      
    + master-pgsql                     : 1000      
    + pgsql-data-status                : LATEST   
    + pgsql-master-baseline            : 0000000006000078
    + pgsql-status                     : PRI      
* Node node2:
    + default_ping_set                 : 100      
    + master-pgsql                     : -INFINITY
    + pgsql-data-status                : DISCONNECT
    + pgsql-status                     : STOP      
  
Migration summary:
* Node node2:
   pgsql: migration-threshold=1 fail-count=1 last-failure='Wed Jan 22 02:15:35 2014'
* Node node1:
  
Failed actions:
    pgsql_monitor_7000 on node2 'not running' (7): call=42, status=complete, last-rc-change='Wed Jan 22 02:14:58 2014', queued=0ms, exec=0ms




{The vip-slave resource has successfully switched over to node1.}

Restart corosync on node2 and the database will be started along with it:
[iyunv@node2 ~]# service corosync restart
[iyunv@node1 ~]# crm_mon -Afr -1
Last updated: Wed Jan 22 02:16:24 2014
Last change: Wed Jan 22 02:16:55 2014 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node2
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                 : 100      
    + master-pgsql                     : 1000      
    + pgsql-data-status                : LATEST   
    + pgsql-master-baseline            : 0000000006000078
    + pgsql-status                     : PRI      
* Node node2:
    + default_ping_set                 : 100      
    + master-pgsql                     : 100      
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
  
Migration summary:
* Node node2:
* Node node1:




{vip-slave has moved back to node2.}
5.2 Primary node failover

Kill the postgres processes on node1 to simulate a database crash on the primary node:
[iyunv@node1 ~]# killall -9 postgres




Wait a moment, then check the cluster status:
[iyunv@node2 ~]# crm_mon -Afr -1
Last updated: Wed Jan 22 02:17:50 2014
Last change: Wed Jan 22 02:18:16 2014 via crm_attribute on node2
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node2
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node2
     vip-rep (ocf::heartbeat:IPaddr2): Started node2
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node2 ]
     Stopped: [ node1 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                 : 100      
    + master-pgsql                     : -INFINITY
    + pgsql-data-status                : DISCONNECT
    + pgsql-status                     : STOP      
* Node node2:
    + default_ping_set                 : 100      
    + master-pgsql                     : 1000      
    + pgsql-data-status                : LATEST   
    + pgsql-master-baseline            : 0000000008014A70
    + pgsql-status                     : PRI      
  
Migration summary:
* Node node2:
* Node node1:
   pgsql: migration-threshold=1 fail-count=1 last-failure='Wed Jan 22 02:18:11 2014'
  
Failed actions:
    pgsql_monitor_2000 on node1 'not running' (7): call=2435, status=complete, last-rc-change='Wed Jan 22 02:18:11 2014', queued=0ms, exec=0ms




{vip-master and vip-rep have both switched to node2 successfully, node2 has been promoted to master, and the PostgreSQL status on node2 is now PRI.}
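
You can also verify the promotion from PostgreSQL itself rather than from pacemaker (pg_is_in_recovery() returns false once an instance runs as primary):

[postgres@node2 ~]$ psql -c "select pg_is_in_recovery();"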

Stop corosync on node1:
[iyunv@node1 ~]# service corosync stop




Perform a base synchronization (resync node1's data directory from the new primary):
[postgres@node1 data]$ pwd
/opt/pgsql/data
[postgres@node1 data]$ rm -rf *
[postgres@node1 data]$ pg_basebackup -h 192.168.1.3 -U postgres -D /opt/pgsql/data/ -P
19172/19172 kB (100%), 1/1 tablespace
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
[postgres@node1 data]$ ls
backup_label      base    pg_clog      pg_ident.conf  pg_notify  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.conf
backup_label.old  global  pg_hba.conf  pg_multixact   pg_serial  pg_subtrans  pg_twophase  pg_xlog     recovery.done




Start corosync on node1:
[iyunv@node1 ~]# service corosync start




5.3 Recovering the former primary

After repairing the former primary node, restore it as the current standby.

Perform a base synchronization on node1:
[postgres@node1 data]$ pwd
/opt/pgsql/data
[postgres@node1 data]$ rm -rf *
[postgres@node1 data]$ pg_basebackup -h 192.168.2.3 -U postgres -D /opt/pgsql/data/ -P
19172/19172 kB (100%), 1/1 tablespace
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
[postgres@node1 data]$ ls
backup_label      base    pg_clog      pg_ident.conf  pg_notify  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.conf
backup_label.old  global  pg_hba.conf  pg_multixact   pg_serial  pg_subtrans  pg_twophase  pg_xlog     recovery.done




Before starting heartbeat you must delete the resource lock file; otherwise the resource will not start along with heartbeat:
[iyunv@node1 ~]# rm -rf /var/lib/pgsql/tmp/PGSQL.lock




{This lock file is created while a node acts as primary, but it is not removed automatically when heartbeat stops abnormally or when the database/system crashes. So whenever you recover a node that has ever served as primary, you must clean up this lock file manually.}
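
A small pre-start guard script makes this cleanup hard to forget (a sketch; the lock path is the one used in this setup and follows from the RA's tmpdir parameter, so adjust it to your configuration):

#!/bin/bash
# hypothetical wrapper: refuse to start the stack while a stale lock remains
LOCK=/var/lib/pgsql/tmp/PGSQL.lock
if [ -f "$LOCK" ]; then
    echo "stale $LOCK found; remove it after confirming this node is not primary" >&2
    exit 1
fi
service heartbeat start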

Restart heartbeat on node1:
[iyunv@node1 ~]# service heartbeat restart




After a while, check the cluster status:
[iyunv@node2 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 08:50:43 2014
Stack: Heartbeat
Current DC: node2 (f2dcd1df-7429-42f5-82e9-b73921f97cab) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node1
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node2
     vip-rep (ocf::heartbeat:IPaddr2): Started node2
Master/Slave Set: msPostgresql
     Masters: [ node2 ]
     Slaves: [ node1 ]
Clone Set: clnPingCheck
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                 : 100      
    + master-pgsql:0                   : 100      
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
* Node node2:
    + default_ping_set                 : 100      
    + master-pgsql:1                   : 1000      
    + pgsql-data-status                : LATEST   
    + pgsql-master-baseline            : 00000000120000B0
    + pgsql-status                     : PRI      
  
Migration summary:
* Node node1:
* Node node2:




{vip-slave has successfully switched over to node1, and node1 has become the streaming-replication standby.}

6. Management

6.1 Starting and stopping corosync
[iyunv@node1 ~]# service corosync start
[iyunv@node1 ~]# service corosync stop
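
Whether corosync should start at boot is a judgment call: leaving it manual lets you inspect a failed node before it rejoins the cluster. If you do want it enabled, standard chkconfig usage on EL6 applies:

[iyunv@node1 ~]# chkconfig corosync on
[iyunv@node1 ~]# chkconfig --list corosync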




6.2 Checking HA status
[iyunv@node1 ~]# crm status
Last updated: Tue Jan 21 23:55:13 2014
Last change: Tue Jan 21 23:37:36 2014 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
vip-slave (ocf::heartbeat:IPaddr2): Started node2
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]




6.3 Checking resource status and node attributes
[iyunv@node1 ~]# crm_mon -Afr -1
Last updated: Tue Jan 21 23:37:20 2014
Last change: Tue Jan 21 23:37:36 2014 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node2
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                 : 100      
    + master-pgsql                     : 1000      
    + pgsql-data-status                : LATEST   
    + pgsql-master-baseline            : 0000000006000078
    + pgsql-status                     : PRI      
* Node node2:
    + default_ping_set                 : 100      
    + master-pgsql                     : 100      
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
  
Migration summary:
* Node node2:
* Node node1:




6.4 Viewing the configuration
[iyunv@node1 ~]# crm configure show
node node1 \
        attributes pgsql-data-status="LATEST"
node node2 \
        attributes pgsql-data-status="STREAMING|SYNC"
primitive pgsql ocf:heartbeat:pgsql \
        params pgctl="/opt/pgsql/bin/pg_ctl" psql="/opt/pgsql/bin/psql" pgdata="/opt/pgsql/data/" start_opt="-p 5432" rep_mode="sync" node_list="node1 node2" restore_command="cp /opt/archivelog/%f %p" primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5" master_ip="192.168.1.3" stop_escalate="0" \
        op start timeout="60s" interval="0s" on-fail="restart" \
        op monitor timeout="60s" interval="7s" on-fail="restart" \
        op monitor timeout="60s" interval="2s" on-fail="restart" role="Master" \
        op promote timeout="60s" interval="0s" on-fail="restart" \
        op demote timeout="60s" interval="0s" on-fail="stop" \
……
……




6.5 Monitoring HA in real time
[iyunv@node1 ~]# crm_mon -Afr
Last updated: Wed Jan 22 00:40:12 2014
Last change: Tue Jan 21 23:37:36 2014 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Started node2
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep    (ocf::heartbeat:IPaddr2): Started node1
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]
  
Node Attributes:
* Node node1:
    + default_ping_set                  : 100
    + master-pgsql                      : 1000
    + pgsql-data-status                 : LATEST   
    + pgsql-master-baseline             : 0000000006000078
    + pgsql-status                      : PRI
* Node node2:
    + default_ping_set                  : 100
    + master-pgsql                      : 100
    + pgsql-data-status                 : STREAMING|SYNC
    + pgsql-status                      : HS:sync   
  
Migration summary:
* Node node2:
* Node node1:




6.6 crm_resource commands

Starting/stopping a resource:
[iyunv@node1 ~]# crm_resource -r vip-master -m -p target-role -v Started
[iyunv@node1 ~]# crm_resource -r vip-master -m -p target-role -v Stopped
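
Equivalently, crmsh provides dedicated subcommands, which are harder to get wrong:

[iyunv@node1 ~]# crm resource start vip-master
[iyunv@node1 ~]# crm resource stop vip-master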




Listing resources:
[iyunv@node1 ~]# crm_resource -L
vip-slave (ocf::heartbeat:IPaddr2): Started
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started
     vip-rep (ocf::heartbeat:IPaddr2): Started
Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]




Checking where a resource is running:
[iyunv@node1 ~]# crm_resource -W -r pgsql
resource pgsql is running on: node2




Migrating a resource:
[iyunv@node1 ~]# crm_resource -M -r vip-slave -N node2
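
crm_resource -M works by inserting a location constraint; once the resource has moved, you normally clear that constraint again, otherwise the resource stays pinned to node2:

[iyunv@node1 ~]# crm_resource -U -r vip-slave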




Deleting a resource:
[iyunv@node1 ~]# crm_resource -D -r vip-slave -t primitive




6.7 crm commands

Listing the RAs under a given class and provider:
[iyunv@node1 ~]# crm ra list ocf pacemaker
ClusterMon     Dummy          HealthCPU      HealthSMART    Stateful       SysInfo        SystemHealth   controld       ping           pingd
remote




Deleting a node:
[iyunv@node1 ~]# crm node delete node2




Putting a node into standby:
[iyunv@node1 ~]# crm node standby node2




Bringing a node back online:
[iyunv@node1 ~]# crm node online node2




Configuring pacemaker:
[iyunv@node1 ~]# crm configure
crm(live)configure#
……
……
crm(live)configure# commit
crm(live)configure# quit




6.8 Resetting the failcount
[iyunv@node1 ~]# crm resource
crm(live)resource# failcount pgsql set node1 0
crm(live)resource# failcount pgsql show node1
scope=status  name=fail-count-pgsql value=0
  
  
[iyunv@node1 ~]# crm resource cleanup pgsql
Cleaning up pgsql:0 on node1
Waiting for 1 replies from the CRMd. OK
  
  
[iyunv@node1 ~]# crm_failcount -G -U node1 -r pgsql
scope=status  name=fail-count-pgsql value=INFINITY
[iyunv@node1 ~]# crm_failcount -D -U node1 -r pgsql




7. Problem log

7.1 Q1

Symptom:

Errors in corosync.log:

Jan 15 10:23:57 node1 lrmd: [6327]: info: RA output: (pgsql:0:monitor:stderr) /usr/lib/ocf/resource.d//heartbeat/pgsql: line 1749: ocf_local_nodename: command not found

Jan 15 10:23:57 node1 crm_attribute: [11094]: info: Invoked: /usr/sbin/crm_attribute -l reboot -N -n -v 0000000006000090 pgsql-xlog-loc lrm_get_rsc_type_metadata(578)

Jan 15 10:23:57 node1 lrmd: [6327]: info: RA output: (pgsql:0:monitor:stderr) Could not map uname=-n to a UUID: The object/attribute does not exist

Resolution:

Inspecting the pgsql script shows that it calls ocf_local_nodename, a function that should be defined in ocf-shellfuncs.in but is missing there. A related forum thread:

http://www.gossamer-threads.com/ ... =post_view_threaded

points out that a patch is required; the patch that addresses the ocf_local_nodename function:

https://github.com/ClusterLabs/r ... 1cd7c9b8179896c1903

The newest versions no longer use the ocf_local_nodename function, so use the following versions:

{Note: make sure the pacemaker version is > 1.1.8, or the crm_node -n command will not work.}

https://github.com/ClusterLabs/r ... 19e/heartbeat/pgsql

https://github.com/ClusterLabs/r ... 9896c1903/heartbeat

A pgsql script that does not contain the ocf_local_nodename function:

https://raw.github.com/ClusterLa ... 19e/heartbeat/pgsql
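
With the new script in place, a quick sanity check that the node-name lookup it relies on works (this is what requires pacemaker > 1.1.8):

[iyunv@node1 ~]# crm_node -n
node1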
7.2 Q2

Symptom:
[iyunv@node1 ~]# crm configure load update pgsql.crm
WARNING: pingCheck: specified timeout 60s for start is smaller than the advised 90
WARNING: pingCheck: specified timeout 60s for stop is smaller than the advised 100
WARNING: pgsql: specified timeout 60s for stop is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for start is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for notify is smaller than the advised 90
WARNING: pgsql: specified timeout 60s for demote is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for promote is smaller than the advised 120
ERROR: master-group: attribute ordered does not exist
Do you still want to commit no




Resolution:

The error means that the ordered attribute does not exist for the defined master-group.

(1) The problem comes from the pacemaker version: pacemaker 1.1 does not support the ordered and colocated group attributes. Replacing the current cibconfig.py with the 1.0 version, as below, was attempted as a fix but failed:
[iyunv@node1 ~]# vim /usr/lib64/python2.6/site-packages/crmsh/cibconfig.py
[iyunv@node1 ~]# cd /usr/lib64/python2.6/site-packages/crmsh/
[iyunv@node1 crmsh]# mv cibconfig.py cibconfig.py.bak
[iyunv@node1 crmsh]# wget https://github.com/ClusterLabs/p ... odules/cibconfig.py




(2) Remove the ordered definition from the configuration script (this succeeded):

group master-group \
      vip-master \
      vip-rep \
      meta \
          ordered="false"

Change it to:

group master-group \
      vip-master \
      vip-rep
7.3 Q3

Symptom:

Error when installing pacemaker:
# yum install pacemaker*
……
--> Processing Dependency: libesmtp.so.5()(64bit) for package: pacemaker
--> Finished Dependency Resolution
pacemaker-1.0.12-1.el5.centos.i386 from clusterlabs has depsolving problems
  --> Missing Dependency: libesmtp.so.5 is needed by package pacemaker-1.0.12-1.el5.centos.i386 (clusterlabs)
pacemaker-1.0.12-1.el5.centos.x86_64 from clusterlabs has depsolving problems
  --> Missing Dependency: libesmtp.so.5()(64bit) is needed by package pacemaker-1.0.12-1.el5.centos.x86_64 (clusterlabs)
Error: Missing Dependency: libesmtp.so.5 is needed by package pacemaker-1.0.12-1.el5.centos.i386 (clusterlabs)
Error: Missing Dependency: libesmtp.so.5()(64bit) is needed by package pacemaker-1.0.12-1.el5.centos.x86_64 (clusterlabs)
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
                        package-cleanup --dupes
                        rpm -Va --nofiles --nodigest
The program package-cleanup is found in the yum-utils package.




Resolution:

The messages show that libesmtp is missing; installing it fixes the problem:
# wget ftp://ftp.univie.ac.at/systems/l ... .4-5.el5.x86_64.rpm
# wget ftp://ftp.univie.ac.at/systems/l ... .0.4-5.el5.i386.rpm
# rpm -ivh libesmtp-1.0.4-5.el5.x86_64.rpm
# rpm -ivh libesmtp-1.0.4-5.el5.i386.rpm
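
After installing both packages, confirm the library is registered and rerun the installation:

# rpm -q libesmtp
# yum install pacemaker*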




7.4 Q4

Symptom:

Error when loading the crm configuration:
[iyunv@node1 ~]# crm configure load update pgsql.crm
ERROR: pgsql: parameter rep_mode does not exist
ERROR: pgsql: parameter node_list does not exist
ERROR: pgsql: parameter master_ip does not exist
ERROR: pgsql: parameter restore_command does not exist
ERROR: pgsql: parameter primary_conninfo_opt does not exist
WARNING: pgsql: specified timeout 60s for stop is smaller than the advised 120
WARNING: pgsql: action monitor_Master not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: specified timeout 60s for start is smaller than the advised 120
WARNING: pgsql: action notify not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: action demote not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: action promote not advertised in meta-data, it may not be supported by the RA
WARNING: pingCheck: specified timeout 60s for start is smaller than the advised 90
WARNING: pingCheck: specified timeout 60s for stop is smaller than the advised 100
Do you still want to commit no




Resolution:

The "parameter does not exist" errors appear because the pgsql script is too old; replace it and the OCF shell-function library on both nodes:

scp pgsql root@192.168.100.201:/usr/lib/ocf/resource.d/heartbeat/
scp ocf-shellfuncs.in root@192.168.100.201:/usr/lib/ocf/lib/heartbeat/
  
scp pgsql root@192.168.100.202:/usr/lib/ocf/resource.d/heartbeat/
scp ocf-shellfuncs.in root@192.168.100.202:/usr/lib/ocf/lib/heartbeat/
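
With the files copied, you can check that the RA now advertises the previously "missing" parameters before reloading the configuration (crm ra info prints an agent's metadata):

[iyunv@node1 ~]# crm ra info ocf:heartbeat:pgsql | grep -E "rep_mode|node_list|master_ip"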




7.5 Q5

Symptom:
[iyunv@node1 ~]# crm_mon -Afr -1
Last updated: Tue Jan 21 05:10:56 2014
Last change: Tue Jan 21 05:10:08 2014 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6_5.2-368c726
2 Nodes configured, 2 expected votes
7 Resources configured
  
  
Online: [ node1 node2 ]
  
Full list of resources:
  
vip-slave (ocf::heartbeat:IPaddr2): Stopped
Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Stopped
     vip-rep (ocf::heartbeat:IPaddr2): Stopped
Master/Slave Set: msPostgresql [pgsql]
     Stopped: [ node1 node2 ]
Clone Set: clnPingCheck [pingCheck]
     Stopped: [ node1 node2 ]
  
Node Attributes:
* Node node1:
* Node node2:
  
Migration summary:
* Node node1:
* Node node2:
  
Failed actions:
    pingCheck_monitor_0 on node1 'invalid parameter' (2): call=23, status=complete, last-rc-change='Tue Jan 21 05:10:10 2014', queued=200ms, exec=0ms
    pingCheck_monitor_0 on node2 'invalid parameter' (2): call=23, status=complete, last-rc-change='Tue Jan 21 05:09:36 2014', queued=281ms, exec=0ms




Resolution:

The error occurs because the pingCheck definition passes a parameter unknown to the pingd agent it calls: inspection shows that ocf/pacemaker/pingd has no multiplier parameter:

primitive pingCheck ocf:pacemaker:pingd \
    params \
        name="default_ping_set" \
        host_list="192.168.100.1" \
        multiplier="100" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="ignore"

So change the call to ocf:heartbeat:pingd.
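
The corrected definition only swaps the provider, everything else is unchanged (a sketch based on the block above; the heartbeat pingd agent does advertise multiplier):

primitive pingCheck ocf:heartbeat:pingd \
    params \
        name="default_ping_set" \
        host_list="192.168.100.1" \
        multiplier="100" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="ignore"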
7.6 Q6

Symptom:

Errors in the corosync log:

Jan 21 04:36:02 corosync [TOTEM ] Received message has invalid digest... ignoring.

Jan 21 04:36:02 corosync [TOTEM ] Invalid packet data

Resolution:

This means another cluster on the network is using the same multicast address; changing the multicast address resolves it.
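
The multicast address is set in the totem/interface block of /etc/corosync/corosync.conf; pick an address (and, if needed, a port) not used by any other cluster on the same segment, then restart corosync on both nodes. An excerpt sketch with illustrative values:

totem {
        version: 2
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.100.0
                mcastaddr: 239.255.42.1
                mcastport: 5405
        }
}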
8. References

The pgsql resource agent script:

https://github.com/ClusterLabs/r ... ter/heartbeat/pgsql

Usage guide for the script:

https://github.com/t-matsuo/reso ... reaming-replication

The crm_resource command:

http://www.novell.com/zh-cn/docu ... an_crmresource.html

The crm_failcount command:

http://www.novell.com/zh-cn/docu ... n_crmfailcount.html



