[Experience Sharing] Zabbix HA on two nodes for each component (complete)

Posted on 2019-1-19 09:12:05
  Linux Platform: CentOS 6.5 x86_64
  

Network Topology: three two-node Pacemaker clusters (MySQL with DRBD, Zabbix server, and Zabbix web), each with a dedicated eth1 heartbeat network. (Original topology diagram not reproduced.)

Zabbix MySQL server

1. Make sure you have successfully set up DNS resolution and NTP time synchronization for both your Linux Cluster nodes.
  
vi /etc/sysconfig/network
Set the hostname to the short name, not the FQDN.

Here, I use /etc/hosts to simplify name resolution.

vi /etc/hosts
# eth0 network for production
192.168.1.13    mysql01
192.168.1.23    mysql02

# for dedicated heartbeat network to avoid problems
# eth1 network
10.10.12.10    mysql01p
10.10.12.20    mysql02p

  

  2. wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo -O /etc/yum.repos.d/ha.repo

3. on two nodes
yum -y install pacemaker cman crmsh

4. on two nodes
vi /etc/sysconfig/cman
CMAN_QUORUM_TIMEOUT=0

5. on two nodes
vi /etc/cluster/cluster.conf

A minimal two-node cman configuration, using the heartbeat hostnames (the cluster name here is an example):

<?xml version="1.0"?>
<cluster config_version="1" name="mysqlha">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="mysql01p" nodeid="1"/>
    <clusternode name="mysql02p" nodeid="2"/>
  </clusternodes>
</cluster>

6. on two nodes
ccs_config_validate

7. on node1 then node2
service cman start

Check node status with cman_tool nodes.
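The join check can also be scripted. A minimal sketch, assuming the usual cman_tool(8) column layout (Sts "M" = member, "X" = not a member); the sample output is illustrative, not captured from this cluster:

```python
# Parse `cman_tool nodes`-style output and report which nodes have joined.
SAMPLE = """\
Node  Sts   Inc   Joined               Name
   1   M    548   2019-01-19 09:10:01  mysql01p
   2   M    552   2019-01-19 09:10:05  mysql02p
"""

def joined_members(output):
    """Return names of nodes whose status column is 'M' (cluster member)."""
    members = []
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        # fields: node id, status, incarnation, join date, join time, name
        if len(fields) >= 6 and fields[1] == "M":
            members.append(fields[-1])
    return members

print(joined_members(SAMPLE))
```

Both heartbeat hostnames should be reported before moving on to pacemaker.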

8. on node1 then node2
service pacemaker start
chkconfig cman on
chkconfig pacemaker on

9. For a two-node cluster, run from one node:
crm configure property stonith-enabled="false"
crm configure property no-quorum-policy="ignore"
#crm configure rsc_defaults resource-stickiness="100"

10. crm_verify -LV
crm status

11. on two nodes
yum -y install mysql mysql-server mysql-devel
chkconfig mysqld off

12. on two nodes
mkdir -p /new/mysql; chown -R mysql.mysql /new/mysql

vi /etc/my.cnf
[mysqld]
datadir=/new/mysql
socket=/new/mysql/mysql.sock
default-storage-engine=INNODB

[client]
socket=/new/mysql/mysql.sock
  
13. on two nodes
Install DRBD 8.3
rpm -ivh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
yum -y install drbd83-utils kmod-drbd83
chkconfig drbd off

14. on two nodes
Create a new 100M disk (/dev/sdb) just for testing purposes.

vi /etc/drbd.d/zabbixdata.res
resource zabbixdata {
  device    /dev/drbd0;
  disk      /dev/sdb;
  meta-disk internal;
  syncer {
    verify-alg sha1;
  }
  on mysql01.test.com {
    address   192.168.1.13:7789;
  }
  on mysql02.test.com {
    address   192.168.1.23:7789;
  }
}

Note: the host names after "on" must match the output of uname -n on each node.

15. on mysql01
drbdadm create-md zabbixdata
modprobe drbd
drbdadm up zabbixdata

16. on mysql02
drbdadm create-md zabbixdata
modprobe drbd
drbdadm up zabbixdata

17. Make mysql01 the primary:
drbdadm -- --overwrite-data-of-peer primary zabbixdata
cat /proc/drbd
Note: wait until the sync has finished before continuing.
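The wait can be scripted by polling /proc/drbd until the connection state is Connected and both disk states are UpToDate. A sketch, assuming the DRBD 8.3 status format (the sample text is illustrative, not real output from this cluster):

```python
import re

# Illustrative /proc/drbd snapshots for DRBD 8.3.
SYNCING = """\
version: 8.3.16 (api:88/proto:86-97)
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    [===>................] sync'ed: 24.6%
"""

DONE = """\
version: 8.3.16 (api:88/proto:86-97)
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
"""

def sync_finished(proc_drbd):
    """True once the resource is Connected with both disks UpToDate."""
    m = re.search(r"cs:(\S+) ro:(\S+) ds:(\S+)", proc_drbd)
    if not m:
        return False
    cstate, _roles, dstates = m.groups()
    return cstate == "Connected" and dstates == "UpToDate/UpToDate"

print(sync_finished(SYNCING), sync_finished(DONE))
```

In practice you would read open("/proc/drbd").read() in a loop instead of the sample strings.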

18. on mysql01
mkfs.ext4 /dev/drbd0

19. #crm(live)configure
primitive clusterip ocf:heartbeat:IPaddr2 params ip="192.168.1.52" cidr_netmask="24" nic="eth0"
commit
Run crm status to check.

primitive zabbixdata ocf:linbit:drbd params drbd_resource="zabbixdata"
ms zabbixdataclone zabbixdata meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
primitive zabbixfs ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/zabbixdata" directory="/new/mysql" fstype="ext4"

group myzabbix zabbixfs clusterip
colocation myzabbix_on_drbd inf: myzabbix zabbixdataclone:Master
order myzabbix_after_drbd inf: zabbixdataclone:promote myzabbix:start
op_defaults timeout=240s
commit

20. Ensure the resource group is running on mysql01.
On mysql01, run mount to confirm that /dev/drbd0 is mounted on /new/mysql.

service mysqld start

Create zabbix database and user on MySQL.

mysql -u root
mysql> create database zabbix character set utf8;
mysql> grant all privileges on zabbix.* to zabbix@'%' identified by 'zabbix';
mysql> flush privileges;
mysql> exit

Import initial schema and data.
Copy /usr/share/doc/zabbix-server-mysql-2.2.1/create/* from the Zabbix server to this node.
mysql -uroot zabbix < schema.sql
mysql -uroot zabbix < images.sql
mysql -uroot zabbix < data.sql

service mysqld stop

21. #crm(live)configure
primitive mysqld lsb:mysqld
edit myzabbix
Change the group line to: group myzabbix zabbixfs clusterip mysqld
commit

22. Put mysql01 into standby:
crm node standby mysql01p

23. On mysql02, check the mount and verify that mysql -uroot works.

24. Bring mysql01 back online:
crm node online mysql01p

Zabbix server

1. Make sure you have successfully set up DNS resolution and NTP time synchronization for both your Linux Cluster nodes.
  
vi /etc/sysconfig/network
Set the hostname to the short name, not the FQDN.

Here, I use /etc/hosts to simplify name resolution.

vi /etc/hosts
# eth0 network for production
192.168.1.14    zabbixserver01
192.168.1.24    zabbixserver02

# for dedicated heartbeat network to avoid problems
# eth1 network
10.10.11.10    zabbixserver01p
10.10.11.20   zabbixserver02p
  

  2. wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo -O /etc/yum.repos.d/ha.repo

3. on two nodes
yum -y install pacemaker cman crmsh

4. on two nodes
vi /etc/sysconfig/cman
CMAN_QUORUM_TIMEOUT=0

5. on two nodes
vi /etc/cluster/cluster.conf

A minimal two-node cman configuration, using the heartbeat hostnames (the cluster name here is an example):

<?xml version="1.0"?>
<cluster config_version="1" name="zbxsrvha">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="zabbixserver01p" nodeid="1"/>
    <clusternode name="zabbixserver02p" nodeid="2"/>
  </clusternodes>
</cluster>
6. on two nodes
ccs_config_validate

7. on two nodes
service cman start

Check node status with cman_tool nodes.

8. on two nodes
service pacemaker start
chkconfig cman on
chkconfig pacemaker on

9. For a two-node cluster, run from one node:
crm configure property stonith-enabled="false"
crm configure property no-quorum-policy="ignore"
#crm configure rsc_defaults resource-stickiness="100"

10. crm_verify -LV
crm status

11. on two nodes
yum -y install zlib-devel glibc-devel libcurl-devel OpenIPMI-devel libssh2-devel net-snmp-devel openldap-devel
rpm -ivh http://mirrors.sohu.com/fedora-epel/6Server/x86_64/epel-release-6-8.noarch.rpm
yum -y install fping iksemel-devel

12. on two nodes
rpm -ivh http://repo.zabbix.com/zabbix/2.2/rhel/6/x86_64/zabbix-release-2.2-1.el6.noarch.rpm
yum -y install zabbix-server-mysql
chkconfig zabbix-server off

vi /etc/zabbix/zabbix_server.conf
# DBHost is the MySQL cluster IP
DBHost=192.168.1.52
DBName=zabbix
DBUser=zabbix
DBPassword=zabbix
DBSocket=/new/mysql/mysql.sock
SourceIP=192.168.1.51
ListenIP=192.168.1.51
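Before starting the service it is worth sanity-checking this file. A small sketch that parses Zabbix-style Key=Value lines and flags missing database settings (the required-key list is my own choice for illustration, not an official one):

```python
def parse_conf(text):
    """Parse Zabbix-style Key=Value lines; blank lines and # comments are skipped."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            conf[key.strip()] = value.strip()
    return conf

# Sample mirroring the settings above; in practice read /etc/zabbix/zabbix_server.conf.
SAMPLE = """\
DBHost=192.168.1.52
DBName=zabbix
DBUser=zabbix
DBPassword=zabbix
DBSocket=/new/mysql/mysql.sock
SourceIP=192.168.1.51
ListenIP=192.168.1.51
"""

conf = parse_conf(SAMPLE)
missing = [k for k in ("DBHost", "DBName", "DBUser", "DBPassword") if k not in conf]
print(conf["DBHost"], missing)
```

An empty missing list means the database connection settings are at least present.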

13. #crm(live)configure
primitive clusterip ocf:heartbeat:IPaddr2 params ip="192.168.1.51" cidr_netmask="24" nic="eth0"
commit
Run crm status to check.

14. Ensure the cluster IP is running on zabbixserver01. On zabbixserver01:

service zabbix-server start
check /var/log/zabbix/zabbix_server.log
service zabbix-server stop

15. Put zabbixserver01 into standby:
crm node standby zabbixserver01p
  
16. on zabbixserver02
service zabbix-server start
check /var/log/zabbix/zabbix_server.log
service zabbix-server stop

17. Bring zabbixserver01 back online:
crm node online zabbixserver01p
  
18. #crm(live)configure
primitive myzabbixserver lsb:zabbix-server
group myzabbix clusterip myzabbixserver
commit
  

19. For the Zabbix agent -- on both Zabbix server nodes:

yum -y install zabbix-agent
chkconfig zabbix-agent off

20. for zabbix agent configuration

vi /etc/zabbix/zabbix_agentd.conf

# Server=[zabbix server ip] -- for passive checks (agent listens on port 10050)
# ServerActive=[zabbix server ip] -- for active checks (agent connects to server port 10051)
# Hostname=[hostname of the client system]
# The Hostname value set on the agent side must exactly match the "Host name"
# configured for the host in the web frontend.

Server=192.168.1.51
ServerActive=192.168.1.51
Hostname=Zabbix server
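Passive and active checks both use the same framing on the wire: a "ZBXD" signature, a flags byte of 0x01, and an 8-byte little-endian payload length, followed by the data. A sketch of that header handling (agent.ping is just an example item key):

```python
import struct

def zbx_frame(payload):
    """Wrap an item key (str) in a ZBXD protocol frame (bytes)."""
    data = payload.encode("utf-8")
    # "ZBXD" signature + flags 0x01 + 64-bit little-endian data length + data
    return b"ZBXD\x01" + struct.pack("<Q", len(data)) + data

def zbx_unframe(frame):
    """Inverse of zbx_frame: validate the header and return the payload."""
    assert frame[:5] == b"ZBXD\x01", "bad protocol header"
    (length,) = struct.unpack("<Q", frame[5:13])
    return frame[13:13 + length].decode("utf-8")

frame = zbx_frame("agent.ping")
print(zbx_unframe(frame))
```

This is the frame the server sends to port 10050 for a passive check; the agent replies with a frame of the same shape carrying the item value.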

21. Ensure the resource group is running on zabbixserver01. On zabbixserver01:

service zabbix-agent start
Check /var/log/zabbix/zabbix_agentd.log.
service zabbix-agent stop

22. Put zabbixserver01 into standby:
crm node standby zabbixserver01p

23. On zabbixserver02:
service zabbix-agent start
Check /var/log/zabbix/zabbix_agentd.log.
service zabbix-agent stop

24. Bring zabbixserver01 back online:
crm node online zabbixserver01p

25. #crm(live)configure
primitive zabbixagent lsb:zabbix-agent
edit myzabbix
Change the group line to: group myzabbix clusterip myzabbixserver zabbixagent
commit
  

  
Zabbix web

1. Make sure you have successfully set up DNS resolution and NTP time synchronization for both your Linux Cluster nodes.
  
vi /etc/sysconfig/network
Set the hostname to the short name, not the FQDN.

Here, I use /etc/hosts to simplify name resolution.

vi /etc/hosts
# eth0 network for production
192.168.1.15    zabbixweb01
192.168.1.25    zabbixweb02

# for dedicated heartbeat network to avoid problems
# eth1 network
10.10.10.10    zabbixweb01p
10.10.10.20    zabbixweb02p

2. wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo -O /etc/yum.repos.d/ha.repo

3. on two nodes
yum -y install pacemaker cman crmsh

4. on two nodes
vi /etc/sysconfig/cman
CMAN_QUORUM_TIMEOUT=0

5. on two nodes
vi /etc/cluster/cluster.conf

A minimal two-node cman configuration, using the heartbeat hostnames (the cluster name here is an example):

<?xml version="1.0"?>
<cluster config_version="1" name="zbxwebha">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="zabbixweb01p" nodeid="1"/>
    <clusternode name="zabbixweb02p" nodeid="2"/>
  </clusternodes>
</cluster>

6. on two nodes
ccs_config_validate

7. on two nodes
service cman start

Check node status with cman_tool nodes.

8. on two nodes
service pacemaker start
chkconfig cman on
chkconfig pacemaker on

9. For a two-node cluster, run from one node:
crm configure property stonith-enabled="false"
crm configure property no-quorum-policy="ignore"
#crm configure rsc_defaults resource-stickiness="100"

10. crm_verify -LV
crm status

11. on two nodes
yum -y install httpd httpd-devel php php-cli php-common php-devel php-pear php-gd php-bcmath php-mbstring php-mysql php-xml wget
chkconfig httpd off

12. on two nodes
rpm -ivh http://repo.zabbix.com/zabbix/2.2/rhel/6/x86_64/zabbix-release-2.2-1.el6.noarch.rpm
yum -y install zabbix-web-mysql

13. on two nodes
The Apache configuration file for the Zabbix frontend is /etc/httpd/conf.d/zabbix.conf. Some PHP settings are already configured:

php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
php_value date.timezone Asia/Shanghai
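These values match the Zabbix 2.2 frontend minimums. A quick sketch that compares configured values against those minimums (the parsing is simplified to plain numbers with an optional M suffix):

```python
# Minimums per the Zabbix 2.2 frontend requirements.
REQUIRED = {
    "max_execution_time": 300,
    "memory_limit": 128,          # MB
    "post_max_size": 16,          # MB
    "upload_max_filesize": 2,     # MB
    "max_input_time": 300,
}

def to_number(value):
    """'128M' -> 128, '300' -> 300 (ignores the megabyte suffix)."""
    return int(value.rstrip("M"))

# Values as configured in /etc/httpd/conf.d/zabbix.conf above.
CONFIGURED = {
    "max_execution_time": "300",
    "memory_limit": "128M",
    "post_max_size": "16M",
    "upload_max_filesize": "2M",
    "max_input_time": "300",
}

too_low = [k for k, minimum in REQUIRED.items()
           if to_number(CONFIGURED[k]) < minimum]
print(too_low)
```

An empty list means every setting meets the frontend's minimum; the setup wizard performs the same checks in step 15.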

14. on two nodes
vi /etc/httpd/conf/httpd.conf

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1 192.168.1.0/24
</Location>


15. on zabbixweb01
service httpd start
Browse to http://192.168.1.15/zabbix to finish the Zabbix frontend setup, entering 192.168.1.52 as the MySQL database address.
service httpd stop

16. on zabbixweb02
service httpd start
Browse to http://192.168.1.25/zabbix to finish the Zabbix frontend setup, entering 192.168.1.52 as the MySQL database address.
service httpd stop

17. #crm(live)configure
op_defaults timeout=240s
primitive clusterip ocf:heartbeat:IPaddr2 params ip="192.168.1.50" cidr_netmask="24" nic="eth0"
commit
Run crm status to check.

primitive website ocf:heartbeat:apache params configfile="/etc/httpd/conf/httpd.conf"
group myweb clusterip website
commit

  

Note:
1. When the cluster is built over the dedicated heartbeat NIC eth1, you must use the node's heartbeat name when putting a node into standby or bringing it back online:
crm node standby node_name
crm node online node_name

2. If a machine goes down, reinstall the OS and repeat steps 1-8 to rejoin it to the cluster.

3. For DRBD failures, refer to http://www.drbd.org/users-guide-8.3/ch-troubleshooting.html



