MySQL High Availability with Corosync + Pacemaker + DRBD
1. Corosync + Pacemaker
Pacemaker is the most widely used CRM (cluster resource manager); it was split out of Heartbeat v3 as a standalone resource manager. Together, Corosync + Pacemaker form the most popular high-availability cluster stack.
2. DRBD
DRBD (Distributed Replicated Block Device) consists of a kernel module and supporting scripts, and is used to build high-availability clusters. It works by mirroring an entire block device over the network; you can think of it as network RAID 1.
3. Lab topology diagram
4. Lab environment (CentOS 6.5 x86_64)
Packages used:
drbd-8.4.3-33.el6.x86_64.rpm
drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
crmsh-1.2.6-4.el6.x86_64.rpm
mariadb-5.5.36-linux-x86_64.tar.gz
corosync-1.4.1-17.el6.x86_64
5. Configuration
1) Configure mutual name resolution on all nodes
On node3:
# uname -n
node3
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.1 server.magelinux.com server
172.16.16.1 node1
172.16.16.2 node2
172.16.16.3 node3
172.16.16.4 node4
172.16.16.5 node5
On node4:
# uname -n
node4
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.1 server.magelinux.com server
172.16.16.1 node1
172.16.16.2 node2
172.16.16.3 node3
172.16.16.4 node4
172.16.16.5 node5
Synchronize time on all nodes (run on both node3 and node4; 172.16.0.1 is the time server):
# ntpdate 172.16.0.1
Set up passwordless SSH trust between the nodes.
On node4:
# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
# ssh-copy-id -i .ssh/id_rsa.pub root@node3    # send the key to node3
On node3:
# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
# ssh-copy-id -i .ssh/id_rsa.pub root@node4    # send the key to node4
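With more nodes, the same trust setup can be scripted. A minimal sketch (the helper name and node list are illustrative; it only prints the commands, so the output can be reviewed before piping it to sh):

```shell
# Dry-run helper: print the commands that would establish SSH trust
# from this host to each peer node, so they can be reviewed before
# piping the output to sh. (Assumption: node names as in this lab.)
print_trust_cmds() {
    echo "ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''"
    for peer in "$@"; do
        echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$peer"
    done
}

print_trust_cmds node3 node4
```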
2) Install and configure corosync
On node3:
# yum install corosync
On node4:
# yum install corosync
Configure corosync on node3:
# cd /etc/corosync/
# cp corosync.conf.example corosync.conf
# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: on                      # enable authentication
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.0.0      # network of the heartbeat interface
                mcastaddr: 226.94.16.1       # multicast address for heartbeat messages
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log   # log location
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service {                # start pacemaker as a corosync plugin
        ver: 0
        name: pacemaker
}
aisexec {
        user: root
        group: root
}
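Note that mcastaddr must be a real IPv4 multicast address, i.e. in 224.0.0.0/4, while bindnetaddr is a network address, not a host address. A quick sanity check for a candidate mcastaddr (a sketch; the helper function is not part of corosync):

```shell
# Sanity check: corosync's mcastaddr must be an IPv4 multicast
# address, i.e. in 224.0.0.0/4 (first octet 224..239).
is_multicast() {
    first=${1%%.*}
    [ "$first" -ge 224 ] && [ "$first" -le 239 ]
}

is_multicast 226.94.16.1 && echo "226.94.16.1: ok (multicast)"
is_multicast 172.16.0.1  || echo "172.16.0.1: not multicast"
```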
Generate the authentication key.
The next two commands save time when generating the key: /dev/random draws from the kernel entropy pool (fed by keyboard and other interrupt timings), and if there is not enough entropy you would have to keep typing until the key is complete. Linking /dev/random to /dev/urandom avoids the wait (you can move /dev/random.bak back afterwards):
# mv /dev/{random,random.bak}
# ln -s /dev/urandom /dev/random
# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
Check that the key file has been generated:
# ll
total 24
-r-------- 1 root root  128 Oct 11 19:52 authkey        # the key file
-rw-r--r-- 1 root root  537 Oct 11 19:49 corosync.conf
-rw-r--r-- 1 root root  445 Nov 22  2013 corosync.conf.example
-rw-r--r-- 1 root root 1084 Nov 22  2013 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 Nov 22  2013 service.d
drwxr-xr-x 2 root root 4096 Nov 22  2013 uidgid.d
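As the keygen log shows, corosync-keygen gathers 1024 bits of randomness, which matches the 128-byte authkey with mode 0400 in the listing. The effect of the /dev/urandom shortcut can be sketched like this (demo only, with a hypothetical /tmp path; the real key must still come from corosync-keygen):

```shell
# Demo of what corosync-keygen gathers: 1024 bits = 128 bytes of
# randomness, stored with mode 0400. This only illustrates the amount
# of data -- the real authkey must be produced by corosync-keygen.
rm -f /tmp/demo-authkey
dd if=/dev/urandom of=/tmp/demo-authkey bs=128 count=1 2>/dev/null
chmod 400 /tmp/demo-authkey
stat -c '%s %a' /tmp/demo-authkey   # prints: 128 400
```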
Copy the configuration file and the key to node4:
# scp authkey corosync.conf node4:/etc/corosync/
authkey                  100%  128     0.1KB/s   00:00
corosync.conf            100%  537     0.5KB/s   00:00
3) Install pacemaker and crmsh
# yum install pacemaker
Then obtain the crmsh-1.2.6-4.el6.x86_64.rpm package:
# rpm -ivh crmsh-1.2.6-4.el6.x86_64.rpm
error: Failed dependencies:
pssh is needed by crmsh-1.2.6-4.el6.x86_64
python-dateutil is needed by crmsh-1.2.6-4.el6.x86_64
python-lxml is needed by crmsh-1.2.6-4.el6.x86_64
Resolve the dependencies (the remaining pssh dependency is skipped with --nodeps):
# yum install python-dateutil python-lxml
# rpm -ivh crmsh-1.2.6-4.el6.x86_64.rpm --nodeps
Preparing...                ###########################################
   1:crmsh                  ###########################################
4) Install pacemaker and crmsh on node4 the same way as on node3.
Start corosync on node3 and node4:
# service corosync start    # run on both nodes
Check whether the corosync engine started successfully:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync Corosync Cluster Engine ('1.4.1'): started and ready to provide service.    # started and ready
Oct 11 20:19:35 corosync Successfully read main configuration file '/etc/corosync/corosync.conf'
Check that the initial membership (TOTEM) notifications went out correctly:
# grep TOTEM /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync Initializing transport (UDP/IP Multicast).
Oct 11 20:19:35 corosync Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 11 20:19:35 corosync The network interface is now up.
Check that pacemaker started correctly:
# grep pcmk_startup /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync info: pcmk_startup: CRM: Initialized
Oct 11 20:19:35 corosync Logging: Initialized pcmk_startup
Oct 11 20:19:35 corosync info: pcmk_startup: Maximum core file size is: 18446744073709551615
Oct 11 20:19:35 corosync info: pcmk_startup: Service: 9
Oct 11 20:19:35 corosync info: pcmk_startup: Local hostname: node3
Check the cluster status:
# crm status
Last updated: Sat Oct 11 20:29:24 2014
Last change: Sat Oct 11 20:19:35 2014 via crmd on node4
Stack: classic openais (with plugin)
Current DC: node4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node3 node4 ]    # both nodes are online
5) Install DRBD
First obtain the DRBD packages: drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm and drbd-8.4.3-33.el6.x86_64.rpm.
# rpm -ivh drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm    # install the kmdl (kernel module) package first
# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm
6) Install DRBD on node4 the same way as on node3.
7) Configure DRBD
# cat /etc/drbd.d/global_common.conf
global {
        usage-count no;    # whether to let LINBIT collect usage statistics; yes to participate, set to no here
        # minor-count dialog-refresh disable-ip-verification
}
common {
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        }
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }
        options {
                # cpu-mask on-no-data-accessible
        }
        disk {
                on-io-error detach;    # added: detach the device on I/O errors
        }
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";    # added: peer authentication algorithm and secret
        }
}
Define the resource.
On node3, create /etc/drbd.d/web.res:
resource web {
        device /dev/drbd0;
        disk /dev/sda3;    # /dev/sda3 is a partition created beforehand, identical on node4
        on node3 {         # "on" takes the node's hostname
                address 172.16.16.3:7788;
                meta-disk internal;
        }
        on node4 {
                address 172.16.16.4:7788;
                meta-disk internal;
        }
}
Copy the DRBD configuration files to node4:
# scp global_common.conf web.res node4:/etc/drbd.d/
Initialize the resource on node3 and node4 (run on both):
# drbdadm create-md web
Start DRBD:
# service drbd start    # both nodes need to start it at the same time
Check the state on both sides; initially both are Secondary.
node3:
# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
node4:
# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
Promote node3 to primary:
# drbdadm -- --overwrite-data-of-peer primary web
# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
On node4 the view changes from
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
to, once the initial sync has completed,
  0:web/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
node3 has gone from Secondary to Primary: it is now the primary node.
Create a filesystem and mount it:
# mke2fs -t ext4 /dev/drbd0    # format on the primary node, then mount there
# mount /dev/drbd0 /mnt
Switch the primary role to node4.
On node3:
# umount /mnt
# drbdadm secondary web
# drbd-overview
  0:web/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
node3 has changed from Primary to Secondary.
On node4:
# drbdadm primary web
# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
# mount /dev/drbd0 /mnt
# cd /mnt
# ls
lost+found
node4 has become Primary and the device mounts there, so DRBD is working correctly.
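In the drbd-overview output, the local role is the first half of the role pair (Primary/Secondary). A small parser (a sketch; the function name is arbitrary and it assumes the drbd-overview line format shown above) can be handy in monitoring scripts:

```shell
# Extract the local DRBD role from a drbd-overview line such as:
#   0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
# The role pair is field 3; the part before "/" is the local role.
local_role() {
    awk '{ split($3, r, "/"); print r[1] }'
}

echo '0:web/0 Connected Primary/Secondary UpToDate/UpToDate C r-----' | local_role   # -> Primary
```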
8) Install MySQL (MariaDB)
Create the mysql user and group on node3:
# groupadd -g 3306 mysql
# useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
# id mysql
uid=3306(mysql) gid=3306(mysql) groups=3306(mysql)
Create the same user and group on node4.
Install MySQL on node3. First obtain the package mariadb-5.5.36-linux-x86_64.tar.gz:
# tar xf mariadb-5.5.36-linux-x86_64.tar.gz -C /usr/local/
# cd /usr/local/
# ln -sv mariadb-5.5.36-linux-x86_64 mysql
`mysql' -> `mariadb-5.5.36-linux-x86_64'
# cd mysql
# chown root.mysql ./*
# ll
total 212
drwxr-xr-x  2 root mysql   4096 Oct 11 21:54 bin
-rw-r--r--  1 root mysql  17987 Feb 24  2014 COPYING
-rw-r--r--  1 root mysql  26545 Feb 24  2014 COPYING.LESSER
drwxr-xr-x  3 root mysql   4096 Oct 11 21:54 data
drwxr-xr-x  2 root mysql   4096 Oct 11 21:55 docs
drwxr-xr-x  3 root mysql   4096 Oct 11 21:55 include
-rw-r--r--  1 root mysql   8694 Feb 24  2014 INSTALL-BINARY
drwxr-xr-x  3 root mysql   4096 Oct 11 21:55 lib
drwxr-xr-x  4 root mysql   4096 Oct 11 21:54 man
drwxr-xr-x 11 root mysql   4096 Oct 11 21:55 mysql-test
-rw-r--r--  1 root mysql 108813 Feb 24  2014 README
drwxr-xr-x  2 root mysql   4096 Oct 11 21:55 scripts
drwxr-xr-x 27 root mysql   4096 Oct 11 21:55 share
drwxr-xr-x  4 root mysql   4096 Oct 11 21:55 sql-bench
drwxr-xr-x  4 root mysql   4096 Oct 11 21:54 support-files
Provide a configuration file:
# cp support-files/my-large.cnf /etc/my.cnf
cp: overwrite `/etc/my.cnf'? y
# vim /etc/my.cnf
Add one line in the [mysqld] section:
datadir = /mydata/data
Mount the DRBD device at /mydata/data:
# mkdir -pv /mydata/data
# mount /dev/drbd0 /mydata/data
# chown -R mysql.mysql /mydata
Initialize MySQL:
# scripts/mysql_install_db --datadir=/mydata/data/ --basedir=/usr/local/mysql --user=mysql
Provide MySQL with a startup script:
# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
# chmod +x /etc/init.d/mysqld
# service mysqld start
Starting MySQL....
Provide the MySQL client (the client programs are under /usr/local/mysql/bin).
Install MySQL on node4 the same way as on node3.
Make node4 the primary node.
On node3:
# umount /mnt
# drbdadm secondary web
# drbd-overview
  0:web/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
On node4:
# drbdadm primary web
# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
Mount the DRBD device:
# mkdir -pv /mydata/data
# mount /dev/drbd0 /mydata/data
# chown -R mysql.mysql /mydata
Copy the MySQL configuration files from node3 to the corresponding locations on node4:
# scp /etc/my.cnf node4:/etc/
my.cnf                   100% 4924     4.8KB/s   00:00
# scp /etc/init.d/mysqld node4:/etc/init.d/
mysqld                   100%   12KB  11.6KB/s   00:00
Test whether MySQL starts:
# service mysqld start
Starting MySQL...
OK, MySQL started successfully.
At this point the MySQL setup is complete.
9) Configure cluster resources with crmsh
Before configuring resources in crmsh, DRBD must be stopped and disabled at boot, because the cluster will manage it from now on.
On node3:
# service drbd stop
Stopping all DRBD resources: .
# chkconfig drbd off
# chkconfig drbd --list
drbd           0:off   1:off   2:off   3:off   4:off   5:off   6:off
On node4:
# service drbd stop
Stopping all DRBD resources: .
# chkconfig drbd off
# chkconfig drbd --list
drbd           0:off   1:off   2:off   3:off   4:off   5:off   6:off
Disable STONITH and ignore loss of quorum (this is a two-node cluster):
# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
Add the DRBD resource:
# crm
crm(live)# configure
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=web op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Add a filesystem resource:
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata/data fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master    # mystore must run with the DRBD master
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start    # mystore starts only after the DRBD resource is promoted
crm(live)configure# verify
Add the mysql resource:
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore    # mysqld must run together with mystore
crm(live)configure# verify
Add a VIP resource:
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=172.16.16.8 op monitor interval=30s timeout=20s
crm(live)configure# colocation myip_with_ms_mysqldrbd_master inf: myip ms_mysqldrbd:Master    # the VIP must run with the DRBD master
Review the configuration:
crm(live)configure# show
node node3 \
        attributes standby="on"
node node4
primitive myip ocf:heartbeat:IPaddr \
        params ip="172.16.16.8" \
        op monitor interval="30s" timeout="20s"
primitive mysqld lsb:mysqld
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource="web" \
        op start timeout="240" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor role="Master" interval="20" timeout="30" \
        op monitor role="Slave" interval="30" timeout="30"
primitive mystore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata/data" fstype="ext4" \
        op start timeout="60" interval="0" \
        op stop timeout="60" interval="0"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip
colocation mysqld_with_mystore inf: mysqld mystore
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
order mysqld_after_mystore inf: mystore mysqld
order mystore_after_ms_mysqldrbd inf: ms_mysqldrbd:promote mystore:start
property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        last-lrm-refresh="1413090998"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
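The same resource configuration can also be kept in a single file and loaded in one step. A sketch (the file name mysql-ha.crm is arbitrary), assuming the resource and node names used in this article:

```
# mysql-ha.crm -- load with: crm configure load update mysql-ha.crm
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource=web \
        op start timeout=240 op stop timeout=100 \
        op monitor role=Master interval=20 timeout=30 \
        op monitor role=Slave interval=30 timeout=30
ms ms_mysqldrbd mysqldrbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
primitive mystore ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/mydata/data fstype=ext4 \
        op start timeout=60 op stop timeout=60
primitive mysqld lsb:mysqld
primitive myip ocf:heartbeat:IPaddr \
        params ip=172.16.16.8 op monitor interval=30s timeout=20s
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
colocation mysqld_with_mystore inf: mysqld mystore
colocation myip_with_ms_mysqldrbd_master inf: myip ms_mysqldrbd:Master
order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
order mysqld_after_mystore mandatory: mystore mysqld
property stonith-enabled=false no-quorum-policy=ignore
```

Loading a file makes the whole configuration reviewable and repeatable, which is handy when rebuilding the cluster.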
10) Test the MySQL high-availability cluster
First check the cluster status:
crm(live)# status
Last updated: Sun Oct 12 14:27:51 2014
Last change: Sun Oct 12 14:27:47 2014 via crm_attribute on node3
Stack: classic openais (with plugin)
Current DC: node3 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ node3 node4 ]
Master/Slave Set: ms_mysqldrbd
Masters: [ node3 ]
Slaves: [ node4 ]
mystore   (ocf::heartbeat:Filesystem):   Started node3
myip      (ocf::heartbeat:IPaddr):       Started node3
node3 and node4 are both online; node3 is currently the master and all resources are running on it, and a check confirms that mysqld is indeed running on node3.
Now simulate node3 going offline:
# crm
crm(live)# node
crm(live)node# standby
crm(live)node#
Now check the cluster status again:
# crm status
Last updated: Sun Oct 12 14:39:38 2014
Last change: Sun Oct 12 14:39:22 2014 via crm_attribute on node3
Stack: classic openais (with plugin)
Current DC: node3 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Node node3: standby
Online: [ node4 ]
Master/Slave Set: ms_mysqldrbd
Masters: [ node4 ]
Stopped: [ node3 ]
mystore   (ocf::heartbeat:Filesystem):   Started node4
myip      (ocf::heartbeat:IPaddr):       Started node4
node4 has become the master, node3 is in standby, and all resources have moved to node4. MySQL is running again, now on node4.
11) Test writing data
Grant privileges for the client network:
# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.36-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> grant all on *.* to root@"172.16.16.%" identified by "123456";
Query OK, 0 rows affected (0.12 sec)
Connect to the database from another host to test.
From the host stu16 (IP 172.16.16.1) the connection works: data can be queried and new databases created.
When crm status reports an error for a resource, first enter resource mode and run cleanup on the affected resource. To edit a resource with edit in configure mode, stop it first (resource mode, stop), then run cleanup; after that it can be edited and saved.
Note:
If you kill a managed service, it will not be restarted automatically. The node itself has not failed, so the resource is not moved; and by default pacemaker does not monitor resources at all, so even when a resource is down it stays where it is as long as its node is healthy. To get failover (or restart) in this situation, a monitor operation must be defined.
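In this setup the mysqld primitive was defined without a monitor operation. One could be added like this (a sketch; the interval and timeout values are illustrative, and the resource should be stopped and cleaned up first as described above):

```
crm(live)resource# stop mysqld
crm(live)resource# cleanup mysqld
crm(live)configure# edit mysqld     # change the definition to:
primitive mysqld lsb:mysqld \
        op monitor interval=20s timeout=20s
crm(live)configure# verify
crm(live)configure# commit
```

With the monitor in place, pacemaker checks the resource every 20 seconds and reacts when it finds it stopped.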
That concludes our MySQL high-availability cluster setup.