Building a High-Performance MySQL + DRBD + Keepalived Architecture
Preface

DRBD (Distributed Replicated Block Device) is an open-source tool that synchronizes and mirrors data between remote servers at the block-device level, much like RAID 1 mirroring. It is usually combined with HA software such as Keepalived or Heartbeat to provide high availability. These are brief notes, for reference only.

I. Environment
OS: CentOS 5.8
DRBD: drbd-8.3.15
Keepalived: keepalived-1.1.15
Master: 192.168.149.128
Backup: 192.168.149.129
II. Initial Configuration
1) Add the following entries to /etc/hosts on both servers (128 and 129):
192.168.149.128  node1
192.168.149.129  node2
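To confirm the entries are picked up, a quick check on either node (the hostnames node1/node2 are the ones added above):

# Resolve the peer names through /etc/hosts and check basic reachability
getent hosts node1 node2
ping -c 2 node2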
2) Tune the system kernel parameters. The /etc/sysctl.conf settings are as follows (the reload command is shown after the list):
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65530
net.ipv4.icmp_echo_ignore_all = 1
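The parameters above only take effect after they are reloaded; on both nodes:

# Apply the settings from /etc/sysctl.conf without rebooting
sysctl -p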
3) Add an extra disk to each server for DRBD storage; here it is a 30 GB disk at /dev/sdb.
Run the following commands to prepare the disk:

mkfs.ext3 /dev/sdb
dd if=/dev/zero of=/dev/sdb bs=1M count=1
sync

III. DRBD Installation and Configuration

yum -y install drbd83* kmod-drbd83
modprobe drbd

After the installation completes and the drbd module is loaded, edit /etc/drbd.conf with vi. The configuration used in this article is as follows:
global { usage-count yes; }
common { syncer { rate 100M; } }
resource r0 {
    protocol C;
    startup {
    }
    disk {
        on-io-error detach;
        #size 1G;
    }
    net {
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   192.168.149.128:7898;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   192.168.149.129:7898;
        meta-disk internal;
    }
}

After saving the configuration, run the following commands to initialize the resource:

drbdadm create-md r0
/etc/init.d/drbd restart
/etc/init.d/drbd status
The output is shown in the figure below:
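Besides the init script, the same information can be read from /proc/drbd, the standard DRBD 8.3 kernel interface; this is only a convenience check:

# cs: is the connection state, ro: the local/peer roles, ds: the disk states
cat /proc/drbd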
The steps above must be carried out on both servers. Once both are configured, run the following on node2 (the secondary):

/etc/init.d/drbd status

If both nodes show up as Secondary, we need to promote node1 to primary. On node1, run:

drbdadm -- --overwrite-data-of-peer primary all
mkfs.ext3 /dev/drbd0
mkdir /app
mount /dev/drbd0 /app

DRBD is now configured. Anything we write to /app is replicated, and if the master crashes or otherwise fails we can switch to the backup by hand without losing any data; the two servers effectively form a network RAID 1.
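For reference, a manual failover amounts to the same steps that the Keepalived notify scripts below automate; a minimal sketch using the resource name and mount point from this article:

# On the current primary: stop writers, release the device, demote it
umount /app
drbdadm secondary r0

# On the peer: promote the resource and mount the replicated volume
drbdadm primary r0
mkdir -p /app
mount /dev/drbd0 /app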
IV. Keepalived Configuration

wget http://www.keepalived.org/software/keepalived-1.1.15.tar.gz
tar -xzvf keepalived-1.1.15.tar.gz
cd keepalived-1.1.15
./configure
make
make install
DIR=/usr/local/
cp $DIR/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
cp $DIR/etc/sysconfig/keepalived /etc/sysconfig/
mkdir -p /etc/keepalived
cp $DIR/sbin/keepalived /usr/sbin/

Install Keepalived on both servers and configure it. Configure node1 (the master) first; its keepalived.conf is as follows:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_mysql {
    script "/data/sh/check_mysql.sh"
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.149.100
    }
    track_script {
        check_mysql
    }
}

Then create the check_mysql.sh health-check script with the following contents:
#!/bin/sh
# If no mysqld process is running, release the DRBD device and stop
# Keepalived so the VIP moves to the backup node
A=`ps -C mysqld --no-header | wc -l`
if [ $A -eq 0 ]; then
    /bin/umount /app/
    drbdadm secondary r0
    killall keepalived
fi
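Keepalived can only run the check if the script exists and is executable; /data/sh is the path used in the configuration above:

# Create the script directory and make the health check executable
mkdir -p /data/sh
chmod +x /data/sh/check_mysql.sh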
Next, configure node2 (the backup); its keepalived.conf is as follows:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_sync_group VI {
    group {
        VI_1
    }
    notify_master /data/sh/master.sh
    notify_backup /data/sh/backup.sh
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.149.100
    }
}

Create the master.sh notify script (run when this node takes over as master) with the following contents:
#!/bin/bash
# Promote the DRBD resource, mount the replicated volume, then start MySQL
drbdadm primary r0
/bin/mount /dev/drbd0 /app/
/etc/init.d/mysqld start

Create the backup.sh notify script (run when this node returns to backup) with the following contents:
#!/bin/bash
# Stop MySQL, unmount the volume, then demote the DRBD resource
/etc/init.d/mysqld stop
/bin/umount /dev/drbd0
drbdadm secondary r0
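With the scripts in place on both nodes (remember to make master.sh and backup.sh executable as well), start Keepalived and confirm that the VIP lands on node1. The mysql client call is only an illustrative check and assumes an account you can log in with:

# Start keepalived on both nodes
/etc/init.d/keepalived start

# On node1, the VIP 192.168.149.100 should appear on eth0
ip addr show eth0 | grep 192.168.149.100

# From a client, MySQL should answer on the VIP
mysql -h 192.168.149.100 -u root -p -e "select 1;"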
If a split-brain occurs, recover as follows. On the master (the node whose changes will be discarded), run:

drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

On the backup, run:

drbdadm connect r0
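Once both sides reconnect, DRBD resynchronizes the discarded node from its peer; progress can be watched in /proc/drbd until cs: returns to Connected and ds: to UpToDate/UpToDate:

# Refresh the DRBD status every second while the resync runs
watch -n 1 cat /proc/drbd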