
[Experience sharing] Redis Sentinel mode

master : 192.168.20.121

slave1 : 192.168.20.120

slave2 : 192.168.20.119

Install the same Redis version (redis-3.0.5.tar.gz) on all three servers.

Installation details are omitted; a minimal source-build sketch is included below for reference.
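
A minimal sketch of building from source (my addition; it assumes a standard gcc/make toolchain and that /usr/local/redis/bin is added to PATH so redis-server, redis-cli and redis-sentinel can be run directly):

tar xzf redis-3.0.5.tar.gz
cd redis-3.0.5
make
make install PREFIX=/usr/local/redis
# copy the sample config files shipped with the source tree
cp redis.conf sentinel.conf /usr/local/redis/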

The Redis server configuration is essentially the same on every node (the slaves only add a slaveof line):

master :

cd /usr/local/redis/

cat redis.conf

bind 0.0.0.0
daemonize yes
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/usr/local/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes


redis-slave1:

cat redis.conf

bind 0.0.0.0
daemonize yes
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/usr/local/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slaveof 192.168.20.121 6379    # the slave has this one extra line compared to the master; the IP and port point to the master
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes


redis-slave2:

The redis.conf on slave2 is exactly the same as on slave1 (including the slaveof 192.168.20.121 6379 line), so it is not repeated here.
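
As a side note (not part of the original walkthrough), replication can also be switched on at runtime instead of via redis.conf; a sketch against this topology:

redis-cli -h 192.168.20.120 -p 6379 slaveof 192.168.20.121 6379
redis-cli -h 192.168.20.119 -p 6379 slaveof 192.168.20.121 6379

A runtime SLAVEOF is lost on restart unless it is persisted with CONFIG REWRITE, which is why the config-file approach used here is the durable one.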


The Sentinel configuration file is also identical on all three nodes:

master :

cat sentinel.conf

port 26379
daemonize yes
dir /tmp
logfile "/usr/local/redis/sentinel.log"
sentinel monitor mymaster 192.168.20.121 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
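
For reference, an annotated copy of these four directives (the comments are my explanation, not part of the actual file):

# watch the master named mymaster at 192.168.20.121:6379;
# the trailing 2 is the quorum: at least 2 Sentinels must agree before the master is considered objectively down
sentinel monitor mymaster 192.168.20.121 6379 2
# mark the master subjectively down after 30 seconds without a valid reply
sentinel down-after-milliseconds mymaster 30000
# after a failover, resynchronize the remaining slaves with the new master one at a time
sentinel parallel-syncs mymaster 1
# consider a failover attempt failed after 3 minutes
sentinel failover-timeout mymaster 180000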

slave1 and slave2:

The sentinel.conf on both slaves is exactly the same as on the master, so it is not repeated here.

Start Redis and Sentinel (on all nodes):

redis-server /usr/local/redis/redis.conf

redis-sentinel /usr/local/redis/sentinel.conf
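
Once everything is up, each Sentinel should already know who the current master is; a quick check against one of the Sentinel ports (output omitted):

redis-cli -h 192.168.20.121 -p 26379 sentinel get-master-addr-by-name mymaster
redis-cli -h 192.168.20.121 -p 26379 sentinel masters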

Check the role of each Redis instance:

121:

redis-cli -h 192.168.20.121 -p 6379 INFO|grep role

role:master

120:

redis-cli -h 192.168.20.120 -p 6379 INFO|grep role

role:slave

119:

redis-cli -h 192.168.20.119 -p 6379 INFO|grep role

role:slave

Check the master's replication info:

redis-cli -h 192.168.20.121 -p 6379 info Replication

# Replication
role:master
connected_slaves:2
slave0:ip=192.168.20.120,port=6379,state=online,offset=42097,lag=0            # slave node
slave1:ip=192.168.20.119,port=6379,state=online,offset=42097,lag=0            # slave node
master_repl_offset:42097
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:42096


The master has two slave nodes, and both are currently connected.

If you now open the master's sentinel.conf, you can see that Sentinel has automatically rewritten it with the discovered slaves and sentinels:

port 26379
daemonize yes
dir "/tmp"
logfile "/usr/local/redis/sentinel.log"
sentinel monitor mymaster 192.168.20.121 6379 2
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel known-slave mymaster 192.168.20.120 6379
# Generated by CONFIG REWRITE
sentinel known-slave mymaster 192.168.20.119 6379
sentinel known-sentinel mymaster 192.168.20.119 26379 4cce67dc6006ce85af935bba7bcbafabbd640126
sentinel known-sentinel mymaster 192.168.20.120 26379 f37fbb62618de39fb7a3cb204f00f163164405d2
sentinel current-epoch 0
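
The same topology information can also be queried from a running Sentinel instead of reading the rewritten file, for example:

redis-cli -h 192.168.20.121 -p 26379 sentinel slaves mymaster
redis-cli -h 192.168.20.121 -p 26379 sentinel sentinels mymaster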

Check the slaves' replication info:

119 :

redis-cli -h 192.168.20.119 -p 6379 info Replication

# Replication
role:slave
master_host:192.168.20.121
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:117104
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

120 :

redis-cli -h 192.168.20.120 -p 6379 info Replication

# Replication
role:slave
master_host:192.168.20.121
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:135963
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

The Redis Sentinel setup is now complete.

Next, simulate a master failure and check whether Sentinel fails over automatically.


Stop the master:

redis-cli -h 192.168.20.121

192.168.20.121:6379> shutdown
not connected> quit
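
While the master is down, the failover can be watched from any Sentinel. Assuming the log path set in sentinel.conf, events such as +sdown, +odown and +switch-master should show up within roughly down-after-milliseconds (30 seconds here):

tail -f /usr/local/redis/sentinel.log

# alternatively, subscribe to all Sentinel events on one of the surviving nodes
redis-cli -h 192.168.20.119 -p 26379 psubscribe '*'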

[root@bogon redis]# redis-cli -h 192.168.20.119 -p 6379 INFO|grep role

role:master

[root@bogon redis]# redis-cli -h 192.168.20.120 -p 6379 INFO|grep role

role:slave

As shown above, 119 has been promoted to master.


When the original master comes back up, check its role:

121:

redis-cli -h 192.168.20.121 -p 6379 INFO|grep role

role:slave


Check how many slaves 119 has now:

redis-cli  -h 192.168.20.119 -p 6379 info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.20.120,port=6379,state=online,offset=77642,lag=0
slave1:ip=192.168.20.121,port=6379,state=online,offset=77499,lag=1
master_repl_offset:77642
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:77641

119 is now the master, with two slave nodes: 121 and 120.
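
As a final sanity check (my addition, not part of the original steps), a write to the new master should be readable from a slave; note that the slaves themselves reject writes because slave-read-only is enabled:

redis-cli -h 192.168.20.119 -p 6379 set failover_test ok
redis-cli -h 192.168.20.120 -p 6379 get failover_test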






