Redis Cluster
Introduction to Redis Cluster

Redis officially introduced the cluster feature in version 3.0. Redis Cluster is a distributed, fault-tolerant in-memory K/V service; the commands it supports are a subset of what a standalone Redis instance offers. In particular, Redis Cluster does not support commands that touch multiple keys in different slots, because that would require moving data between nodes, losing standalone-Redis performance and risking unpredictable behavior under heavy load. The important characteristics of Redis Cluster:

(1) Sharding: the key space is split into 16384 slots, and each node is responsible for a subset of them.
(2) A degree of availability: the cluster keeps serving commands when some nodes go down or become unreachable.
(3) There is no central node and no proxy node; one of the main design goals is linear scalability.

1. Data sharding

The Redis Cluster key space is divided into 16384 (2^14) slots, so the maximum cluster size is also 16384 nodes (the recommended maximum is about 1000), and each master handles between 1 and 16384 slots. Once all 16384 slots have a master serving them, the cluster enters the stable "ok" state and starts processing data commands. When the cluster is not in a stable state, a reconfiguration can be performed so that every hash slot is served by exactly one node; reconfiguration means moving one or more slots from one node to another. A master can have any number of slaves, which take over when the master fails or is partitioned away. The cluster computes which slot a key belongs to as HASH_SLOT = CRC16(key) mod 16384, equivalently CRC16(key) & 16383, since 16384 is a power of two; CRC16 produces a 16-bit result.
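The slot arithmetic above can be checked offline. Below is a minimal Python sketch of the CRC16 variant the Redis Cluster spec mandates (CRC-16/XMODEM: polynomial 0x1021, initial value 0); the function names are illustrative, not from any Redis client library:

```python
def crc16(data: bytes) -> int:
    # CRC-16/XMODEM, the variant used by Redis Cluster:
    # polynomial 0x1021, initial value 0, no reflection, no final XOR.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    # HASH_SLOT = CRC16(key) mod 16384; masking with 16383 is
    # equivalent because 16384 is 2^14.
    return crc16(key) & 16383
```

The spec's check value holds for this implementation: crc16 of "123456789" is 0x31C3, and every key maps to a slot in 0..16383.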
2. Redis Cluster nodes

A node in a Redis Cluster stores not only keys and values but also the cluster state, including the mapping from keys to the nodes responsible for them. Nodes also discover other nodes automatically, detect nodes that are not working properly, and, when necessary, elect a new master from among a failed master's slaves. To perform these tasks, every node maintains a "cluster bus" connection to every other node: a TCP connection speaking a binary protocol. Over it, nodes use a gossip protocol to: a) propagate information about the cluster and thereby discover new nodes; b) send PING packets to check whether target nodes are alive; c) send cluster messages when specific events occur. The cluster bus is also used to publish and subscribe to information within the cluster.

Cluster nodes cannot proxy command requests, so when a node returns a -MOVED or -ASK redirection error, the client must forward the command to the right node itself. A client is free to send commands to any node in the cluster and, when needed, follow the information in a redirection error to the correct node, so in theory a client need not keep any cluster state. In practice, caching the key-to-node mapping substantially reduces the number of redirections and improves command throughput.

Every node is identified within the cluster by a unique ID: a 160-bit random number in hexadecimal, generated from /dev/urandom the first time the node starts. The node saves its ID in its configuration file and keeps using it as long as that file is not deleted. A node can change its IP and port without changing its ID; the cluster detects such changes automatically and broadcasts them to the other nodes via gossip. Each node carries the following associated information, which it sends to the others: a) the IP address and TCP port it uses; b) its flags; c) the hash slots it serves; d) the time it last sent a PING packet over the cluster bus; e) the time it last received a PONG packet in reply; f) the time at which the cluster marked the node as failed; g) its number of slaves; and, if the node is a slave, the node ID of its master.
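Since nodes never proxy requests, a smart client has to recognize the -MOVED and -ASK redirections described above and resend the command itself. A small Python sketch of parsing such an error line (a hypothetical helper, not part of any client library):

```python
def parse_redirect(err: str):
    # A redirection error looks like: "MOVED 3999 127.0.0.1:6381"
    # (the leading '-' is part of the RESP error framing).
    parts = err.lstrip('-').split()
    if len(parts) != 3 or parts[0] not in ('MOVED', 'ASK'):
        return None  # not a redirection error
    slot = int(parts[1])
    host, _, port = parts[2].rpartition(':')
    return parts[0], slot, host, int(port)
```

On MOVED a client should also refresh its cached slot map; on ASK it should retry only that one command against the indicated node, preceded by an ASKING command.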
With the basic characteristics of Redis Cluster covered, the following sections build a Redis Cluster step by step.
I. Install and configure Redis 3.2.1

yum -y install gcc zlib-devel jemalloc-devel ruby rubygems tcl
wget http://download.redis.io/releases/redis-3.2.1.tar.gz
tar xfz redis-3.2.1.tar.gz
cd redis-3.2.1
make
cp src/redis-trib.rb /bin
cp src/redis-server /bin
cp src/redis-cli /bin
gem install redis --version 3.2.1
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
echo 1024 > /proc/sys/net/core/somaxconn
mkdir /data/redis/{6300,6301} -p
# tail -5 /etc/rc.local
touch /var/lock/subsys/local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 512 > /proc/sys/net/core/somaxconn
redis-server /data/redis/redis_6300.conf
redis-server /data/redis/redis_6301.conf
#

# cat /data/redis/redis_6300.conf
daemonize yes
bind 192.168.0.59
port 6300
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
maxmemory 10gb
databases 16
dir /data/redis/6300
slave-serve-stale-data yes
logfile "/data/redis/6300/redis_6300.log"
# slaves are read-only
slave-read-only yes
# not the default
repl-disable-tcp-nodelay yes
slave-priority 100
# enable AOF persistence
appendonly yes
# fsync the AOF once per second
appendfsync everysec
# do not fsync new writes while an AOF rewrite is in progress
no-appendfsync-on-rewrite yes
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
# enable Redis Cluster mode
cluster-enabled yes
cluster-config-file /data/redis/6300/nodes-6300.conf
# node-to-node timeout threshold, in milliseconds
cluster-node-timeout 15000
# how many healthy slaves a master must keep before it can donate one
# to a master whose slaves are all gone
cluster-migration-barrier 1
# if set to yes, the cluster stops accepting writes whenever part of the
# key space is uncovered -- most commonly because a node went down
cluster-require-full-coverage no
# for Redis instances deployed on the same machine, stagger this value
# (e.g. 80-100 across instances) so they do not all fork for an AOF
# rewrite at the same moment and eat memory together
auto-aof-rewrite-percentage 80
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
#
# cat /data/redis/redis_6301.conf
daemonize yes
bind 192.168.0.59
port 6301
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
maxmemory 10gb
databases 16
dir /data/redis/6301
slave-serve-stale-data yes
logfile "/data/redis/6301/redis_6301.log"
slave-read-only yes
repl-disable-tcp-nodelay yes
slave-priority 100
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite yes
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
cluster-enabled yes
cluster-config-file /data/redis/6301/nodes-6301.conf
cluster-node-timeout 15000
cluster-migration-barrier 1
cluster-require-full-coverage no
# staggered relative to the 6300 instance on the same machine
auto-aof-rewrite-percentage 100
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
#

# redis-server /data/redis/redis_6300.conf
# redis-server /data/redis/redis_6301.conf
II. Creating the cluster with create

The create subcommand takes an optional --replicas argument specifying how many slaves each master gets. The simplest form is:

$ ruby redis-trib.rb create 10.180.157.199:6379 10.180.157.200:6379 10.180.157.201:6379
With one slave per master, the command becomes:

$ ruby redis-trib.rb create --replicas 1 10.180.157.199:6379 10.180.157.200:6379 10.180.157.201:6379 10.180.157.202:6379 10.180.157.205:6379 10.180.157.208:6379
The create flow is as follows:

1. A ClusterNode object is built for each node, which includes connecting to it. Each node is checked to be a standalone instance with an empty db, and load_info imports its node information.
2. The number of master nodes passed in is checked: at least 3 masters are required to form a cluster.
3. The slot count for each master is computed and slaves are assigned to masters, roughly like this:
   - Nodes are first grouped by host, so that masters can be spread across as many hosts as possible.
   - The host lists are walked round-robin, popping one node from each list into an interleaved array until every node has been popped.
   - The first <number of masters> entries of interleaved become the masters array.
   - The slot count per master, slots_per_node, is the total slot count divided by the number of masters, rounded down.
   - The masters array is walked, giving each master slots_per_node slots; the last master receives whatever remains, up to slot 16383.
   - Slaves are then assigned, with the algorithm trying to keep a master and its slaves on different hosts. Nodes left over after each master has its requested slave count are also given a master. The masters array is walked twice:
     * The first pass picks replicas slaves per master from the remaining nodes. Each slave is the first remaining node whose host differs from the master's; if no such node exists, the first remaining node is taken.
     * The second pass handles the leftovers that appear when the node count is not an exact multiple of replicas. It walks the same way, but assigns only one slave at a time, until all remaining nodes have been handed out.
4. The planned allocation is printed, and the user is prompted to type "yes" to confirm creating the cluster that way.
5. After "yes", flush_nodes_config applies the plan: slots are assigned to the masters and slaves are told to replicate their masters. For nodes that have not yet shaken hands (cluster meet), the replicate step cannot complete; that is fine, because flush_nodes_config returns quickly on error and is run again after the handshake.
6. Each node is assigned a config epoch, one greater than the previous node's.
7. The nodes then shake hands: every other node in the list meets the first node.
8. Once a second, the tool checks whether the nodes have converged, using ClusterNode's get_config_signature method: it fetches each node's cluster nodes output, sorts the nodes, and assembles a string of the form node_id1:slots|node_id2:slot2|...; the handshake is considered successful once every node produces the same string.
9. flush_nodes_config is then run once more, mainly to complete the slave replication step.
10. Finally, check_cluster performs a full health check: the same signature check as during the handshake, confirmation that no slots are migrating, and confirmation that every slot has been assigned.
11. The flow ends with: All 16384 slots covered.
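The slot-splitting arithmetic of step 3 can be sketched as follows. This Python function reproduces the contiguous ranges redis-trib assigns to three masters (0-5460, 5461-10922, 10923-16383); the exact rounding inside redis-trib.rb may differ slightly, so treat it as an approximation:

```python
def allocate_slots(n_masters: int, total: int = 16384):
    # Split [0, total) into n_masters contiguous (first, last) ranges,
    # rounding the boundaries so the sizes differ by at most one slot.
    ranges, first = [], 0
    for i in range(n_masters):
        last = round((i + 1) * total / n_masters) - 1
        ranges.append((first, last))
        first = last + 1
    return ranges
```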
III. Example

1. Create the cluster with three masters

# redis-trib.rb create 192.168.0.59:6300 192.168.1.95:6300 192.168.2.236:6300
>>> Creating cluster
>>> Performing hash slots allocation on 3 nodes...
Using 3 masters:
192.168.0.59:6300
192.168.1.95:6300
192.168.2.236:6300
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 192.168.0.59:6300)
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
#
2. Check the cluster state

# redis-trib.rb check 192.168.1.95:6300
>>> Performing Cluster Check (using node 192.168.1.95:6300)
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3. Add one slave to each master

# redis-trib.rb add-node --slave --master-id 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6301 192.168.1.95:6300
>>> Adding node 192.168.2.236:6301 to cluster 192.168.1.95:6300
>>> Performing Cluster Check (using node 192.168.1.95:6300)
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.2.236:6301 to make it join the cluster.
Waiting for the cluster to join....
>>> Configure node as replica of 192.168.2.236:6300.
[OK] New node added correctly.
# redis-trib.rb add-node --slave --master-id b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6301 192.168.1.95:6300
>>> Adding node 192.168.0.59:6301 to cluster 192.168.1.95:6300
>>> Performing Cluster Check (using node 192.168.1.95:6300)
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301
   slots: (0 slots) slave
   replicates 1bf7e417c5e90bef3970835d50e710d703e80189
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.59:6301 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.0.59:6300.
[OK] New node added correctly.
#
# redis-trib.rb add-node --slave --master-id e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6301 192.168.0.59:6300
>>> Adding node 192.168.1.95:6301 to cluster 192.168.0.59:6300
>>> Performing Cluster Check (using node 192.168.0.59:6300)
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
S: 44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301
   slots: (0 slots) slave
   replicates 1bf7e417c5e90bef3970835d50e710d703e80189
S: 430f55f658999e2e174bd7aa6b4590cb03620780 192.168.0.59:6301
   slots: (0 slots) slave
   replicates b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.95:6301 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.1.95:6300.
[OK] New node added correctly.
# redis-trib.rb check 192.168.1.95:6300
>>> Performing Cluster Check (using node 192.168.1.95:6300)
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 430f55f658999e2e174bd7aa6b4590cb03620780 192.168.0.59:6301
   slots: (0 slots) slave
   replicates b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301
   slots: (0 slots) slave
   replicates 1bf7e417c5e90bef3970835d50e710d703e80189
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 43c64a42de0a4996ac25b2c469d8b890a6a0e9cf 192.168.1.95:6301
   slots: (0 slots) slave
   replicates e877cb218010461d67affe72e217b92c3b7df3ce
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
#
# redis-trib.rb info 192.168.1.95:6300
192.168.1.95:6300 (e877cb21...) -> 0 keys | 5462 slots | 1 slaves.
192.168.2.236:6300 (1bf7e417...) -> 0 keys | 5461 slots | 1 slaves.
192.168.0.59:6300 (b7ce0951...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
#
# redis-trib.rb info 192.168.1.95:6301
192.168.1.95:6300 (e877cb21...) -> 0 keys | 5462 slots | 1 slaves.
192.168.0.59:6300 (b7ce0951...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.236:6300 (1bf7e417...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
#

Nodes in a Redis Cluster come in two kinds, masters and slaves. A master can have any number of slaves; data on a master is replicated to its slaves asynchronously, and when a master drops out of the cluster for whatever reason, Redis Cluster automatically promotes one of that master's slaves to be the new master. For this reason, the create subcommand of redis-trib.rb provides the --replicas option to specify how many slaves each master in the cluster should have: with six Redis instances and a --replicas value of 1, the resulting cluster has three masters and three slaves, one slave per master.
IV. Client operations

# redis-cli -h pc2 -p 6300
pc2:6300> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_sent:1571
cluster_stats_messages_received:1571
pc2:6300> cluster node
(error) ERR Wrong CLUSTER subcommand or number of arguments
pc2:6300> cluster nodes
430f55f658999e2e174bd7aa6b4590cb03620780 192.168.0.59:6301 slave b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 0 1467279554369 1 connected
1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300 master - 0 1467279555371 3 connected 10923-16383
44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301 slave 1bf7e417c5e90bef3970835d50e710d703e80189 0 1467279553367 3 connected
b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300 master - 0 1467279551362 1 connected 0-5460
43c64a42de0a4996ac25b2c469d8b890a6a0e9cf 192.168.1.95:6301 slave e877cb218010461d67affe72e217b92c3b7df3ce 0 1467279552364 2 connected
e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300 myself,master - 0 0 2 connected 5461-10922
pc2:6300>
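Each line of the cluster nodes output above follows a fixed field order: node ID, address, flags, master ID (or "-" for a master), ping-sent, pong-received, config epoch, link state, and finally the slot ranges the node serves. A Python sketch that splits one line into named fields (the helper name is made up for illustration):

```python
def parse_cluster_node_line(line: str) -> dict:
    # Field order follows the CLUSTER NODES output format.
    f = line.split()
    return {
        'id': f[0],
        'addr': f[1],
        'flags': f[2].split(','),
        'master_id': f[3],        # '-' for masters
        'ping_sent': int(f[4]),
        'pong_recv': int(f[5]),
        'config_epoch': int(f[6]),
        'link_state': f[7],
        'slots': f[8:],           # empty for slaves
    }

# One of the master lines from the session above:
node = parse_cluster_node_line(
    "1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300 "
    "master - 0 1467279555371 3 connected 10923-16383")
```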
# redis-cli -h pc2 -p 6300
pc2:6300> set name "tom"
OK
pc2:6300> exit
# redis-cli -c -h pc2 -p 6300
pc2:6300> set age 20
-> Redirected to slot located at 192.168.0.59:6300
OK
192.168.0.59:6300> get name
-> Redirected to slot located at 192.168.1.95:6300
"tom"
192.168.1.95:6300> get age
-> Redirected to slot located at 192.168.0.59:6300
"20"
192.168.0.59:6300> exit
#

Starting redis-cli with the -c flag enables cluster mode: on each read or write, the Redirected messages show which hash slot the key maps to and which node stores it.
V. Adding new nodes

# mkdir /data/redis/6302
# cp /data/redis/redis_630{1,2}.conf
# vim /data/redis/redis_6302.conf
# scp /data/redis/redis_6302.conf pc2:/data/redis/
redis_6302.conf                                 100% 1071   1.1KB/s   00:00
# ssh pc2 'mkdir /data/redis/6302'
# redis-server /data/redis/redis_6302.conf
# ssh pc2 'redis-server /data/redis/redis_6302.conf'
# lsof -i:6302
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 2877 root    4u  IPv4  12239      0t0  TCP *:6302 (LISTEN)
# ssh pc2 'lsof -i:6302'
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 2959 root    4u  IPv4  13429      0t0  TCP *:6302 (LISTEN)
#
# redis-trib.rb add-node 192.168.0.59:6302 192.168.0.59:6300   # add a new master to the existing cluster
# redis-trib.rb check 192.168.0.59:6302                        # look up the ID of each node in the cluster
# redis-trib.rb add-node --slave --master-id eb6cef331778f9d8feaf80a1e1b41918736c6d5e 192.168.1.95:6302 192.168.0.59:6300   # add a slave for the new master
# redis-trib.rb reshard 192.168.2.236:6300                     # rebalance the slots
# redis-trib.rb info 192.168.2.236:6300
192.168.2.236:6300 (1bf7e417...) -> 0 keys | 4462 slots | 1 slaves.
192.168.0.59:6302 (eb6cef33...) -> 2 keys | 2999 slots | 1 slaves.
192.168.0.59:6300 (b7ce0951...) -> 0 keys | 4462 slots | 1 slaves.
192.168.1.95:6300 (e877cb21...) -> 0 keys | 4461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
#
VI. Taking nodes offline

First migrate the slots held by the master to be removed to other nodes:

# redis-trib.rb reshard 192.168.2.236:6300
Then confirm that the master to be removed now holds 0 slots:

# redis-trib.rb info 192.168.2.236:6300
192.168.2.236:6300 (1bf7e417...) -> 0 keys | 4462 slots | 1 slaves.
192.168.0.59:6302 (eb6cef33...) -> 2 keys | 7461 slots | 2 slaves.
192.168.0.59:6300 (b7ce0951...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.95:6300 (e877cb21...) -> 0 keys | 4461 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
Check the IDs of the cluster nodes again:

# redis-trib.rb check 192.168.2.236:6300
>>> Performing Cluster Check (using node 192.168.2.236:6300)
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:11922-16383 (4462 slots) master
   1 additional replica(s)
M: eb6cef331778f9d8feaf80a1e1b41918736c6d5e 192.168.0.59:6302
   slots:0-6461,10923-11921 (7461 slots) master
   2 additional replica(s)
S: 430f55f658999e2e174bd7aa6b4590cb03620780 192.168.0.59:6301
   slots: (0 slots) slave
   replicates eb6cef331778f9d8feaf80a1e1b41918736c6d5e
M: b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc 192.168.0.59:6300
   slots: (0 slots) master
   0 additional replica(s)
S: a120a104a10d05cc5083abeba6a65b729e67cb65 192.168.1.95:6302
   slots: (0 slots) slave
   replicates eb6cef331778f9d8feaf80a1e1b41918736c6d5e
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:6462-10922 (4461 slots) master
   1 additional replica(s)
S: 43c64a42de0a4996ac25b2c469d8b890a6a0e9cf 192.168.1.95:6301
   slots: (0 slots) slave
   replicates e877cb218010461d67affe72e217b92c3b7df3ce
S: 44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301
   slots: (0 slots) slave
   replicates 1bf7e417c5e90bef3970835d50e710d703e80189
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Then delete the slave of the master being taken offline:

# redis-trib.rb del-node 192.168.2.236:6300 430f55f658999e2e174bd7aa6b4590cb03620780
>>> Removing node 430f55f658999e2e174bd7aa6b4590cb03620780 from cluster 192.168.2.236:6300
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Finally take the master itself offline:

# redis-trib.rb del-node 192.168.2.236:6300 b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc
>>> Removing node b7ce0951962bf4341ef58a2e3cb8bd7b9bb9efcc from cluster 192.168.2.236:6300
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Check the cluster state once more:

# redis-trib.rb check 192.168.2.236:6300
>>> Performing Cluster Check (using node 192.168.2.236:6300)
M: 1bf7e417c5e90bef3970835d50e710d703e80189 192.168.2.236:6300
   slots:11922-16383 (4462 slots) master
   1 additional replica(s)
M: eb6cef331778f9d8feaf80a1e1b41918736c6d5e 192.168.0.59:6302
   slots:0-6461,10923-11921 (7461 slots) master
   1 additional replica(s)
S: a120a104a10d05cc5083abeba6a65b729e67cb65 192.168.1.95:6302
   slots: (0 slots) slave
   replicates eb6cef331778f9d8feaf80a1e1b41918736c6d5e
M: e877cb218010461d67affe72e217b92c3b7df3ce 192.168.1.95:6300
   slots:6462-10922 (4461 slots) master
   1 additional replica(s)
S: 43c64a42de0a4996ac25b2c469d8b890a6a0e9cf 192.168.1.95:6301
   slots: (0 slots) slave
   replicates e877cb218010461d67affe72e217b92c3b7df3ce
S: 44102d5ae0638dc1369af09787f10af6b93169df 192.168.2.236:6301
   slots: (0 slots) slave
   replicates 1bf7e417c5e90bef3970835d50e710d703e80189
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
#