Redis Cluster Setup
1. Servers:
192.168.1.201
192.168.1.204
192.168.1.205
192.168.1.206
192.168.1.207
192.168.1.208
Six servers are needed because, with a data redundancy of 1, a Redis Cluster requires at least 3 masters and 3 slaves.
2. Installation Steps
(1) Preparation
mkdir /usr/local/redis_cluster  (holds the files needed for the installation)
cd /usr/local/redis_cluster
Upload redis-3.2.7.tar.gz and rubygems-2.6.10.tgz (the remaining dependencies can be installed with yum; personally I think newer versions give better compatibility. In this environment the other dependencies were already installed, so they are not installed again here.)
Install zlib
rpm -qa | grep zlib  (check whether it is already installed on the system)
If it is not, install zlib-1.2.6.tar:
./configure
make && make install
Install ruby
rpm -qa | grep ruby
If it is not installed, build ruby 1.9.2 from source:
./configure --prefix=/usr/local/ruby
make && make install
cp ruby /usr/local/bin/
Here we install it with yum instead:
yum install -y ruby.x86_64 ruby-devel.x86_64 ruby-rdoc.x86_64
Install rubygems (may require internet access)
tar zxvf rubygems-2.6.10.tgz
To avoid errors, upgrade gem first:
gem update --system
Then run the installer:
ruby setup.rb
cp bin/gem /usr/local/bin/
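Note that redis-trib.rb also depends on the Ruby redis client library; if it is not already present it can be installed with gem (may require internet access):
gem install redis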
(2) Modify the Configuration File
Extract redis-3.2.7.tar.gz:
tar -zxvf redis-3.2.7.tar.gz
cd redis-3.2.7
make MALLOC=libc  (a plain make may fail in jemalloc, originating in zmalloc.c of the source; Redis 2.4 and later bundles jemalloc by default, so the allocator has to be specified. I have not tried explicitly specifying jemalloc, so the standard libc allocator is used here)
make install PREFIX=/usr/local/redis
cp src/redis-server /usr/local/bin/
cp src/redis-cli /usr/local/bin/
cp src/redis-trib.rb /usr/local/bin/
sed -e "s/#.*//g" redis.conf | awk '{if (length!=0) print $0}' > redis_6379.conf  (strip comments and blank lines to make the file easier to work with)
Edit redis_6379.conf (the bind address and port below are for the 192.168.1.206 node; adjust them on each host):
bind 192.168.1.206  (binding the host's address is recommended; otherwise client redirection will fail)
protected-mode no  (introduced in 3.2; when set to yes, only local 127.0.0.1 connections are allowed)
port 7379  (listening port)
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no  (do not run as a background daemon)
supervised no
pidfile /var/run/redis_7379.pid
loglevel notice
logfile "/data/redis/logs/7379.log"  (log file path)
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename 7379.rdb  (RDB dump file)
dir /data/redis/data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes  (enable cluster mode)
cluster-node-timeout 15000  (cluster node timeout in milliseconds; a node unreachable for longer than this is considered down)
cluster-migration-barrier 1  (a slave may migrate to another master only if its own master still has at least 1 working slave)
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxmemory 4G
maxmemory-policy allkeys-lru  (eviction policy used when the memory limit is reached)
cluster-config-file nodes-7379.conf  (cluster state file; it is created automatically, no need to create it by hand)
cluster-require-full-coverage no  (whether all hash slots must be covered for the cluster to serve requests; this must be no, otherwise when a node goes down and the 16384 slots are no longer fully covered, the whole cluster stops serving)
Memory eviction policies (a runtime check with CONFIG GET/SET is shown after the list):
[*] noeviction: no eviction; even when the memory limit is reached nothing is removed, and every command that would grow memory usage returns an error
[*] allkeys-lru: evict the least recently used keys first, to make room for new data
[*] volatile-lru: evict the least recently used keys, but only among keys with an expire set, to make room for new data
[*] allkeys-random: evict randomly chosen keys from the whole keyspace to make room for new data
[*] volatile-random: evict randomly chosen keys, but only among keys with an expire set, to make room for new data
[*] volatile-ttl: evict the keys with the shortest remaining time to live (TTL), only among keys with an expire set, to make room for new data
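As noted above, the policy can also be inspected and changed at runtime with CONFIG GET/SET; a quick sketch against the 192.168.1.206 instance (adjust host and port for your node):
redis-cli -h 192.168.1.206 -p 7379 config get maxmemory-policy
redis-cli -h 192.168.1.206 -p 7379 config set maxmemory-policy allkeys-lru
redis-cli -h 192.168.1.206 -p 7379 config get maxmemory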
(3) Start the Instances
There is no need to configure master/slave roles for the instances beforehand; the relationships are chosen automatically when the cluster is created.
cp redis-3.2.7/redis.conf ./redis_6379.conf  (skip this if redis_6379.conf was already created with the sed command above)
mkdir -p /data/redis/{data,logs}
Prepare in advance:
Redis logs warnings at startup; they do not stop it from working, but may cause problems over time, so they are worth addressing.
sysctl vm.overcommit_memory=1
Kernel memory allocation (overcommit) policy values; a sketch for making the setting persistent follows the list:
0: the kernel checks whether enough free memory is available for the request; if so the allocation succeeds, otherwise it fails and an error is returned to the process.
1: the kernel allows allocating all of physical memory, regardless of the current memory state.
2: the kernel refuses allocations beyond the commit limit (swap space plus a configurable share of physical memory); overcommit is not allowed.
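sysctl changes only the running kernel; to keep the setting after a reboot it can also be appended to /etc/sysctl.conf (a minimal sketch):
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p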
echo never > /sys/kernel/mm/transparent_hugepage/enabled  (add this to rc.local so it persists across reboots)
The corresponding Redis warning says that transparent huge pages are enabled, which can cause latency and memory usage problems.
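A minimal rc.local sketch (assuming the system still executes /etc/rc.local at boot; disabling defrag as well is a common extra step, not required by the warning itself):
# /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag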
Start the 6 instances (they must be empty nodes with no data):
redis-server /usr/local/redis_cluster/redis_6379.conf >/data/redis/logs/6379.log 2>&1 &  (mind the port number)
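The same command is run on each of the other five servers, pointing at that host's own config file, log file and port; for example (the file names below are hypothetical, following the same naming pattern):
redis-server /usr/local/redis_cluster/redis_6380.conf >/data/redis/logs/6380.log 2>&1 &
redis-server /usr/local/redis_cluster/redis_7381.conf >/data/redis/logs/7381.log 2>&1 &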
Check the instance status:
redis-cli -c -h 192.168.1.201 -p 6379 cluster nodes
At this point every instance reports itself as a master, and the log may complain that the cluster configuration file was not found; this is expected, don't worry.
(4) Create the Cluster
redis-trib.rb create --replicas 1 192.168.1.201:6379 192.168.1.204:6380 192.168.1.205:6381 192.168.1.206:7379 192.168.1.207:7380 192.168.1.208:7381  (the cluster is created with the redis-trib tool here; building it manually takes more steps and is mentioned later only to explain the underlying mechanism)
# redis-trib.rb create --replicas 1 192.168.1.201:6379 192.168.1.204:6380 192.168.1.205:6381 192.168.1.206:7379 192.168.1.207:7380 192.168.1.208:7381
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.208:7381
192.168.1.207:7380
192.168.1.206:7379
Adding replica 192.168.1.205:6381 to 192.168.1.208:7381
Adding replica 192.168.1.204:6380 to 192.168.1.207:7380
Adding replica 192.168.1.201:6379 to 192.168.1.206:7379
S: 187f8d5d047da792b598329cebd537d1b54daaec 192.168.1.201:6379
   replicates 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4
S: 9ecc74cf38b846656185e460c360761272de55b6 192.168.1.204:6380
   replicates 810783993b6e59aec3b804e27ab1482a1d60bda9
S: 55922fb281141a1f6e613fe45c8cfe4e946c0588 192.168.1.205:6381
   replicates b6605d95f8609110a9f8cbee371e7ab4f22aa4e3
M: 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4 192.168.1.206:7379
   slots:10923-16383 (5461 slots) master
M: 810783993b6e59aec3b804e27ab1482a1d60bda9 192.168.1.207:7380
   slots:5461-10922 (5462 slots) master
M: b6605d95f8609110a9f8cbee371e7ab4f22aa4e3 192.168.1.208:7381
   slots:0-5460 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.1.201:6379)
S: 187f8d5d047da792b598329cebd537d1b54daaec 192.168.1.201:6379
   slots: (0 slots) slave
   replicates 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4
M: 810783993b6e59aec3b804e27ab1482a1d60bda9 192.168.1.207:7380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 55922fb281141a1f6e613fe45c8cfe4e946c0588 192.168.1.205:6381
   slots: (0 slots) slave
   replicates b6605d95f8609110a9f8cbee371e7ab4f22aa4e3
S: 9ecc74cf38b846656185e460c360761272de55b6 192.168.1.204:6380
   slots: (0 slots) slave
   replicates 810783993b6e59aec3b804e27ab1482a1d60bda9
M: b6605d95f8609110a9f8cbee371e7ab4f22aa4e3 192.168.1.208:7381
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4 192.168.1.206:7379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
All 16384 slots covered.
The 1 in the command means one replica per master (a 1:1 master-to-slave ratio), which here gives 3 masters and 3 slaves.
We can also see that the master/slave roles were assigned automatically.
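A quick sanity check: with the -c option redis-cli follows MOVED redirections, so a key written through one node can be read back through another (foo and bar are just example values):
redis-cli -c -h 192.168.1.201 -p 6379 set foo bar
redis-cli -c -h 192.168.1.206 -p 7379 get foo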
(5) Check the Cluster Status
redis-trib.rb check 192.168.1.204:6380
# redis-trib.rb check 192.168.1.204:6380
>>> Performing Cluster Check (using node 192.168.1.204:6380)
S: 9ecc74cf38b846656185e460c360761272de55b6 192.168.1.204:6380
   slots: (0 slots) slave
   replicates 810783993b6e59aec3b804e27ab1482a1d60bda9
M: b6605d95f8609110a9f8cbee371e7ab4f22aa4e3 192.168.1.208:7381
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 55922fb281141a1f6e613fe45c8cfe4e946c0588 192.168.1.205:6381
   slots: (0 slots) slave
   replicates b6605d95f8609110a9f8cbee371e7ab4f22aa4e3
M: 810783993b6e59aec3b804e27ab1482a1d60bda9 192.168.1.207:7380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 187f8d5d047da792b598329cebd537d1b54daaec 192.168.1.201:6379
   slots: (0 slots) slave
   replicates 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4
M: 6f1a1dda2742ce5fc7e3699f846ef8c47ab182f4 192.168.1.206:7379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
All 16384 slots covered.
No warnings or errors, which means the cluster started successfully and is in an ok state.
At this point the whole cluster has been set up.
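Besides redis-trib.rb check, the cluster state can also be confirmed from any node with CLUSTER INFO; a healthy cluster reports cluster_state:ok and cluster_slots_assigned:16384:
redis-cli -h 192.168.1.204 -p 6380 cluster info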
References:
http://blog.csdn.net/dc_726/article/details/48552531
http://blog.csdn.net/myrainblues/article/details/25881535/