car.3205 posted on 2019-1-8 10:12:28

Setting Up a ZooKeeper Pseudo-Distributed Cluster and a Fully Distributed Cluster

Basic concepts of ZooKeeper clusters
  Setting up a ZooKeeper cluster:


[*]A zk cluster has master and slave (leader and follower) nodes, a heartbeat mechanism, and an election mode
[*]Configure the data file myid with the values 1/2/3, corresponding to server.1/2/3
[*]Use the zkCli.sh -server &lt;ip&gt;:&lt;port&gt; command to check whether the cluster is configured successfully
  Like most other clusters, a ZooKeeper cluster uses a master/slave structure. When building one, you need at least three machines, because with fewer than three a new leader cannot be elected. Election means that when the cluster's master node goes down, the two remaining machines vote between themselves and one of them becomes the new master. When the failed master comes back, it rejoins the cluster, but as a slave node rather than as the master. As shown below:
[Figure: leader election in a ZooKeeper master/slave cluster]
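The reason three machines is the minimum comes down to majority voting: an ensemble of N servers needs floor(N/2) + 1 votes to elect a leader, so a 3-node cluster survives one failure, while a 2-node cluster survives none. A quick sketch of the arithmetic (the helper functions are illustrative, not part of ZooKeeper):

```shell
#!/bin/sh
# Minimum number of votes needed for a quorum of N servers.
quorum() {
    echo $(( $1 / 2 + 1 ))
}

# Largest number of failed servers an N-node ensemble can tolerate.
tolerated() {
    echo $(( $1 - ($1 / 2 + 1) ))
}

echo "3 nodes: quorum=$(quorum 3), tolerates $(tolerated 3) failure(s)"
echo "2 nodes: quorum=$(quorum 2), tolerates $(tolerated 2) failure(s)"
```

This is also why production ensembles use odd sizes: 4 nodes tolerate no more failures than 3 do.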

Building a pseudo-distributed ZooKeeper cluster on a single machine
  This section covers a single-machine, pseudo-distributed ZooKeeper installation. The official download address is:

  https://archive.apache.org/dist/zookeeper/

  I am using version 3.4.11 here, so find that version, open its page, and copy the .tar.gz download link to fetch it on the Linux machine. The commands are as follows:

# cd /usr/local/src/
# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
  After the download completes, extract it into the /usr/local/ directory:

# tar -zxvf zookeeper-3.4.11.tar.gz -C /usr/local/
# cd ../zookeeper-3.4.11/
# ls
bin        dist-maven        lib           README_packaging.txt  zookeeper-3.4.11.jar.asc
build.xml  docs              LICENSE.txt   recipes               zookeeper-3.4.11.jar.md5
conf       ivysettings.xml   NOTICE.txt    src                   zookeeper-3.4.11.jar.sha1
contrib    ivy.xml           README.md     zookeeper-3.4.11.jar
#
  Then rename the directory:

# cd /usr/local/
# mv zookeeper-3.4.11/ zookeeper00
  Next comes a series of configuration steps:

# cd zookeeper00/
# cd conf/
# cp zoo_sample.cfg zoo.cfg   # copy the sample config file shipped with ZooKeeper
# vim zoo.cfg                 # add or change the following content
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper00/dataDir
dataLogDir=/usr/local/zookeeper00/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888   # the two ports after the IP are for quorum communication and leader election
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
# cd ../
# mkdir {dataDir,dataLogDir}
# cd dataDir/
# vim myid   # set this node's id
1
#
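The timeouts in zoo.cfg are all multiples of tickTime: with the values above, a follower gets initLimit × tickTime = 20000 ms to connect and sync with the leader at startup, and is dropped after syncLimit × tickTime = 10000 ms without a heartbeat. A quick check of the arithmetic:

```shell
#!/bin/sh
tickTime=2000    # base time unit, in milliseconds
initLimit=10     # ticks a follower may take for its initial sync with the leader
syncLimit=5      # ticks allowed between a follower's heartbeats

echo "initial sync timeout: $(( initLimit * tickTime )) ms"   # 20000 ms
echo "heartbeat timeout:    $(( syncLimit * tickTime )) ms"   # 10000 ms
```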
  After configuring, copy the directory out twice more; since this is a single-machine pseudo-distributed setup, several ZooKeeper instances need to be installed on one machine:

# cp -rf zookeeper00 zookeeper01
# cp -rf zookeeper00 zookeeper02
  Configure zookeeper01:

# cd zookeeper01/conf/
# vim zoo.cfg   # change the content as follows
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper01/dataDir
dataLogDir=/usr/local/zookeeper01/dataLogDir
clientPort=2182   # the port number must differ, since the instances share one machine
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
# cd ../dataDir/
# vim myid
2
#
  Configure zookeeper02:

# cd zookeeper02/conf/
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper02/dataDir
dataLogDir=/usr/local/zookeeper02/dataLogDir
clientPort=2183   # the port number must differ, since the instances share one machine
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
# cd ../dataDir/
# vim myid
3
#
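The three directories above differ only in their paths, clientPort, and myid, so the whole layout lends itself to a short generator script. This is a sketch of the manual steps just performed; it writes under ./zk-pseudo by default so it can be dry-run safely (set BASE=/usr/local to reproduce the exact layout used here), and the IP is the one from this walkthrough:

```shell
#!/bin/sh
# Generate zoo.cfg, myid, and data directories for three pseudo-distributed nodes.
BASE=${BASE:-./zk-pseudo}
IP=192.168.190.129

for i in 0 1 2; do
    id=$(( i + 1 ))
    dir="$BASE/zookeeper0$i"
    mkdir -p "$dir/conf" "$dir/dataDir" "$dir/dataLogDir"
    cat > "$dir/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$dir/dataDir
dataLogDir=$dir/dataLogDir
clientPort=$(( 2181 + i ))
4lw.commands.whitelist=*
server.1=$IP:2888:3888
server.2=$IP:2889:3889
server.3=$IP:2890:3890
EOF
    echo "$id" > "$dir/dataDir/myid"
done
```

Note that the server.N lines are identical in every copy; only clientPort and myid vary per instance.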
  With that, three ZooKeeper cluster nodes are configured on a single machine. Now let's test whether this pseudo-distributed ZooKeeper cluster actually runs:

# cd /usr/local/zookeeper00/bin/
# ./zkServer.sh start   # start the first node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper00/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# netstat -lntp |grep java   # check the listening ports
tcp6       0      0 192.168.190.129:3888    :::*                  LISTEN      3191/java   # quorum communication port
tcp6       0      0 :::44793                :::*                  LISTEN      3191/java
tcp6       0      0 :::2181                 :::*                  LISTEN      3191/java
# cd ../../zookeeper01/bin/
# ./zkServer.sh start   # start the second node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper01/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# cd ../../zookeeper02/bin/
# ./zkServer.sh start   # start the third node
# netstat -lntp |grep java   # check the listening ports
tcp6       0      0 192.168.190.129:2889    :::*                  LISTEN      3232/java         
tcp6       0      0 :::48463                :::*                  LISTEN      3232/java         
tcp6       0      0 192.168.190.129:3888    :::*                  LISTEN      3191/java         
tcp6       0      0 192.168.190.129:3889    :::*                  LISTEN      3232/java         
tcp6       0      0 192.168.190.129:3890    :::*                  LISTEN      3286/java         
tcp6       0      0 :::44793                :::*                  LISTEN      3191/java         
tcp6       0      0 :::60356                :::*                  LISTEN      3286/java         
tcp6       0      0 :::2181                 :::*                  LISTEN      3191/java
tcp6       0      0 :::2182                 :::*                  LISTEN      3232/java
tcp6       0      0 :::2183                 :::*                  LISTEN      3286/java
# jps   # check the processes
3232 QuorumPeerMain
3286 QuorumPeerMain
3191 QuorumPeerMain
3497 Jps
#
  As shown above, all three nodes started successfully. Next, let's open a client, create some znodes, and see whether they get synced to the other nodes in the cluster:

# ./zkCli.sh -server localhost:2181   # connect a client to the first node
ls /

create /data test-data
Created /data
ls /

quit
# ./zkCli.sh -server localhost:2182   # connect a client to the second node
ls /   # the znode created on the first node is visible here, so the cluster nodes sync data correctly

get /data   # the data is identical as well
test-data
cZxid = 0x100000002
ctime = Tue Apr 24 18:35:56 CST 2018
mZxid = 0x100000002
mtime = Tue Apr 24 18:35:56 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
quit
# ./zkCli.sh -server localhost:2183   # connect a client to the third node
ls /   # the third node can also see the znode created on the first node

get /data   # the data is identical as well
test-data
cZxid = 0x100000002
ctime = Tue Apr 24 18:35:56 CST 2018
mZxid = 0x100000002
mtime = Tue Apr 24 18:35:56 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
quit
#
  To check the cluster's state and master/slave roles, use the ./zkServer.sh status command. With multiple nodes, checking them one by one is tedious, so let's write a simple shell script to run the command in batch. As follows:

# vim checked.sh   # the script content is as follows
#!/bin/bash
/usr/local/zookeeper00/bin/zkServer.sh status
/usr/local/zookeeper01/bin/zkServer.sh status
/usr/local/zookeeper02/bin/zkServer.sh status
# sh ./checked.sh   # run the script
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper00/bin/../conf/zoo.cfg
Mode: follower   # slave node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper01/bin/../conf/zoo.cfg
Mode: leader     # master node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper02/bin/../conf/zoo.cfg
Mode: follower
#
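If you want the script to print just the roles rather than the full status output, the Mode line can be extracted with a small helper. This is a sketch; the sample text below simply mirrors the output format shown above:

```shell
#!/bin/sh
# Extract the "Mode:" value (leader/follower) from zkServer.sh status output.
mode_of() {
    awk -F': ' '/^Mode:/ { print $2 }'
}

# Example: feed it captured status output.
sample="ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper01/bin/../conf/zoo.cfg
Mode: leader"

printf '%s\n' "$sample" | mode_of   # prints "leader"
```

In checked.sh, each line would then become something like /usr/local/zookeeper00/bin/zkServer.sh status 2>/dev/null | mode_of.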
  At this point we have successfully set up, and tested, a single-machine pseudo-distributed ZooKeeper cluster.

Building a distributed ZooKeeper cluster
  Next, we'll use three virtual machines to set up a real distributed ZooKeeper cluster. The machines' IP addresses are:


[*]192.168.190.128
[*]192.168.190.129
[*]192.168.190.130
  Note: all three machines must have a Java runtime, and the firewall should be disabled or its rules cleared; if you don't want to disable the firewall, configure the appropriate firewall rules instead.
  First, configure the system hosts file:

# vim /etc/hosts
192.168.190.128 zk000
192.168.190.129 zk001
192.168.190.130 zk002
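Since these entries are typically added on all three machines, a small idempotent helper avoids duplicating lines when the setup is re-run. A sketch (it writes to a scratch file by default; point HOSTS at /etc/hosts to apply it for real):

```shell
#!/bin/sh
# Append a host entry only if the hostname is not present yet, so
# re-running the setup does not duplicate lines in the hosts file.
HOSTS=${HOSTS:-./hosts.sketch}   # scratch file; use HOSTS=/etc/hosts for real
touch "$HOSTS"

add_host() {   # usage: add_host <ip> <hostname>
    grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}

add_host 192.168.190.128 zk000
add_host 192.168.190.129 zk001
add_host 192.168.190.130 zk002
```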
  Delete the extra ZooKeeper directories left over from the pseudo-distributed experiment, then use rsync to copy the remaining ZooKeeper directory to the other machines. As follows:

# cd /usr/local/
# rm -rf zookeeper01
# rm -rf zookeeper02
# mv zookeeper00/ zookeeper
# rsync -av /usr/local/zookeeper/ 192.168.190.128:/usr/local/zookeeper/
# rsync -av /usr/local/zookeeper/ 192.168.190.130:/usr/local/zookeeper/
  Then configure the environment variables on each of the three machines:

# vim .bash_profile   # add the following content
export ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin
export PATH
# source .bash_profile
  Edit the configuration file on each machine in turn. zk000:

# cd /usr/local/zookeeper/conf/
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888   # the trailing ports are for quorum communication and leader election
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
# cd ../dataDir/
# vim myid
1
#
  zk001:

# cd /usr/local/zookeeper/conf/
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888   # the trailing ports are for quorum communication and leader election
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
# cd ../dataDir/
# vim myid
2
#
  zk002:

# cd /usr/local/zookeeper/conf/
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888   # the trailing ports are for quorum communication and leader election
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
# cd ../dataDir/
# vim myid
3
#
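A common mistake at this step is a myid that does not match any server.N line in zoo.cfg (or the same id accidentally used on two machines). The consistency can be checked with a small script, sketched against the layout above (the default path is an assumption; override it with ZKDIR):

```shell
#!/bin/sh
# Verify that dataDir/myid matches a server.<id> entry in zoo.cfg.
ZKDIR=${ZKDIR:-/usr/local/zookeeper}

check_myid() {   # usage: check_myid <zookeeper-root-dir>
    cfg="$1/conf/zoo.cfg"
    id=$(cat "$1/dataDir/myid")
    if grep -q "^server\.$id=" "$cfg"; then
        echo "ok: myid=$id matches server.$id in $cfg"
    else
        echo "ERROR: myid=$id has no server.$id line in $cfg" >&2
        return 1
    fi
}

# Run against the local installation if it exists.
if [ -d "$ZKDIR" ]; then
    check_myid "$ZKDIR"
fi
```

Run it on each of the three machines; every node should report "ok" with a different id.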
  Once configuration is done, start the ZooKeeper service on all three machines:

# zkServer.sh start   # on zk000
# zkServer.sh start   # on zk001
# zkServer.sh start   # on zk002
  After they start, check the cluster status on each of the three machines:

# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
#
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
#
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
#
  Now let's test whether created znodes get synced. Open a client on the 192.168.190.128 machine:

# zkCli.sh -server 192.168.190.128:2181
ls /

create /real-culster real-data
Created /real-culster
ls /

get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
quit
  Open a client on the 192.168.190.129 machine:

# zkCli.sh -server 192.168.190.129:2181
ls /

get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
quit
  Open a client on the 192.168.190.130 machine:

# zkCli.sh -server 192.168.190.130:2181
ls /

get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
quit
  The tests above show that in a distributed ZooKeeper cluster, a znode created on any node is synced to every other node in the cluster, along with its data. With that, our distributed ZooKeeper cluster is up and running.

Testing cluster roles and elections
  So far we have only tested znode syncing, not node elections. In this section, we'll stop the master node and see whether a slave node takes over the master role through an election. First, stop the ZooKeeper service on the master node:

# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
#
  Now check the other two machines; one of them has already become the master node:

# zkServer.sh status   # the zk002 node has become the master
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
#
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
#
  Then start the stopped node again. It rejoins the cluster, but now as a slave rather than as the master:

# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
#
  As shown, the zk002 node remains the master and is not displaced; nodes only switch roles during an election:

# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
#

