[Experience Share] Installing a Highly Available Hadoop Ecosystem (2): Installing ZooKeeper

Posted on 2017-12-18 06:25:38
2.    Installing ZooKeeper

2.1. Unpack the distribution
  ※ Run on each of the 3 servers
  

tar -xf ~/install/zookeeper-3.4.9.tar.gz -C /opt/cloud/packages

ln -s /opt/cloud/packages/zookeeper-3.4.9 /opt/cloud/bin/zookeeper
ln -s /opt/cloud/packages/zookeeper-3.4.9/conf /opt/cloud/etc/zookeeper

mkdir -p /opt/cloud/data/zookeeper/dat
mkdir -p /opt/cloud/data/zookeeper/logdat
mkdir -p /opt/cloud/logs/zookeeper

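The layout steps above can also be wrapped in one small idempotent script. This is a sketch, run here against a scratch prefix from `mktemp` so it can be tried safely; on the real servers the prefix would be `/opt/cloud`, and the package directory comes from the tarball rather than `mkdir`.

```shell
# Sketch of the layout steps above, made idempotent (-p, -sfn).
# PREFIX is a scratch directory here; on the real hosts it is /opt/cloud,
# and packages/zookeeper-3.4.9 is created by tar, not mkdir.
PREFIX="$(mktemp -d)"
VERSION=zookeeper-3.4.9

mkdir -p "$PREFIX/packages/$VERSION/conf" "$PREFIX/bin" "$PREFIX/etc"
mkdir -p "$PREFIX/data/zookeeper/dat" \
         "$PREFIX/data/zookeeper/logdat" \
         "$PREFIX/logs/zookeeper"

# -sfn replaces an existing link, so re-running is safe
ln -sfn "$PREFIX/packages/$VERSION"      "$PREFIX/bin/zookeeper"
ln -sfn "$PREFIX/packages/$VERSION/conf" "$PREFIX/etc/zookeeper"

readlink "$PREFIX/bin/zookeeper"
```

Re-running the original `ln -s` commands fails once the links exist; `-sfn` avoids that.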

2.2. Edit the configuration files

2.2.1.    Edit zoo.cfg
  

mv /opt/cloud/etc/zookeeper/zoo_sample.cfg  /opt/cloud/etc/zookeeper/zoo.cfg  

vi /opt/cloud/etc/zookeeper/zoo.cfg  

  

# The number of milliseconds of each tick
tickTime=2000

# The number of ticks that the initial
# synchronization phase can take
initLimit=10

# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/cloud/data/zookeeper/dat
dataLogDir=/opt/cloud/data/zookeeper/logdat[1]

# the port at which the clients will connect
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=100

#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=5[2]

# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=6

# server.A=B:C:D
server.1=hadoop1:2888:3888[3]
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888


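To see what the tick settings above mean in wall-clock terms, the limits multiply out as follows. Plain shell arithmetic over the values in this zoo.cfg; the 2x-20x session range reflects ZooKeeper's defaults for minSessionTimeout and maxSessionTimeout (2 and 20 ticks).

```shell
# Wall-clock effect of the tick settings in the zoo.cfg above.
TICK_MS=2000; INIT_LIMIT=10; SYNC_LIMIT=5

# A follower must finish its initial sync with the leader within initLimit ticks
echo "initial sync limit: $((TICK_MS * INIT_LIMIT)) ms"   # 20000 ms

# Afterwards it may lag the leader by at most syncLimit ticks
echo "sync limit:         $((TICK_MS * SYNC_LIMIT)) ms"   # 10000 ms

# Client session timeouts are negotiated within [2*tick, 20*tick] by default
echo "session range:      $((2 * TICK_MS))-$((20 * TICK_MS)) ms"
```

Raising tickTime therefore stretches every timeout in the cluster at once.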
2.2.2.    Edit the log configuration
  

vi /opt/cloud/etc/zookeeper/log4j.properties  

  Change these settings:
  

zookeeper.root.logger=INFO, DRFA
zookeeper.log.dir=/opt/cloud/logs/zookeeper

  Add the DRFA appender definition:
  

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.Append=true
log4j.appender.DRFA.DatePattern='.'yyyy-MM-dd
log4j.appender.DRFA.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.DRFA.Threshold=${zookeeper.log.threshold}
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.DRFA.Encoding=UTF-8
#log4j.appender.DRFA.MaxFileSize=20MB


2.2.3.    Copy to the other 2 servers
  

scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop2:/opt/cloud/etc/zookeeper  

scp /opt/cloud/etc/zookeeper/log4j.properties hadoop2:/opt/cloud/etc/zookeeper  

scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop3:/opt/cloud/etc/zookeeper  

scp /opt/cloud/etc/zookeeper/log4j.properties hadoop3:/opt/cloud/etc/zookeeper  
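The four copies above can be collapsed into one loop. Hostnames hadoop2/hadoop3 are the ones used throughout this guide; the `|| echo` just keeps the loop going if a host is unreachable.

```shell
# Distribute both config files to the remaining nodes in one pass.
for h in hadoop2 hadoop3; do
  scp /opt/cloud/etc/zookeeper/zoo.cfg \
      /opt/cloud/etc/zookeeper/log4j.properties \
      "$h:/opt/cloud/etc/zookeeper/" || echo "copy to $h failed"
done
```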


2.3. Generate myid
  Create a file named myid in the dataDir directory on each machine, containing the value A from that machine's server.A entry in zoo.cfg.
  

ssh hadoop1 'echo 1 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop2 'echo 2 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop3 'echo 3 >/opt/cloud/data/zookeeper/dat/myid'

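The invariant being set up here is that each host's dat/myid holds exactly the N of its own server.N line in zoo.cfg. A local simulation of that convention against a scratch directory (directory names hadoop1..3 stand in for the three hosts):

```shell
# Simulate the myid convention locally: one dat/ directory per "host",
# each myid holding the N from that host's server.N line in zoo.cfg.
BASE="$(mktemp -d)"
for i in 1 2 3; do
  mkdir -p "$BASE/hadoop$i/dat"
  echo "$i" > "$BASE/hadoop$i/dat/myid"
done

# Verify every file matches its server number
for i in 1 2 3; do
  [ "$(cat "$BASE/hadoop$i/dat/myid")" = "$i" ] && echo "hadoop$i: myid ok"
done
```

A mismatched or missing myid is a common cause of a quorum member refusing to start.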

2.4. Set environment variables
  vi ~/.bashrc
  Append:
  

export ZOO_HOME=/opt/cloud/bin/zookeeper
export ZOOCFGDIR=${ZOO_HOME}/conf
export ZOO_LOG_DIR=/opt/cloud/logs/zookeeper
export PATH=$ZOO_HOME/bin:$PATH

  Apply immediately:
  source ~/.bashrc
  Copy to the other two servers:
  

scp ~/.bashrc hadoop2:/home/hadoop  

scp ~/.bashrc hadoop3:/home/hadoop  


2.5. Run manually
  1. Start the server
  

zkServer.sh start  

  2. Run jps to check the Java processes
  

QuorumPeerMain  
Jps
  

  QuorumPeerMain is the ZooKeeper process, so startup succeeded.
  3. Stop the ZooKeeper process
  

zkServer.sh stop  

  4. Start the ZooKeeper cluster
  

[hadoop@hadoop1 ~]$ cexec 'zkServer.sh start'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

  5. Check the ZooKeeper cluster status
  

[hadoop@hadoop1 ~]$ cexec 'zkServer.sh status'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower

--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower

--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: leader

  6. Start the client shell
  

zkCli.sh
ls /zookeeper
ls /zookeeper/quota

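Besides zkCli.sh, each server can be probed over the client port with ZooKeeper's four-letter-word commands; a healthy node answers `ruok` with `imok`. A sketch, assuming `nc` is installed and using the hostnames from this guide:

```shell
# Liveness probe via the "ruok" four-letter-word command on port 2181.
# A healthy server replies "imok"; unreachable hosts report "no response".
for h in hadoop1 hadoop2 hadoop3; do
  reply=$(echo ruok | nc -w 2 "$h" 2181 2>/dev/null)
  echo "$h: ${reply:-no response}"
done
```

This is handy in monitoring scripts where starting a full client would be too heavy.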

2.6. Run automatically at system startup
  

vi /opt/cloud/bin/zookeeper/bin/zkServer.sh  

  

Find:
nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
Replace with:
nohup "$JAVA" "-Dlog4j.configuration=file:${ZOOCFGDIR}/log4j.properties" \

  Copy to the other two servers:
  

scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop3:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop3:/opt/cloud/bin/zookeeper/bin/

  vi /etc/systemd/system/zookeeper.service
  

[Unit]
Description=Zookeeper service
After=network.target

[Service]
User=hadoop
Group=hadoop
Type=forking

Environment=ZOO_HOME=/opt/cloud/bin/zookeeper
Environment=ZOOCFGDIR=/opt/cloud/bin/zookeeper/conf
Environment=ZOO_LOG_DIR=/opt/cloud/logs/zookeeper

ExecStart=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh start'
ExecStop=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh stop'

[Install]
WantedBy=multi-user.target

  Copy to the other two servers:
  

scp /etc/systemd/system/zookeeper.service hadoop2:/etc/systemd/system/
scp /etc/systemd/system/zookeeper.service hadoop3:/etc/systemd/system/


Reload unit definitions: systemctl daemon-reload

Start zookeeper: systemctl start zookeeper

Stop zookeeper: systemctl stop zookeeper

Check process status and logs (important): systemctl status zookeeper

Enable start at boot: systemctl enable zookeeper

Disable start at boot: systemctl disable zookeeper

Set the service to start automatically:

  

systemctl daemon-reload
systemctl start zookeeper
systemctl status zookeeper
systemctl enable zookeeper


2.7. Uninstall
  Run as the root user


  • Stop and disable the zookeeper service
  

systemctl stop zookeeper
systemctl disable zookeeper

rm -f /etc/systemd/system/zookeeper.service



  • Restore the environment variables
         vi ~/.bashrc

         Remove the zookeeper-related lines



  • Delete the remaining files
  

rm -rf /opt/cloud/bin/zookeeper/
rm -rf /opt/cloud/data/zookeeper/
rm -rf /opt/cloud/logs/zookeeper/
rm -rf /opt/cloud/packages/zookeeper-3.4.9/



[1] If a fast storage device is available, pointing dataLogDir at it can significantly improve throughput.

[2] Enables the periodic cleanup of old snapshots and transaction logs.

[3] # server.A=B:C:D:
  # A is a number indicating which server this is;
  # B is the server's hostname or IP address;
  # C is the port this server uses to exchange information with the cluster Leader;
  # D is the port the servers use to communicate with each other during leader election.

Source: https://www.yunweiku.com/thread-425218-1-1.html