[Experience Sharing] Hadoop Cluster Installation and Configuration

#Start the ZooKeeper cluster (run on each of the three nodes)
[hadoop@namenode ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@datanode1 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@datanode2 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
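The ensemble membership behind these commands lives in conf/zoo.cfg, the file the output above references. A minimal sketch for this three-node cluster might look like the following; the dataDir path, the ports, and which host gets which server number are assumptions, not values taken from this post:

# /usr/local/zookeeper-3.4.6/conf/zoo.cfg -- illustrative values
tickTime=2000
initLimit=10
syncLimit=5
# assumed data directory
dataDir=/usr/local/zookeeper-3.4.6/data
clientPort=2181
server.1=namenode:2888:3888
server.2=datanode1:2888:3888
server.3=datanode2:2888:3888

Each node also needs a myid file under dataDir holding its own server number, e.g. on namenode: echo 1 > /usr/local/zookeeper-3.4.6/data/myid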
  

  
#Check the ZooKeeper cluster status
[hadoop@namenode ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[hadoop@namenode ~]$ ssh datanode1 '/usr/local/zookeeper-3.4.6/bin/zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
[hadoop@namenode ~]$ ssh datanode2 '/usr/local/zookeeper-3.4.6/bin/zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
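Checking each node by hand gets tedious; since passwordless ssh is already set up for the hadoop user (the transcript above relies on it), a small loop queries all three members at once. This is just a convenience sketch:

for h in namenode datanode1 datanode2; do
    echo "== $h =="
    ssh $h '/usr/local/zookeeper-3.4.6/bin/zkServer.sh status'
done

One leader with two followers, as shown here, is exactly what a healthy three-node ensemble should report.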
  

  
#Start the Hadoop cluster
[hadoop@namenode ~]$ /usr/local/hadoop/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/04/06 16:05:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [namenode]
namenode: starting namenode, logging to /usr/local/hadoop-2.6.4/logs/hadoop-hadoop-namenode-namenode.out
datanode1: datanode running as process 2899. Stop it first.
datanode2: starting datanode, logging to /usr/local/hadoop-2.6.4/logs/hadoop-hadoop-datanode-datanode2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.4/logs/hadoop-hadoop-secondarynamenode-namenode.out
16/04/06 16:06:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.4/logs/yarn-hadoop-resourcemanager-namenode.out
datanode1: starting nodemanager, logging to /usr/local/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-datanode1.out
datanode2: starting nodemanager, logging to /usr/local/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-datanode2.out
#Note: datanode1 already had a DataNode running (process 2899), so the script skipped it; the NativeCodeLoader warning only means the native library is unavailable and the built-in Java classes are used instead.
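As the first line of the output says, start-all.sh is deprecated on Hadoop 2.x; the equivalent explicit sequence is to start HDFS and YARN separately:

/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh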
  

  
#Check the status of the Hadoop cluster
[hadoop@namenode ~]$ jps
1187 QuorumPeerMain
1397 NameNode
1978 Jps
1580 SecondaryNameNode
1724 ResourceManager
[hadoop@namenode ~]$ ssh datanode1 'jps'
2899 DataNode
3699 Jps
3444 QuorumPeerMain
3576 NodeManager
[hadoop@namenode ~]$ ssh datanode2 'jps'
1168 QuorumPeerMain
1481 Jps
1257 DataNode
1358 NodeManager
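Besides jps, the daemons can be probed over HTTP. On Hadoop 2.6 the default web UI ports are 50070 for the NameNode and 8088 for the ResourceManager; the sketch below assumes this cluster did not override those defaults:

curl -sI http://namenode:50070/   # NameNode web UI
curl -sI http://namenode:8088/    # ResourceManager web UI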
  

  
[hadoop@namenode ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/04/06 16:07:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 206834581504 (192.63 GB)
Present Capacity: 192312418304 (179.10 GB)
DFS Remaining: 192312344576 (179.10 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.3.67:50010 (datanode1)
Hostname: datanode1
Decommission Status : Normal
Configured Capacity: 103417290752 (96.31 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 7262519296 (6.76 GB)
DFS Remaining: 96154742784 (89.55 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Apr 06 16:07:35 CST 2016

Name: 192.168.3.68:50010 (datanode2)
Hostname: datanode2
Decommission Status : Normal
Configured Capacity: 103417290752 (96.31 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 7259643904 (6.76 GB)
DFS Remaining: 96157601792 (89.55 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.98%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Apr 06 16:07:35 CST 2016
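A quick read/write smoke test confirms HDFS is actually usable, not just reporting capacity. The path /tmp/smoketest is an arbitrary choice for this sketch, and hdfs dfsadmin -report is the non-deprecated form of the command above:

hdfs dfsadmin -report
hdfs dfs -mkdir -p /tmp/smoketest
hdfs dfs -put /etc/hosts /tmp/smoketest/
hdfs dfs -ls /tmp/smoketest
hdfs dfs -rm -r /tmp/smoketest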
  

  
#Start HBase
[hadoop@namenode ~]$ /usr/local/hbase-1.2.0/bin/start-hbase.sh
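For start-hbase.sh to come up against this cluster, hbase-site.xml must point HBase at the HDFS root and at the ZooKeeper ensemble started above. A sketch of the relevant properties follows; the NameNode RPC port 9000 is an assumption, not a value taken from this post:

<property>
  <name>hbase.rootdir</name>
  <!-- assumes the NameNode RPC port is 9000 -->
  <value>hdfs://namenode:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>namenode,datanode1,datanode2</value>
</property>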
  

  
#Check the status on namenode
[hadoop@namenode ~]$ jps
1187 QuorumPeerMain        # ZooKeeper process
1397 NameNode              # Hadoop master process
2154 HMaster               # HBase master process
1580 SecondaryNameNode     # Hadoop process
1724 ResourceManager       # Hadoop process
2415 Jps
  

  
#Check the status of datanode1
[hadoop@namenode ~]$ ssh datanode1 'jps'
2899 DataNode              # Hadoop DataNode process
3444 QuorumPeerMain        # ZooKeeper process
5018 Jps
4875 HRegionServer         # HBase region server process
  

  
#Check the status of datanode2
[hadoop@namenode ~]$ ssh datanode2 'jps'
1168 QuorumPeerMain
3074 Jps
2872 HRegionServer
1257 DataNode
  

  
#Enter the HBase shell to verify
[hadoop@namenode ~]$ /usr/local/hbase-1.2.0/bin/hbase shell
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.0, r25b281972df2f5b15c426c8963cbf77dd853a5ad, Thu Feb 18 23:01:49 CST 2016

hbase(main):001:0> list
TABLE
0 row(s) in 0.3220 seconds

=> []
hbase(main):002:0> status
1 active master, 2 backup masters, 1 servers, 0 dead, 2.0000 average load

hbase(main):003:0> quit
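list and status only show that the cluster is up; a throwaway table exercises the full write/read path. Table name t1 and column family cf are arbitrary names chosen for this sketch:

hbase(main):001:0> create 't1', 'cf'
hbase(main):002:0> put 't1', 'row1', 'cf:a', 'value1'
hbase(main):003:0> get 't1', 'row1'
hbase(main):004:0> disable 't1'
hbase(main):005:0> drop 't1'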


