lihu129c posted on 2018-10-31 11:44:51

Adding a datanode or tasktracker node to Hadoop

$ scp -r conf hadoop03:~/hadoop-1.1.2/
log4j.properties                100% 4441   4.3KB/s   00:00
capacity-scheduler.xml          100% 7457   7.3KB/s   00:00
configuration.xsl               100%  535   0.5KB/s   00:00
fair-scheduler.xml              100%  327   0.3KB/s   00:00
hdfs-site.xml                   100%  319   0.3KB/s   00:00
slaves                          100%   18   0.0KB/s   00:00
ssl-server.xml.example          100% 1195   1.2KB/s   00:00
hadoop-policy.xml               100% 4644   4.5KB/s   00:00
taskcontroller.cfg              100%  382   0.4KB/s   00:00
mapred-queue-acls.xml           100% 2033   2.0KB/s   00:00
ssl-client.xml.example          100% 1243   1.2KB/s   00:00
masters                         100%    9   0.0KB/s   00:00
core-site.xml                   100%  441   0.4KB/s   00:00
hadoop-env.sh                   100% 2271   2.2KB/s   00:00
hadoop-metrics2.properties      100% 1488   1.5KB/s   00:00
mapred-site.xml                 100%  261   0.3KB/s   00:00
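After the copy, it is worth confirming that the new node received exactly the same configuration as the master. A minimal sketch of one way to do it, assuming the same hostname (hadoop03) and install path as above and that md5sum is available on both machines:

  $ md5sum conf/*.xml                                 # checksums on the master
  $ ssh hadoop03 'md5sum ~/hadoop-1.1.2/conf/*.xml'   # checksums on the new node

If the two lists of checksums match, the configuration was copied intact.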
  4. Start the new node
4.1 Start the cluster services on the new node:
  $ bin/hadoop-daemon.sh start datanode
  starting datanode, logging to /home/xiaoyu/hadoop-1.1.2/libexec/../logs/hadoop-xiaoyu-datanode-hadoop03.out
  $ bin/hadoop-daemon.sh start tasktracker
  starting tasktracker, logging to /home/xiaoyu/hadoop-1.1.2/libexec/../logs/hadoop-xiaoyu-tasktracker-hadoop03.out
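Before checking from the master, a quick way to confirm that both daemons actually came up on the new node is to look for their JVM processes with jps (shipped with the JDK). The process IDs below are only illustrative:

  $ jps
  2843 DataNode
  2967 TaskTracker
  3012 Jps

If either process is missing, the corresponding log under logs/ will usually say why.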
  5. Check that the new node started successfully
Here hadoop01 is the namenode. There are three ways to check the new node; running a command is of course the simplest.
5.1 Namenode status page: http://hadoop01:50070

  Details are shown in the screenshot below.

5.2 Jobtracker status page: http://hadoop01:50030
  Details are shown in the screenshot below.
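Besides the two web pages, the same checks can be made from a shell. A small sketch, assuming curl is installed and the web UIs are reachable from the machine you run it on:

  $ curl -s -o /dev/null -w "%{http_code}\n" http://hadoop01:50070/   # Namenode UI
  200
  $ curl -s -o /dev/null -w "%{http_code}\n" http://hadoop01:50030/   # Jobtracker UI
  200

An HTTP 200 only shows that the UIs are up; the dfsadmin report in 5.3 is what actually confirms the new datanode has registered.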

5.3 Run the following on any node:
  $ bin/hadoop dfsadmin -report
  Configured Capacity: 32977600512 (30.71 GB)
  Present Capacity: 20209930240 (18.82 GB)
  DFS Remaining: 20003794944 (18.63 GB)
  DFS Used: 206135296 (196.59 MB)
  DFS Used%: 1.02%
  Under replicated blocks: 1
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  -------------------------------------------------
  Datanodes available: 2 (2 total, 0 dead)

  Name: 192.168.88.172:50010
  Decommission Status : Normal
  Configured Capacity: 16488800256 (15.36 GB)
  DFS Used: 205955072 (196.41 MB)
  Non DFS Used: 6369054720 (5.93 GB)
  DFS Remaining: 9913790464(9.23 GB)
  DFS Used%: 1.25%
  DFS Remaining%: 60.12%
  Last contact: Fri Sep 13 03:35:51 CST 2013

  Name: 192.168.88.173:50010
  Decommission Status : Normal
  Configured Capacity: 16488800256 (15.36 GB)
  DFS Used: 180224 (176 KB)
  Non DFS Used: 6398615552 (5.96 GB)
  DFS Remaining: 10090004480(9.4 GB)
  DFS Used%: 0%
  DFS Remaining%: 61.19%
  Last contact: Fri Sep 13 03:35:50 CST 2013
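If you only want to know whether the new node has registered, the report can be filtered instead of read in full. A sketch assuming the new datanode is 192.168.88.173, as in the report above:

  $ bin/hadoop dfsadmin -report | grep 'Datanodes available'
  Datanodes available: 2 (2 total, 0 dead)
  $ bin/hadoop dfsadmin -report | grep '192.168.88.173'
  Name: 192.168.88.173:50010

Seeing the new address in the live list, with 0 dead nodes, confirms the registration.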
  6. Rebalance the cluster so existing data spreads onto the new datanode
  $ ./bin/start-balancer.sh
  starting balancer, logging to /home/xiaoyu/hadoop-1.1.2/libexec/../logs/hadoop-xiaoyu-balancer-hadoop01.out
  $
  This script is very useful, and you can also modify it to fit your actual needs.
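Two options are worth knowing when adapting it. The balancer accepts a -threshold argument (how far, in percent, each datanode's usage may deviate from the cluster average before it stops), and the bandwidth it may use per datanode is controlled by the dfs.balance.bandwidthPerSec property in hdfs-site.xml (bytes per second; the Hadoop 1.x default is quite low, so rebalancing a busy cluster can be slow). A minimal sketch of the threshold option:

  $ ./bin/start-balancer.sh -threshold 5   # stop once every datanode is within 5% of the cluster-average usage

Raising the bandwidth property speeds up rebalancing at the cost of more network traffic, and takes effect only after the datanodes are restarted.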
