# On the master: format the NameNode storage. The confirmation must be an uppercase Y.
su - hadoop
cd /hadoop/install/hadoop-0.20.2-cdh3u6/bin/
[hadoop@cc-staging-session2 bin]$ ./hadoop namenode -format
13/04/27 01:46:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cc-staging-session2/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2-cdh3u6
STARTUP_MSG:   build = git://ubuntu-slave01/var/lib/jenkins/workspace/CDH3u6-Full-RC/build/cdh3/hadoop20/0.20.2-cdh3u6/source -r efb405d2aa54039bdf39e0733cd0bb9423a1eb0a; compiled by 'jenkins' on Wed Mar 20 11:45:36 PDT 2013
************************************************************/
Re-format filesystem in /hadoop/name ? (Y or N) Y
13/04/27 01:46:42 INFO util.GSet: VM type = 64-bit
13/04/27 01:46:42 INFO util.GSet: 2% max memory = 17.77875 MB
13/04/27 01:46:42 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/04/27 01:46:42 INFO util.GSet: recommended=2097152, actual=2097152
13/04/27 01:46:42 INFO namenode.FSNamesystem: fsOwner=hadoop (auth:SIMPLE)
13/04/27 01:46:42 INFO namenode.FSNamesystem: supergroup=supergroup
13/04/27 01:46:42 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/04/27 01:46:42 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
13/04/27 01:46:42 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/04/27 01:46:43 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/04/27 01:46:43 INFO common.Storage: Storage directory /hadoop/name has been successfully formatted.
13/04/27 01:46:43 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cc-staging-session2/127.0.0.1
************************************************************/
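A quick way to double-check that the format succeeded is to look for the metadata that a formatted NameNode directory contains. The sketch below assumes CDH3's default layout, where the storage directory (here /hadoop/name) holds a current/VERSION file after formatting; check_formatted is a hypothetical helper, not a Hadoop command:

```shell
#!/bin/sh
# check_formatted DIR: print "formatted" (and return 0) if DIR looks
# like a formatted NameNode storage directory, i.e. it contains
# current/VERSION; otherwise print "not formatted" and return 1.
check_formatted() {
    if [ -f "$1/current/VERSION" ]; then
        echo "formatted"
        return 0
    else
        echo "not formatted"
        return 1
    fi
}

# Usage with the path from this walkthrough:
#   check_formatted /hadoop/name
```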
# Start the NameNode (the DataNodes are started separately on the slaves)
cd /hadoop/install/hadoop-0.20.2-cdh3u6/bin/
./hadoop-daemon.sh start namenode
# The /hadoop/install/hadoop-0.20.2-cdh3u6/bin/ directory contains many control scripts:
* start-all.sh: starts all Hadoop daemons, including the namenode, datanodes, jobtracker, tasktrackers, and secondarynamenode.
* stop-all.sh: stops all Hadoop daemons.
* start-mapred.sh: starts the Map/Reduce daemons, i.e. the JobTracker and TaskTrackers.
* stop-mapred.sh: stops the Map/Reduce daemons.
* start-dfs.sh: starts the HDFS daemons, i.e. the NameNode and DataNodes.
* stop-dfs.sh: stops the HDFS daemons.
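The scripts above pair up one-to-one: the name is always ACTION-SERVICE.sh. A tiny wrapper (hypothetical, not part of Hadoop) makes the mapping explicit; it only prints the script name, so remove the echo to actually invoke it from bin/:

```shell
#!/bin/sh
# hctl SERVICE ACTION: map (all|mapred|dfs) x (start|stop) onto the
# corresponding control script name, e.g. "hctl dfs start" -> start-dfs.sh.
hctl() {
    service=$1
    action=$2
    case "$service" in
        all|mapred|dfs) ;;
        *) echo "usage: hctl {all|mapred|dfs} {start|stop}" >&2; return 1 ;;
    esac
    case "$action" in
        start|stop) ;;
        *) echo "usage: hctl {all|mapred|dfs} {start|stop}" >&2; return 1 ;;
    esac
    # Print the script that would be run from bin/.
    echo "${action}-${service}.sh"
}
```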
# On slave1 and slave2, start the DataNode
cd /hadoop/install/hadoop-0.20.2-cdh3u6/bin/
./hadoop-daemon.sh start datanode
# Run jps on each node to check that the daemons started successfully
[hadoop@cc-staging-session2 bin]$ jps
11926 NameNode
12566 Jps
12233 SecondaryNameNode
12066 DataNode
# The DataNode storage directory must be on a physical disk, otherwise errors are reported
[hadoop@cc-staging-front bin]$ jps
14582 DataNode
14637 Jps
[hadoop@cc-staging-imcenter bin]$ jps
23355 DataNode
23419 Jps
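The jps checks above can be automated with a small filter that flags missing daemons. This is a sketch under the assumption that jps output is "PID Name" lines, as shown above; check_daemons is a hypothetical helper:

```shell
#!/bin/sh
# check_daemons NAME...: read jps output on stdin and print "ok" if
# every expected daemon name appears, otherwise print the missing ones.
check_daemons() {
    jps_out=$(cat)
    missing=""
    for d in "$@"; do
        case "$jps_out" in
            *" $d"*) ;;                    # found as "PID Name"
            *) missing="$missing $d" ;;    # collect names not seen
        esac
    done
    if [ -z "$missing" ]; then
        echo "ok"
    else
        echo "missing:$missing"
    fi
}

# On the master:  jps | check_daemons NameNode SecondaryNameNode DataNode
# On a slave:     jps | check_daemons DataNode
```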