11/11/30 09:53:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu1/192.168.0.101
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
11/11/30 09:53:56 INFO namenode.FSNamesystem: fsOwner=root,root
11/11/30 09:53:56 INFO namenode.FSNamesystem: supergroup=supergroup
11/11/30 09:53:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/11/30 09:53:56 INFO common.Storage: Image file of size 94 saved in 0 seconds.
11/11/30 09:53:57 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
11/11/30 09:53:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu1/192.168.0.101
************************************************************/
Run: bin/start-all.sh
Output:
starting namenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-ubuntu1.out
localhost: starting datanode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-ubuntu1.out
localhost: starting secondarynamenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-ubuntu1.out
starting jobtracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-ubuntu1.out
localhost: starting tasktracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-ubuntu1.out
Check HDFS: bin/hadoop fs -ls /
If it lists the directory contents, HDFS is working.
Hadoop file system operations:
bin/hadoop fs -mkdir test
bin/hadoop fs -ls test
bin/hadoop fs -rmr test
Test Hadoop:
bin/hadoop fs -mkdir input
Create two text files yourself, file1 and file2, and place them under /opt/hadoop/sourcedata.
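The two input files can contain any text; a minimal sketch for creating them (the file names and directory come from the walkthrough, the contents here are made up):

```shell
# Hypothetical helper: create two small text files to feed WordCount.
# The file contents below are placeholder sample text, not from the walkthrough.
make_sample_files() {
  dir="$1"
  mkdir -p "$dir"
  printf 'hello hadoop\nhello world\n' > "$dir/file1"
  printf 'bye hadoop\n' > "$dir/file2"
}

# As in the walkthrough (requires write access to /opt/hadoop):
# make_sample_files /opt/hadoop/sourcedata
```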
Run: bin/hadoop fs -put /opt/hadoop/sourcedata/file* input
Run: bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
Output:
11/11/30 10:15:38 INFO input.FileInputFormat: Total input paths to process : 2
11/11/30 10:15:52 INFO mapred.JobClient: Running job: job_201111301005_0001
11/11/30 10:15:53 INFO mapred.JobClient: map 0% reduce 0%
11/11/30 10:19:07 INFO mapred.JobClient: map 50% reduce 0%
11/11/30 10:19:14 INFO mapred.JobClient: map 100% reduce 0%
11/11/30 10:19:46 INFO mapred.JobClient: map 100% reduce 100%
11/11/30 10:19:54 INFO mapred.JobClient: Job complete: job_201111301005_0001
11/11/30 10:19:59 INFO mapred.JobClient: Counters: 17
11/11/30 10:19:59 INFO mapred.JobClient: Job Counters
11/11/30 10:19:59 INFO mapred.JobClient: Launched reduce tasks=1
11/11/30 10:19:59 INFO mapred.JobClient: Launched map tasks=2
11/11/30 10:19:59 INFO mapred.JobClient: Data-local map tasks=2
11/11/30 10:19:59 INFO mapred.JobClient: FileSystemCounters
11/11/30 10:19:59 INFO mapred.JobClient: FILE_BYTES_READ=146
11/11/30 10:19:59 INFO mapred.JobClient: HDFS_BYTES_READ=64
11/11/30 10:19:59 INFO mapred.JobClient: FILE_BYTES_WRITTEN=362
11/11/30 10:19:59 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=60
11/11/30 10:19:59 INFO mapred.JobClient: Map-Reduce Framework
11/11/30 10:19:59 INFO mapred.JobClient: Reduce input groups=9
11/11/30 10:19:59 INFO mapred.JobClient: Combine output records=13
11/11/30 10:19:59 INFO mapred.JobClient: Map input records=2
11/11/30 10:19:59 INFO mapred.JobClient: Reduce shuffle bytes=102
11/11/30 10:19:59 INFO mapred.JobClient: Reduce output records=9
11/11/30 10:19:59 INFO mapred.JobClient: Spilled Records=26
11/11/30 10:19:59 INFO mapred.JobClient: Map output bytes=120
11/11/30 10:19:59 INFO mapred.JobClient: Combine input records=14
11/11/30 10:19:59 INFO mapred.JobClient: Map output records=14
11/11/30 10:19:59 INFO mapred.JobClient: Reduce input records=13
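Once the job reports complete, the word counts land in the output directory on HDFS. A quick way to inspect them (part-r-00000 is the standard name of the single reducer's output file; the local destination path is just an example):

```
bin/hadoop fs -ls output
bin/hadoop fs -cat output/part-r-00000

# Optionally copy the result to the local file system:
bin/hadoop fs -get output /opt/hadoop/result
```

With the counters above (Reduce output records=9), the cat should print nine word/count pairs, one per line.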