[hadoop@master ~]$ cd /usr/hadoop/
[hadoop@master hadoop]$ hadoop jar hadoop-0.20.2-examples.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
14/06/14 05:02:13 INFO mapred.FileInputFormat: Total input paths to process : 10
14/06/14 05:02:13 INFO mapred.JobClient: Running job: job_201406132259_0004
14/06/14 05:02:14 INFO mapred.JobClient: map 0% reduce 0%
14/06/14 05:02:28 INFO mapred.JobClient: map 20% reduce 0%
14/06/14 05:02:31 INFO mapred.JobClient: map 40% reduce 0%
14/06/14 05:02:37 INFO mapred.JobClient: map 80% reduce 0%
14/06/14 05:02:40 INFO mapred.JobClient: map 80% reduce 26%
14/06/14 05:02:43 INFO mapred.JobClient: map 100% reduce 26%
14/06/14 05:02:55 INFO mapred.JobClient: map 100% reduce 100%
14/06/14 05:02:57 INFO mapred.JobClient: Job complete: job_201406132259_0004
14/06/14 05:02:57 INFO mapred.JobClient: Counters: 19
14/06/14 05:02:57 INFO mapred.JobClient: Job Counters
14/06/14 05:02:57 INFO mapred.JobClient: Launched reduce tasks=1
14/06/14 05:02:57 INFO mapred.JobClient: Rack-local map tasks=1
14/06/14 05:02:57 INFO mapred.JobClient: Launched map tasks=10
14/06/14 05:02:57 INFO mapred.JobClient: Data-local map tasks=9
14/06/14 05:02:57 INFO mapred.JobClient: FileSystemCounters
14/06/14 05:02:57 INFO mapred.JobClient: FILE_BYTES_READ=226
14/06/14 05:02:57 INFO mapred.JobClient: HDFS_BYTES_READ=1180
14/06/14 05:02:57 INFO mapred.JobClient: FILE_BYTES_WRITTEN=826
14/06/14 05:02:57 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
14/06/14 05:02:57 INFO mapred.JobClient: Map-Reduce Framework
14/06/14 05:02:57 INFO mapred.JobClient: Reduce input groups=20
14/06/14 05:02:57 INFO mapred.JobClient: Combine output records=0
14/06/14 05:02:57 INFO mapred.JobClient: Map input records=10
14/06/14 05:02:57 INFO mapred.JobClient: Reduce shuffle bytes=280
14/06/14 05:02:57 INFO mapred.JobClient: Reduce output records=0
14/06/14 05:02:57 INFO mapred.JobClient: Spilled Records=40
14/06/14 05:02:57 INFO mapred.JobClient: Map output bytes=180
14/06/14 05:02:57 INFO mapred.JobClient: Map input bytes=240
14/06/14 05:02:57 INFO mapred.JobClient: Combine input records=0
14/06/14 05:02:57 INFO mapred.JobClient: Map output records=20
14/06/14 05:02:57 INFO mapred.JobClient: Reduce input records=20
Job Finished in 45.455 seconds
Estimated value of Pi is 3.14800000000000000000
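The arguments `pi 10 100` mean 10 map tasks with 100 sample points each, i.e. 1,000 points total; each map counts how many of its points fall inside a circle inscribed in the unit square, and the reducer turns that fraction into a π estimate. The same idea can be sketched locally with awk (a rough sketch only: the Hadoop example uses a quasi-random Halton sequence, while this uses awk's plain pseudo-random generator, so the estimate will differ):

```shell
# Monte Carlo pi, mirroring "pi 10 100": 10 * 100 = 1000 sample points.
awk 'BEGIN {
    srand(1); n = 10 * 100; inside = 0
    for (i = 0; i < n; i++) {
        x = rand() - 0.5; y = rand() - 0.5      # random point in the unit square
        if (x*x + y*y <= 0.25) inside++         # inside the inscribed circle (r = 0.5)
    }
    printf "Estimated value of Pi is %f\n", 4 * inside / n
}'
```

With only 1,000 points the estimate is coarse (the job above got 3.148); more maps or more samples per map tighten it.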
2. Upload local data files and run a test (word count, wordcount)
[hadoop@master ~]$ mkdir input
[hadoop@master ~]$ echo "hello word">input/test1.txt
[hadoop@master ~]$ echo "hello hadoop">input/test2.txt
[hadoop@master ~]$ cd /usr/hadoop/
[hadoop@master hadoop]$ hadoop dfs -put ~/input test
[hadoop@master hadoop]$ hadoop dfs -ls test
-rw-r--r-- 1 hadoop supergroup 13 2014-06-10 20:37 /user/hadoop/test/test2.txt
[hadoop@master hadoop]$ hadoop jar hadoop-0.20.2-examples.jar wordcount test out
14/06/10 20:40:25 INFO input.FileInputFormat: Total input paths to process : 2
14/06/10 20:40:26 INFO mapred.JobClient: Running job: job_201406102021_0001
14/06/10 20:40:27 INFO mapred.JobClient: map 0% reduce 0%
14/06/10 20:40:39 INFO mapred.JobClient: map 50% reduce 0%
14/06/10 20:40:45 INFO mapred.JobClient: map 100% reduce 0%
14/06/10 20:40:51 INFO mapred.JobClient: map 100% reduce 100%
14/06/10 20:40:53 INFO mapred.JobClient: Job complete: job_201406102021_0001
14/06/10 20:40:53 INFO mapred.JobClient: Counters: 18
14/06/10 20:40:53 INFO mapred.JobClient: Job Counters
14/06/10 20:40:53 INFO mapred.JobClient: Launched reduce tasks=1
14/06/10 20:40:53 INFO mapred.JobClient: Rack-local map tasks=1
14/06/10 20:40:53 INFO mapred.JobClient: Launched map tasks=2
14/06/10 20:40:53 INFO mapred.JobClient: Data-local map tasks=1
14/06/10 20:40:53 INFO mapred.JobClient: FileSystemCounters
14/06/10 20:40:53 INFO mapred.JobClient: FILE_BYTES_READ=54
14/06/10 20:40:53 INFO mapred.JobClient: HDFS_BYTES_READ=24
14/06/10 20:40:53 INFO mapred.JobClient: FILE_BYTES_WRITTEN=178
14/06/10 20:40:53 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=24
14/06/10 20:40:53 INFO mapred.JobClient: Map-Reduce Framework
14/06/10 20:40:53 INFO mapred.JobClient: Reduce input groups=3
14/06/10 20:40:53 INFO mapred.JobClient: Combine output records=4
14/06/10 20:40:53 INFO mapred.JobClient: Map input records=2
14/06/10 20:40:53 INFO mapred.JobClient: Reduce shuffle bytes=60
14/06/10 20:40:53 INFO mapred.JobClient: Reduce output records=3
14/06/10 20:40:53 INFO mapred.JobClient: Spilled Records=8
14/06/10 20:40:53 INFO mapred.JobClient: Map output bytes=40
14/06/10 20:40:53 INFO mapred.JobClient: Combine input records=4
14/06/10 20:40:53 INFO mapred.JobClient: Map output records=4
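The counters are consistent with the two input lines above: 4 map output records (one per word, including "word" exactly as typed in test1.txt) and 3 reduce output records (three distinct words). As a local sanity check, the same result can be reproduced with plain coreutils before looking at the HDFS output:

```shell
# Re-create the two input files and reproduce the wordcount result locally;
# the three lines match the job's "Reduce output records=3".
mkdir -p input
echo "hello word"   > input/test1.txt
echo "hello hadoop" > input/test2.txt
cat input/*.txt | tr -s ' ' '\n' | sort | uniq -c | awk '{print $2"\t"$1}'
# hadoop  1
# hello   2
# word    1
```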