hadoop@hadoop-01:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (just press Enter here)
Enter same passphrase again: (just press Enter here)
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
9d:42:04:26:00:51:c7:4e:2f:7e:38:dd:93:1c:a2:d6 hadoop@hadoop-01
The authenticity of host 'hadoop-01' can't be established.
RSA key fingerprint is c8:c2:b2:d0:29:29:1a:e3:ec:d9:4a:47:98:29:b4:48.
Are you sure you want to continue connecting (yes/no)?
Note: the command above is run without sudo. If you prefix it with sudo it will report an error, because the root user has not been set up for passwordless SSH. The output is shown below; note that hadoop-03 was deliberately left disconnected, which is why the "No route to host" messages appear.
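Generating the key pair is only half of the passwordless-SSH setup: the public key still has to be appended to `~/.ssh/authorized_keys` on every node that the master will contact. A minimal sketch, assuming the worker hostnames shown in the output below (it will prompt for the hadoop user's password once per host):

```shell
# Distribute the master's public key to each node so that the hadoop
# user can log in without a password. ssh-copy-id appends the key to
# the remote ~/.ssh/authorized_keys and fixes its permissions.
for host in hadoop-01 hadoop-02 hadoop-03 hadoop-04 firehare-303; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@"$host"
done
```

Including hadoop-01 itself in the loop covers the local daemons (NameNode, SecondaryNameNode), which are also started over SSH.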
hadoop@hadoop-01:~$ /usr/lib/hadoop-0.20/bin/start-all.sh
namenode running as process 4836. Stop it first.
hadoop-02: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-02.out
hadoop-04: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-04.out
firehare-303: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
hadoop-01: secondarynamenode running as process 4891. Stop it first.
jobtracker running as process 4787. Stop it first.
hadoop-02: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-02.out
hadoop-04: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-04.out
firehare-303: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
With that, Hadoop has started up normally! (The "running as process ... Stop it first" lines above simply mean those master daemons were already running from an earlier start; they are not errors.)
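Before running a job, it is worth confirming that the daemons actually came up. A quick check, assuming the standard JDK and Hadoop 0.20 tools are on the PATH:

```shell
# List the Java daemons on this node. On the master (hadoop-01) you
# should see NameNode, SecondaryNameNode and JobTracker; on each
# worker, DataNode and TaskTracker.
jps

# Ask the NameNode how many DataNodes have registered and how much
# capacity the cluster reports.
hadoop-0.20 dfsadmin -report
```

With hadoop-03 disconnected, dfsadmin -report should list one fewer live DataNode than the number of workers in the slaves file.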
Testing Hadoop
Now that Hadoop is set up, the next step is to test it and see whether it works properly. The commands are as follows:
hadoop@hadoop-01:~$ hadoop-0.20 fs -mkdir input
hadoop@hadoop-01:~$ hadoop-0.20 fs -put /etc/hadoop-0.20/conf/*.xml input
hadoop@hadoop-01:~$ hadoop-0.20 fs -ls input
Found 6 items
-rw-r--r-- 3 hadoop supergroup 3936 2010-03-11 08:55 /user/hadoop/input/capacity-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 400 2010-03-11 08:55 /user/hadoop/input/core-site.xml
-rw-r--r-- 3 hadoop supergroup 3032 2010-03-11 08:55 /user/hadoop/input/fair-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 4190 2010-03-11 08:55 /user/hadoop/input/hadoop-policy.xml
-rw-r--r-- 3 hadoop supergroup 536 2010-03-11 08:55 /user/hadoop/input/hdfs-site.xml
-rw-r--r-- 3 hadoop supergroup 266 2010-03-11 08:55 /user/hadoop/input/mapred-site.xml
hadoop@hadoop-01:~$ hadoop-0.20 jar /usr/lib/hadoop-0.20/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
10/03/11 14:35:57 INFO mapred.FileInputFormat: Total input paths to process : 6
10/03/11 14:35:58 INFO mapred.JobClient: Running job: job_201003111431_0001
10/03/11 14:35:59 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:14 INFO mapred.JobClient: map 33% reduce 0%
10/03/11 14:36:20 INFO mapred.JobClient: map 66% reduce 0%
10/03/11 14:36:26 INFO mapred.JobClient: map 66% reduce 22%
10/03/11 14:36:36 INFO mapred.JobClient: map 100% reduce 22%
10/03/11 14:36:44 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:36:46 INFO mapred.JobClient: Job complete: job_201003111431_0001
10/03/11 14:36:46 INFO mapred.JobClient: Counters: 19
10/03/11 14:36:46 INFO mapred.JobClient: Job Counters
10/03/11 14:36:46 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:36:46 INFO mapred.JobClient: Rack-local map tasks=4
10/03/11 14:36:46 INFO mapred.JobClient: Launched map tasks=6
10/03/11 14:36:46 INFO mapred.JobClient: Data-local map tasks=2
10/03/11 14:36:46 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_READ=12360
10/03/11 14:36:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=422
10/03/11 14:36:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=204
10/03/11 14:36:46 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input groups=4
10/03/11 14:36:46 INFO mapred.JobClient: Combine output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map input records=315
10/03/11 14:36:46 INFO mapred.JobClient: Reduce shuffle bytes=124
10/03/11 14:36:46 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:36:46 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:36:46 INFO mapred.JobClient: Map input bytes=12360
10/03/11 14:36:46 INFO mapred.JobClient: Combine input records=4
10/03/11 14:36:46 INFO mapred.JobClient: Map output records=4
10/03/11 14:36:46 INFO mapred.JobClient: Reduce input records=4
10/03/11 14:36:46 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/03/11 14:36:46 INFO mapred.FileInputFormat: Total input paths to process : 1
10/03/11 14:36:46 INFO mapred.JobClient: Running job: job_201003111431_0002
10/03/11 14:36:47 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 14:36:56 INFO mapred.JobClient: map 100% reduce 0%
10/03/11 14:37:08 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 14:37:10 INFO mapred.JobClient: Job complete: job_201003111431_0002
10/03/11 14:37:11 INFO mapred.JobClient: Counters: 18
10/03/11 14:37:11 INFO mapred.JobClient: Job Counters
10/03/11 14:37:11 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Launched map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: Data-local map tasks=1
10/03/11 14:37:11 INFO mapred.JobClient: FileSystemCounters
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_READ=204
10/03/11 14:37:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=232
10/03/11 14:37:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=62
10/03/11 14:37:11 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input groups=1
10/03/11 14:37:11 INFO mapred.JobClient: Combine output records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map input records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce shuffle bytes=0
10/03/11 14:37:11 INFO mapred.JobClient: Reduce output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Spilled Records=8
10/03/11 14:37:11 INFO mapred.JobClient: Map output bytes=86
10/03/11 14:37:11 INFO mapred.JobClient: Map input bytes=118
10/03/11 14:37:11 INFO mapred.JobClient: Combine input records=0
10/03/11 14:37:11 INFO mapred.JobClient: Map output records=4
10/03/11 14:37:11 INFO mapred.JobClient: Reduce input records=4
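Once both jobs finish, the result can be read back from HDFS. A short sketch (the output directory name matches the one passed to the grep example above):

```shell
# List the files the job wrote, then print the matched dfs.* property
# names together with their occurrence counts.
hadoop-0.20 fs -ls output
hadoop-0.20 fs -cat 'output/part-*'
```

The reduce output above reports 4 records, so -cat should print four lines, each a count followed by a property name matching the pattern 'dfs[a-z.]+'. If this works, the cluster is handling both HDFS I/O and MapReduce correctly.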