
[Experience Sharing] Minor Troubles Encountered While Installing Hadoop

Posted on 2016-12-10 08:31:59
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# mkdir input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# cp conf/*.xml input
root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
Exception in thread "main" java.io.IOException: Error opening job jar: hadoop-*-examples.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:90)
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
at java.util.jar.JarFile.<init>(JarFile.java:150)
at java.util.jar.JarFile.<init>(JarFile.java:87)
at org.apache.hadoop.util.RunJar.main(RunJar.java:88)


An ls reveals the cause: the official docs give the jar name without a version number (hadoop-*-examples.jar), but my local examples jar carries the version in a different position, so the pattern matches nothing and is passed literally to RunJar, which then fails to open it. Using the actual jar name, version number included, as shown below, works:
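A quick listing confirms what the jar is actually called (this output assumes the stock 0.20.203.0 release layout):

root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# ls hadoop-*examples*.jar
hadoop-examples-0.20.203.0.jar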

root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# bin/hadoop jar hadoop-examples-0.20.203.0.jar  grep input output 'dfs[a-z.]+'
11/05/22 11:26:37 INFO mapred.FileInputFormat: Total input paths to process : 6
11/05/22 11:26:38 INFO mapred.JobClient: Running job: job_local_0001
11/05/22 11:26:38 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:38 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:38 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:38 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:38 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:38 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/05/22 11:26:39 INFO mapred.JobClient:  map 0% reduce 0%
11/05/22 11:26:41 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/capacity-scheduler.xml:0+7457
11/05/22 11:26:41 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/05/22 11:26:41 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:41 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:41 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:41 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:41 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:41 INFO mapred.MapTask: Finished spill 0
11/05/22 11:26:41 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/05/22 11:26:42 INFO mapred.JobClient:  map 100% reduce 0%
11/05/22 11:26:44 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hadoop-policy.xml:0+4644
11/05/22 11:26:44 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/05/22 11:26:44 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:44 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:44 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:44 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:44 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:44 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/05/22 11:26:47 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-queue-acls.xml:0+2033
11/05/22 11:26:47 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/05/22 11:26:47 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:47 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:47 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:47 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:47 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:47 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/05/22 11:26:50 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/hdfs-site.xml:0+178
11/05/22 11:26:50 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/05/22 11:26:50 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:50 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:50 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:50 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:50 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:50 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/05/22 11:26:53 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/core-site.xml:0+178
11/05/22 11:26:53 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/05/22 11:26:53 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:26:53 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:26:53 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:26:53 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:26:53 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:26:53 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/input/mapred-site.xml:0+178
11/05/22 11:26:56 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Merger: Merging 6 sorted segments
11/05/22 11:26:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/05/22 11:26:56 INFO mapred.LocalJobRunner:
11/05/22 11:26:56 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/05/22 11:26:56 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449
11/05/22 11:26:59 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:26:59 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/05/22 11:27:00 INFO mapred.JobClient:  map 100% reduce 100%
11/05/22 11:27:00 INFO mapred.JobClient: Job complete: job_local_0001
11/05/22 11:27:00 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:00 INFO mapred.JobClient:   File Input Format Counters
11/05/22 11:27:00 INFO mapred.JobClient:     Bytes Read=14668
11/05/22 11:27:00 INFO mapred.JobClient:   File Output Format Counters
11/05/22 11:27:00 INFO mapred.JobClient:     Bytes Written=123
11/05/22 11:27:00 INFO mapred.JobClient:   FileSystemCounters
11/05/22 11:27:00 INFO mapred.JobClient:     FILE_BYTES_READ=1108835
11/05/22 11:27:00 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1232836
11/05/22 11:27:00 INFO mapred.JobClient:   Map-Reduce Framework
11/05/22 11:27:00 INFO mapred.JobClient:     Map output materialized bytes=55
11/05/22 11:27:00 INFO mapred.JobClient:     Map input records=357
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/05/22 11:27:00 INFO mapred.JobClient:     Spilled Records=2
11/05/22 11:27:00 INFO mapred.JobClient:     Map output bytes=17
11/05/22 11:27:00 INFO mapred.JobClient:     Map input bytes=14668
11/05/22 11:27:00 INFO mapred.JobClient:     SPLIT_RAW_BYTES=713
11/05/22 11:27:00 INFO mapred.JobClient:     Combine input records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce input records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce input groups=1
11/05/22 11:27:00 INFO mapred.JobClient:     Combine output records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Reduce output records=1
11/05/22 11:27:00 INFO mapred.JobClient:     Map output records=1
11/05/22 11:27:00 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/05/22 11:27:00 INFO mapred.FileInputFormat: Total input paths to process : 1
11/05/22 11:27:00 INFO mapred.JobClient: Running job: job_local_0002
11/05/22 11:27:00 INFO mapred.MapTask: numReduceTasks: 1
11/05/22 11:27:00 INFO mapred.MapTask: io.sort.mb = 100
11/05/22 11:27:00 INFO mapred.MapTask: data buffer = 79691776/99614720
11/05/22 11:27:00 INFO mapred.MapTask: record buffer = 262144/327680
11/05/22 11:27:00 INFO mapred.MapTask: Starting flush of map output
11/05/22 11:27:00 INFO mapred.MapTask: Finished spill 0
11/05/22 11:27:00 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/05/22 11:27:01 INFO mapred.JobClient:  map 0% reduce 0%
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.LocalJobRunner: file:/home/lidongbo/soft/hadoop-0.20.203.0/grep-temp-1582508449/part-00000:0+111
11/05/22 11:27:03 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Merger: Merging 1 sorted segments
11/05/22 11:27:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/05/22 11:27:03 INFO mapred.LocalJobRunner:
11/05/22 11:27:03 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/05/22 11:27:03 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/lidongbo/soft/hadoop-0.20.203.0/output
11/05/22 11:27:04 INFO mapred.JobClient:  map 100% reduce 0%
11/05/22 11:27:06 INFO mapred.LocalJobRunner: reduce > reduce
11/05/22 11:27:06 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/05/22 11:27:07 INFO mapred.JobClient:  map 100% reduce 100%
11/05/22 11:27:07 INFO mapred.JobClient: Job complete: job_local_0002
11/05/22 11:27:07 INFO mapred.JobClient: Counters: 17
11/05/22 11:27:07 INFO mapred.JobClient:   File Input Format Counters
11/05/22 11:27:07 INFO mapred.JobClient:     Bytes Read=123
11/05/22 11:27:07 INFO mapred.JobClient:   File Output Format Counters
11/05/22 11:27:07 INFO mapred.JobClient:     Bytes Written=23
11/05/22 11:27:07 INFO mapred.JobClient:   FileSystemCounters
11/05/22 11:27:07 INFO mapred.JobClient:     FILE_BYTES_READ=607997
11/05/22 11:27:07 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=701437
11/05/22 11:27:07 INFO mapred.JobClient:   Map-Reduce Framework
11/05/22 11:27:07 INFO mapred.JobClient:     Map output materialized bytes=25
11/05/22 11:27:07 INFO mapred.JobClient:     Map input records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/05/22 11:27:07 INFO mapred.JobClient:     Spilled Records=2
11/05/22 11:27:07 INFO mapred.JobClient:     Map output bytes=17
11/05/22 11:27:07 INFO mapred.JobClient:     Map input bytes=25
11/05/22 11:27:07 INFO mapred.JobClient:     SPLIT_RAW_BYTES=127
11/05/22 11:27:07 INFO mapred.JobClient:     Combine input records=0
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce input records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce input groups=1
11/05/22 11:27:07 INFO mapred.JobClient:     Combine output records=0
11/05/22 11:27:07 INFO mapred.JobClient:     Reduce output records=1
11/05/22 11:27:07 INFO mapred.JobClient:     Map output records=1
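With both local jobs complete, the result can be read straight off the local filesystem. The counters above show a single reduce output record, so expect one matching property name; exactly which names match depends on the contents of your conf/*.xml files:

root@tiger:/home/lidongbo/soft/hadoop-0.20.203.0# cat output/*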


Ubuntu 11 does not install an SSH daemon (sshd) by default.
Install one with: sudo apt-get install openssh-server
Then confirm whether the SSH server is running: ps -e | grep ssh. If only ssh-agent shows up, the server has not been started yet and you need to run /etc/init.d/ssh start; if sshd appears, the SSH server is already running.
The server configuration file is /etc/ssh/sshd_config, where the SSH service port is defined. The default port is 22, but you can change it to another port, such as 222.
Then restart the SSH service: sudo /etc/init.d/ssh restart
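The same steps gathered into one sequence (paths are the Ubuntu 11.04 "natty" defaults; the port change is optional):

sudo apt-get install openssh-server
ps -e | grep ssh                 # only ssh-agent listed -> the server is not running yet
sudo /etc/init.d/ssh start       # start the daemon
sudo vi /etc/ssh/sshd_config     # optional: change "Port 22" to e.g. "Port 222"
sudo /etc/init.d/ssh restart     # apply any configuration change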


root@tiger:/etc# apt-get install openssh-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
ssh-import-id
Suggested packages:
rssh molly-guard openssh-blacklist openssh-blacklist-extra
The following NEW packages will be installed:
openssh-server ssh-import-id
0 upgraded, 2 newly installed, 0 to remove and 109 not upgraded.
Need to get 317 kB of archives.
After this operation, 913 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://cn.archive.ubuntu.com/ubuntu/ natty/main openssh-server i386 1:5.8p1-1ubuntu3 [311 kB]
Get:2 http://cn.archive.ubuntu.com/ubuntu/ natty/main ssh-import-id all 2.4-0ubuntu1 [5,934 B]
Fetched 317 kB in 2s (144 kB/s)
Preconfiguring packages ...
Selecting previously deselected package openssh-server.
(Reading database ... 134010 files and directories currently installed.)
Unpacking openssh-server (from .../openssh-server_1%3a5.8p1-1ubuntu3_i386.deb) ...
Selecting previously deselected package ssh-import-id.
Unpacking ssh-import-id (from .../ssh-import-id_2.4-0ubuntu1_all.deb) ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw ...
Processing triggers for man-db ...
Setting up openssh-server (1:5.8p1-1ubuntu3) ...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
ssh start/running, process 2396
Setting up ssh-import-id (2.4-0ubuntu1) ...
root@tiger:/etc# sshd
sshd re-exec requires execution with an absolute path
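The "sshd re-exec requires execution with an absolute path" message comes from sshd itself: it refuses to start when invoked by a bare name. Launch it via its full path or, better, through the init script (on Ubuntu the binary normally lives at /usr/sbin/sshd; run which sshd to confirm the path on your system):

root@tiger:/etc# /usr/sbin/sshd
root@tiger:/etc# /etc/init.d/ssh start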



root@tiger:/etc/init.d# ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 72:0f:15:ff:d4:14:63:ab:6c:6e:5f:57:4b:5c:cf:dd.
Are you sure you want to continue connecting (yes/no)?


This resolved the "ssh: connect to host 127.0.0.1 port 22: Connection refused" problem.
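With sshd now reachable on port 22, one more step is useful for Hadoop's pseudo-distributed mode: passphrase-less login to localhost. A minimal key setup, following the standard Hadoop quickstart, looks like this:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh 127.0.0.1    # should now log in without asking for a password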
