
[Experience Sharing] Hadoop: testing the first MapReduce program

Posted on 2018-10-30 10:57:26
  Note: this tests the wordcount example that ships with Hadoop (it counts how many times each word appears in the input files).
  In version 2.6.0 the examples jar is located at:
  /usr/local/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
  1. Create a local directory and files
  Create the directory:
  mkdir /home/hadoop/input
  cd /home/hadoop/input
  Create the files:
  touch wordcount1.txt
  touch wordcount2.txt
  2. Add content to the files
  echo "Hello World" > wordcount1.txt
  echo "Hello Hadoop" > wordcount2.txt
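Steps 1 and 2 only touch the local filesystem, so they can be rehearsed anywhere. A minimal sketch, using a throwaway temp directory instead of /home/hadoop (adjust the path to your cluster user's home when doing it for real):

```shell
# Create the local input directory and the two sample files.
# mktemp -d is used instead of /home/hadoop so this runs without
# any special permissions; the file contents match step 2 exactly.
dir=$(mktemp -d)
mkdir -p "$dir/input"
cd "$dir/input"
echo "Hello World"  > wordcount1.txt
echo "Hello Hadoop" > wordcount2.txt
cat wordcount*.txt   # prints the two lines just written
```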
  3. Create the input directory on HDFS
  hadoop fs -mkdir /input
  4. Copy the files into /input
  hadoop fs -put /home/hadoop/input/* /input
  5. Run the program
  hadoop jar /usr/local/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output
  Note: wordcount is the name of the example program to run (the examples jar bundles several examples, and its driver selects the right class from this name); /input is the input directory and /output is the output directory. The output directory must not already exist, or the job will fail to start.
  6. Console output during execution
  15/04/14 15:55:03 INFO client.RMProxy: Connecting to ResourceManager at hdnn140/192.168.152.140:8032
  15/04/14 15:55:04 INFO input.FileInputFormat: Total input paths to process : 2
  15/04/14 15:55:04 INFO mapreduce.JobSubmitter: number of splits:2
  15/04/14 15:55:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1428996061278_0002
  15/04/14 15:55:05 INFO impl.YarnClientImpl: Submitted application application_1428996061278_0002
  15/04/14 15:55:05 INFO mapreduce.Job: The url to track the job: http://hdnn140:8088/proxy/application_1428996061278_0002/
  15/04/14 15:55:05 INFO mapreduce.Job: Running job: job_1428996061278_0002
  15/04/14 15:55:17 INFO mapreduce.Job: Job job_1428996061278_0002 running in uber mode : false
  15/04/14 15:55:17 INFO mapreduce.Job:  map 0% reduce 0%
  15/04/14 15:56:00 INFO mapreduce.Job:  map 100% reduce 0%
  15/04/14 15:56:10 INFO mapreduce.Job:  map 100% reduce 100%
  15/04/14 15:56:11 INFO mapreduce.Job: Job job_1428996061278_0002 completed successfully
  15/04/14 15:56:11 INFO mapreduce.Job: Counters: 49
  File System Counters
  FILE: Number of bytes read=55
  FILE: Number of bytes written=316738
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=235
  HDFS: Number of bytes written=25
  HDFS: Number of read operations=9
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=2
  Job Counters
  Launched map tasks=2
  Launched reduce tasks=1
  Data-local map tasks=2
  Total time spent by all maps in occupied slots (ms)=83088
  Total time spent by all reduces in occupied slots (ms)=7098
  Total time spent by all map tasks (ms)=83088
  Total time spent by all reduce tasks (ms)=7098
  Total vcore-seconds taken by all map tasks=83088
  Total vcore-seconds taken by all reduce tasks=7098
  Total megabyte-seconds taken by all map tasks=85082112
  Total megabyte-seconds taken by all reduce tasks=7268352
  Map-Reduce Framework
  Map input records=2
  Map output records=4
  Map output bytes=41
  Map output materialized bytes=61
  Input split bytes=210
  Combine input records=4
  Combine output records=4
  Reduce input groups=3
  Reduce shuffle bytes=61
  Reduce input records=4
  Reduce output records=3
  Spilled Records=8
  Shuffled Maps =2
  Failed Shuffles=0
  Merged Map outputs=2
  GC time elapsed (ms)=1649
  CPU time spent (ms)=4260
  Physical memory (bytes) snapshot=280866816
  Virtual memory (bytes) snapshot=2578739200
  Total committed heap usage (bytes)=244625408
  Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
  File Input Format Counters
  Bytes Read=25
  File Output Format Counters
  Bytes Written=25
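The framework counters above are consistent with the tiny input: Map output records=4 is the total number of words across both files, and Reduce output records=3 is the number of distinct words. A quick local cross-check, pure shell and no cluster needed, with the file contents taken from step 2:

```shell
# Recreate the two input lines locally and count total vs distinct words.
dir=$(mktemp -d)
printf 'Hello World\nHello Hadoop\n' > "$dir/all.txt"
# tr -s squeezes runs of spaces and puts one word per line
total=$(tr -s ' ' '\n' < "$dir/all.txt" | grep -c .)               # every word occurrence
distinct=$(tr -s ' ' '\n' < "$dir/all.txt" | sort -u | grep -c .)  # unique words only
echo "total=$total distinct=$distinct"   # total=4 distinct=3
```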
  7. When the job finishes, list the output directory
  hadoop fs -ls /output
  8. View the result
  hadoop fs -cat /output/part-r-00000
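With the inputs from step 2 ("Hello World" and "Hello Hadoop"), part-r-00000 should contain each word, a tab, and its count, sorted by word. This local pipeline reproduces the same result without the cluster; it is only a preview of what the cat command should print, not the HDFS command itself:

```shell
# Local stand-in for the wordcount result: split into words, count
# occurrences, and format as "word<TAB>count" sorted by word, like
# the job's part-r-00000 file.
dir=$(mktemp -d)
printf 'Hello World\nHello Hadoop\n' > "$dir/all.txt"
tr -s ' ' '\n' < "$dir/all.txt" | sort | uniq -c | awk '{print $2 "\t" $1}'
# prints:
# Hadoop  1
# Hello   2
# World   1
```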
  9. Done


Thread: https://www.yunweiku.com/thread-628396-1-1.html