[Experience Share] Installing the Hadoop plugin for Eclipse

I suspect many people have never heard of the ZModem protocol, let alone the handy rz/sz tools. Good things should not be kept to oneself, so here is the little I know. The following paragraph is copied from SecureCRT's help:





ZModem is a full-duplex file transfer protocol that supports fast data transfer rates and effective error detection. ZModem is very user friendly, allowing either the sending or receiving party to initiate a file transfer. ZModem supports multiple file ("batch") transfers, and allows the use of wildcards when specifying filenames. ZModem also supports resuming most prior ZModem file transfer attempts.





rz and sz are the command-line tools for ZModem file transfers between Linux/Unix and Windows. On the Windows side you need a telnet/ssh client that supports ZModem; SecureCRT works. Log in to the Unix/Linux host with SecureCRT (telnet or ssh both work), then:

- Run rz to receive a file: SecureCRT pops up a file-selection dialog; pick a file and close the dialog, and the file is uploaded to the current directory.
- Run sz file1 file2 to send files to Windows (the destination directory is configurable).

This is far more convenient than FTP, and the server no longer needs to run an FTP service. PS: on Linux, installing the lrzsz-x.x.xx.rpm package provides these two small tools; on Unix you can build them from source, and for Solaris SPARC prebuilt binaries are available from sunfreeware.



If you installed hadoop-0.20.2, the Eclipse plugin is located under /home/hadoop/hadoop-0.20.2/contrib/eclipse-plugin.
If you installed hadoop-0.21.0, it is at /home/hadoop/hadoop-0.21.0/mapred/contrib/eclipse/hadoop-0.21.0-eclipse-plugin.jar.

Copy hadoop-0.21.0-eclipse-plugin.jar into the plugins directory under your Eclipse installation; Eclipse will pick it up automatically.



My local environment:

Eclipse 3.6

Hadoop-0.20.2

Hive-0.5.0-dev

1. Install the hadoop-0.20.2-eclipse-plugin plugin. Note: the hadoop-0.20.2-eclipse-plugin.jar shipped in the Hadoop distribution under /hadoop-0.20.2/contrib/eclipse-plugin/ has problems under Eclipse 3.6 and cannot run jobs on the Hadoop server; a working build can be downloaded from http://code.google.com/p/hadoop-eclipse-plugin/

2. Open the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce

3. Add a DFS Location: in the Map/Reduce Locations view, click New Hadoop Location and fill in the host and port:

Map/Reduce Master:
    Host: 10.10.xx.xx
    Port: 9001
DFS Master:
    Host: 10.10.xx.xx (just tick "Use M/R Master host")
    Port: 9000
User name: root
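These values must agree with fs.default.name and mapred.job.tracker in the cluster's own configuration. As a quick sanity check outside Eclipse, here is a minimal sketch that lists the HDFS root through the same DFS Master address (the class name DfsCheck is made up, the 10.10.xx.xx host is a placeholder, and the fs.default.name key assumes 0.20-era Hadoop):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must match the DFS Master host/port configured in the plugin.
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");
        FileSystem fs = FileSystem.get(conf);
        // Listing the HDFS root proves the NameNode is reachable.
        for (FileStatus s : fs.listStatus(new Path("/"))) {
            System.out.println(s.getPath());
        }
    }
}

If this prints the root directories, the host/port values entered in the New Hadoop Location dialog should work too.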

Under Advanced parameters, change hadoop.job.ugi from the default DrWho,Tardis to root,Tardis. If the option is not visible, restart Eclipse with eclipse -clean.
Otherwise you may hit org.apache.hadoop.security.AccessControlException errors.
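The same identity can also be set programmatically on a job's Configuration before submitting. A hedged fragment, assuming the 0.20.x-era hadoop.job.ugi convention of "user,group" (UgiExample is a made-up name):

import org.apache.hadoop.conf.Configuration;

public class UgiExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Equivalent to editing hadoop.job.ugi under Advanced parameters;
        // the value is "user,group" and must exist on the cluster side.
        conf.set("hadoop.job.ugi", "root,Tardis");
        System.out.println(conf.get("hadoop.job.ugi"));
    }
}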
4. Configure the local hosts file:

10.10.xx.xx zw-hadoop-master. zw-hadoop-master

# Note the trailing dot: the first entry must be zw-hadoop-master. (with a dot), otherwise running Map/Reduce jobs fails with:
java.lang.IllegalArgumentException: Wrong FS: hdfs://zw-hadoop-master:9000/user/root/oplog/out/_temporary/_attempt_201008051742_0135_m_000007_0, expected: hdfs://zw-hadoop-master.:9000
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
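If you still hit Wrong FS, it helps to print what the client actually resolves on your side. A small diagnostic sketch (FsUriCheck is a made-up name; it assumes the Hadoop configuration files are on the classpath so FileSystem.get returns the cluster filesystem):

import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsUriCheck {
    public static void main(String[] args) throws Exception {
        // Both sides of the "Wrong FS: ... expected: ..." message must agree,
        // so compare the canonical hostname against the filesystem's URI.
        System.out.println(
                InetAddress.getByName("zw-hadoop-master").getCanonicalHostName());
        FileSystem fs = FileSystem.get(new Configuration());
        System.out.println(fs.getUri());
    }
}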
5. Create a new Map/Reduce Project and add Mapper, Reducer, and Driver classes. Note that the auto-generated code targets the old Hadoop API, so adjust it by hand:

package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Input lines are pipe-delimited; the third field is the user id.
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}


package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for each user id.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}


package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }

        // Gzip-compress the job output. These must be set before the Job is
        // constructed, because new Job(conf, ...) takes a copy of the conf.
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class,
                CompressionCodec.class);

        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
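For intuition, here is a hypothetical run (the input records are invented for illustration; only the third pipe-delimited field, the user id, matters):

20100805|GET|user123|/index.html
20100805|GET|user456|/login
20100805|POST|user123|/submit

MapperTest emits (user123, 1) for each record, ReducerTest (also used as the combiner) sums the counts per key, and the gzip-compressed output contains:

user123	2
user456	1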
6. On DriverTest, click Run As -> Run on Hadoop and select the corresponding Hadoop Location.

