[Experience Sharing] Recompiling Hadoop so the 32-bit release works on a 64-bit OS

Posted on 2016-12-9 09:13:13
If you have never set up a Hadoop cluster yourself, you may not have noticed: the Apache release tarballs do not ship 64-bit native libraries, so on a 64-bit system you need to compile the Hadoop source package before installing, otherwise some of the native libraries under lib/ are unusable. (Some people will say Hadoop doesn't care about the OS word size; the Java code indeed doesn't, but the native libraries do. Below I share the mess from my own first compile attempt.)
What actually breaks if you skip the rebuild? See for yourself:

Problem encountered:
[iyunv@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


14/07/17 17:07:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `./in': No such file or directory
Finding the cause:
Inspect the local native library:
[iyunv@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 32-bit LSB shared object, 
 Intel 80386, version 1 (SYSV), dynamically linked, not stripped
This is 32-bit Hadoop installed on a 64-bit Linux system: the native libraries were built for a different architecture, so they cannot be loaded.
Bad news: the freshly installed cluster is unusable as-is.
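Before rebuilding, it is worth confirming the mismatch explicitly. A minimal sketch of the check (the libhadoop path is the one from this walkthrough; adjust it to your install):

```shell
# Report the OS word size and, if present, the native Hadoop library's ELF class.
os_bits=$(getconf LONG_BIT)          # prints 32 or 64
echo "OS word size: ${os_bits}-bit ($(uname -m))"

so=/usr/local/hadoop/lib/native/libhadoop.so.1.0.0   # path used in this walkthrough
if [ -e "$so" ]; then
    file "$so"                       # the output should say "ELF ${os_bits}-bit"
else
    echo "native library not found at $so"
fi
```

If the `file` output's bit count differs from `getconf LONG_BIT`, the library cannot be loaded and the NativeCodeLoader warning above is expected.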

[Preparing the build environment]
First re-point the yum repositories:
I downloaded version-groups.conf from the web into /etc/yum/ and removed the system's original repo files.
1. Install the required packages:
[iyunv@db99 data]# yum install autoconf automake libtool cmake ncurses-devel openssl-devel gcc* --nogpgcheck


2. Install Maven: download and unpack it. (Do not use a version newer than this one; the tarball is available.)
http://maven.apache.org/download.cgi  // download the matching tarball
apache-maven-3.2.1-bin.tar
[iyunv@db99 ~]# tar -xvf apache-maven-3.2.1-bin.tar
[iyunv@db99 ~]# ln -s /usr/local/apache-maven-3.2.1/ /usr/local/maven
[iyunv@db99 local]# vim /etc/profile  // add the environment variables
export MAVEN_HOME=/usr/local/maven
export PATH=$MAVEN_HOME/bin:$PATH 
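As a side note, blindly prepending in /etc/profile grows PATH every time the file is re-sourced. A sketch of an idempotent variant, using the /usr/local/maven symlink chosen above:

```shell
# Prepend $MAVEN_HOME/bin to PATH only if it is not already there.
MAVEN_HOME=/usr/local/maven
case ":$PATH:" in
    *":$MAVEN_HOME/bin:"*) : ;;                 # already on PATH, nothing to do
    *) PATH="$MAVEN_HOME/bin:$PATH" ;;
esac
export MAVEN_HOME PATH
echo "${PATH%%:*}"                              # first PATH entry
```

After sourcing the profile, `mvn -version` should print Apache Maven 3.2.1.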


3. Install protobuf. (Keep exactly this version; the tarball is available.)
https://code.google.com/p/protobuf/downloads/detail?name=protobuf-2.5.0.tar.gz
Download protobuf-2.5.0.tar.gz and unpack it.
[iyunv@db99 protobuf-2.5.0]# pwd
/root/protobuf-2.5.0
[iyunv@db99 protobuf-2.5.0]# ./configure --prefix=/usr/local/protoc/
[iyunv@db99 protobuf-2.5.0]# make
[iyunv@db99 protobuf-2.5.0]# make check
[iyunv@db99 protobuf-2.5.0]# make install
Run protoc --version from the bin directory:
libprotoc 2.5.0
Installation succeeded.
Add the environment variables:
vi /etc/profile
export MAVEN_HOME=/usr/local/maven
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:/usr/local/protoc/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
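With everything on PATH, a quick sanity loop before kicking off the hour-long build can save a failed run. A sketch (the tool names are the ones installed in the steps above; any of them will report MISSING until its step is done):

```shell
# Report where each build prerequisite resolves, or flag it as missing.
missing=0
for tool in gcc cmake mvn protoc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool -> $(command -v "$tool")"
    else
        echo "$tool -> MISSING"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all prerequisites present"
```

Only start the Maven build once nothing is reported MISSING.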

4. Compile Hadoop. (Unpacking the source tarball gives the following tree:)
[iyunv@db99 release-2.2.0]# pwd
/data/release-2.2.0
[iyunv@db99 release-2.2.0]# ls
BUILDING.txt       hadoop-common-project     hadoop-maven-plugins  hadoop-tools
dev-support        hadoop-dist               hadoop-minicluster    hadoop-yarn-project
hadoop-assemblies  hadoop-hdfs-project       hadoop-project        pom.xml
hadoop-client      hadoop-mapreduce-project  hadoop-project-dist
[iyunv@db99 release-2.2.0]# mvn package -Pdist,native -DskipTests -Dtar  (read on before running this Maven command)
.............. The build takes quite a while, roughly one hour.
If you hit an error like the following:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hadoop-auth: Compilation failure: Compilation failure:
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[88,11] error: cannot access AbstractLifeCycle
[ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,29] error: cannot access LifeCycle
[ERROR] class file for org.mortbay.component.LifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[98,10] error: cannot find symbol
[ERROR] symbol:   method start()
[ERROR] location: variable server of type Server
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[104,12] error: cannot find symbol
[ERROR] -> [Help 1]


You need to edit hadoop-common-project/hadoop-auth/pom.xml under the source tree:
[iyunv@db99 release-2.2.0]# vim /data/release-2.2.0/hadoop-common-project/hadoop-auth/pom.xml 
Add below line 55:
 56     <dependency>
 57         <groupId>org.mortbay.jetty</groupId>
 58         <artifactId>jetty-util</artifactId>
 59         <scope>test</scope>                                                                          
 60     </dependency>
Save, quit, and rerun the build.
The build finally succeeds:
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Building jar: /data/release-2.2.0/hadoop-minicluster/target/hadoop-minicluster-2.2.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [  1.386 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [  1.350 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [  2.732 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.358 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [  2.048 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [  3.450 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 16.114 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 13.317 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [05:22 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 16.925 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.044 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:51 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 28.601 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 27.589 s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  3.966 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.044 s]
[INFO] hadoop-yarn ....................................... SUCCESS [ 52.846 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 41.700 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [ 25.945 s]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.105 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [  8.436 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 15.659 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  3.647 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 12.495 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [  0.684 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [  5.266 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.102 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  2.666 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.093 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 20.092 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  2.783 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.225 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 36.636 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 16.645 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  3.058 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [  9.441 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  5.482 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [  7.615 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  2.473 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  6.183 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [  6.454 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [  4.802 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 27.635 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  2.850 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  6.092 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  4.742 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  3.155 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  3.317 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [  9.791 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  2.680 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.036 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [ 20.765 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [  6.476 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.215 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:32 min
[INFO] Finished at: 2014-07-18T01:18:24+08:00
[INFO] Final Memory: 117M/314M
[INFO] ------------------------------------------------------------------------


The compiled distribution now sits under ~/hadoop-dist/target/hadoop-2.2.0/.
Copy hadoop-2.2.0 into the install directory /usr/local/, redo its configuration files, reformat HDFS, and start it up.
At this point the native-library warning is gone; the remaining put error below is just the missing local path:
[iyunv@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


put: `.': No such file or directory
[iyunv@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
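A small helper makes this word-size check reusable. A sketch, demonstrated here on /bin/sh only because that binary always exists; point it at your lib/native/libhadoop.so.1.0.0 instead:

```shell
# Succeed (and print OK) if the given ELF file matches the OS word size.
check_elf_bits() {
    bits=$(getconf LONG_BIT)
    if file -L "$1" 2>/dev/null | grep -q "ELF ${bits}-bit"; then
        echo "OK: $1 is ${bits}-bit"
    else
        echo "MISMATCH: $1 is not ${bits}-bit"
        return 1
    fi
}
check_elf_bits /bin/sh    # substitute /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
```

Running it against the freshly built library should now print OK where the original 32-bit build printed a mismatch.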


Testing: upload a file, download a file, and view the uploaded file's contents:


[iyunv@db96 ~]# cat wwn.txt 
# This is a text txt
# by coco
# 2014-07-18
[iyunv@db96 ~]# hdfs dfs -mkdir /test
[iyunv@db96 ~]# hdfs dfs -put wwn.txt /test
[iyunv@db96 ~]# hdfs dfs -cat /test/wwn.txt
[iyunv@db96 ~]# hdfs dfs -get /test/wwn.txt /tmp
[iyunv@db96 hadoop]# hdfs dfs -rm /test/wwn.txt
[iyunv@db96 tmp]# ll
total 6924
-rw-r--r-- 1 root root      70 Jul 18 11:50 wwn.txt
[iyunv@db96 ~]# hadoop dfs -ls /test           
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


Found 2 items
-rw-r--r--   2 root supergroup    6970105 2014-07-18 11:44 /test/gc_comweight.txt
-rw-r--r--   2 root supergroup         59 2014-07-18 14:56 /test/hello.txt
At this point our HDFS filesystem works normally.









Following the same steps used to build Hadoop 2.2.0 on 64-bit CentOS, I then built hadoop2.2 on RHEL 6.2:
cd hadoop-2.2.0-src
mvn package -DskipTests -Pdist,native -Dtar
After about 10 minutes it errors out with:
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-project: Execution ……………………
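In a 30-plus-minute build, the root cause scrolls far off screen; saving the output and pulling the first [ERROR] line finds it fast. A sketch (the sample log below is made up for illustration):

```shell
# Build a tiny fake Maven log, then extract the first [ERROR] line from it.
log=/tmp/mvn-build.log
cat > "$log" <<'EOF'
[INFO] Apache Hadoop Project POM ................. SUCCESS
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar
[ERROR] -> [Help 1]
EOF
# In a real run you would capture the log with:
#   mvn package -Pdist,native -DskipTests -Dtar 2>&1 | tee "$log"
grep -m1 '^\[ERROR\]' "$log"
```

Here the first [ERROR] line points straight at the javadoc plugin, which is what the -Dmaven.javadoc.skip workaround below addresses.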

Recompile with the following command, skipping javadoc generation:
mvn package -DskipTests -Pdist,native -Dtar -Dmaven.javadoc.skip=true

After about 40 minutes, the build completes:
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-minicluster ---
[INFO] No sources in project. Archive not created.
[INFO]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-minicluster ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Skipping javadoc generation
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [  2.978 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [  8.844 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [01:58 min]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.616 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 43.968 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 46.198 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [16:14 min]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 11.632 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [04:25 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 54.758 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.055 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:11 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 11.402 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [01:19 min]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  3.266 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.032 s]
[INFO] hadoop-yarn ....................................... SUCCESS [01:21 min]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 43.147 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [01:00 min]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.084 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [  4.851 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 42.176 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  0.837 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [  5.910 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [  1.097 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [  1.100 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.083 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  1.529 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.110 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [  7.229 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  0.806 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.310 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 18.240 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [  3.875 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  0.911 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [  2.599 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  1.372 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [  4.471 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  0.676 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  1.054 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [  7.665 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [  1.322 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 21.522 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  0.677 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  1.394 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  2.027 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  0.727 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  1.062 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [  9.102 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  4.286 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.024 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [  8.841 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 16.353 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.212 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35:15 min
[INFO] Finished at: 2014-11-25T22:43:39+08:00
[INFO] Final Memory: 95M/368M
[INFO] ------------------------------------------------------------------------
