
[Experience Sharing] A Detailed Walkthrough of Installing Hadoop


Posted on 2018-10-29 12:33:24
  Download Hadoop:
  wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
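  Unpack it under /opt, which is where the rest of this walkthrough expects it to live (a minimal sketch):
  tar -xzf hadoop-1.2.1.tar.gz -C /opt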
  Install the JDK (walkthrough: http://www.linuxidc.com/Linux/2014-08/105906.htm)
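  Before going further it is worth confirming the JDK actually resolves (a quick sanity check; your path will vary):
  java -version
  echo $JAVA_HOME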
  Install Hadoop
  Go into the conf directory:
  /opt/hadoop-1.2.1/conf
  Configuring Hadoop mainly means editing three files: core-site.xml, hdfs-site.xml, and mapred-site.xml. Counting hadoop-env.sh, four files need editing in total.
  The first file only needs JAVA_HOME pointed at your JDK install path:
  hadoop-env.sh
  export HADOOP_HEAPSIZE=256   # adjusts the JVM heap Hadoop uses
  #export JAVA_HOME=/usr/lib/jvm/jdk7   # uncomment this line and set it to your JDK path
  If you don't know the path, look it up like this:
  [root@iZ28c21psoeZ conf]# echo $JAVA_HOME
  /usr/lib/jvm/jdk7
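  A quick way to confirm both edits took (a sketch, run from the conf directory; note the JAVA_HOME line must be uncommented):
  [root@iZ28c21psoeZ conf]# grep -E '^export (JAVA_HOME|HADOOP_HEAPSIZE)' hadoop-env.sh
  export JAVA_HOME=/usr/lib/jvm/jdk7
  export HADOOP_HEAPSIZE=256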
  The second file: open it and replace the contents wholesale (the Chinese comments from the original template were deleted before pasting):
  cd /opt/hadoop-1.2.1/conf
  vim core-site.xml

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/hadoop</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>hadoop/name</value>
    </property>
  </configuration>
  The third file: same procedure (comments stripped before pasting):
  vim hdfs-site.xml

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>dfs.data.dir</name>
      <value>/hadoop/data</value>
    </property>
  </configuration>
  The fourth file: same procedure (comments stripped before pasting):
  vim mapred-site.xml

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>ldy:9001</value>
    </property>
  </configuration>

  Here ldy is this server's hostname; substitute your own hostname (or localhost).
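  Before moving on, it pays to validate all three files; a malformed tag here is exactly what surfaces as a fatal parse error later. A minimal check, assuming xmllint (from libxml2) is installed:
  cd /opt/hadoop-1.2.1/conf
  xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml
  No output means all three parse cleanly.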
  Next, /etc/profile also needs a tweak: vim /etc/profile
  Append the following at the end; if the JDK-related lines already took effect when you installed the JDK, you only need to add the Hadoop ones.
  export JAVA_HOME=/usr/lib/jvm/jdk7
  export JRE_HOME=${JAVA_HOME}/jre
  export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
  export PATH=${JAVA_HOME}/bin:$PATH
  export HADOOP_HOME=/opt/hadoop-1.2.1
  export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$PATH
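  Then reload the profile and sanity-check the result (a quick sketch; output assumes the paths above):
  [root@iZ28c21psoeZ ~]# source /etc/profile
  [root@iZ28c21psoeZ ~]# echo $HADOOP_HOME
  /opt/hadoop-1.2.1
  [root@iZ28c21psoeZ ~]# hadoop version
  The last command should report Hadoop 1.2.1 once PATH is right.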
  Next, go into this directory:
  /opt/hadoop-1.2.1/bin
  and format the NameNode. The correct invocation has no dash before namenode:
  ./hadoop namenode -format
  (It was first typed here as hadoop -namenode -format, with a stray leading dash; the fallout shows up below.)
  If you hit an error like the following:
  Warning: $HADOOP_HOME is deprecated.
  /opt/hadoop-1.2.1/bin/hadoop: line 350: /usr/lib/jdk7/bin/java: No such file or directory
  /opt/hadoop-1.2.1/bin/hadoop: line 434: /usr/lib/jdk7/bin/java: No such file or directory
  /opt/hadoop-1.2.1/bin/hadoop: line 434: exec: /usr/lib/jdk7/bin/java: cannot execute: No such file or directory
  Note the path in the error: the script is looking for java under /usr/lib/jdk7, but the real JDK lives at /usr/lib/jvm/jdk7 — so double-check JAVA_HOME in the first file (hadoop-env.sh) against the real path:
  [root@iZ28c21psoeZ conf]# echo $JAVA_HOME
  /usr/lib/jvm/jdk7
  After fixing that and running again, another error appears:
  [root@iZ28c21psoeZ bin]# hadoop -namenode -format
  Warning: $HADOOP_HOME is deprecated.
  Unrecognized option: -namenode
  Error: Could not create the Java Virtual Machine.
  Error: A fatal exception has occurred. Program will exit.
  [root@iZ28c21psoeZ bin]#
  Two things can be changed here, but first note the real culprit: with the stray leading dash, -namenode is handed to the JVM as an option it does not recognize, which is why the JVM refuses to start. The command needs to be ./hadoop namenode -format, as used below.
  First (minor): in /opt/hadoop-1.2.1/conf/hadoop-env.sh, set export HADOOP_HEAPSIZE=256 — this is the JVM heap size in MB (the default is 1000), and 256 is friendlier on a small box.
  Second: look at this block in the /opt/hadoop-1.2.1/bin/hadoop source:
  ####################################################################
  if [[ $EUID -eq 0 ]]; then
    HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
  else
    HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
  fi
  ####################################################################
  When running as root, the first branch appends -jvm server — another option a stock JVM rejects with the same "Could not create the Java Virtual Machine" complaint. Changing that branch to plain -server (matching the else branch) avoids it.
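  To apply the root-branch change without hand-editing, a one-liner along these lines works (a sketch; back the script up first and eyeball the result):
  cp /opt/hadoop-1.2.1/bin/hadoop /opt/hadoop-1.2.1/bin/hadoop.bak
  sed -i 's/-jvm server/-server/' /opt/hadoop-1.2.1/bin/hadoop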
  Run the format again, this time without the dash... and it errors out yet again:
  [root@iZ28c21psoeZ bin]# ./hadoop namenode -format
  Warning: $HADOOP_HOME is deprecated.
  16/07/04 18:49:04 INFO namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG:   host = iZ28c21psoeZ/10.251.57.77
  STARTUP_MSG:   args = [-format]
  STARTUP_MSG:   version = 1.2.1
  STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
  STARTUP_MSG:   java = 1.7.0_60
  ************************************************************/
  [Fatal Error] core-site.xml:11:3: The element type "property" must be terminated by the matching end-tag "</property>".
  16/07/04 18:49:04 FATAL conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException; systemId: file:/opt/hadoop-1.2.1/conf/core-site.xml; lineNumber: 11; columnNumber: 3; The element type "property" must be terminated by the matching end-tag "</property>".
  16/07/04 18:49:04 ERROR namenode.NameNode: java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/opt/hadoop-1.2.1/conf/core-site.xml; lineNumber: 11; columnNumber: 3; The element type "property" must be terminated by the matching end-tag "</property>".
  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1249)
  at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1107)
  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1053)
  at org.apache.hadoop.conf.Configuration.set(Configuration.java:420)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.setStartupOption(NameNode.java:1374)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1463)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
  Caused by: org.xml.sax.SAXParseException; systemId: file:/opt/hadoop-1.2.1/conf/core-site.xml; lineNumber: 11; columnNumber: 3; The element type "property" must be terminated by the matching end-tag "</property>".
  at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
  at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:347)
  at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:177)
  at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1156)
  ... 6 more
  16/07/04 18:49:04 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at iZ28c21psoeZ/10.251.57.77
  ************************************************************/
  [root@iZ28c21psoeZ bin]#
  The log points straight at one of the three config files. Sure enough, in core-site.xml a closing </property> tag had been written as <property> (the slash was missing). Fix the tag and run it once more:
  [root@iZ28c21psoeZ bin]# ./hadoop namenode -format
  Warning: $HADOOP_HOME is deprecated.
  16/07/04 18:55:26 INFO namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG:   host = iZ28c21psoeZ/10.251.57.77
  STARTUP_MSG:   args = [-format]
  STARTUP_MSG:   version = 1.2.1
  STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
  STARTUP_MSG:   java = 1.7.0_60
  ************************************************************/
  16/07/04 18:55:27 INFO util.GSet: Computing capacity for map BlocksMap
  16/07/04 18:55:27 INFO util.GSet: VM type       = 64-bit
  16/07/04 18:55:27 INFO util.GSet: 2.0% max memory = 259522560
  16/07/04 18:55:27 INFO util.GSet: capacity      = 2^19 = 524288 entries
  16/07/04 18:55:27 INFO util.GSet: recommended=524288, actual=524288
  16/07/04 18:55:32 INFO namenode.FSNamesystem: fsOwner=root
  16/07/04 18:55:33 INFO namenode.FSNamesystem: supergroup=supergroup
  16/07/04 18:55:33 INFO namenode.FSNamesystem: isPermissionEnabled=true
  16/07/04 18:55:42 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
  16/07/04 18:55:42 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  16/07/04 18:55:42 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
  16/07/04 18:55:42 INFO namenode.NameNode: Caching file names occuring more than 10 times

  16/07/04 18:55:45 INFO common.Storage: Image file /hadoop/dfs/name/current/fsimage of size … bytes saved in 0 seconds.
  16/07/04 18:55:47 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/hadoop/dfs/name/current/edits
  16/07/04 18:55:47 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/hadoop/dfs/name/current/edits
  16/07/04 18:55:48 INFO common.Storage: Storage directory /hadoop/dfs/name has been successfully formatted.
  16/07/04 18:55:48 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at iZ28c21psoeZ/10.251.57.77
  ************************************************************/
  Perfect. Carrying on:
  cd /opt/hadoop-1.2.1/bin
  [root@iZ28c21psoeZ bin]# start-all.sh
  Warning: $HADOOP_HOME is deprecated.
  starting namenode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-iZ28c21psoeZ.out
  localhost: socket: Address family not supported by protocol
  localhost: ssh: connect to host localhost port 22: Address family not supported by protocol
  localhost: socket: Address family not supported by protocol
  localhost: ssh: connect to host localhost port 22: Address family not supported by protocol
  starting jobtracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-iZ28c21psoeZ.out
  localhost: socket: Address family not supported by protocol
  localhost: ssh: connect to host localhost port 22: Address family not supported by protocol
  [root@iZ28c21psoeZ bin]#
  In plain terms: for each daemon, start-all.sh tries to ssh into localhost on port 22, and every connection dies with "Address family not supported by protocol".
  One more change is needed. As the log shows, the port is wrong: sshd on this server listens on a non-default port, so the scripts' connections to port 22 fail. Make Hadoop ssh to the matching port by adding one line to conf/hadoop-env.sh (1234 is this server's sshd port; use your own):
  export HADOOP_SSH_OPTS="-p 1234"
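  If you are not sure which port sshd actually listens on, check before setting HADOOP_SSH_OPTS (a sketch; either command works on most systems):
  grep -i '^Port' /etc/ssh/sshd_config
  netstat -tlnp | grep sshd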
  Run it once more:
  [root@ldy bin]# sh start-all.sh
  Warning: $HADOOP_HOME is deprecated.
  starting namenode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-ldy.out
  localhost: starting datanode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-ldy.out
  localhost: starting secondarynamenode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-ldy.out
  starting jobtracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-ldy.out
  localhost: starting tasktracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-ldy.out
  [root@ldy bin]# jps
  27054 DataNode
  26946 NameNode
  27374 TaskTracker
  27430 Jps
  27250 JobTracker
  27165 SecondaryNameNode
  OK — jps shows all five Hadoop daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) up and running. Success.
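  As a final smoke test, touch HDFS itself with the Hadoop 1.x fs commands (a sketch; any path will do):
  hadoop fs -mkdir /test
  hadoop fs -ls /
  If the directory lists, the NameNode and DataNode are talking to each other.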

