[Experience Sharing] Installing Hadoop in Large-Cluster Mode for a Production Environment

  I. Experiment notes
   1. Hostnames are resolved through DNS rather than a hosts file;
   2. The SSH key file is shared over NFS instead of being copied to every node by hand;
   3. Hadoop is pushed out with a batch copy script rather than copied machine by machine.
  Test environment:

Hostname  IP             Hadoop version  Role                              OS
hadoop1   192.168.1.161  0.20.2          namenode, NFS server              RHEL 5.4 x86
hadoop2   192.168.1.162  0.20.2          datanode, DNS server, NFS client  RHEL 5.4 x86
hadoop3   192.168.1.163  0.20.2          datanode, NFS client              RHEL 5.4 x86
  II. Installing and configuring DNS
    1. Upload the dns directory:



[iyunv@hadoop2 dns]# ls
dnsmasq.conf  dnsmasq.hosts  dnsmasq.resolv.conf  pid  start.sh  stop.sh
    2. Edit the files in the dns directory:



----dnsmasq.conf: the dnsmasq configuration file----
[iyunv@hadoop2 dns]# cat dnsmasq.conf
cache-size=50000
dns-forward-max=1000
resolv-file=/dns/dnsmasq.resolv.conf
addn-hosts=/dns/dnsmasq.hosts
----dnsmasq.hosts: host records for dnsmasq to serve instead of /etc/hosts----
[iyunv@hadoop2 dns]# cat dnsmasq.hosts
192.168.1.161 hadoop1
192.168.1.162 hadoop2
192.168.1.163 hadoop3
----dnsmasq.resolv.conf: add the upstream DNS server addresses here----
[iyunv@hadoop2 dns]# cat dnsmasq.resolv.conf
### /etc/resolv.conf file autogenerated by netconfig!
#
# Before you change this file manually, consider to define the
# static DNS configuration using the following variables in the
# /etc/sysconfig/network/config file:
#     NETCONFIG_DNS_STATIC_SEARCHLIST
#     NETCONFIG_DNS_STATIC_SERVERS
#     NETCONFIG_DNS_FORWARDER
# or disable DNS configuration updates via netconfig by setting:
#     NETCONFIG_DNS_POLICY=''
#
# See also the netconfig(8) manual page and other documentation.
#
# Note: Manual change of this file disables netconfig too, but
# may get lost when this file contains comments or empty lines
# only, the netconfig settings are same with settings in this
# file and in case of a "netconfig update -f" call.
#
nameserver 218.108.248.228
nameserver 218.108.248.200

[iyunv@hadoop2 dns]# cat start.sh
#!/bin/sh
killall dnsmasq
dnsmasq --port=53 --pid-file=/dns/pid --conf-file=/dns/dnsmasq.conf
[iyunv@hadoop2 dns]# cat stop.sh
#!/bin/sh
killall dnsmasq
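For everyday lookups the nodes must actually use this dnsmasq instance as their resolver, a step the original does not show; a minimal sketch of the client-side /etc/resolv.conf, assuming the hadoop2 address from the table above:

# assumed /etc/resolv.conf on every cluster node (not shown in the original):
# point at the dnsmasq instance running on hadoop2
nameserver 192.168.1.162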
    3. Start the DNS service and test it from hadoop2:



[iyunv@hadoop2 dns]# sh start.sh
[iyunv@hadoop2 dns]# dig @hadoop2 www.qq.com
; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5 <<>> @hadoop2 www.qq.com
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: ...
...
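  III. Installing and configuring NFS
  The passwordless-login steps below rely on hadoop1 exporting a directory that every node sees as /mnt. A minimal sketch of such a setup follows; the export path and options are assumptions, only the /mnt mount point appears in this post:

----on hadoop1, the NFS server (exporting /mnt itself is an assumption)----
[iyunv@hadoop1 ~]# cat /etc/exports
/mnt 192.168.1.0/24(rw,sync,no_root_squash)
[iyunv@hadoop1 ~]# service portmap start
[iyunv@hadoop1 ~]# service nfs start
----on the NFS clients hadoop2 and hadoop3----
[iyunv@hadoop2 ~]# mount -t nfs 192.168.1.161:/mnt /mnt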
  IV. Setting up passwordless SSH login
    4. Generate an RSA key pair on each node as the hadoop user (ssh-keygen -t rsa, accepting the defaults), then append every node's public key to the shared key file:

[iyunv@hadoop1 ~]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys
[iyunv@hadoop2 ~]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys
[iyunv@hadoop3 ~]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys
    5. Check the contents of authorized_keys:



[hadoop@hadoop1 ~]$ cat /mnt/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA32vNwXv/23k0yF7QeITb61J5uccudHB3gQBtqCnB7wsOtIhdUsVIfcxGmPnWp6S9V+Ob+b73Vrl2xsxP4i0N8Cu1l2ZcU9jevc+o37yX4nW2oTBFVEP31y9E9fXkYf3cKiF0UrvunL59qgNnVUbq8qRtFr5QPAx6lGY0TYZiPaPr+POwNKF1IZvToqABsOnNimv0DNmAhbd3QyM7GaR/ZRQKOCMF8NYljo6exoDk9xPq/wCHC/rBnAU3gUlwi7Kn/tk2dirwvYZuqP3VO+w5zd6sYxscD8+UNK99XdOARzTlc8/iEPHy+JSBa6sQI2hOAOCAuHBtTymoJFUDH9YqXQ== hadoop@hadoop1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4lTx6JTZlhoLI4Yyo0a6YeDmIgz60pYwYKwVL+p4wfp9OWB2/sEyf9iCsK8i94mnWMfNsRehqAG2ucPmWz1s/Kufxu/6uc8hJjDlOOMUOE7ENyN0Zre5MHj8jauDRhY4y37Rh3Crx86wzq79isDqJOWnKyjPQDjUH45780Hvtk87ckwNNSFhwuRgTFKhz0bQloJuHazU1/W924wmicqeEUSGhUFEkXUeJu7FqQjJcPjoRNqyTEuCHiYVh9HjOrUPdosfYqmQfuZ/x2gmsGRUdfTl32rkoZW43ay8CFV/MKqAFucEOiiHW7xttmm3zJgcyLptGhjo7NtvAQwKkPfG6w== hadoop@hadoop2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs7fkzQMR6yVqLBVAnJqTxFPO9NNngrmYDNZMbWXDz6V8J4Z7zC46odUERe3CNjC+v3X8rwvUWlALYtvMNonQwhnpvqe2s0CpDithSFkOt5fQarRYP5JtAjHvF5b22NqcyltF+ywLT4zKAg4tjgGV5nLafI2hsNjgljUOXkRjpwSSUpLmLayWnepLIwioCPPGIkM40balUOEWEASzaI4DaPoywmoVUrByou71i1F1VizXpbhIWW+LE2cANAy1xmP0zYBa+/O4mvpgZjWLtLpKFR/1nRZPh1emy+OB6RcoJl3Awmhcsyyjd4Q8jfOYsH78PKpnwJfyhtUEIENrzUV63w== hadoop@hadoop3

    6. Create the symlink (only needed on the NFS clients, not on the NFS server):



[hadoop@hadoop2 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop2 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:14 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys
[hadoop@hadoop3 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop3 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:15 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys
    7. Fix the permissions:



[hadoop@hadoop1 ~]$ chmod 700 /home/hadoop/.ssh/
  Note: without this change, ssh will still prompt for a password at login.
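  sshd is strict about the whole permission chain, including the real file behind the symlink; a sketch of the usual tightening on each node (these modes are standard sshd expectations, not taken from the original):

chmod 755 /home/hadoop            # the home directory must not be group- or world-writable
chmod 700 /home/hadoop/.ssh       # as in step 7
chmod 644 /mnt/authorized_keys    # the shared key file must not be writable by others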
  8. Verify that passwordless login works:



[hadoop@hadoop1 ~]$ ssh hadoop2
The authenticity of host 'hadoop2 (192.168.1.162)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop2,192.168.1.162' (RSA) to the list of known hosts.
[hadoop@hadoop2 ~]$ ssh hadoop3
The authenticity of host 'hadoop3 (192.168.1.163)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop3,192.168.1.163' (RSA) to the list of known hosts.
[hadoop@hadoop3 ~]$
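  On a production-sized cluster, answering the first-connection prompt above for every host quickly gets tedious (start-all.sh in step 3 of the next section runs into the same prompt). A common workaround, a stock OpenSSH client option rather than anything from the original, is to relax host-key checking for the cluster's own addresses in each node's ~/.ssh/config:

----assumed /home/hadoop/.ssh/config on every node----
Host hadoop* 192.168.1.*
    StrictHostKeyChecking no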
  V. Batch-installing Hadoop
  1. First finish installing the namenode on hadoop1; for the distributed Hadoop installation itself, see: Hadoop集群安装.
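  The original defers the namenode configuration to that guide. Judging from the directories copied below (temp, user) and the path shown in the format output of step 2, the 0.20-era settings would look roughly like the sketch below; every value here is inferred, none is quoted from the original (the start-all output in step 3 likewise implies conf/masters lists 192.168.1.161 for the secondarynamenode):

<!-- conf/core-site.xml (sketch; values inferred, port 9000 is a conventional assumption) -->
<configuration>
  <property><name>fs.default.name</name><value>hdfs://hadoop1:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/home/hadoop/temp</value></property>
</configuration>
<!-- conf/hdfs-site.xml (sketch; the path matches the format output below) -->
<configuration>
  <property><name>dfs.name.dir</name><value>/home/hadoop/user/name</value></property>
</configuration>
<!-- conf/mapred-site.xml (sketch; port 9001 is a conventional assumption) -->
<configuration>
  <property><name>mapred.job.tracker</name><value>hadoop1:9001</value></property>
</configuration>

  With the namenode configured, generate a batch copy script from conf/slaves and push the whole tree to the datanodes: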



[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp hadoop-0.20.2 hadoop@"$1":/home/hadoop/"}' >  scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp temp hadoop@"$1":/home/hadoop/"}' >>  scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp user hadoop@"$1":/home/hadoop/"}' >>  scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp jdk1.7 hadoop@"$1":/home/hadoop/"}' >>  scp.sh
[hadoop@hadoop1 ~]$ ls
hadoop-0.20.2  jdk1.7  scp.sh  temp  user
[hadoop@hadoop1 ~]$ cat scp.sh
scp -rp hadoop-0.20.2 hadoop@192.168.1.162:/home/hadoop/
scp -rp hadoop-0.20.2 hadoop@192.168.1.163:/home/hadoop/
scp -rp temp hadoop@192.168.1.162:/home/hadoop/
scp -rp temp hadoop@192.168.1.163:/home/hadoop/
scp -rp user hadoop@192.168.1.162:/home/hadoop/
scp -rp user hadoop@192.168.1.163:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.162:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.163:/home/hadoop/
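    The four awk passes above do the job; an equivalent single pass over conf/slaves is sketched below, after which the generated script is simply run with sh:

#!/bin/sh
# one-pass alternative to the four awk invocations above (a sketch;
# the directory list is hard-coded to the four items copied here)
for host in $(cat hadoop-0.20.2/conf/slaves); do
    for dir in hadoop-0.20.2 temp user jdk1.7; do
        echo "scp -rp $dir hadoop@$host:/home/hadoop/"
    done
done > scp.sh
sh scp.sh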
    2. Format the namenode:



[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop namenode -format
13/08/25 11:52:39 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1/192.168.1.161
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/hadoop/user/name ? (Y or N) Y
13/08/25 11:52:46 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/08/25 11:52:46 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/25 11:52:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/25 11:52:47 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/08/25 11:52:48 INFO common.Storage: Storage directory /home/hadoop/user/name has been successfully formatted.
13/08/25 11:52:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.161
************************************************************/
    3. Start Hadoop:



[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoop1.out
192.168.1.163: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop3.out
192.168.1.162: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop2.out
The authenticity of host '192.168.1.161 (192.168.1.161)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
192.168.1.161: Warning: Permanently added '192.168.1.161' (RSA) to the list of known hosts.
192.168.1.161: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoop1.out
192.168.1.162: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop2.out
192.168.1.163: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop3.out
    4. Check the processes on each node:



[hadoop@hadoop1 ~]$ jdk1.7/bin/jps
4416 Jps
4344 JobTracker
4306 SecondaryNameNode
4157 NameNode
[hadoop@hadoop2 ~]$ jdk1.7/bin/jps
3699 TaskTracker
3636 DataNode
3752 Jps
[hadoop@hadoop3 ~]$ jdk1.7/bin/jps
4763 TaskTracker
4834 Jps
4653 DataNode
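    jps only proves the JVMs are running; the stock 0.20 commands below give a functional check (given the table above, the report should list two live datanodes):

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop dfsadmin -report
[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop fs -ls /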
   VI. Key notes
  1. If the NFS share does not remount automatically after a reboot, add the following to /etc/rc.d/rc.local:
  /bin/mount -a
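  Note that /bin/mount -a only remounts what /etc/fstab lists; the client-side entry it presupposes would look like this sketch (the export path follows the assumption made in section III):

# assumed /etc/fstab line on hadoop2 and hadoop3
192.168.1.161:/mnt  /mnt  nfs  defaults  0 0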
  2. If the machines obtain their IPs via DHCP, add the following to /etc/rc.d/rc.local on the DNS host:



/bin/cat /app/resolv.conf > /etc/resolv.conf


[iyunv@node1 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
#nameserver 192.168.1.151
  And add this to /etc/rc.d/rc.local on the other hosts:



/bin/cat /app/resolv.conf > /etc/resolv.conf


[iyunv@node2 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
nameserver 192.168.1.151
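  Putting the two boot-time fixes together, the lines appended to a client node's /etc/rc.d/rc.local end up as this sketch:

/bin/cat /app/resolv.conf > /etc/resolv.conf   # restore the resolver that dhclient rewrites at boot
/bin/mount -a                                  # remount the NFS share listed in /etc/fstab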
  
  
  
