[Experience Share] Hadoop configuration automation, part 1: SSH automation

Posted on 2016-12-11 07:44:42
  This post draws on: "SSH password-less login: multi-node automated deployment, SHELL edition".
  Test environment: Ubuntu 12.04.2 Server 64-bit; expect 5.45; GNU bash 4.2.24(1)-release (x86_64-pc-linux-gnu).
  Note: the automated setup produces a cluster with one NameNode, one SecondaryNameNode, and one JobTracker, all three running on the same machine; the DataNodes and TaskTrackers run on the remaining slave machines (adjust the shell scripts if you need a different layout).
  How should Hadoop configuration be automated? Several pieces are involved. Suppose every machine in the cluster is bare, all machines share one username and password, the expect tool is installed, and /etc/hosts already lists the hostname and IP of every cluster machine. The work then breaks down into three steps: (1) automated SSH deployment; (2) automated JDK deployment; (3) automated Hadoop configuration.
  (1) Automated SSH deployment: first create a slaves.host file on the NameNode listing the hostnames of all the slaves. The script then generates id_rsa.pub on the NameNode, builds the authorized_keys file from it, and distributes that file to the slaves, completing the SSH setup.
  (2) JDK deployment: unpack a jdk.tar.gz archive, edit .bashrc, then distribute both .bashrc and the unpacked JDK directory to the slaves; that completes the JDK setup.
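  Step (2) is not covered by the script in this post, but it might be sketched as below. The archive name, the unpack directory (jdk1.6.0_45), and the scp command in the comment are assumptions for illustration; the sketch only touches a throwaway file so it can run anywhere without a cluster.

```shell
#!/bin/sh
# Sketch of step (2): set up .bashrc for a JDK and note how it would be
# distributed. JDK_DIR is a hypothetical path.
JDK_DIR="$HOME/jdk1.6.0_45"
# tar -xzf jdk.tar.gz -C "$HOME"   # in the real script, unpack the archive first

# Append JAVA_HOME and PATH exports to the given bashrc-style file.
append_java_env() {
  cat >> "$1" <<EOF
export JAVA_HOME=$JDK_DIR
export PATH=\$JAVA_HOME/bin:\$PATH
EOF
}

# Demonstrate on a throwaway copy instead of the real ~/.bashrc.
demo_rc=$(mktemp)
append_java_env "$demo_rc"
cat "$demo_rc"

# In the real script you would then loop over slaves.host and run, per slave:
#   scp -r "$JDK_DIR" "$HOME/.bashrc" "$user@$slave:$HOME/"
```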
  (3) Automated Hadoop configuration: this mainly means editing the files under the conf directory. First download and unpack the Hadoop tarball, adjust the usual settings in conf, then fill in the .xml and .env files according to the JDK path on the NameNode, the NameNode's hostname, and the slave hostnames; finally distribute the configured Hadoop directory to every slave.
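  The core of step (3) is stamping the NameNode hostname and JDK path into the conf files. A minimal sketch, assuming Hadoop 1.x-style files (core-site.xml, mapred-site.xml, hadoop-env.sh) and the usual 9000/9001 ports; it writes into a scratch directory so it can run standalone:

```shell
#!/bin/sh
# Sketch of step (3): generate Hadoop 1.x conf files from the namenode's
# hostname and JDK path. Ports and the JAVA_HOME fallback are illustrative.
master=${HOSTNAME:-$(hostname)}
java_home="${JAVA_HOME:-/usr/lib/jvm/java-6}"
conf=$(mktemp -d)    # stand-in for hadoop-1.x/conf

# HDFS entry point on the namenode.
cat > "$conf/core-site.xml" <<EOF
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://$master:9000</value>
  </property>
</configuration>
EOF

# JobTracker address, also on the namenode.
cat > "$conf/mapred-site.xml" <<EOF
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>$master:9001</value>
  </property>
</configuration>
EOF

# hadoop-env.sh needs the JDK path discovered on the namenode.
echo "export JAVA_HOME=$java_home" >> "$conf/hadoop-env.sh"

# The real script would now scp the whole hadoop directory to every slave.
ls "$conf"
```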
  Here is the first part, the shell script for the SSH setup:

#!/bin/bash
# auto-generate an SSH key and distribute authorized_keys to the slave machines
# this script should run on the namenode machine
if [ $# -lt 2 ]; then
  cat << HELP
generate_ssh_v1 -- generate an SSH key for password-less login;
this script should run on the namenode machine, and the user should edit the slaves.host file
USAGE: ./generate_ssh_v1 user password
EXAMPLE: ./generate_ssh_v1 hadoop1 1234
HELP
  exit 0
fi
user=$1
ip=$HOSTNAME
pass=$2
rm -rf ~/.ssh
echo ''
echo "####################################################"
echo " generate the rsa public key on $HOSTNAME ..."
echo "####################################################"
expect -c "
set timeout 3
spawn ssh $user@$ip
expect \"yes/no\"
send -- \"yes\r\"
expect \"password:\"
send -- \"$pass\r\"
expect \"$\"
send \"ssh-keygen -t rsa -P '' -f $HOME/.ssh/id_rsa\r\"
expect \"$\"
send \"ssh-copy-id -i $HOME/.ssh/id_rsa.pub $HOSTNAME\r\"
expect \"password\"
send -- \"$pass\r\"
expect \"$\"
send \"exit\r\"
expect eof
"
echo ''
echo "####################################################"
echo " copy the namenode's authorized_keys to slaves ..."
echo "####################################################"
for slave in `cat slaves.host`
do
expect -c "
set timeout 3
spawn ssh $user@$slave
expect \"yes/no\"
send -- \"yes\r\"
expect \"password\"
send -- \"$pass\r\"
expect \"$\"
send \"rm -rf $HOME/.ssh\r\"
expect \"$\"
send \"mkdir $HOME/.ssh\r\"
expect \"$\"
send \"exit\r\"
expect eof
"
done
for slave in `cat slaves.host`
do
expect -c "
set timeout 3
spawn scp $HOME/.ssh/authorized_keys $user@$slave:$HOME/.ssh/
expect \"password\"
send -- \"$pass\r\"
expect eof
"
done
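  As an aside: on systems where the sshpass package is available, the same key distribution can be scripted without expect at all, since ssh-copy-id does the authorized_keys bookkeeping itself. A sketch under that assumption; it only collects and echoes the commands it would run, so it is safe to execute without a cluster:

```shell
#!/bin/sh
# Alternative to the expect script above, using sshpass (assumed installed).
# run() records and echoes each command instead of executing it, so this
# sketch runs standalone; drop the indirection to make it real.
plan=""
run() { plan="$plan$* ; "; echo "+ $*"; }

user=hadoop1
pass=1234        # example credentials from the post

run ssh-keygen -t rsa -P '' -f "$HOME/.ssh/id_rsa"
# in the real script the list would come from slaves.host:
for slave in hadoop; do
  run sshpass -p "$pass" ssh-copy-id -o StrictHostKeyChecking=no "$user@$slave"
done
```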
/etc/hosts:

192.168.128.138 hadoop
192.168.128.130 ubuntu

  slaves.host:

hadoop
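  After the script finishes, password-less login can be verified from the namenode. A small check loop (a sketch reusing the same slaves.host format; BatchMode makes ssh fail instead of prompting when a key was not accepted):

```shell
#!/bin/sh
# Verify password-less SSH to every host listed in a slaves file.
# -n keeps ssh from swallowing the loop's stdin; BatchMode forbids
# password prompts, so an unprovisioned slave fails fast.
check_slaves() {
  while read -r slave; do
    if ssh -n -o BatchMode=yes -o ConnectTimeout=5 "$slave" true 2>/dev/null; then
      echo "$slave: ok"
    else
      echo "$slave: FAILED"
    fi
  done < "$1"
}

# Example against a deliberately unreachable host:
demo=$(mktemp)
echo "no-such-host.invalid" > "$demo"
check_slaves "$demo"
```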
  Test output:

hadoop1@ubuntu:~$ ./generate_ssh_v1 hadoop1 1234
####################################################
generate the rsa public key on ubuntu ...
####################################################
spawn ssh hadoop1@ubuntu
The authenticity of host 'ubuntu (192.168.128.130)' can't be established.
ECDSA key fingerprint is 53:c7:7a:dc:3b:bc:34:00:4a:6d:18:1c:5e:87:e7:e8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ubuntu,192.168.128.130' (ECDSA) to the list of known hosts.
hadoop1@ubuntu's password:
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-23-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Mon Sep 23 15:22:03 2013 from ubuntu
ssh-keygen -t rsa -P '' -f /home/hadoop1/.ssh/id_rsa
hadoop1@ubuntu:~$ ssh-keygen -t rsa -P '' -f /home/hadoop1/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hadoop1/.ssh/id_rsa.
Your public key has been saved in /home/hadoop1/.ssh/id_rsa.pub.
The key fingerprint is:
e1:5e:20:9d:4e:11:f8:dc:05:35:08:83:5d:ce:99:ed hadoop1@ubuntu
The key's randomart image is:
+--[ RSA 2048]----+
|       +=+o+o    |
|      o..*.+..   |
|      .o*.=..    |
|       =oo..     |
|        S . E    |
|       . .       |
|        .        |
|                 |
|                 |
+-----------------+
hadoop1@ubuntu:~$ ssh-copy-id -i /home/hadoop1/.ssh/id_rsa.pub ubuntu
hadoop1@ubuntu's password:
Now try logging into the machine, with "ssh 'ubuntu'", and check in:
~/.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
hadoop1@ubuntu:~$
####################################################
copy the namenode's authorized_keys to slaves ...
####################################################
spawn ssh hadoop1@hadoop
The authenticity of host 'hadoop (192.168.128.138)' can't be established.
ECDSA key fingerprint is 10:8f:d1:8e:63:0a:af:1e:fb:d9:a8:bb:9a:39:ab:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop,192.168.128.138' (ECDSA) to the list of known hosts.
hadoop1@hadoop's password:
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-23-generic i686)
* Documentation:  https://help.ubuntu.com/
System information as of Tue Aug  6 20:11:49 CST 2013
System load:  0.1               Processes:           76
Usage of /:   24.8% of 7.12GB   Users logged in:     2
Memory usage: 34%               IP address for eth0: 192.168.128.138
Swap usage:   0%
Graph this data and manage this system at https://landscape.canonical.com/
85 packages can be updated.
45 updates are security updates.
Last login: Tue Aug  6 20:11:16 2013 from 192.168.128.130
hadoop1@hadoop:~$ rm -rf /home/hadoop1/.ssh
hadoop1@hadoop:~$ mkdir /home/hadoop1/.ssh
hadoop1@hadoop:~$ spawn scp /home/hadoop1/.ssh/authorized_keys hadoop1@hadoop:/home/hadoop1/.ssh/
hadoop1@hadoop's password:
authorized_keys                                          100%  396     0.4KB/s   00:00   
hadoop1@ubuntu:~$ ssh ubuntu
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-23-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Mon Sep 23 16:13:39 2013 from ubuntu
hadoop1@ubuntu:~$ exit
logout
Connection to ubuntu closed.
hadoop1@ubuntu:~$ ssh hadoop
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-23-generic i686)
* Documentation:  https://help.ubuntu.com/
System information as of Tue Aug  6 20:12:17 CST 2013
System load:  0.12              Processes:           76
Usage of /:   24.8% of 7.12GB   Users logged in:     2
Memory usage: 34%               IP address for eth0: 192.168.128.138
Swap usage:   0%
Graph this data and manage this system at https://landscape.canonical.com/
85 packages can be updated.
45 updates are security updates.
Last login: Tue Aug  6 20:11:50 2013 from 192.168.128.130
hadoop1@hadoop:~$ exit
logout
Connection to hadoop closed.
Summary: when I first wrote the script, I put the shell commands right after spawn, one after another, and the expect patterns that followed kept failing to match. After spawn you must expect the remote prompt before each send; otherwise the command goes out before the remote shell is ready to read it.  Share, grow, enjoy.
  When reposting, please credit the original blog: http://blog.csdn.net/fansy1990

Original thread: https://www.yunweiku.com/thread-312490-1-1.html