1.2 Change the hostname:
Edit /etc/hostname with vi and put the new name in it:
hadoop1-DataNode
Then set the hostname from that file:
hadoop1-DataNode:/soft #hostname -F /etc/hostname
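A quick check that the change took effect (illustrative; the prompt simply follows the host naming used in this guide):
hadoop1-DataNode:/soft #hostname
hadoop1-DataNode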
2. SSH setup
2.1 Generate the key pair
As shown in Listing 1:
[iyunv@hadoop0 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a8:97:55:d3:95:d6:fe:f5:08:ca:4c:6e:24:62:b4:22 root@hadoop0
The ssh-keygen -t rsa command generates an RSA key pair for the current user root on hadoop0-NameNode. Accept the default save path /root/.ssh/id_rsa, and when prompted for a passphrase just press Enter. The resulting private key and public key are stored under /root/.ssh as two files: id_rsa and id_rsa.pub.
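The same key pair can also be generated without the prompts; -N "" sets an empty passphrase and -f chooses the output file (both are standard OpenSSH ssh-keygen options):
[iyunv@hadoop0 ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa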
Then append the contents of id_rsa.pub to the end of /root/.ssh/authorized_keys on hadoop0-NameNode (including the local machine itself) and on hadoop1-DataNode. If /root/.ssh/authorized_keys does not exist on a machine, create it.
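One way to do this from hadoop0-NameNode is sketched below; ssh-copy-id ships with most OpenSSH installations and appends the default public key to the remote authorized_keys (it asks for the remote root password once), while the manual append handles the local machine:
[iyunv@hadoop0 ~]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[iyunv@hadoop0 ~]# chmod 600 /root/.ssh/authorized_keys
[iyunv@hadoop0 ~]# ssh-copy-id root@hadoop1-DataNode
The chmod matters because sshd ignores an authorized_keys file that other users can write to.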
2.2 SSH connection test
From hadoop0-NameNode, open SSH connections to hadoop0-NameNode and hadoop1-DataNode in turn, and make sure each one succeeds without asking for a password.
Commands:
hadoop0-NameNode:/soft #ssh hadoop0-NameNode
hadoop0-NameNode:/soft #ssh hadoop1-DataNode
Note that the first SSH connection to a host prints a prompt like the following:
The authenticity of host [homer06] can't be established. The key fingerprint is:
74:32:91:f2:9c:dc:2e:80:48:73:d4:53:ab:e4:d3:1a
Are you sure you want to continue connecting (yes/no)?
Type yes; OpenSSH then adds that host's information to /root/.ssh/known_hosts automatically, and the prompt no longer appears on later connections.
From this point on, SSH connections can be established without a password, and the configuration is complete.
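A quick end-to-end check is to run a command on the remote host in one step; it should return immediately with no password prompt (output shown is illustrative):
hadoop0-NameNode:/soft #ssh hadoop1-DataNode hostname
hadoop1-DataNode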
Appendix: secondarynamenode configuration
Suppose the machine being added is 192.168.0.12.
First, edit the masters file on both the namenode and the secondarynamenode and add the secondarynamenode's hostname to it (one per line).
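For example, using the address above, conf/masters would contain a single line (a hostname that resolves to this machine works just as well):
192.168.0.12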
Then log in to the secondarynamenode and edit conf/core-site.xml:
<property>
<name>fs.checkpoint.dir</name>
<value>/data/work/hdfs/namesecondary</value> <!-- directory where the namenode image (checkpoint) is stored -->
<description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
</property>
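Optionally, the checkpoint schedule can be tuned in the same file. The Hadoop 1.x property names are fs.checkpoint.period (seconds between checkpoints) and fs.checkpoint.size (edits log size, in bytes, that forces an early checkpoint); the values below are the shipped defaults:
<property>
<name>fs.checkpoint.period</name>
<value>3600</value>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>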
Edit conf/hdfs-site.xml:
<property>
<name>dfs.http.address</name>
<value>192.168.0.11:50070</value> <!-- set this to the namenode host's address -->
<description> The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description>
</property>
Restart Hadoop; running jps on the secondarynamenode should now show a SecondaryNameNode process.
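For example, on the secondarynamenode (192.168.0.12 in this example) the output looks like the following; the PIDs will differ, and any other Hadoop daemons running on that machine will also be listed:
jps
2345 SecondaryNameNode
2468 Jps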