#
# hosts         This file describes a number of hostname-to-address
#               mappings for the TCP/IP subsystem. It is mostly
#               used at boot time, when no name servers are running.
#               On small systems, this file can be used instead of a
#               "named" name server.
# Syntax:
#
# IP-Address   Full-Qualified-Hostname   Short-Hostname
127.0.0.1 localhost
10.137.169.148 master
10.137.169.149 slave1
10.137.169.150 slave2
# special IPv6 addresses
::1 bigdata-01 ipv6-localhost ipv6-loopback
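After editing /etc/hosts on every node, a quick resolution check helps catch typos before the SSH setup below (a minimal sanity check using the standard getent and ping tools; the expected output matches the mappings above):
master:~ # getent hosts slave1
10.137.169.149 slave1
master:~ # ping -c 1 slave2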
master:~ # su hadoop
hadoop@master:/> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa): // press Enter
Enter passphrase (empty for no passphrase): // press Enter
Enter same passphrase again: // press Enter
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
a9:4d:a6:2b:bf:09:8c:b2:30:aa:c1:05:be:0a:27:09 hadoop@bigdata-01
The key's randomart image is:
+--[ DSA 1024]----+
| |
| |
| . |
|. . . |
|E. . S |
|o.oo * |
|Ooo o o . |
|=B .. o |
|* o=. |
+-----------------+
// Append the contents of id_dsa.pub to authorized_keys
hadoop@master:/> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
// Append each slave node's public key to authorized_keys on the master node.
// Run the command once per slave node; slave1 and slave2 are the node hostnames.
hadoop@master:/> ssh slave1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'slave1 (10.137.169.149)' can't be established.
RSA key fingerprint is 0f:5d:31:ba:dc:7a:84:15:6a:aa:20:a1:85:ec:c8:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,10.137.169.149' (RSA) to the list of known hosts.
Password: // enter the hadoop user's password set earlier
hadoop@master:/> ssh slave2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'slave2 (10.137.169.150)' can't be established.
RSA key fingerprint is 0f:5d:31:ba:dc:7a:84:15:6a:aa:20:a1:85:ec:c8:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,10.137.169.150' (RSA) to the list of known hosts.
Password: // enter the hadoop user's password set earlier
// Copy the authorized_keys file back to every node; slave1 and slave2 are the node hostnames.
hadoop@master:/> scp ~/.ssh/authorized_keys slave1:~/.ssh/authorized_keys
hadoop@master:/> scp ~/.ssh/authorized_keys slave2:~/.ssh/authorized_keys
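At this point, logging in from master to each slave should work without a password prompt. A quick check (hostname is just an example remote command):
hadoop@master:/> ssh slave1 hostname
slave1
hadoop@master:/> ssh slave2 hostname
slave2
If a password prompt still appears, recheck the permission settings in the next step.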
4. Permission Settings
Change the permissions on the hadoop home directory on the master and every slave node:
hadoop@master:/> cd /home
hadoop@master:/home> chmod 755 hadoop
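Note that sshd (with the default StrictModes yes) refuses public-key login when the home directory, ~/.ssh, or authorized_keys is group- or world-writable, and silently falls back to password authentication. If passwordless login still fails, tightening these permissions on every node is a common remedy (not part of the original steps above):
hadoop@master:/home> chmod 700 /home/hadoop/.ssh
hadoop@master:/home> chmod 600 /home/hadoop/.ssh/authorized_keys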
Next, configure the YARN directories in yarn-site.xml. The yarn-default.xml description of yarn.nodemanager.log-dirs reads:

  Where to store container logs. An application's localized log directory
  will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
  Individual containers' log directories will be below this, in directories
  named container_${contid}. Each container directory will contain the files
  stderr, stdin, and syslog generated by that container.

The log aggregation target is set explicitly:

<property>
  <description>Where to aggregate logs to.</description>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/home/hadoop/hadoop-2.0.1/logs</value>
</property>

And the description of yarn.nodemanager.local-dirs:

  List of directories to store localized files in. An application's
  localized file directory will be found in:
  ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
  Individual containers' work directories, called container_${contid}, will
  be subdirectories of this.
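Once log aggregation is enabled (yarn.log-aggregation-enable set to true) and an application finishes, its container logs are collected under the remote-app-log-dir configured above and can be read with the yarn CLI. A minimal sketch; the application ID shown is a placeholder for the ID of a real finished application:
hadoop@master:/> yarn logs -applicationId application_1388888888888_0001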