[mlx@hadoop0 sbin]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 43:3d:d0:2c:13:de:b1:c4:da:72:34:ba:c9:a3:a2:64.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Mon Feb 17 15:01:01 2014 from hadoop1
[mlx@hadoop0 ~]$
Then copy the RSA public key to the other two machines.
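The key-distribution step can be sketched as below. This is a minimal sketch: the user `mlx` and the hostnames `hadoop1`/`hadoop2` are taken from the surrounding logs, and `ssh-copy-id` is assumed to be available (otherwise the public key can be appended to the remote `~/.ssh/authorized_keys` manually with scp and cat).

[mlx@hadoop0 ~]$ ssh-copy-id mlx@hadoop1
[mlx@hadoop0 ~]$ ssh-copy-id mlx@hadoop2
[mlx@hadoop0 ~]$ ssh mlx@hadoop1 hostname
hadoop1
[mlx@hadoop0 ~]$ ssh mlx@hadoop2 hostname
hadoop2

Each `ssh-copy-id` prompts for the remote password once; after that, ssh to the node is passwordless. The log that follows records a NameNode format; the original omits the command that produced it, which in Hadoop 2.x is normally `./bin/hdfs namenode -format` run from /usr/hadoop.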
14/02/17 15:54:10 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-630c8102-043a-46ca-b9dd-c2c12a96965d
14/02/17 15:54:11 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/02/17 15:54:11 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/02/17 15:54:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/02/17 15:54:11 INFO util.GSet: Computing capacity for map BlocksMap
14/02/17 15:54:11 INFO util.GSet: VM type = 32-bit
14/02/17 15:54:11 INFO util.GSet: 2.0% max memory = 966.7 MB
14/02/17 15:54:11 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/02/17 15:54:11 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/02/17 15:54:11 INFO blockmanagement.BlockManager: defaultReplication = 1
14/02/17 15:54:11 INFO blockmanagement.BlockManager: maxReplication = 512
14/02/17 15:54:11 INFO blockmanagement.BlockManager: minReplication = 1
14/02/17 15:54:11 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/02/17 15:54:11 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/02/17 15:54:11 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/02/17 15:54:11 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/02/17 15:54:11 INFO namenode.FSNamesystem: fsOwner = mlx (auth:SIMPLE)
14/02/17 15:54:11 INFO namenode.FSNamesystem: supergroup = supergroup
14/02/17 15:54:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/02/17 15:54:11 INFO namenode.FSNamesystem: HA Enabled: false
14/02/17 15:54:11 INFO namenode.FSNamesystem: Append Enabled: true
14/02/17 15:54:11 INFO util.GSet: Computing capacity for map INodeMap
14/02/17 15:54:11 INFO util.GSet: VM type = 32-bit
14/02/17 15:54:11 INFO util.GSet: 1.0% max memory = 966.7 MB
14/02/17 15:54:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/02/17 15:54:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/02/17 15:54:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/02/17 15:54:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/02/17 15:54:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/02/17 15:54:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/02/17 15:54:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/02/17 15:54:11 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/02/17 15:54:11 INFO util.GSet: VM type = 32-bit
14/02/17 15:54:11 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
14/02/17 15:54:11 INFO util.GSet: capacity = 2^16 = 65536 entries
14/02/17 15:54:11 INFO common.Storage: Storage directory /usr/hadoop/dfs/name has been successfully formatted.
14/02/17 15:54:11 INFO namenode.FSImage: Saving image file /usr/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/02/17 15:54:11 INFO namenode.FSImage: Image file /usr/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 195 bytes saved in 0 seconds.
14/02/17 15:54:11 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/02/17 15:54:11 INFO util.ExitUtil: Exiting with status 0
14/02/17 15:54:11 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/192.168.220.131
************************************************************/

6.2. Starting Hadoop
Change to /usr/hadoop/sbin and run:
./start-all.sh
[mlx@hadoop0 sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop0]
hadoop0: starting namenode, logging to /usr/hadoop/logs/hadoop-mlx-namenode-hadoop0.out
192.168.220.134: starting datanode, logging to /usr/hadoop/logs/hadoop-mlx-datanode-hadoop2.out
192.168.220.133: starting datanode, logging to /usr/hadoop/logs/hadoop-mlx-datanode-hadoop1.out
Starting secondary namenodes [hadoop0]
hadoop0: starting secondarynamenode, logging to /usr/hadoop/logs/hadoop-mlx-secondarynamenode-hadoop0.out
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/logs/yarn-mlx-resourcemanager-hadoop0.out
192.168.220.134: starting nodemanager, logging to /usr/hadoop/logs/yarn-mlx-nodemanager-hadoop2.out
192.168.220.133: starting nodemanager, logging to /usr/hadoop/logs/yarn-mlx-nodemanager-hadoop1.out

6.3. Verifying That the Daemons Started
On hadoop0, run jps.
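If everything started correctly, jps on the master should list the NameNode, SecondaryNameNode, and ResourceManager daemons, roughly like the transcript below (the PIDs here are illustrative, not from the original session):

[mlx@hadoop0 sbin]$ jps
4851 NameNode
5057 SecondaryNameNode
5212 ResourceManager
5478 Jps

On the slave nodes (hadoop1 and hadoop2), jps should instead show DataNode and NodeManager. If a daemon is missing, check the corresponding .out/.log file under /usr/hadoop/logs shown in the startup messages above.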