10. Copy hadoop to slave1 and slave2
su hadoop
scp -r /data/hadoop/* hadoop@slave1:/data/hadoop/
scp -r /data/hadoop/* hadoop@slave2:/data/hadoop/
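These scp commands assume the hadoop user can already log in to the slaves without a password (key-based SSH, presumably set up in the earlier steps). A quick sanity check that the copy landed, under that same assumption:
ssh hadoop@slave1 'ls /data/hadoop'    # should list the same tree as on master
ssh hadoop@slave2 'ls /data/hadoop'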
11. As the hadoop user (su hadoop), run hadoop namenode -format. This initialization only needs to be done once.
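Note that in Hadoop 2.x the hadoop namenode form is deprecated in favor of the hdfs front-end; both do the same thing here:
su hadoop
hdfs namenode -format    # equivalent to the deprecated "hadoop namenode -format"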
12. As the hadoop user (su hadoop), run start-dfs.sh and start-yarn.sh
The scripts print: logging to /data/hadoop/logs/hadoop-hadoop-namenode-master.out
13. Verify the installation
jps
http://192.168.5.30:8088/cluster
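A rough sketch of the expected jps output on a healthy cluster, assuming the master runs the NameNode/ResourceManager and the slaves run the DataNode/NodeManager as configured above (process IDs are illustrative):
# on master
2481 NameNode
2675 SecondaryNameNode
2832 ResourceManager
3190 Jps
# on slave1 / slave2
1904 DataNode
2016 NodeManager
2255 Jps
The URL above is the ResourceManager web UI on its default port 8088; if the cluster page lists both slaves as active nodes, YARN is up.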
14. Errors encountered
hadoop-2.3.0 installation error
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform
This warning comes from a mismatch between the bundled Hadoop native library and the system architecture.
hadoop-2.2.0 installation error
When starting with ./sbin/start-dfs.sh or ./sbin/start-all.sh, warnings like the following are printed:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
/usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled
stack guard. The VM will try to fix the stack guard now.
....
Java: ssh: Could not resolve hostname Java: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not
known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
....
This happens on 64-bit operating systems: the native library files in the official Hadoop download (e.g. lib/native/libhadoop.so.1.0.0) are compiled for 32-bit, so running them on a 64-bit system triggers the warnings above. The bogus hostnames in the ssh errors (Java:, HotSpot(TM):, 64-Bit:) appear because the startup scripts end up treating each word of the JVM warning as a host name.
One fix is to recompile Hadoop from source on the 64-bit system; the other is to add the following two lines to hadoop-env.sh and yarn-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
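After restarting the daemons you can check whether the native library now loads with hadoop checknative, which is available in Hadoop 2.x:
hadoop checknative -a
# on success the "hadoop:" line shows true and points at lib/native/libhadoop.so.1.0.0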
15. Hadoop port usage
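The list below is the stock Hadoop 2.x defaults, given for reference only; the actual values come from core-site.xml, hdfs-site.xml and yarn-site.xml, so confirm against your own configs:
NameNode RPC: 8020 (or 9000, depending on fs.defaultFS)
NameNode web UI: 50070
SecondaryNameNode web UI: 50090
DataNode: 50010 (data transfer), 50020 (IPC), 50075 (web UI)
ResourceManager: 8032 (client), 8030 (scheduler), 8031 (resource tracker), 8088 (web UI)
NodeManager web UI: 8042
JobHistory server: 10020 (IPC), 19888 (web UI)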
16. Check the listening ports with ss -an
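For example, to pick the Hadoop listeners out of the output (ports as in section 15; adjust to your configuration):
ss -ntl | grep -E '8088|50070|9000'
Each of these ports should show up in the LISTEN state on the node that hosts the corresponding daemon.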