Hadoop Cluster Installation and Deployment
1. Environment preparation: install the CentOS 6.5 operating system
Download the Hadoop 2.7 package
Download the JDK 1.8 package
2. Edit the /etc/hosts file and set up SSH mutual trust:
Add the following entries to /etc/hosts:
192.168.1.61 host61
192.168.1.62 host62
192.168.1.63 host63
Set up passwordless SSH trust between all of the servers; one way to do this is sketched below.
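A minimal sketch of the trust setup, assuming the keys are generated for the hadoop user created in step 3 and that the hostnames resolve via the /etc/hosts entries above (repeat on every node):
su - hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # key pair without a passphrase
ssh-copy-id hadoop@host61                       # push the public key to every node,
ssh-copy-id hadoop@host62                       # including the local one
ssh-copy-id hadoop@host63
ssh hadoop@host62 hostname                      # verify passwordless login works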
3. Add a hadoop user, extract the packages, and configure environment variables:
useradd hadoop
passwd hadoop
tar -zxvf hadoop-2.7.1.tar.gz
mv hadoop-2.7.1 /usr/local
ln -s /usr/local/hadoop-2.7.1 /usr/local/hadoop
chown -R hadoop:hadoop /usr/local/hadoop-2.7.1
tar -zxvf jdk-8u60-linux-x64.tar.gz
mv jdk1.8.0_60 /usr/local
ln -s /usr/local/jdk1.8.0_60 /usr/local/jdk
chown -R root:root /usr/local/jdk1.8.0_60
echo 'export JAVA_HOME=/usr/local/jdk' >>/etc/profile
echo 'export PATH=/usr/local/jdk/bin:$PATH' >/etc/profile.d/java.sh
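As a quick check that the JDK setup took effect (a sketch, following the paths used above):
source /etc/profile && source /etc/profile.d/java.sh
echo $JAVA_HOME          # should print /usr/local/jdk
java -version            # should report build 1.8.0_60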
4. Edit the Hadoop configuration files:
1) Edit hadoop-env.sh:
cd /usr/local/hadoop/etc/hadoop
sed -i 's%#export JAVA_HOME=${JAVA_HOME}%export JAVA_HOME=/usr/local/jdk%g' hadoop-env.sh
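The exact form of the JAVA_HOME line in hadoop-env.sh (commented out or not) can vary between releases, so it is worth confirming the substitution took effect, for example:
grep 'export JAVA_HOME' hadoop-env.sh    # should show export JAVA_HOME=/usr/local/jdk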
2) Edit core-site.xml and add the following inside the <configuration> element:
<property>
  <name>fs.default.name</name>
  <value>hdfs://host61:9000/</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/temp</value>
</property>
3) Edit hdfs-site.xml and add inside <configuration>:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
4) Edit mapred-site.xml (if it does not exist yet, copy it from mapred-site.xml.template) and add inside <configuration>:
<property>
  <name>mapred.job.tracker</name>
  <value>host61:9001</value>
</property>
5) Configure the masters file (the host that will run the SecondaryNameNode):
host61
6) Configure the slaves file (the hosts that will run the DataNode and NodeManager daemons):
host62
host63
5. Configure host62 and host63 in the same way (one option is to copy the installation over from host61, as sketched below).
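A sketch of the copy approach, assuming the same directory layout on every node; scp and ssh will prompt for each node's root password unless root-level trust was also set up in step 2:
# run on host61 as root
for h in host62 host63; do
  scp -r /usr/local/hadoop-2.7.1 /usr/local/jdk1.8.0_60 root@${h}:/usr/local/
  scp /etc/profile.d/java.sh root@${h}:/etc/profile.d/
  ssh root@${h} "useradd hadoop; \
    ln -s /usr/local/hadoop-2.7.1 /usr/local/hadoop; \
    ln -s /usr/local/jdk1.8.0_60 /usr/local/jdk; \
    chown -R hadoop:hadoop /usr/local/hadoop-2.7.1; \
    echo 'export JAVA_HOME=/usr/local/jdk' >>/etc/profile"
done
# afterwards, set the hadoop password on each node with: passwd hadoop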
6. Format the distributed filesystem
/usr/local/hadoop/bin/hdfs namenode -format
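If the format succeeds, the NameNode metadata directory should appear under hadoop.tmp.dir (a sketch, assuming the default dfs.namenode.name.dir of ${hadoop.tmp.dir}/dfs/name and that the command above was run as the hadoop user):
ls /home/hadoop/temp/dfs/name/current    # should contain a VERSION file and an fsimage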
7. Start Hadoop (as the hadoop user):
1) /usr/local/hadoop/sbin/start-dfs.sh
2) /usr/local/hadoop/sbin/start-yarn.sh
8. Check the running daemons with jps on each node:
On host61:
# jps
4532 ResourceManager
4197 NameNode
4793 Jps
4364 SecondaryNameNode
On host62:
# jps
32052 DataNode
32133 NodeManager
32265 Jps
On host63:
# jps
6802 NodeManager
6963 Jps
6717 DataNode
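Beyond jps, a small HDFS smoke test plus the standard web UIs can confirm the cluster is actually usable. A sketch, run as the hadoop user on host61; in Hadoop 2.7 the NameNode web UI listens on port 50070 and the ResourceManager UI on port 8088 by default:
/usr/local/hadoop/bin/hdfs dfsadmin -report     # should list live datanodes on host62 and host63
/usr/local/hadoop/bin/hdfs dfs -mkdir /test
/usr/local/hadoop/bin/hdfs dfs -ls /            # should show the /test directory
# web UIs: http://host61:50070 (HDFS) and http://host61:8088 (YARN)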