Posted by asdrtu on 2018-10-31 13:59:33

Hadoop Fully Distributed Installation

  The installation environment is as follows:
  CentOS 6.0 32-bit, JDK 1.7 (the newest release at the time), Hadoop 0.20.2
  First, prepare the installation environment.
  a. Set up the lab environment on the two existing ESXi servers.
  b. Install and configure three virtual machines running CentOS 6.0, each with 1 GB of memory and a 20 GB disk. Install one machine, then create the other two by cloning it.

  c. The servers' IP addresses and hostnames are as follows. Note that the hosts file on each of the three servers must include the hostname and IP entries of the others, so that all three machines can resolve one another by name as well as by IP address.
  hadoop-1:172.168.16.61
  hadoop-2:172.168.16.62
  hadoop-3:172.168.16.63
  Configure as follows:
  #hostname hadoop-1
  # vim /etc/sysconfig/network
  NETWORKING=yes
  HOSTNAME=hadoop-1
  #vim /etc/hosts
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  172.168.16.61   hadoop-1
  172.168.16.62   hadoop-2
  172.168.16.63   hadoop-3
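  A quick sanity check: confirm each machine can reach the others by hostname, for example:
  # ping -c 1 hadoop-2
  # ping -c 1 hadoop-3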
  d. Download the latest JDK
  #wget http://download.oracle.com/otn-pub/java/jdk/7u21-b11/jdk-7u21-linux-i586.tar.gz
  # ls
  debug  hadoop-0.20.2.tar.gz  jdk-7u21-linux-i586.tar.gz  kernels
  # tar xvf jdk-7u21-linux-i586.tar.gz -C /usr/local/
  e. Configure the environment variables by adding the following at the end of /etc/profile. All three servers need this configuration so that the Java environment runs correctly.
  # vim /etc/profile
  export JAVA_HOME=/usr/local/jdk1.7.0_21
  export JRE_HOME=$JAVA_HOME/jre
  export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
  #source /etc/profile
  f. Check that Java runs correctly
  #echo $JAVA_HOME
  /usr/local/jdk1.7.0_21
  # java -version
  java version "1.7.0_21"
  Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
  Java HotSpot(TM) Client VM (build 23.21-b01, mixed mode)
  The environment preparation is now complete.
  Step 1: Configure the hosts file (already completed during the preparation above)
  Step 2: Create the hadoop user. Run the following on each of the three servers:
  # useradd hadoop
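  Optionally give the hadoop user a password (useful for logging in directly before the SSH keys are in place) and confirm the account exists:
  # passwd hadoop
  # id hadoop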
  Step 3: Configure passwordless SSH login
  On hadoop-1:
  # su hadoop
  $ ssh-keygen -t rsa      (press Enter through every prompt)
  $ ls /home/hadoop/.ssh/
  authorized_keys  id_rsa  id_rsa.pub  known_hosts      (id_rsa and id_rsa.pub have been generated)
  $ cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys
  Perform the same steps on hadoop-2:
  # su hadoop
  $ ssh-keygen -t rsa      (press Enter through every prompt)
  $ ls /home/hadoop/.ssh/
  authorized_keys  id_rsa  id_rsa.pub  known_hosts      (id_rsa and id_rsa.pub have been generated)
  $ cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys
  Perform the same steps on hadoop-3:
  # su hadoop
  $ ssh-keygen -t rsa      (press Enter through every prompt)
  $ ls /home/hadoop/.ssh/
  authorized_keys  id_rsa  id_rsa.pub  known_hosts      (id_rsa and id_rsa.pub have been generated)
  $ cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys
  Append the contents of hadoop-1's id_rsa.pub to the authorized_keys files on hadoop-2 and hadoop-3. Be sure to append with the ">>" redirection operator; do not simply overwrite the file, which easily causes problems.
  # scp /home/hadoop/.ssh/id_rsa.pub 172.168.16.62:/home/hadoop
  # scp /home/hadoop/.ssh/id_rsa.pub 172.168.16.63:/home/hadoop

  Then, on hadoop-2 and hadoop-3:
  $ cat /home/hadoop/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
  In the same way, append the keys from hadoop-2 and hadoop-3 to the other two machines; the commands are not repeated here. Afterwards the file should look like this:
  $ cat .ssh/authorized_keys
  ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqaahAX+SuzUlpBeyUoODd51NvUqQfGZKjrC60lUR76FCrRs3wPMDITES9TF86MK4xFk0bzNuK+WZVleq9ZilnOnxJsyz7NoaqOwwy5ACMjsRDMM0C5dFQ21xAODP6jDQ1LsCve0yHeuW6MlbKVERC94LRE5oTt3RFH7gxSMrDmMIOoIFDJXjEYDmHM6/kN7hmUiEH6X6k5sBwQA1dUaIORjy6zUV/4Sz+QPsQmF558V+Lw/CO2EdGYAgw97CHMxbybIG+b9A5IlCw+47d+zcdrX2vUUF1VGxnTlw4OYZCbfYqhvvpE1F9UY3+0RTCAuayGBCqWIFMd06KV2Np9FYfw== hadoop@hadoop-2
  ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1jyyWKj8/DgqTa0UZkDSX/12Vky/eQXmHccLmmwNSye1bjfGrotX4p05EFT46lzRsLlixwtWF4iWv2kLg/5bn4JJ83MWBW+ANcrqZLdF/lS97xa928lSq7ry4D00wSgLR9fybqo/wv7midn8mxZeI92jbSzMYE/6I5eyRb5GNySFSpGjnxkO0a9QvRSSvgJDZrQ80JNiw6FGUiRacf6kzP1/6qJwWPJnUgHHso/oQN66cmBtjZuCDy7/OGBwjJ1iHgjO8fnAdI3bmTPn7X3LslEUVPFoAXE1XciVM9Mk0Xh8Ixpc50XMG8jKboh4SdSu0QcGOI0R4Yy7rRDNt2QqcQ== hadoop@hadoop-1
  ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuvT4KuNQeKwarbdvNiCEpUktNzpocvQsGjWYkwWbsU/M2fxyPYrUzgQqfF/NGXeEvf8BzWVgV7pQH9/Ajg2bOUOafcwubLIiimw+wzraQ4MGQERMYKOdd6Su+w+yR5vpohY/x6S5lMiYgmaBTNVhgitD9GjuVX/N5Mbn0c5sTt/TlWSMfgKOp6hNORWlf01JaTyKcCpap+I9gBtAq4vPD1YppvYyrfv9TeW8NdcVVxswGE6XHxPD2b1/+JyBLYE3zN5XfWWaIfqC8gBxJ4brHNxdBFMp+IQ8LJXRyAklwd882P9qxXNFEE/IqFtwm8PvxlV2Ad4APptfDgdRreyWXQ== hadoop@hadoop-3
  $
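  One common pitfall: sshd refuses key authentication if the .ssh directory or authorized_keys file is writable by group or others, so make sure the permissions are tight on every machine:
  $ chmod 700 /home/hadoop/.ssh
  $ chmod 600 /home/hadoop/.ssh/authorized_keys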
  Verify that SSH now works without a password. Connect once to every host, including from each machine to itself, so that the first-connection "yes/no" host-key prompt is cleared; the self-connection must not be skipped.
  # ssh 172.168.16.61
  Last login: Fri Apr 26 02:45:56 2013 from hadoop-1
  #
  # ssh 172.168.16.62
  Last login: Fri Apr 26 02:46:01 2013 from hadoop-1
  #
  # ssh 172.168.16.63
  Last login: Fri Apr 26 01:17:26 2013 from aca81034.ipt.aol.com
  #
  Step 4: Download and unpack Hadoop
  $ wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
  $ tar xvf hadoop-0.20.2.tar.gz -C /opt/
  $ ls /opt/
  $ ls -al /opt/                                    Note: this step is done as root; afterwards the ownership of the hadoop-0.20.2 directory must be changed.
  total 12
  drwxr-xr-x.  3 root   root   4096 Apr 25 12:56 .
  dr-xr-xr-x. 22 root   root   4096 Apr 25 03:32 ..
  drwxr-xr-x. 15 hadoop hadoop 4096 Apr 25 13:58 hadoop-0.20.2
  Change the ownership of the hadoop directory:
  # chown -R hadoop:hadoop /opt/hadoop-0.20.2
  Perform the same steps on hadoop-2 and hadoop-3, for example with the loop sketched below.
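  If root SSH between the machines is available, the copy, extraction, and ownership change for the other two nodes can be scripted in one small loop (a sketch, assuming the tarball sits in the current directory):
  for h in hadoop-2 hadoop-3; do
      scp hadoop-0.20.2.tar.gz $h:/tmp/
      ssh $h 'tar xf /tmp/hadoop-0.20.2.tar.gz -C /opt/ && chown -R hadoop:hadoop /opt/hadoop-0.20.2'
  done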
  Step 5: Edit the configuration files
  a. Edit hadoop-env.sh
  $ cat /opt/hadoop-0.20.2/conf/hadoop-env.sh
  # Set Hadoop-specific environment variables here.
  # The only required environment variable is JAVA_HOME. All others are
  # optional. When running a distributed configuration it is best to
  # set JAVA_HOME in this file, so that it is correctly defined on
  # remote nodes.
  # The java implementation to use. Required.
  # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
  export JAVA_HOME=/usr/local/jdk1.7.0_21
  b. Edit core-site.xml to configure the namenode. In this file, be sure to configure the tmp directory, so that a reboot of the server does not leave Hadoop unable to start.
  $ cat /opt/hadoop-0.20.2/conf/core-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop-1:9000</value>
      <final>true</final>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/opt/hadoop-0.20.2/tmp</value>
      <description>A base for other temporary directories</description>
    </property>
  </configuration>
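  Hadoop will normally create hadoop.tmp.dir itself on first use, but creating it up front as the hadoop user is a cheap way to rule out permission problems:
  $ mkdir -p /opt/hadoop-0.20.2/tmp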
  c. Edit hdfs-site.xml to configure the data directory and the name directory, which store the namenode metadata and the file block data, and set the replication factor to 2, since there are only two datanodes here.
  $ cat /opt/hadoop-0.20.2/conf/hdfs-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>dfs.name.dir</name>
      <value>/opt/hadoop-0.20.2/hdfs/name</value>
      <final>true</final>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/opt/hadoop-0.20.2/hdfs/data</value>
      <final>true</final>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
      <final>true</final>
    </property>
  </configuration>
  d. Configure the jobtracker node.
  $ cat /opt/hadoop-0.20.2/conf/mapred-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>hadoop-1:9001</value>
      <final>true</final>
    </property>
  </configuration>
  e. Configure the secondarynamenode node
  $ cat /opt/hadoop-0.20.2/conf/masters
  hadoop-1
  f. Configure the datanode and tasktracker nodes
  $ cat /opt/hadoop-0.20.2/conf/slaves
  hadoop-2
  hadoop-3
  The configuration changes above must also be made on hadoop-2 and hadoop-3, and the files must be identical on all machines; one way to keep them in sync is sketched below.
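  A simple way to keep the files identical is to push the whole conf directory from hadoop-1 after every change, using the passwordless SSH configured in Step 3:
  $ scp /opt/hadoop-0.20.2/conf/* hadoop-2:/opt/hadoop-0.20.2/conf/
  $ scp /opt/hadoop-0.20.2/conf/* hadoop-3:/opt/hadoop-0.20.2/conf/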
  Step 6: Configure environment variables. For convenience later on, set the hadoop directory in the environment; note that HADOOP_INSTALL must be defined before it is used in PATH. The result looks like this:
  $ cat /etc/profile
  export JAVA_HOME=/usr/local/jdk1.7.0_21
  export JRE_HOME=$JAVA_HOME/jre
  export HADOOP_INSTALL=/opt/hadoop-0.20.2
  export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_INSTALL/bin:$PATH
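  Reload the profile and confirm the hadoop command is on the PATH before continuing:
  $ source /etc/profile
  $ hadoop version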
  Step 7: Format the namenode
  $ hadoop namenode -format
  Step 8: Start Hadoop
  $ start-all.sh
  Step 9: Verify Hadoop
  On hadoop-1 (the master):
  $ jps
  5948 SecondaryNameNode
  6019 JobTracker
  5802 NameNode
  6784 Jps
  On hadoop-2 and hadoop-3 (the slaves), the DataNode and TaskTracker should be running:
  $ jps
  4199 TaskTracker
  9288 Jps
  4111 DataNode
  $ jps
  6673 Jps
  1591 TaskTracker
  1512 DataNode
  $ /opt/hadoop-0.20.2/bin/hadoop dfsadmin -report
  Configured Capacity: 37073182720 (34.53 GB)
  Present Capacity: 32527679488 (30.29 GB)
  DFS Remaining: 32527589376 (30.29 GB)
  DFS Used: 90112 (88 KB)
  DFS Used%: 0%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  -------------------------------------------------
  Datanodes available: 2 (2 total, 0 dead)
  Name: 172.168.16.62:50010
  Decommission Status : Normal
  Configured Capacity: 18536591360 (17.26 GB)
  DFS Used: 45056 (44 KB)
  Non DFS Used: 2272829440 (2.12 GB)
  DFS Remaining: 16263716864 (15.15 GB)
  DFS Used%: 0%
  DFS Remaining%: 87.74%
  Last contact: Fri Apr 26 03:18:45 CST 2013
  Name: 172.168.16.63:50010
  Decommission Status : Normal
  Configured Capacity: 18536591360 (17.26 GB)
  DFS Used: 45056 (44 KB)
  Non DFS Used: 2272673792 (2.12 GB)
  DFS Remaining: 16263872512 (15.15 GB)
  DFS Used%: 0%
  DFS Remaining%: 87.74%
  Last contact: Fri Apr 26 03:18:46 CST 2013
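  As a functional check of HDFS itself, put a small file in and read it back (the paths here are arbitrary examples):
  $ hadoop fs -put /etc/hosts /test-hosts
  $ hadoop fs -ls /
  $ hadoop fs -cat /test-hosts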
  You can also verify through the web UIs:
  http://172.168.16.61:50030/jobtracker.jsp (JobTracker)

  http://172.168.16.61:50070/dfshealth.jsp (NameNode)
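  To exercise MapReduce end to end, the examples jar shipped inside the tarball can run a wordcount over the file uploaded above (the output path is an arbitrary example):
  $ hadoop jar /opt/hadoop-0.20.2/hadoop-0.20.2-examples.jar wordcount /test-hosts /wordcount-out
  $ hadoop fs -cat /wordcount-out/part-*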

