[Experience Share] YARN + HDFS

Posted 2016-12-14 06:09:08
Installing CDH5 with yum
Originally published 2014-02-10 09:02 on the CSDN blog
Source: http://blog.csdn.net/beckham008/article/details/19028853

Installing ZooKeeper (cluster mode)

Node Type:
node229, node452, node440


1. Install zookeeper and zookeeper-server on all nodes

yum install -y zookeeper zookeeper-server


2. Edit the ZooKeeper configuration file on all nodes

vi /etc/zookeeper/conf/zoo.cfg

Add the server entries:
server.1=node229:2888:3888
server.2=node452:2888:3888
server.3=node440:2888:3888
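The server entries above only define ensemble membership; a working zoo.cfg also needs the base settings. A minimal sketch, assuming the stock CDH defaults for dataDir and ports (adjust to your layout):

```shell
# Sketch: a minimal complete zoo.cfg (paths and timings are assumed CDH defaults)
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=node229:2888:3888
server.2=node452:2888:3888
server.3=node440:2888:3888
EOF
grep -c '^server\.' /tmp/zoo.cfg
```

The count printed at the end should match the ensemble size (here, 3).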


3. Initialize zookeeper-server on all nodes

Each node's myid must be unique

node229: service zookeeper-server init --myid=1

node452: service zookeeper-server init --myid=2

node440: service zookeeper-server init --myid=3
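The init subcommand essentially creates the data directory and writes the id into a `myid` file there. If the init script is unavailable, a manual equivalent can be sketched as follows (the dataDir path here is a placeholder; use whatever zoo.cfg's dataDir points at, normally /var/lib/zookeeper):

```shell
# Sketch: manual equivalent of "service zookeeper-server init --myid=1"
DATADIR=/tmp/zookeeper-data      # placeholder; real clusters use the zoo.cfg dataDir
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"         # must match this host's server.N line in zoo.cfg
cat "$DATADIR/myid"              # prints 1
```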


4. Start ZooKeeper on all nodes

service zookeeper-server start


5. Check ZooKeeper status

zookeeper-server status


Installing CDH (cluster mode, HDFS + YARN)

Node Type:
namenode: node229

datanode: node229, node452, node440

yarn-resourcemanager: node452

yarn-nodemanager: node229, node452, node440

mapreduce-historyserver: node440

yarn-proxyserver: node440


node229 (namenode):
yum install hadoop-hdfs-namenode

node452 (yarn-resourcemanager):
yum install hadoop-yarn-resourcemanager

node440 (historyserver and proxyserver):
yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver

All nodes:
yum install hadoop-client

yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce


Deploying CDH
1. Deploying HDFS
(1) Configuration files
core-site.xml
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node229:8020</value>
  </property>


  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>


hdfs-site.xml
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>


  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop/hdfs/namenode</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop/hdfs/datanode</value>
  </property>


  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>


slaves
node229
node452
node440

(2) Create the namenode and datanode directories

namenode:
mkdir -p /hadoop/hdfs/namenode
chown -R hdfs:hdfs /hadoop/hdfs/namenode
chmod 700 /hadoop/hdfs/namenode

datanode:
mkdir -p /hadoop/hdfs/datanode
chown -R hdfs:hdfs /hadoop/hdfs/datanode
chmod 700 /hadoop/hdfs/datanode
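Wrong ownership or mode on these directories is a common reason the daemons refuse to start. A small hedged helper to verify them (GNU `stat` assumed; the demo at the end runs against a temporary directory and the current user so the sketch works anywhere, while a real node would call it with the hdfs owner):

```shell
# Sketch: verify a storage directory has the expected mode and owner (GNU stat)
check_dir() {
  dir=$1; want_mode=$2; want_owner=$3
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  [ "$(stat -c '%a' "$dir")" = "$want_mode" ] || { echo "bad mode on $dir"; return 1; }
  [ "$(stat -c '%U' "$dir")" = "$want_owner" ] || { echo "bad owner on $dir"; return 1; }
  echo "ok: $dir"
}
# On a real namenode: check_dir /hadoop/hdfs/namenode 700 hdfs
mkdir -p /tmp/demo-nn && chmod 700 /tmp/demo-nn
check_dir /tmp/demo-nn 700 "$(id -un)"
```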


(3) Format the namenode

sudo -u hdfs hadoop namenode -format


(4) Start HDFS
namenode (node229):

service hadoop-hdfs-namenode start

datanode (node229, node452, node440):

service hadoop-hdfs-datanode start

(for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done)


(5) Check HDFS status

sudo -u hdfs hdfs dfsadmin -report

sudo -u hdfs hadoop fs -ls -R -h /
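The number to look for in the `dfsadmin -report` output is the live-datanode count, which should be 3 with all three datanodes up. A sketch of extracting it (the sample report text is illustrative only, and the exact wording varies across Hadoop versions):

```shell
# Sketch: pull the live datanode count out of a dfsadmin -report (sample text)
report='Configured Capacity: 1000000 (976.56 KB)
Live datanodes (3):
Name: node229:50010'
echo "$report" | sed -n 's/^Live datanodes (\([0-9]*\)).*/\1/p'   # prints 3
```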


(6) Create the HDFS /tmp directory

sudo -u hdfs hadoop fs -mkdir /tmp

sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
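Mode 1777 is world-writable plus the sticky bit: any user can create files under /tmp, but only a file's owner can delete or rename it. The leading 1 behaves the same on a local filesystem, which makes it easy to inspect:

```shell
# Sketch: the sticky bit (the leading 1 in 1777) demonstrated on a local directory
mkdir -p /tmp/demo-sticky
chmod 1777 /tmp/demo-sticky
stat -c '%a' /tmp/demo-sticky          # GNU stat; prints 1777
ls -ld /tmp/demo-sticky | cut -c1-10   # prints drwxrwxrwt; trailing t is the sticky bit
```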


NameNode web UI: http://101.227.253.62:50070


2. Deploying YARN
(1) Configure YARN
mapred-site.xml:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node440:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node440:19888</value>
  </property>


yarn-site.xml
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>node452:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node452:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>node452:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node452:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>node452:8033</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
     <name>yarn.application.classpath</name>
     <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
          $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
     </value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/hadoop/data/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/hadoop/data/yarn/logs</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <description>Where to aggregate logs</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/apps</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>


(2) Create local directories on all NodeManager hosts

sudo mkdir -p /hadoop/data/yarn/local
sudo chown -R yarn:yarn /hadoop/data/yarn/local


sudo mkdir -p /hadoop/data/yarn/logs
sudo chown -R yarn:yarn /hadoop/data/yarn/logs


(3) Create HDFS directories

sudo -u hdfs hadoop fs -mkdir -p /user/history
sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
sudo -u hdfs hadoop fs -chown yarn /user/history


sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn


(4) Start YARN
ResourceManager (node452):
sudo service hadoop-yarn-resourcemanager start


NodeManager (node229, node452, node440):
sudo service hadoop-yarn-nodemanager start


MapReduce JobHistory Server(node440):
sudo service hadoop-mapreduce-historyserver start


(5) Create the user's HDFS home directory for YARN jobs

sudo -u hdfs hadoop fs -mkdir -p /user/$USER
sudo -u hdfs hadoop fs -chown $USER /user/$USER


(6) Test
Check node status:
yarn node -list -all

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter input


(7) Shutdown
sudo service hadoop-yarn-resourcemanager stop

sudo service hadoop-yarn-nodemanager stop

sudo service hadoop-mapreduce-historyserver stop


ResourceManager web UI: http://101.227.253.63:8088/


Installing and Deploying HBase

Node Type:
hbase-master: node229, node440
hbase-regionserver: node229, node452, node440
hbase-thrift: node440
hbase-rest: node229, node452, node440


1. Install HBase
(1) Modify configuration
Add to /etc/security/limits.conf:
hdfs - nofile 32768
hbase - nofile 32768


Add to hdfs-site.xml:

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
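limits.conf only takes effect for new PAM sessions, so after editing it, it is worth confirming the running shell actually sees the raised limit. A quick hedged check (the 32768 threshold mirrors the setting above):

```shell
# Sketch: confirm the open-files limit seen by the current shell
want=32768
have=$(ulimit -n)
[ "$have" = "unlimited" ] && have=1048576   # treat unlimited as plenty
if [ "$have" -ge "$want" ]; then
  echo "nofile ok: $have"
else
  echo "nofile too low: $have < $want (re-login or check limits.conf)"
fi
```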


(2) Install HBase
hbase-master:
sudo yum install hbase hbase-master
hbase-regionserver:

sudo yum install hbase hbase-regionserver
hbase-thrift:

sudo yum install hbase-thrift
hbase-rest:

sudo yum install hbase-rest


(3) Configure HBase
hbase-site.xml
  <property>
    <name>hbase.rest.port</name>
    <value>60050</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node229,node452,node440</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/hadoop/hbase</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node229:8020/hbase</value>
  </property>

(4) Create local directories

mkdir -p /hadoop/hbase

chown -R hbase:hbase /hadoop/hbase


(5) Create the HBase HDFS directory

sudo -u hdfs hadoop fs -mkdir /hbase/

sudo -u hdfs hadoop fs -chown hbase /hbase


(6) Start HBase
hbase-master:
sudo service hbase-master start

hbase-regionserver:
sudo service hbase-regionserver start

hbase-thrift:
sudo service hbase-thrift start

hbase-rest:
sudo service hbase-rest start


HBase Master web UI: http://101.227.253.62:60010

Thread: https://www.yunweiku.com/thread-313785-1-1.html