1. Extract the archive
[uplooking@uplooking01 ~]$ tar -zxvf soft/hadoop-2.6.4.tar.gz -C app/
2. Rename it
[uplooking@uplooking01 ~]$ mv app/hadoop-2.6.4/ app/hadoop
3. Edit the configuration files:
hadoop-env.sh, yarn-env.sh, hdfs-site.xml, core-site.xml, mapred-site.xml, yarn-site.xml, slaves
1°. hadoop-env.sh
export JAVA_HOME=/opt/jdk
2°. yarn-env.sh
export JAVA_HOME=/opt/jdk
3°. slaves
uplooking02
uplooking03
4°. hdfs-site.xml (the properties below go inside <configuration>):
<property><name>dfs.nameservices</name><value>ns1</value></property>
<property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>uplooking01:9000</value></property>
<property><name>dfs.namenode.http-address.ns1.nn1</name><value>uplooking01:50070</value></property>
<property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>uplooking02:9000</value></property>
<property><name>dfs.namenode.http-address.ns1.nn2</name><value>uplooking02:50070</value></property>
<property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://uplooking01:8485;uplooking02:8485;uplooking03:8485/ns1</value></property>
<property><name>dfs.journalnode.edits.dir</name><value>/home/uplooking/data/hadoop/journal</value></property>
<property><name>dfs.namenode.name.dir</name><value>/home/uplooking/data/hadoop/name</value></property>
<property><name>dfs.datanode.data.dir</name><value>/home/uplooking/data/hadoop/data</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
<property><name>dfs.client.failover.proxy.provider.ns1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/uplooking/.ssh/id_rsa</value></property>
<property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
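A quick sanity check that Hadoop actually picks these values up (hdfs getconf is part of the stock distribution; run it on any node that has the config in place):
[uplooking@uplooking01 ~]$ hdfs getconf -confKey dfs.nameservices       # prints: ns1
[uplooking@uplooking01 ~]$ hdfs getconf -confKey dfs.ha.namenodes.ns1   # prints: nn1,nn2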
5°. core-site.xml:
<property><name>fs.defaultFS</name><value>hdfs://ns1</value></property>
<property><name>hadoop.tmp.dir</name><value>/home/uplooking/data/hadoop/tmp</value></property>
<property><name>ha.zookeeper.quorum</name><value>uplooking01:2181,uplooking02:2181,uplooking03:2181</value></property>
6°. mapred-site.xml:
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.jobhistory.address</name><value>uplooking02:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>uplooking02:19888</value></property>
<property><name>yarn.app.mapreduce.am.staging-dir</name><value>/history</value></property>
<property><name>mapreduce.map.log.level</name><value>INFO</value></property>
<property><name>mapreduce.reduce.log.level</name><value>INFO</value></property>
7°. yarn-site.xml:
<property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
<property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
<property><name>yarn.resourcemanager.hostname.rm1</name><value>uplooking02</value></property>
<property><name>yarn.resourcemanager.hostname.rm2</name><value>uplooking03</value></property>
<property><name>yarn.resourcemanager.zk-address</name><value>uplooking01:2181,uplooking02:2181,uplooking03:2181</value></property>
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
4. Create the directories Hadoop needs
[uplooking@uplooking01 hadoop]$ mkdir -p /home/uplooking/data/hadoop/journal
[uplooking@uplooking01 hadoop]$ mkdir -p /home/uplooking/data/hadoop/name
[uplooking@uplooking01 hadoop]$ mkdir -p /home/uplooking/data/hadoop/data
[uplooking@uplooking01 hadoop]$ mkdir -p /home/uplooking/data/hadoop/tmp
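The four calls above can also be collapsed into a single line with bash brace expansion, an equivalent alternative:
[uplooking@uplooking01 hadoop]$ mkdir -p /home/uplooking/data/hadoop/{journal,name,data,tmp}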
5. Sync everything to uplooking02 and uplooking03
[uplooking@uplooking01 ~]$ scp -r data/hadoop uplooking@uplooking02:/home/uplooking/data/
[uplooking@uplooking01 ~]$ scp -r data/hadoop uplooking@uplooking03:/home/uplooking/data/
[uplooking@uplooking01 ~]$ scp -r app/hadoop uplooking@uplooking02:/home/uplooking/app/
[uplooking@uplooking01 ~]$ scp -r app/hadoop uplooking@uplooking03:/home/uplooking/app/
6. Format & start
1°. Start ZooKeeper
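The tutorial leaves the ZooKeeper commands implicit; a minimal sketch, assuming a standard ZooKeeper install with zkServer.sh on the PATH, run on each of the three nodes:
zkServer.sh start
zkServer.sh status   # one node should report Mode: leader, the other two Mode: follower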
2°. Start the JournalNodes; the command below must be run on each of the three JournalNode hosts:
hadoop-daemon.sh start journalnode
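To confirm they came up, jps (shipped with the JDK) should list a JournalNode process on all three machines:
[uplooking@uplooking01 ~]$ jps | grep JournalNode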
3°. Format HDFS on one of the NameNode machines, uplooking01 or uplooking02:
hdfs namenode -format
18/03/02 11:16:20 INFO common.Storage: Storage directory /home/uplooking/data/hadoop/name has been successfully formatted.
This line shows the format succeeded.
Now copy the formatted NameNode's metadata to the other NameNode, i.e. copy what uplooking01 produced over to uplooking02:
scp -r /home/uplooking/data/hadoop/name uplooking@uplooking02:/home/uplooking/data/hadoop/
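An alternative worth noting: Hadoop HA has a built-in command for exactly this step. Running it on uplooking02 while the freshly formatted NameNode is up pulls the metadata over RPC instead of scp:
[uplooking@uplooking02 ~]$ hdfs namenode -bootstrapStandby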
4°. Format the zkfc
hdfs zkfc -formatZK
Under the hood this creates a znode, /hadoop-ha/ns1, in ZooKeeper.
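This can be verified with the ZooKeeper CLI (zkCli.sh ships with ZooKeeper; the prompt and output shown are indicative):
[uplooking@uplooking01 ~]$ zkCli.sh -server uplooking01:2181
[zk: uplooking01:2181(CONNECTED) 0] ls /hadoop-ha
[ns1]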
5°. Start HDFS
On either uplooking01 or uplooking02, run start-dfs.sh.
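Once HDFS is up, each NameNode's HA state can be queried from the command line (hdfs haadmin is part of stock Hadoop; which of nn1/nn2 ends up active can vary):
[uplooking@uplooking01 ~]$ hdfs haadmin -getServiceState nn1   # e.g. active
[uplooking@uplooking01 ~]$ hdfs haadmin -getServiceState nn2   # e.g. standby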
6°. Start YARN
Run start-yarn.sh on one of the ResourceManager hosts: start it on uplooking02, then start the second ResourceManager by hand on uplooking03:
yarn-daemon.sh start resourcemanager
(This is a known limitation of start-yarn.sh in Hadoop 2.x: it starts the ResourceManager only on the node it is run from. After starting YARN on uplooking02 there is a resourcemanager process on uplooking02 but none on uplooking03, so the one on uplooking03 must be started manually.)
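The ResourceManager HA state can be checked the same way (yarn rmadmin is part of stock Hadoop):
[uplooking@uplooking02 ~]$ yarn rmadmin -getServiceState rm1   # e.g. active
[uplooking@uplooking02 ~]$ yarn rmadmin -getServiceState rm2   # e.g. standby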
7°. To start an individual HDFS daemon, use hadoop-daemon.sh start <daemon process name>
(
Note: once HDFS and the zkfc have both been formatted, you can bring everything up with a single start-dfs.sh, which starts, in order: namenode, datanode, journalnode, zkfc:
Starting namenodes on [uplooking01 uplooking02]
uplooking01: starting namenode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-namenode-uplooking01.out
uplooking02: starting namenode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-namenode-uplooking02.out
uplooking03: starting datanode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-datanode-uplooking03.out
uplooking02: starting datanode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-datanode-uplooking02.out
Starting journal nodes [uplooking01 uplooking02 uplooking03]
uplooking03: starting journalnode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-journalnode-uplooking03.out
uplooking02: starting journalnode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-journalnode-uplooking02.out
uplooking01: starting journalnode, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-journalnode-uplooking01.out
18/03/04 01:00:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [uplooking01 uplooking02]
uplooking02: starting zkfc, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-zkfc-uplooking02.out
uplooking01: starting zkfc, logging to /home/uplooking/app/hadoop/logs/hadoop-uplooking-zkfc-uplooking01.out
)
7. Access and verification
1°. Access
web
hdfs
http://uplooking01:50070
http://uplooking02:50070
One of them is active, the other standby.
yarn
http://uplooking02:8088
http://uplooking03:8088
When you browse the standby's page it redirects to the active ResourceManager's page.
shell
From the shell you cannot operate on HDFS through the standby NameNode; only the active one accepts operations:
Operation category READ is not supported in state standby
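This is why clients should address the cluster by its nameservice ID rather than a concrete NameNode; the ConfiguredFailoverProxyProvider configured above then routes each request to whichever node is currently active:
[uplooking@uplooking01 ~]$ hdfs dfs -ls hdfs://ns1/   # works no matter which NameNode is active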
2°. HA verification
NameNode HA
Access:
uplooking01:50070
uplooking02:50070
One is in the active state, the other in standby.
When you hit the standby NameNode you get:
Operation category READ is not supported in state standby
Failover verification:
On uplooking01, kill -9 the namenode process.
Visiting uplooking02:50070 now shows it has become active.
Then restart the namenode on uplooking01; once it is back up, its state is standby.
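That test can be scripted; a minimal sketch, assuming jps output is the usual "pid ProcessName" form (the awk extracts the NameNode's pid):
[uplooking@uplooking01 ~]$ kill -9 $(jps | awk '/NameNode/{print $1}')   # simulate a NameNode crash
[uplooking@uplooking01 ~]$ hdfs haadmin -getServiceState nn2             # should now report: active
[uplooking@uplooking01 ~]$ hadoop-daemon.sh start namenode               # bring nn1 back
[uplooking@uplooking01 ~]$ hdfs haadmin -getServiceState nn1             # should report: standby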
Yarn HA
Web access (the default port is 8088):
uplooking02:8088
uplooking03:8088
This is standby RM. Redirecting to the current active RM: http://uplooking02:8088/
Failover verification:
On uplooking02, kill -9 the resourcemanager process.
uplooking03:8088 can now be accessed directly (it has become active).
Then restart the resourcemanager on uplooking02; visiting uplooking02:8088 afterwards redirects to uplooking03:8088.
Failover conclusion:
When the original active node recovers, the roles are not switched back: for the sake of stability there is no automatic fail-back.
3°. A simple test run
cd /home/uplooking/app/hadoop/share/hadoop/mapreduce
[uplooking@uplooking01 mapreduce]$ yarn jar hadoop-mapreduce-examples-2.6.4.jar wordcount /hello /output/mr/wc
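For the job to run, the input path /hello must already exist in HDFS and /output/mr/wc must not. A minimal sketch of preparing sample input and reading the result (hdfs dfs -put accepts "-" for stdin; output files follow the standard part-r-NNNNN naming):
[uplooking@uplooking01 mapreduce]$ echo "hello hadoop hello yarn" | hdfs dfs -put - /hello   # sample input
[uplooking@uplooking01 mapreduce]$ hdfs dfs -cat /output/mr/wc/part-r-00000                  # inspect the counts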