[Experience Share] hadoop redis mongodb

1. Environment
OS            CentOS 7.0, 64-bit
namenode01    192.168.0.220
namenode02    192.168.0.221
datanode01    192.168.0.222
datanode02    192.168.0.223
datanode03    192.168.0.224
2. Configure the base environment
Add local hosts entries on every machine:

[iyunv@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220   namenode01
192.168.0.221   namenode02
192.168.0.222   datanode01
192.168.0.223   datanode02
192.168.0.224   datanode03
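
The same entries must land on all five machines. If root can SSH between the nodes (an assumption; at this point you will be prompted for root's password on each host), a small loop can push them in one shot. This sketch does not check for duplicate entries:

for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh root@$h 'cat >> /etc/hosts' <<'EOF'
192.168.0.220   namenode01
192.168.0.221   namenode02
192.168.0.222   datanode01
192.168.0.223   datanode02
192.168.0.224   datanode03
EOF
done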



Create a hadoop user on all 5 machines and set its password to hadoop; only namenode01 is shown here:
[iyunv@namenode01 ~]# useradd hadoop
[iyunv@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.



Configure passwordless SSH between the hadoop users on all 5 machines:
# On namenode01
[iyunv@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
+--[ RSA 2048]----+
|     .o.E++=.    |
|      ...o++o    |
|       .+ooo     |
|       o== o     |
|       oS.=      |
|        ..       |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03

# On namenode02
[iyunv@namenode02 ~]# su - hadoop
[hadoop@namenode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a9:f5:0d:cb:c9:88:7b:71:f5:71:d8:a9:23:c6:85:6a hadoop@namenode02
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|            .  o.|
|         . ...o.o|
|        S +....o |
|       +.E.O o.  |
|      o ooB o .  |
|       ..        |
|      ..         |
+-----------------+

[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@namenode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode01
[iyunv@datanode01 ~]# su - hadoop
[hadoop@datanode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:72:20:69:64:e7:81:b7:03:64:41:5e:fa:88:db:5e hadoop@datanode01
The key's randomart image is:
+--[ RSA 2048]----+
| +O+=            |
| +=*.o           |
| .ooo.o          |
| . oo+ .         |
|. . ... S        |
| o               |
|. . E            |
| . .             |
|  .              |
+-----------------+

[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode01 ~]$ ssh datanode03 hostname
datanode03

# On datanode02
[hadoop@datanode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
32:aa:88:fa:ce:ec:51:6f:de:f4:06:c9:4e:9c:10:31 hadoop@datanode02
The key's randomart image is:
+--[ RSA 2048]----+
|      E.         |
|      ..         |
|       .         |
|      .          |
|    . o+So       |
|   . o oB        |
|  . . oo..       |
|.+ o o o...      |
|=+B   . ...      |
+-----------------+

[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode03
[iyunv@datanode03 ~]# su - hadoop
[hadoop@datanode03 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f3:f3:3c:85:61:c6:e4:82:58:10:1f:d8:bf:71:89:b4 hadoop@datanode03
The key's randomart image is:
+--[ RSA 2048]----+
|      o=.        |
|      ..o.. .    |
|       o.+ * .   |
|      . . E O    |
|        S  B o   |
|         o. . .  |
|          o  .   |
|           +.    |
|            o.   |
+-----------------+

[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode03 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode03 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode03 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode03 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode03 ~]$ ssh datanode03 hostname
datanode03
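
The five transcripts above are identical except for the host doing the pushing. As a condensed alternative, a sketch like the following, run once on each node as the hadoop user, does the same work; ssh-keygen's -N '' and -f flags make key generation non-interactive, and ssh-copy-id still prompts for the hadoop password once per target host:

# Generate a key if one is missing, then push it to every node (self included).
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub $h
done
# Verify: every hostname should echo back without a password prompt.
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh $h hostname
done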



3. Install the JDK
[iyunv@namenode01 ~]# wget http://download.oracle.com/otn-p ... dfd253a6332a5871e06
[iyunv@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/

# Environment variable profile for Java
[iyunv@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH

# Load the environment variables
[iyunv@namenode01 ~]# source /etc/profile.d/java.sh
[iyunv@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java

# Verify
[iyunv@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

# Copy the profile and the unpacked JDK to the other 4 machines
[iyunv@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[iyunv@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[iyunv@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[iyunv@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[iyunv@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/

# Verify, using namenode02 as the example
[iyunv@namenode02 ~]# source /etc/profile.d/java.sh
[iyunv@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
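
Instead of logging into each node, a quick sweep from namenode01 (run as the hadoop user, whose SSH trust is already in place) confirms every node picked up the same JDK; a sketch:

for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo "== $h =="
    ssh $h 'source /etc/profile.d/java.sh; java -version' 2>&1 | grep version
done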



4. Install Hadoop
# Download Hadoop
[iyunv@namenode01 ~]# wget http://apache.fayea.com/hadoop/c ... hadoop-2.5.2.tar.gz
[iyunv@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[iyunv@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[iyunv@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’

# Add the Hadoop environment profile
[iyunv@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH

# Switch to the hadoop user and confirm the JDK works
[iyunv@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

# Edit the Hadoop configuration files
# First the environment file
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74        # point JAVA_HOME at the JDK installed above

# Edit core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/temp</value>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
</configuration>

# Edit hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hdfs/dfs/name</value>    <!-- NameNode metadata directory -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hdfs/data</value>        <!-- DataNode block directory -->
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>        <!-- must match fs.defaultFS in core-site.xml -->
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>namenode01,namenode02</value>        <!-- the two NameNodes -->
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode01</name>
        <value>namenode01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode02</name>
        <value>namenode02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode01</name>
        <value>namenode01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode02</name>
        <value>namenode02:50070</value>
    </property>
    <property>
        <!-- where the NameNodes write shared edits; list every JournalNode -->
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hdfs/journal</value>    <!-- JournalNode directory -->
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>        <!-- how to fence the old active NameNode -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>    <!-- key used by sshfence between the hosts -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>6000</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>false</value>    <!-- manual failover for now; ZooKeeper-driven failover comes later -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>        <!-- block replication factor (default 3) -->
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

# Edit yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode01:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>15360</value>
    </property>
</configuration>

# Edit mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>namenode01:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode01:19888</value>
    </property>
</configuration>

# Edit the slaves file
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
datanode01
datanode02
datanode03
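
Before copying the configuration out, it is worth catching XML typos early; a sketch, assuming xmllint (from libxml2, usually present on CentOS 7) is installed and the hadoop.sh profile above is loaded:

for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
    xmllint --noout /usr/local/hadoop/etc/hadoop/$f && echo "$f OK"
done
hdfs getconf -confKey fs.defaultFS    # should print hdfs://mycluster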

# As root on namenode01, create the data directory
[iyunv@namenode01 ~]# mkdir -p /data/hdfs
[iyunv@namenode01 ~]# chown hadoop.hadoop /data/hdfs/

# Copy the Hadoop environment profile to the other 4 machines
[iyunv@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[iyunv@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[iyunv@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/  
[iyunv@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/

# Copy the Hadoop install tree to the other 4 machines
[iyunv@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[iyunv@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/

# Fix ownership and create the symlink on each node; namenode02 shown as the example
[iyunv@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[iyunv@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[iyunv@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx  1 root   root     24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x  9 hadoop hadoop  139 Apr 28 17:16 hadoop-2.5.2

# Create the data directory
[iyunv@namenode02 ~]# mkdir /data/hdfs
[iyunv@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/

# Check the JDK and the hadoop command
[iyunv@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop
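
Any later edit to a file under etc/hadoop has to reach all five nodes, or the daemons will run with diverging settings. A resync sketch, run as hadoop from namenode01 and assuming rsync is installed on every host:

for h in namenode02 datanode01 datanode02 datanode03; do
    rsync -a /usr/local/hadoop/etc/hadoop/ $h:/usr/local/hadoop/etc/hadoop/
done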



5. Start Hadoop

# Run hadoop-daemon.sh start journalnode on every server, as the hadoop user
# Only namenode01's output is shown; a loop for the remaining hosts follows below
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
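
Because hadoop's SSH trust spans all five machines, the remaining four JournalNodes can be started from namenode01 in one loop rather than logging into each host; a sketch:

for h in namenode02 datanode01 datanode02 datanode03; do
    ssh $h '/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode'
done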

# On namenode01, format HDFS
[hadoop@namenode01 ~]$ hadoop namenode -format
# Note: "hadoop namenode -format" is only for the very first start; for a non-first start run "hdfs namenode -initializeSharedEdits" instead.
# "First start" means HA was configured at install time and HDFS holds no data yet, so namenode01 has to be formatted.
# "Non-first start" means an existing non-HA HDFS already holding data is being converted to HA by adding a second NameNode; in that case namenode01 runs initializeSharedEdits to initialize the JournalNodes and share the edits files with them.

# Start the NameNodes
# On namenode01
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode

# On namenode02, first sync the metadata from the active NameNode, then start the standby
[hadoop@namenode02 ~]$ hdfs namenode -bootstrapStandby
[hadoop@namenode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode

# Start the DataNodes
[hadoop@datanode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode03 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode

# Verify
# namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode        # NameNode role
2270 JournalNode
2702 Jps

# namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2680 Jps

# datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2466 Jps
2358 DataNode        # DataNode role
2267 JournalNode

# datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2691 Jps
2612 DataNode        # DataNode role
2265 JournalNode

# datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode        # DataNode role
12067 Jps
11895 JournalNode
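
jps only proves the processes exist. For a functional check, hdfs dfsadmin -report lists the DataNodes the active NameNode sees; since automatic failover is still disabled in hdfs-site.xml, one NameNode must first be promoted by hand. A sketch:

[hadoop@namenode01 ~]$ hdfs haadmin -transitionToActive namenode01    # manual HA: make namenode01 active
[hadoop@namenode01 ~]$ hdfs dfsadmin -report | head -20               # expect 3 live DataNodes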



6. Set up ZooKeeper for high availability
# Download the software and install it as root
[iyunv@namenode01 ~]# wget http://apache.fayea.com/zookeepe ... keeper-3.4.6.tar.gz

# Unpack under /usr/local and fix ownership
[iyunv@namenode01 ~]# tar xf zookeeper-3.4.6.tar.gz -C /usr/local/
[iyunv@namenode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/

# Edit the ZooKeeper configuration
[iyunv@namenode01 ~]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.6/conf/zoo.cfg
[iyunv@namenode01 ~]# egrep -v "^#|^$" /usr/local/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/hdfs/zookeeper/data
dataLogDir=/data/hdfs/zookeeper/logs
clientPort=2181
server.1=namenode01:2888:3888
server.2=namenode02:2888:3888
server.3=datanode01:2888:3888
server.4=datanode02:2888:3888
server.5=datanode03:2888:3888

# ZooKeeper environment variables
[iyunv@namenode01 ~]# cat /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# On namenode01, create the data directories and the myid file
[iyunv@namenode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[iyunv@namenode01 ~]# tree /data/hdfs/zookeeper
/data/hdfs/zookeeper
├── data
└── logs
[iyunv@namenode01 ~]# echo "1" >/data/hdfs/zookeeper/data/myid
[iyunv@namenode01 ~]# cat /data/hdfs/zookeeper/data/myid
1
[iyunv@namenode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[iyunv@namenode01 ~]# ll /data/hdfs/
total 0
drwxrwxr-x 3 hadoop hadoop 17 Apr 29 10:05 dfs
drwxrwxr-x 3 hadoop hadoop 22 Apr 29 10:05 journal
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:42 zookeeper

# Copy the ZooKeeper directory and the profile to the remaining machines; namenode02 shown as the example
[iyunv@namenode01 ~]# scp -r /usr/local/zookeeper-3.4.6 namenode02:/usr/local/
[iyunv@namenode01 ~]# scp /etc/profile.d/zookeeper.sh namenode02:/etc/profile.d/

# On namenode02, create the directories and myid, and fix ownership
[iyunv@namenode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[iyunv@namenode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x  10 hadoop hadoop 4096 Apr 29 10:47 zookeeper-3.4.6
[iyunv@namenode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[iyunv@namenode02 ~]# echo "2" >/data/hdfs/zookeeper/data/myid
[iyunv@namenode02 ~]# cat /data/hdfs/zookeeper/data/myid
2
[iyunv@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[iyunv@namenode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:50 zookeeper

# On datanode01, create the directories and myid, and fix ownership
[iyunv@datanode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[iyunv@datanode01 ~]# ll /usr/local/ |grep zook
drwxr-xr-x  10 hadoop hadoop 4096 Apr 29 10:48 zookeeper-3.4.6
[iyunv@datanode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[iyunv@datanode01 ~]# echo "3" >/data/hdfs/zookeeper/data/myid
[iyunv@datanode01 ~]# cat /data/hdfs/zookeeper/data/myid
3
[iyunv@datanode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[iyunv@datanode01 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:54 zookeeper

# On datanode02, create the directories and myid, and fix ownership
[iyunv@datanode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[iyunv@datanode02 ~]# ll /usr/local/ |grep zook
drwxr-xr-x  10 hadoop hadoop 4096 Apr 29 10:49 zookeeper-3.4.6
[iyunv@datanode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[iyunv@datanode02 ~]# echo "4" >/data/hdfs/zookeeper/data/myid
[iyunv@datanode02 ~]# cat /data/hdfs/zookeeper/data/myid
4
[iyunv@datanode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[iyunv@datanode02 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:56 zookeeper

# On datanode03, create the directories and myid, and fix ownership
[iyunv@datanode03 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[iyunv@datanode03 ~]# ll /usr/local/ |grep zook
drwxr-xr-x  10 hadoop hadoop 4096 Apr 29 18:49 zookeeper-3.4.6
[iyunv@datanode03 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[iyunv@datanode03 ~]# echo "5" >/data/hdfs/zookeeper/data/myid
[iyunv@datanode03 ~]# cat /data/hdfs/zookeeper/data/myid
5
[iyunv@datanode03 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[iyunv@datanode03 ~]# ll /data/hdfs/ |grep zook
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 18:57 zookeeper
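
The five blocks above differ only in the myid value, which must match the server.N lines in zoo.cfg. With root SSH access to every host (an assumption), one loop from namenode01 covers all of it; the host order here must match server.1 through server.5:

i=1
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh root@$h "mkdir -p /data/hdfs/zookeeper/{data,logs} && echo $i > /data/hdfs/zookeeper/data/myid && chown -R hadoop.hadoop /data/hdfs/zookeeper"
    i=$((i+1))
done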

# Start ZooKeeper as the hadoop user on all 5 machines
# namenode01
[hadoop@namenode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# namenode02
[hadoop@namenode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# datanode01
[hadoop@datanode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# datanode02
[hadoop@datanode02 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# datanode03
[hadoop@datanode03 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode
3348 QuorumPeerMain    # ZooKeeper process
3483 Jps
2270 JournalNode
[hadoop@namenode01 ~]$ zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2888 QuorumPeerMain
2936 Jps
[hadoop@namenode01 ~]$ ssh namenode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2881 QuorumPeerMain
2358 DataNode
2267 JournalNode
2955 Jps
[hadoop@namenode01 ~]$ ssh datanode01 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2849 QuorumPeerMain
2612 DataNode
2885 Jps
2265 JournalNode
[hadoop@namenode01 ~]$ ssh datanode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode
12276 Jps
12213 QuorumPeerMain
11895 JournalNode
[hadoop@namenode01 ~]$ ssh datanode03 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
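
With the quorum up, a compact sweep confirms the roles from namenode01; exactly one node should report leader:

for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo -n "$h: "
    ssh $h 'zkServer.sh status 2>&1 | grep Mode'
done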





