
[Experience Sharing] Hadoop 2.2.0 Deployment and Installation (Notes, Single-Node Install)

Posted on 2015-7-11 09:22:30
Passwordless SSH Setup and Configuration

Configuration steps:
◎ Create the .ssh directory under root's home directory (you must be logged in as root):
cd /root && mkdir .ssh
chmod 700 .ssh && cd .ssh
◎ Generate an RSA key pair with an empty passphrase:
ssh-keygen -t rsa -P ""
◎ At the file-name prompt keep the default id_rsa, then append the public key to authorized_keys:
cat id_rsa.pub >> authorized_keys
chmod 644 authorized_keys   # important
◎ Edit the sshd configuration file /etc/ssh/sshd_config and uncomment the line #AuthorizedKeysFile  .ssh/authorized_keys.
◎ Restart the sshd service:
service sshd restart
◎ Test the SSH connection. On the first connection you will be asked whether to continue; once you confirm, the host key is added to known_hosts:
ssh localhost   # with the key in place, no password should be required
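If you want to be sure that key-based login really works without a password (a quick extra check, not part of the original steps), force non-interactive authentication:
ssh -o BatchMode=yes localhost echo OK   # prints OK only if no password is needed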


Hadoop 2.2.0 Deployment and Installation

The steps are as follows:
◎ Download the files.
◎ Unpack Hadoop and configure the environment.
# Create a hadoop directory under root's home directory
mkdir hadoop
cd hadoop
# Put the Hadoop 2.2.0 installation tarball into the hadoop directory.
# Unpack the Hadoop 2.2.0 tarball
tar -zxvf hadoop-2.2.0.tar.gz
# Enter the hadoop-2.2.0 directory
cd hadoop-2.2.0
# Enter the Hadoop configuration directory
cd etc/hadoop
# Edit core-site.xml
vi core-site.xml and add the following properties (hadoop.tmp.dir, fs.default.name):

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/hadoop-${user.name}</value>
    <description>A base for other temporary directories</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8010</value>
  </property>
</configuration>
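As an optional sanity check (once Hadoop is unpacked and JAVA_HOME is configured in hadoop-env.sh as described below), the getconf helper can print the value Hadoop actually picks up:
./bin/hdfs getconf -confKey fs.default.name   # run from the hadoop-2.2.0 directory; should print hdfs://localhost:8010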




# Edit hdfs-site.xml to set the namenode and datanode storage paths

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hadoop/hdfs/namenode</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hadoop/hdfs/datanode</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
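The namenode and datanode directories configured above can be created ahead of time so the daemons do not run into missing paths or permission problems (the paths match the values used in this note):
mkdir -p /root/hadoop/hdfs/namenode /root/hadoop/hdfs/datanode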
  
# Edit mapred-site.xml
Add the mapred.job.tracker, mapred.map.tasks and mapred.reduce.tasks parameters:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>10</value>
    <description>As a rule of thumb, use 10x the number of slaves (i.e., number of tasktrackers).</description>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>2</value>
    <description>As a rule of thumb, use 2x the number of slave processors (i.e., number of tasktrackers).</description>
  </property>
</configuration>
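Note: in the Hadoop 2.2.0 distribution this file is usually shipped only as mapred-site.xml.template, so you may need to copy it before editing:
cp mapred-site.xml.template mapred-site.xml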
  
Set up the Java environment (continuing from the steps above)
# Edit hadoop-env.sh and set the Java path: export JAVA_HOME=/usr/local/jdk1.7
The resulting hadoop-env.sh looks like this:
# Copyright 2011 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0

#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.
export JAVA_HOME=/usr/local/jdk1.7
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
#export JSVC_HOME=${JSVC_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done
# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
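To confirm that the JAVA_HOME set above points at a working JDK (the /usr/local/jdk1.7 path is specific to this setup and may differ on your machine), you can run:
/usr/local/jdk1.7/bin/java -version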
  
Set the Hadoop environment variable [HADOOP_HOME].
vi /etc/profile and add: export HADOOP_HOME=/root/hadoop/hadoop-2.2.0
source /etc/profile to make the environment variable take effect.
Test whether the Hadoop environment variable took effect:
echo $HADOOP_HOME
/root/hadoop/hadoop-2.2.0
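Optionally (an addition, not in the original steps), you can also put Hadoop's bin and sbin directories on the PATH in /etc/profile so the commands below can be run from any directory:
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin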
Enter the Hadoop installation directory, then the bin directory, and format HDFS:
./hadoop namenode -format
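In Hadoop 2.x the hadoop namenode command is deprecated in favour of the hdfs front end, so the equivalent preferred form is:
./hdfs namenode -format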
◎ Start Hadoop: enter the Hadoop installation directory, then the sbin directory.
./start-all.sh
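start-all.sh still works in 2.2.0 but is deprecated; the equivalent split scripts are:
./start-dfs.sh
./start-yarn.sh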
Verify the installation by visiting http://localhost:50070/ .
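As an extra check (assuming the JDK's jps tool is on the PATH), list the running Java processes; on a healthy single-node setup you should see daemons such as NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager:
jps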
When reposting this article, please credit the source: http://www.iyunv.com/likehua/p/3825810.html
Related recommendations:
Sqoop installation reference: http://www.iyunv.com/likehua/p/3825489.html
Hive installation reference: http://www.iyunv.com/likehua/p/3825479.html
