14/02/14 09:57:59 INFO mapreduce.Job: Task Id : attempt_1392341518773_0004_m_000000_0, Status : FAILED
Container launch failed for container_1392341518773_0004_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:152)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
The fix is to edit yarn-site.xml:
vim etc/hadoop/yarn-site.xml
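This error means the NodeManager has no MapReduce shuffle auxiliary service registered. A minimal sketch of the properties to add inside <configuration> (standard property names for Hadoop 2.x), after which YARN needs a restart (sbin/stop-yarn.sh, then sbin/start-yarn.sh):

<!-- register the MapReduce shuffle as a NodeManager auxiliary service -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>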
root@water:/home/hadoop# sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
vim libexec/hadoop-config.sh
Find the code that prints the error "JAVA_HOME is not set and could not be found." and define JAVA_HOME just before it:
export JAVA_HOME=/usr/java/jdk
# Attempt to set JAVA_HOME if it is not set
if [[ -z $JAVA_HOME ]]; then
  # On OSX use java_home (or /Library for older versions)
  if [ "Darwin" == "$(uname -s)" ]; then
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=($(/usr/libexec/java_home))
    else
      export JAVA_HOME=(/Library/Java/Home)
    fi
  fi

  # Bail if we did not detect it
  if [[ -z $JAVA_HOME ]]; then
    echo "Error: JAVA_HOME is not set and could not be found." 1>&2
    exit 1
  fi
fi
The error disappears.
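An alternative that avoids patching libexec/hadoop-config.sh is to set JAVA_HOME in etc/hadoop/hadoop-env.sh, which the daemon start scripts source on every node; a sketch, assuming the same JDK path as above:

# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk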
Step 4: run wordcount:
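Before running, the input directory /in must exist in HDFS and contain the files to count; a minimal sketch of how it might be populated (file1.txt and file2.txt are hypothetical local files):

# create the HDFS input directory and upload two local text files
bin/hdfs dfs -mkdir -p /in
bin/hdfs dfs -put file1.txt file2.txt /in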
bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /in /out
14/02/14 10:15:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/02/14 10:15:24 INFO input.FileInputFormat: Total input paths to process : 2
14/02/14 10:15:24 INFO mapreduce.JobSubmitter: number of splits:2
14/02/14 10:15:24 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/02/14 10:15:24 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/02/14 10:15:24 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/02/14 10:15:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1392344053646_0001
14/02/14 10:15:24 INFO impl.YarnClientImpl: Submitted application application_1392344053646_0001 to ResourceManager at /0.0.0.0:8032
14/02/14 10:15:24 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1392344053646_0001/
14/02/14 10:15:24 INFO mapreduce.Job: Running job: job_1392344053646_0001
14/02/14 10:15:31 INFO mapreduce.Job: Job job_1392344053646_0001 running in uber mode : false
14/02/14 10:15:31 INFO mapreduce.Job: map 0% reduce 0%
14/02/14 10:15:35 INFO mapreduce.Job: map 100% reduce 0%
14/02/14 10:15:40 INFO mapreduce.Job: map 100% reduce 100%
14/02/14 10:15:41 INFO mapreduce.Job: Job job_1392344053646_0001 completed successfully
14/02/14 10:15:41 INFO mapreduce.Job: Counters: 43
The result:
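The word counts can be read back from HDFS once the job completes; with a single reducer the output is typically a part-r-00000 file:

# list the output directory and print the counts
bin/hdfs dfs -ls /out
bin/hdfs dfs -cat /out/part-r-00000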
2014-02-19 09:46:49,710 WARN [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
The fix for this exception is odd: start HBase, leave it running, and then start HBase a second time, and the error goes away. Reference link:
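Before falling back on the double start, it may help to check whether anything is actually listening on the ZooKeeper client port (2181 in the log above); a quick diagnostic sketch:

# is any process bound to the ZooKeeper client port?
netstat -ntlp | grep 2181
# ZooKeeper's four-letter health check; a healthy server answers "imok"
echo ruok | nc localhost 2181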