[Experience Share] Hadoop 2.2.0 Study Notes (2013-12-10)

  Running the pi example on a pseudo-distributed single-node install fails (the two arguments request 5 map tasks with 10 samples per map):



[iyunv@server-518 ~]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 5 10
  Error output:




Number of Maps  = 5
Samples per Map = 10
13/12/10 11:04:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
13/12/10 11:04:27 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/10 11:04:27 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0004
13/12/10 11:04:27 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386644665974_1643821138/in
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386644665974_1643821138/in
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
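Note that the missing path is reported with the file: scheme, i.e. on the local filesystem under /root/hadoop-2.2.0, even though the cluster is supposed to run against HDFS. A quick sanity check is to look for the QuasiMonteCarlo directory in both places; the commands below are a minimal sketch that assumes the directory names printed in the stack trace:

[iyunv@server-518 hadoop-2.2.0]# ./bin/hdfs dfs -ls /user/root/
[iyunv@server-518 hadoop-2.2.0]# ls -d /root/hadoop-2.2.0/QuasiMonteCarlo_* 2>/dev/null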
  Turn on debug logging for the client commands:



[iyunv@server-518 hadoop-2.2.0]#  export HADOOP_ROOT_LOGGER=DEBUG,console
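HADOOP_ROOT_LOGGER only affects commands started from the current shell session; it does not change the log level of the running daemons. After debugging, the override can be reverted for subsequent commands, either by resetting it to the usual INFO level or by unsetting the variable:

[iyunv@server-518 hadoop-2.2.0]# export HADOOP_ROOT_LOGGER=INFO,console
[iyunv@server-518 hadoop-2.2.0]# unset HADOOP_ROOT_LOGGER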
  Detailed error output with debug logging enabled:





[iyunv@server-518 hadoop-2.2.0]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 3 3
Number of Maps  = 3
Samples per Map = 3
13/12/10 11:36:54 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
13/12/10 11:36:54 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
13/12/10 11:36:54 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
13/12/10 11:36:54 DEBUG security.Groups:  Creating new Groups object
13/12/10 11:36:54 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
13/12/10 11:36:54 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
13/12/10 11:36:54 DEBUG util.NativeCodeLoader: java.library.path=/root/hadoop-2.2.0/lib
13/12/10 11:36:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/10 11:36:54 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
13/12/10 11:36:54 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
13/12/10 11:36:54 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
13/12/10 11:36:54 DEBUG security.UserGroupInformation: hadoop login
13/12/10 11:36:54 DEBUG security.UserGroupInformation: hadoop login commit
13/12/10 11:36:54 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
13/12/10 11:36:54 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
13/12/10 11:36:54 DEBUG util.Shell: setsid exited with exit code 0
13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
13/12/10 11:36:54 DEBUG impl.MetricsSystemImpl: StartupProgress, NameNode startup progress
13/12/10 11:36:54 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
13/12/10 11:36:54 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@46ea3050
13/12/10 11:36:54 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
13/12/10 11:36:54 DEBUG ipc.Client: The ping interval is 60000 ms.
13/12/10 11:36:54 DEBUG ipc.Client: Connecting to /10.10.96.33:8020
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root: starting, having connections 1
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #0
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #0
13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 33ms
13/12/10 11:36:54 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in: masked=rwxr-xr-x
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #1
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #1
13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 25ms
13/12/10 11:36:54 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0: masked=rw-r--r--
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #2
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #2
13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: create took 5ms
13/12/10 11:36:54 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:54 DEBUG hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-379311577_1] with renew id 1 started
13/12/10 11:36:54 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part0, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Queued packet 0
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Queued packet 1
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Waiting for ack for: 1
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Allocating new block
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #3
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #3
13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 2ms
13/12/10 11:36:54 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
13/12/10 11:36:54 DEBUG hdfs.DFSClient: Send buf size 131071
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #4
13/12/10 11:36:54 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #4
13/12/10 11:36:54 DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 1ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741859_1035
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #5
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #5
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 13ms
Wrote input for Map #0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1: masked=rw-r--r--
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #6
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #6
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 12ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part1, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #7
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #7
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 3ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741860_1036
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #8
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #8
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 8ms
Wrote input for Map #1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: /user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2: masked=rw-r--r--
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #9
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #9
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 4ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/root/QuasiMonteCarlo_1386646614155_1445162438/in/part2, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #10
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #10
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 0ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037 sending packet packet seqno:1 offsetInBlock:118 lastPacketInBlock:true lastByteOffsetInBlock: 118
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741861_1037
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #11
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #11
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 12ms
Wrote input for Map #2
Starting Job
13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1233)
13/12/10 11:36:55 DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
13/12/10 11:36:55 DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
13/12/10 11:36:55 DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:63)
13/12/10 11:36:55 DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
13/12/10 11:36:55 DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
13/12/10 11:36:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/10 11:36:55 DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
13/12/10 11:36:55 DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:329)
13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
13/12/10 11:36:55 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
13/12/10 11:36:55 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
13/12/10 11:36:55 DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:161)
13/12/10 11:36:55 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #12
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #12
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
13/12/10 11:36:55 DEBUG mapred.ResourceMgrDelegate: getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/root/.staging
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #13
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #13
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #14
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #14
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
13/12/10 11:36:55 DEBUG ipc.Client: The ping interval is 60000 ms.
13/12/10 11:36:55 DEBUG ipc.Client: Connecting to /0.0.0.0:8032
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root: starting, having connections 2
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root sending #15
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /0.0.0.0:8032 from root got value #15
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getNewApplication took 7ms
13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: Configuring job job_1386598961500_0007 with /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007 as the submit dir
13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:[hdfs://10.10.96.33:8020]
13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: default FileSystem: hdfs://10.10.96.33:8020
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #16
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #16
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007: masked=rwxr-xr-x
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #17
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #17
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 7ms
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #18
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #18
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 6ms
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #19
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #19
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar: masked=rw-r--r--
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #20
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #20
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: create took 13ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Allocating new block
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #21
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #21
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 1ms
13/12/10 11:36:55 DEBUG hdfs.DFSClient: pipeline = 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Connecting to datanode 10.10.96.33:50010
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Send buf size 131071
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=1, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=130048, blockSize=134217728, appendChunk=false
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 1
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 130048
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=2, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=130048
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=2, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=195072, blockSize=134217728, appendChunk=false
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 2
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:2 offsetInBlock:130048 lastPacketInBlock:false lastByteOffsetInBlock: 195072
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=3, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=195072
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk packet full seqno=3, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, bytesCurBlock=260096, blockSize=134217728, appendChunk=false
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 3
13/12/10 11:36:55 DEBUG hdfs.DFSClient: computePacketChunkSize: src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, chunkSize=516, chunksPerPacket=127, packetSize=65532
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:3 offsetInBlock:195072 lastPacketInBlock:false lastByteOffsetInBlock: 260096
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=4, src=/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007/job.jar, packetSize=65532, chunksPerPacket=127, bytesCurBlock=260096
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 4
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Queued packet 5
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Waiting for ack for: 5
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:4 offsetInBlock:260096 lastPacketInBlock:false lastByteOffsetInBlock: 270227
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 3 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 4 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DataStreamer block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038 sending packet packet seqno:5 offsetInBlock:270227 lastPacketInBlock:true lastByteOffsetInBlock: 270227
13/12/10 11:36:55 DEBUG hdfs.DFSClient: DFSClient seqno: 5 status: SUCCESS downstreamAckTimeNanos: 0
13/12/10 11:36:55 DEBUG hdfs.DFSClient: Closing old block BP-1542097938-10.10.96.33-1386589677395:blk_1073741862_1038
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #22
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #22
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: complete took 6ms
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #23
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #23
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setReplication took 6ms
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #24
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #24
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: setPermission took 12ms
13/12/10 11:36:55 DEBUG mapreduce.JobSubmitter: Creating splits at hdfs://10.10.96.33:8020/tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
13/12/10 11:36:55 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1386598961500_0007
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #25
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #25
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: delete took 11ms
13/12/10 11:36:55 ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386646614155_1445162438/in
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root sending #26
13/12/10 11:36:55 DEBUG ipc.Client: IPC Client (666537607) connection to /10.10.96.33:8020 from root got value #26
13/12/10 11:36:55 DEBUG ipc.ProtobufRpcEngine: Call: delete took 12ms
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/root/hadoop-2.2.0/QuasiMonteCarlo_1386646614155_1445162438/in
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
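Reading the debug output: the example's input is written successfully to HDFS under hdfs://10.10.96.33:8020/user/root/QuasiMonteCarlo_1386646614155_1445162438/in (the DFSClient create/addBlock/complete calls above), and the submitter even reports default FileSystem: hdfs://10.10.96.33:8020, yet the split calculation then looks for the same directory on the local filesystem (file:/root/hadoop-2.2.0/...). One plausible reading, offered as an assumption rather than a confirmed diagnosis, is that the configuration seen at split-computation time falls back to the local filesystem default, for example because fs.defaultFS (or the deprecated fs.default.name) or mapreduce.framework.name is not picked up consistently on the client side. A few checks worth running before digging further (a minimal sketch):

[iyunv@server-518 hadoop-2.2.0]# ./bin/hdfs getconf -confKey fs.defaultFS
[iyunv@server-518 hadoop-2.2.0]# grep -A1 'fs.defaultFS' etc/hadoop/core-site.xml
[iyunv@server-518 hadoop-2.2.0]# grep -A1 'mapreduce.framework.name' etc/hadoop/mapred-site.xml
[iyunv@server-518 hadoop-2.2.0]# ./bin/hdfs dfs -ls /user/root/

If the configured value and the value the client actually sees disagree, or mapred-site.xml does not set mapreduce.framework.name to yarn, that is the first place to look.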
