[Experience Sharing] org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 6 actions

  Today I did two things: first, compiled and packaged hadoop-eclipse-plugin-1.0.2.jar; second, used MapReduce to operate on HBase (both done inside Eclipse).
  Versions first: Hadoop 1.0.2, HBase 0.94.0, running on Ubuntu 11.10.
  Building the plugin went smoothly enough. Hadoop 1.0.2 ships no ready-made Eclipse plugin, so you have to compile and package it yourself; I followed this article: http://www.cnblogs.com/siwei1988/archive/2012/08/03/2621589.html. If you need the prebuilt jar and would rather skip the build, you can download it from http://download.csdn.net/detail/fansy1990/4534905, or message me / leave an email address and I will send it to you.
  Since I wanted to drive HBase from MapReduce, I imported every .jar under the hbase directory into my Eclipse MapReduce project. When I ran against HBase I hit the problem below and spent a long time without finding the cause:
  12/08/29 18:56:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/08/29 18:56:26 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost.localdomain
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_34
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk/jdk1.6.0_34/jre
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/fansy/workspace/MRHbaseDemo02/bin:/home/fansy/hadoop-1.0.2/lib/xmlenc-0.52.jar:/home/fansy/hadoop-1.0.2/lib/commons-configuration-1.6.jar:/home/fansy/hadoop-1.0.2/lib/asm-3.2.jar:/home/fansy/hadoop-1.0.2/lib/mockito-all-1.8.5.jar:/home/fansy/hadoop-1.0.2/lib/commons-httpclient-3.0.1.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-fairscheduler-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/jersey-json-1.8.jar:/home/fansy/hadoop-1.0.2/lib/commons-codec-1.4.jar:/home/fansy/hadoop-1.0.2/lib/jasper-compiler-5.5.12.jar:/home/fansy/hadoop-1.0.2/lib/commons-collections-3.2.1.jar:/home/fansy/hadoop-1.0.2/lib/jackson-core-asl-1.8.8.jar:/home/fansy/hadoop-1.0.2/lib/slf4j-api-1.4.3.jar:/home/fansy/hadoop-1.0.2/lib/kfs-0.2.2.jar:/home/fansy/hadoop-1.0.2/lib/oro-2.0.8.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-thriftfs-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/log4j-1.2.15.jar:/home/fansy/hadoop-1.0.2/lib/junit-4.5.jar:/home/fansy/hadoop-1.0.2/lib/aspectjrt-1.6.5.jar:/home/fansy/hadoop-1.0.2/lib/core-3.1.1.jar:/home/fansy/hadoop-1.0.2/lib/jsch-0.1.42.jar:/home/fansy/hadoop-1.0.2/lib/commons-logging-1.1.1.jar:/home/fansy/hadoop-1.0.2/lib/aspectjtools-1.6.5.jar:/home/fansy/hadoop-1.0.2/lib/Htable.jar:/home/fansy/hadoop-1.0.2/lib/commons-el-1.0.jar:/home/fansy/hadoop-1.0.2/lib/commons-net-1.4.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-daemon-1.0.1.jar:/home/fansy/hadoop-1.0.2/lib/jasper-runtime-5.5.12.jar:/home/fansy/hadoop-1.0.2/lib/jdeb-0.8.jar:/home/fansy/hadoop-1.0.2/lib/jets3t-0.6.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-beanutils-1.7.0.jar:/home/fansy/hadoop-1.0.2/lib/jersey-core-1.8.jar:/home/fansy/hadoop-1.0.2/lib/hadoop-capacity-scheduler-1.0.2.jar:/home/fansy/hadoop-1.0.2/lib/commons-logging-api-1.0.4.jar:/home/fansy/hadoop-1.0.2/lib/commons-digester-1.8.jar:/home/fansy/hadoop-1.0.2/lib/hsqldb-1.8.0.10.jar:/home/fansy/hadoop-1.0.2/lib/jackson-mapper-asl-1.8.8.jar:/home/fansy/hadoop-1.0.2/lib/commons-math-2.1.jar:/home/fansy/hadoop-1.0.2/lib/commons-lang-2.4.jar:/home/fansy/hadoop-1.0.2/lib/commons-beanutils-core-1.8.0.jar:/home/fansy/hadoop-1.0.2/lib/jersey-server-1.8.jar:/home/fansy/hadoop-1.0.2/lib/jetty-util-6.1.26.jar:/home/fansy/hadoop-1.0.2/lib/commons-cli-1.2.jar:/home/fansy/hadoop-1.0.2/lib/jetty-6.1.26.jar:/home/fansy/hadoop-1.0.2/lib/servlet-api-2.5-20081211.jar:/home/fansy/hadoop-1.0.2/lib/slf4j-log4j12-1.4.3.jar:/home/fansy/hadoop-1.0.2/hadoop-client-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-tools-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-ant-1.0.2.jar:/home/fansy/hadoop-1.0.2/hadoop-minicluster-1.0.2.jar:/home/fansy/hbase-0.94.0/lib/activation-1.1.jar:/home/fansy/hbase-0.94.0/lib/asm-3.1.jar:/home/fansy/hbase-0.94.0/lib/avro-1.5.3.jar:/home/fansy/hbase-0.94.0/lib/avro-ipc-1.5.3.jar:/home/fansy/hbase-0.94.0/lib/commons-beanutils-1.7.0.jar:/home/fansy/hbase-0.94.0/lib/commons-beanutils-core-1.8.0.jar:/home/fansy/hbase-0.94.0/lib/commons-cli-1.2.jar:/home/fansy/hbase-0.94.0/lib/commons-codec-1.4.jar:/home/fansy/hbase-0.94.0/lib/commons-collections-3.2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-configuration-1.6.jar:/home/fansy/hbase-0.94.0/lib/commons-digester-1.8.jar:/home/fansy/hbase-0.94.0/lib/commons-el-1.0.jar:/home/fansy/hbase-0.94.0/lib/commons-httpclient-3.1.jar:/home/fansy/hbase-0.94.0/lib/commons-io-2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-lang-2.5.jar:/home/fansy/hbase-0.94.0/lib/commons-logging-1.1.1.jar:/home/fansy/hbase-0.94.0/lib/commons-math-2.1.jar:/home/fansy/hbase-0.94.0/lib/commons-net-1.4.1.jar:/home/fansy/hbase-0.94.0/lib/core-3.1.1.jar:/home/fansy/hbase-0.94.0/lib/guava-r09.jar:/home/fansy/hbase-0.94.0/lib/hadoop-core-1.0.2.jar:/home/fansy/hbase-0.94.0/lib/high-scale-lib-1.1.1.jar:/home/fansy/hbase-0.94.0/lib/httpclient-4.1.2.jar:/home/fansy/hbase-0.94.0/lib/httpcore-4.1.3.jar:/home/fansy/hbase-0.94.0/lib/jackson-core-asl-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-jaxrs-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-mapper-asl-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jackson-xc-1.5.5.jar:/home/fansy/hbase-0.94.0/lib/jamon-runtime-2.3.1.jar:/home/fansy/hbase-0.94.0/lib/jasper-compiler-5.5.23.jar:/home/fansy/hbase-0.94.0/lib/jasper-runtime-5.5.23.jar:/home/fansy/hbase-0.94.0/lib/jaxb-api-2.1.jar:/home/fansy/hbase-0.94.0/lib/jaxb-impl-2.1.12.jar:/home/fansy/hbase-0.94.0/lib/jersey-core-1.4.jar:/home/fansy/hbase-0.94.0/lib/jersey-json-1.4.jar:/home/fansy/hbase-0.94.0/lib/jersey-server-1.4.jar:/home/fansy/hbase-0.94.0/lib/jettison-1.1.jar:/home/fansy/hbase-0.94.0/lib/jetty-6.1.26.jar:/home/fansy/hbase-0.94.0/lib/jetty-util-6.1.26.jar:/home/fansy/hbase-0.94.0/lib/jruby-complete-1.6.5.jar:/home/fansy/hbase-0.94.0/lib/jsp-2.1-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/jsp-api-2.1-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/libthrift-0.8.0.jar:/home/fansy/hbase-0.94.0/lib/log4j-1.2.16.jar:/home/fansy/hbase-0.94.0/lib/netty-3.2.4.Final.jar:/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar:/home/fansy/hbase-0.94.0/lib/servlet-api-2.5-6.1.14.jar:/home/fansy/hbase-0.94.0/lib/slf4j-api-1.5.8.jar:/home/fansy/hbase-0.94.0/lib/slf4j-log4j12-1.5.8.jar:/home/fansy/hbase-0.94.0/lib/snappy-java-1.0.3.2.jar:/home/fansy/hbase-0.94.0/lib/stax-api-1.0.1.jar:/home/fansy/hbase-0.94.0/lib/velocity-1.7.jar:/home/fansy/hbase-0.94.0/lib/xmlenc-0.52.jar:/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar:/home/fansy/hbase-0.94.0/hbase-0.94.0.jar:/home/fansy/hbase-0.94.0/hbase-0.94.0-tests.jar
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/jdk/jdk1.6.0_34/jre/lib/i386/server:/usr/jdk/jdk1.6.0_34/jre/lib/i386:/usr/jdk/jdk1.6.0_34/jre/../lib/i386:/usr/jdk/jdk1.6.0_34/jre/lib/i386/client:/usr/jdk/jdk1.6.0_34/jre/lib/i386::/usr/java/packages/lib/i386:/lib:/usr/lib
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.38-14-generic
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.name=fansy
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/fansy
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/fansy/workspace/MRHbaseDemo02
12/08/29 18:56:26 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Opening socket connection to server /0:0:0:0:0:0:0:1:2181
12/08/29 18:56:26 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
12/08/29 18:56:26 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12028@fansy-Lenovo-G450
12/08/29 18:56:26 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Socket connection established to fansy-Lenovo-G450/0:0:0:0:0:0:0:1:2181, initiating session
12/08/29 18:56:26 INFO zookeeper.ClientCnxn: Session establishment complete on server fansy-Lenovo-G450/0:0:0:0:0:0:0:1:2181, sessionid = 0x13971ca392d0012, negotiated timeout = 40000
12/08/29 18:56:27 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
****hdfs://localhost:9000/user/fansy/input/mrtest.txt
12/08/29 18:56:27 INFO input.FileInputFormat: Total input paths to process : 1
12/08/29 18:56:27 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Creating hbase-0.94.0.jar in /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442 with rwxr-xr-x
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442/hbase-0.94.0.jar to /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar-work--2952961500529514442
12/08/29 18:56:27 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/hbase-0.94.0.jar as /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/hbase-0.94.0.jar as /tmp/hadoop-fansy/mapred/local/archive/4245802504176908348_-2038994071_176065566/file/home/fansy/hbase-0.94.0/hbase-0.94.0.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating zookeeper-3.4.3.jar in /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014/zookeeper-3.4.3.jar to /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar-work--6348729395114744014
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar as /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar as /tmp/hadoop-fansy/mapred/local/archive/-2485968383474441328_-358222725_176052566/file/home/fansy/hbase-0.94.0/lib/zookeeper-3.4.3.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating hadoop-core-1.0.2.jar in /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065/hadoop-core-1.0.2.jar to /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar-work--116863529921692065
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar as /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar as /tmp/hadoop-fansy/mapred/local/archive/3604634766753366292_1190305369_1193831860/file/home/fansy/hadoop-1.0.2/hadoop-core-1.0.2.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Creating protobuf-java-2.4.0a.jar in /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474 with rwxr-xr-x
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Extracting /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474/protobuf-java-2.4.0a.jar to /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar-work--139328093195795474
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar as /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar
12/08/29 18:56:28 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar as /tmp/hadoop-fansy/mapred/local/archive/8764005306386187952_1486071988_176053566/file/home/fansy/hbase-0.94.0/lib/protobuf-java-2.4.0a.jar
12/08/29 18:56:28 INFO mapred.JobClient: Running job: job_local_0001
12/08/29 18:56:28 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
12/08/29 18:56:28 INFO util.ProcessTree: setsid exited with exit code 0
12/08/29 18:56:28 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@578073
12/08/29 18:56:29 WARN mapred.FileOutputCommitter: Output path is null in cleanup
12/08/29 18:56:29 WARN mapred.LocalJobRunner: job_local_0001
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 6 actions: DoNotRetryIOException: 6 times, servers with issues: localhost.localdomain:34995,
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1591)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1367)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:945)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:982)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:109)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:651)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
12/08/29 18:56:29 INFO mapred.JobClient: map 0% reduce 0%
12/08/29 18:56:29 INFO mapred.JobClient: Job complete: job_local_0001
12/08/29 18:56:29 INFO mapred.JobClient: Counters: 0

  RetriesExhaustedWithDetailsException: searching the web turned up no good fix for this one, so I had to work it out myself. Below is my solution.
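  One debugging aside first: RetriesExhaustedWithDetailsException carries per-action details, so you can print the actual server-side cause for each failed Put instead of only the one-line "Failed 6 actions" summary. Here is a minimal standalone sketch against the HBase 0.94 client API (the PutDebug class and the hard-coded table/row/family values are my own illustration, not part of the original program):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.util.Bytes;

public class PutDebug {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mrtable"); // hypothetical test table
        try {
            Put put = new Put(Bytes.toBytes("1"));
            put.add(Bytes.toBytes("f1"), Bytes.toBytes("name"), Bytes.toBytes("fansy"));
            table.put(put); // with the default auto-flush this is sent immediately
        } catch (RetriesExhaustedWithDetailsException e) {
            // One entry per failed action: the row, the region server, and the real cause.
            for (int i = 0; i < e.getNumExceptions(); i++) {
                System.err.println("row=" + Bytes.toString(e.getRow(i).getRow())
                        + " server=" + e.getHostnamePort(i)
                        + " cause=" + e.getCause(i));
            }
        } finally {
            table.close();
        }
    }
}

  Run against a table whose families do not match the Put, this turns the opaque summary into the concrete per-row error.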
  The program I was running is a slightly modified version of one of the examples shipped with HBase:
package org.fansy.demo02;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class SampleUploaderOne {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Wrong number of arguments: " + otherArgs.length);
            System.err.println("Usage: <input> <tablename>");
            System.exit(-1);
        }
        Job job = new Job(conf, "Hbaseuploadone");
        job.setJarByClass(SampleUploaderOne.class);
        job.setMapperClass(UploaderMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        // Map-only job: TableOutputFormat writes the emitted Puts straight into the table.
        TableMapReduceUtil.initTableReducerJob(otherArgs[1], null, job);
        job.setNumReduceTasks(0);
        FileInputFormat.setInputPaths(job, otherArgs[0]);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class UploaderMapper extends Mapper<Object, Text, ImmutableBytesWritable, Put> {
        public void map(Object key, Text line, Context context) throws IOException, InterruptedException {
            // Each input line is "row,family,qualifier,value"; skip malformed lines.
            String[] values = line.toString().split(",");
            if (values.length != 4) {
                return;
            }
            byte[] row = Bytes.toBytes(values[0]);
            byte[] family = Bytes.toBytes(values[1]);
            byte[] qualifier = Bytes.toBytes(values[2]);
            byte[] value = Bytes.toBytes(values[3]);
            Put put = new Put(row);
            put.add(family, qualifier, value);
            context.write(new ImmutableBytesWritable(row), put);
        }
    }
}
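  To run it, pass two arguments, the HDFS input path and the target table, e.g. hdfs://localhost:9000/user/fansy/input/mrtest.txt and mrtable (the same input path printed in the log above).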

  My input file looks like this:
  1,f1,name,fansy
2,f1,name,tom
3,f1,name,jake
4,f1,age,22
5,f1,age,23
6,f1,age,27
  As it turned out, the problem was in how the table had been created. I had originally created it in the hbase shell with create 'mrtable','t'; after recreating it as create 'mrtable','f1' the error went away. So the table's column family must match the family named in the input file: every line here writes to family f1, and a Put against a column family the table does not have is rejected by the region server with a DoNotRetryIOException (concretely a NoSuchColumnFamilyException), which the client does not retry and instead reports as the RetriesExhaustedWithDetailsException above. Viewed in the hbase shell, mrtable now contains the following:
(screenshot: output of scan 'mrtable' in the hbase shell)
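  To rule out this mismatch up front, the driver could check for the table and create it with the expected family before submitting the job. The following is only a sketch against the HBase 0.94 admin API (the TableSetup class and the ensureTable helper are my own, not part of the original program), and note that it does not verify the families of a table that already exists:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TableSetup {
    // Create the table with the given column family if it does not exist yet.
    public static void ensureTable(Configuration conf, String tableName, String family) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            if (!admin.tableExists(tableName)) {
                HTableDescriptor desc = new HTableDescriptor(tableName);
                desc.addFamily(new HColumnDescriptor(family));
                admin.createTable(desc);
            }
        } finally {
            admin.close();
        }
    }
}

  Calling something like ensureTable(conf, otherArgs[1], "f1") at the top of main() would have guaranteed that mrtable existed with family f1 before the job ran.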

  Part of the Hadoop output from the successful run:
  12/08/29 19:21:18 INFO mapred.JobClient: Running job: job_local_0001
12/08/29 19:21:18 INFO mapreduce.TableOutputFormat: Created table instance for mrtable
12/08/29 19:21:18 INFO util.ProcessTree: setsid exited with exit code 0
12/08/29 19:21:18 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7e9ce2
12/08/29 19:21:18 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/08/29 19:21:19 INFO mapred.JobClient: map 0% reduce 0%
12/08/29 19:21:21 INFO mapred.LocalJobRunner:
12/08/29 19:21:21 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/08/29 19:21:21 WARN mapred.FileOutputCommitter: Output path is null in cleanup
12/08/29 19:21:22 INFO mapred.JobClient: map 100% reduce 0%
12/08/29 19:21:22 INFO mapred.JobClient: Job complete: job_local_0001
12/08/29 19:21:22 INFO mapred.JobClient: Counters: 13
12/08/29 19:21:22 INFO mapred.JobClient: File Output Format Counters
12/08/29 19:21:22 INFO mapred.JobClient: Bytes Written=0
12/08/29 19:21:22 INFO mapred.JobClient: File Input Format Counters
12/08/29 19:21:22 INFO mapred.JobClient: Bytes Read=82
12/08/29 19:21:22 INFO mapred.JobClient: FileSystemCounters
12/08/29 19:21:22 INFO mapred.JobClient: FILE_BYTES_READ=9683090
12/08/29 19:21:22 INFO mapred.JobClient: HDFS_BYTES_READ=82
12/08/29 19:21:22 INFO mapred.JobClient: FILE_BYTES_WRITTEN=9817966
12/08/29 19:21:22 INFO mapred.JobClient: Map-Reduce Framework
12/08/29 19:21:22 INFO mapred.JobClient: Map input records=7
12/08/29 19:21:22 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
12/08/29 19:21:22 INFO mapred.JobClient: Spilled Records=0
12/08/29 19:21:22 INFO mapred.JobClient: Total committed heap usage (bytes)=76677120
12/08/29 19:21:22 INFO mapred.JobClient: CPU time spent (ms)=0
12/08/29 19:21:22 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
12/08/29 19:21:22 INFO mapred.JobClient: SPLIT_RAW_BYTES=114
12/08/29 19:21:22 INFO mapred.JobClient: Map output records=6

With that, the problem is solved. In hindsight the mechanism is what the log hinted at: the region server rejects a Put aimed at a column family the table does not have, the client does not retry a DoNotRetryIOException, and the job fails with RetriesExhaustedWithDetailsException. Either way, MapReduce is now successfully writing into the HBase database.
