[Experience Share] Custom Hadoop MapReduce InputFormat for Splitting Input Files

In the previous post we implemented secondary sorting by cookieId and time. Now a new requirement comes up: what if the analysis has to run both per cookieId and per cookieId & time combination? The cleanest approach is a custom InputFormat that lets MapReduce read all records of one cookieId in a single pass, and then splits them into sessions by time. In pseudocode (one possible mapper-side implementation is sketched right after the pseudocode):
for OneSplit in MyInputFormat.getSplit() // OneSplit holds all records of one cookieId
    for session in OneSplit              // a session is a time-based sub-split of OneSplit
        for line in session              // a line is one record of the session, i.e. one line of the original log
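
To make the pseudocode concrete, here is a rough, hypothetical sketch of the mapper-side session split once the custom InputFormat defined below delivers one cookieId's records as a single value. The whitespace-separated field layout (cookieId, time, url) matches the test data at the end of this post, but the class name and the "new session whenever the time field changes" rule are illustrative assumptions, not part of the original code:

package MyInputFormat;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch only: splits one cookieId's records into sessions, starting a
// new session whenever the time field changes. Field layout: cookieId, time, url.
public class SessionSplitMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String cookieId = null;
        String lastTime = null;
        StringBuilder session = new StringBuilder();
        for (String line : value.toString().split("\n")) {   // one original log record
            String[] fields = line.trim().split("\\s+");     // [cookieId, time, url, ...]
            if (fields.length < 3) {
                continue;
            }
            if (lastTime != null && !fields[1].equals(lastTime)) {
                // Time changed: the current session is complete, emit it
                context.write(new Text(cookieId + "_" + lastTime), new Text(session.toString()));
                session.setLength(0);
            }
            cookieId = fields[0];
            lastTime = fields[1];
            session.append(line).append('\n');
        }
        if (session.length() > 0) {
            // Flush the last session of this cookieId
            context.write(new Text(cookieId + "_" + lastTime), new Text(session.toString()));
        }
    }
}

Any other session rule (for example a time-gap threshold) would slot into the same place.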
1. How it works

InputFormat is one of the most frequently used concepts in MapReduce. What role does it actually play while a job runs?
In the old mapred API, InputFormat is an interface with two methods:
public interface InputFormat<K, V> {
  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;

  RecordReader<K, V> getRecordReader(InputSplit split,
                                     JobConf job,
                                     Reporter reporter) throws IOException;
}
 
The two methods do the following work:
      getSplits cuts the input data into splits; the number of splits determines the number of map tasks, and a split defaults to the HDFS block size (64 MB on older releases).
      getRecordReader parses each split into records and turns every record into a <K, V> pair.
In other words, InputFormat is responsible for:
 InputFile -->  splits  -->  <K,V>
 
Which InputFormat implementations does Hadoop ship with?

[Figure: table of the built-in InputFormat implementations]

TextInputFormat is the most commonly used one: its <K, V> pair is <byte offset of the line, content of the line>. For example, for a file whose first two lines are "hello" and "world", it emits <0, "hello"> and <6, "world">.
 
However, these fixed ways of turning an InputFile into <K, V> pairs do not always fit our needs. In that case we define our own InputFormat, so that the Hadoop framework parses the InputFile into <K, V> pairs exactly the way we specify.
Before writing a custom InputFormat, it helps to understand the following abstract classes and interfaces and how they relate:

InputFormat (interface), FileInputFormat (abstract class), TextInputFormat (class), RecordReader (interface), LineRecordReader (class):
      FileInputFormat implements InputFormat
      TextInputFormat extends FileInputFormat
      TextInputFormat.getRecordReader creates a LineRecordReader
      LineRecordReader implements RecordReader

 
The InputFormat interface has already been described above.
FileInputFormat implements the getSplits method of that interface and leaves getRecordReader and isSplitable to concrete subclasses such as TextInputFormat. isSplitable rarely needs to change, so a custom InputFormat normally only has to implement getRecordReader. The core of that method is the RecordReader it returns; in TextInputFormat this is LineRecordReader, the class that actually "parses each split into records and turns every record into a <K, V> pair". LineRecordReader implements the RecordReader interface:
 
  public interface RecordReader<K, V> {
    boolean next(K key, V value) throws IOException;
    K createKey();
    V createValue();
    long getPos() throws IOException;
    void close() throws IOException;
    float getProgress() throws IOException;
  }
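
Note that the interface shown above is the old org.apache.hadoop.mapred API. The working code in section 2 targets the newer org.apache.hadoop.mapreduce API, where RecordReader is an abstract class whose contract looks roughly like this:

public abstract class RecordReader<KEYIN, VALUEIN> implements Closeable {
  public abstract void initialize(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException;
  public abstract boolean nextKeyValue() throws IOException, InterruptedException;
  public abstract KEYIN getCurrentKey() throws IOException, InterruptedException;
  public abstract VALUEIN getCurrentValue() throws IOException, InterruptedException;
  public abstract float getProgress() throws IOException, InterruptedException;
  public abstract void close() throws IOException;
}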
 
     So the core of a custom InputFormat is a custom RecordReader, similar to LineRecordReader, that overrides the methods above; that is exactly what TrackRecordReader does below.
 
  2. Code:
 
package MyInputFormat;
 
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
 
public class TrackInputFormat extends FileInputFormat<LongWritable, Text> {
 
    @SuppressWarnings("deprecation")
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new TrackRecordReader();
    }
 
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        CompressionCodec codec = new CompressionCodecFactory(
                context.getConfiguration()).getCodec(file);
        return codec == null;
    }
 
}
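
A side note that is not part of the original post: newer Hadoop releases let the stock TextInputFormat use a custom record delimiter, so for a simple fixed separator such as "END\n" a custom InputFormat may not be necessary at all. A minimal driver sketch, assuming a Hadoop version whose LineRecordReader honors the textinputformat.record.delimiter property:

package MyInputFormat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Sketch: reuse the built-in TextInputFormat but make it split records on "END\n".
public class DelimiterConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("textinputformat.record.delimiter", "END\n");
        // Job.getInstance(...) on Hadoop 2.x; new Job(conf, ...) on older releases
        Job job = Job.getInstance(conf, "DelimiterDemo");
        job.setInputFormatClass(TextInputFormat.class);
        // ...set mapper, input/output paths etc. as in TestMyInputFormat below
    }
}

The custom TrackRecordReader below is still worth reading, though: it shows everything such a reader has to take care of (buffering, separators split across buffer boundaries, stripping the separator from the value).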



 
package MyInputFormat;
 
import java.io.IOException;
import java.io.InputStream;
 
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
 
/**
 * Treats the key as the byte offset of a record in the file and the value as
 * the record itself. A record ends at the custom separator "END\n" rather than
 * at a single newline (adapted from Hadoop's LineRecordReader).
 */
public class TrackRecordReader extends RecordReader<LongWritable, Text> {
    private static final Log LOG = LogFactory.getLog(TrackRecordReader.class);
 
    private CompressionCodecFactory compressionCodecs = null;
    private long start;
    private long pos;
    private long end;
    private NewLineReader in;
    private int maxLineLength;
    private LongWritable key = null;
    private Text value = null;
    // Record separator: one logical record ends at this byte sequence
    // (rather than at a single '\n')
    private byte[] separator = "END\n".getBytes();

 
    public void initialize(InputSplit genericSplit, TaskAttemptContext context)
            throws IOException {
        FileSplit split = (FileSplit) genericSplit;
        Configuration job = context.getConfiguration();
        this.maxLineLength = job.getInt("mapred.linerecordreader.maxlength",
                Integer.MAX_VALUE);
        start = split.getStart();
        end = start + split.getLength();
        final Path file = split.getPath();
        compressionCodecs = new CompressionCodecFactory(job);
        final CompressionCodec codec = compressionCodecs.getCodec(file);
 
        FileSystem fs = file.getFileSystem(job);
        FSDataInputStream fileIn = fs.open(split.getPath());
        boolean skipFirstLine = false;
        if (codec != null) {
            in = new NewLineReader(codec.createInputStream(fileIn), job);
            end = Long.MAX_VALUE;
        } else {
            if (start != 0) {
                skipFirstLine = true;
                // Back up by one separator length (the counterpart of "--start" in
                // LineRecordReader) so the partial first record is skipped correctly
                this.start -= separator.length;
                fileIn.seek(start);
            }
            in = new NewLineReader(fileIn, job);
        }
        if (skipFirstLine) { // skip first line and re-establish "start".
            start += in.readLine(new Text(), 0,
                    (int) Math.min((long) Integer.MAX_VALUE, end - start));
        }
        this.pos = start;
    }
 
    public boolean nextKeyValue() throws IOException {
        if (key == null) {
            key = new LongWritable();
        }
        key.set(pos);
        if (value == null) {
            value = new Text();
        }
        int newSize = 0;
        while (pos < end) {
            newSize = in.readLine(value, maxLineLength,
                    Math.max((int) Math.min(Integer.MAX_VALUE, end - pos),
                            maxLineLength));
            if (newSize == 0) {
                break;
            }
            pos += newSize;
            if (newSize < maxLineLength) {
                break;
            }
 
            LOG.info("Skipped line of size " + newSize + " at pos "
                    + (pos - newSize));
        }
        if (newSize == 0) {
            key = null;
            value = null;
            return false;
        } else {
            return true;
        }
    }
 
    @Override
    public LongWritable getCurrentKey() {
        return key;
    }
 
    @Override
    public Text getCurrentValue() {
        return value;
    }
 
    /**
     * Get the progress within the split
     */
    public float getProgress() {
        if (start == end) {
            return 0.0f;
        } else {
            return Math.min(1.0f, (pos - start) / (float) (end - start));
        }
    }
 
    public synchronized void close() throws IOException {
        if (in != null) {
            in.close();
        }
    }
 
    public class NewLineReader {
        private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
        private int bufferSize = DEFAULT_BUFFER_SIZE;
        private InputStream in;
        private byte[] buffer;
        private int bufferLength = 0;
        private int bufferPosn = 0;
 
        public NewLineReader(InputStream in) {
            this(in, DEFAULT_BUFFER_SIZE);
        }
 
        public NewLineReader(InputStream in, int bufferSize) {
            this.in = in;
            this.bufferSize = bufferSize;
            this.buffer = new byte[this.bufferSize];
        }
 
        public NewLineReader(InputStream in, Configuration conf)
                throws IOException {
            this(in, conf.getInt("io.file.buffer.size", DEFAULT_BUFFER_SIZE));
        }
 
        public void close() throws IOException {
            in.close();
        }
 
        public int readLine(Text str, int maxLineLength, int maxBytesToConsume)
                throws IOException {
            str.clear();
            Text record = new Text();
            int txtLength = 0;
            long bytesConsumed = 0L;
            boolean newline = false;
            int sepPosn = 0;
            do {
                // The buffer has been fully consumed: refill it from the stream
                if (this.bufferPosn >= this.bufferLength) {
                    bufferPosn = 0;
                    bufferLength = in.read(buffer);
                    // End of the stream: stop reading, the current record (if any) is done
                    if (bufferLength <= 0) {
                        break;
                    }
                }
                int startPosn = this.bufferPosn;
                for (; bufferPosn < bufferLength; bufferPosn++) {
                    // A partial separator match from the previous buffer did not continue: reset it
                    // (note: separators with many repeated characters can defeat this check)
                    if (sepPosn > 0 && buffer[bufferPosn] != separator[sepPosn]) {
                        sepPosn = 0;
                    }
                    // Current byte matches the next expected separator byte
                    if (buffer[bufferPosn] == separator[sepPosn]) {
                        bufferPosn++;
                        int i = 0;
                        // Check whether the following bytes complete the separator
                        for (++sepPosn; sepPosn < separator.length; i++, sepPosn++) {
                            // The buffer ends in the middle of the separator; finish the match after the next refill
                            if (bufferPosn + i >= bufferLength) {
                                bufferPosn += i - 1;
                                break;
                            }
                            // Any mismatching byte means this was not the separator after all
                            if (this.buffer[this.bufferPosn + i] != separator[sepPosn]) {
                                sepPosn = 0;
                                break;
                            }
                        }
                        // The full separator was found: this record is complete
                        if (sepPosn == separator.length) {
                            bufferPosn += i;
                            newline = true;
                            sepPosn = 0;
                            break;
                        }
                    }
                }
                int readLength = this.bufferPosn - startPosn;
                bytesConsumed += readLength;
                // Truncate the record if it would exceed maxLineLength
                if (readLength > maxLineLength - txtLength) {
                    readLength = maxLineLength - txtLength;
                }
                if (readLength > 0) {
                    record.append(this.buffer, startPosn, readLength);
                    txtLength += readLength;
                    // Strip the trailing record separator before handing the record back
                    if (newline) {
                        str.set(record.getBytes(), 0, record.getLength()
                                - separator.length);
                    }
                }
            } while (!newline && (bytesConsumed < maxBytesToConsume));
            if (bytesConsumed > (long) Integer.MAX_VALUE) {
                throw new IOException("Too many bytes before newline: "
                        + bytesConsumed);
            }
 
            return (int) bytesConsumed;
        }
 
        public int readLine(Text str, int maxLineLength) throws IOException {
            return readLine(str, maxLineLength, Integer.MAX_VALUE);
        }
 
        public int readLine(Text str) throws IOException {
            return readLine(str, Integer.MAX_VALUE, Integer.MAX_VALUE);
        }
    }
}
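
As a quick local check (not part of the original post), the inner NewLineReader can be exercised on its own with an in-memory stream; the class name and sample data below are purely illustrative:

package MyInputFormat;

import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.hadoop.io.Text;

// Illustrative local smoke test for NewLineReader: feed it a small in-memory stream
// that uses "END\n" as the record separator and print every record it extracts.
public class NewLineReaderSmokeTest {
    public static void main(String[] args) throws IOException {
        String data = "1 a 1_hao123\n1 b 1_google 2END\n"
                    + "2 c 2_hao123\n2 c 2_google 1END\n";
        TrackRecordReader outer = new TrackRecordReader();
        TrackRecordReader.NewLineReader reader =
                outer.new NewLineReader(new ByteArrayInputStream(data.getBytes()), 64);
        Text record = new Text();
        int consumed;
        while ((consumed = reader.readLine(record)) > 0) {
            System.out.println("record (" + consumed + " bytes consumed):");
            System.out.println(record);
            System.out.println("-------------------------");
        }
        reader.close();
    }
}

Each readLine call consumes the record plus its "END\n" separator but sets the Text to the record with the separator stripped, which is exactly what nextKeyValue relies on.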



 
package MyInputFormat;
 
import java.io.IOException;
 
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
 
public class TestMyInputFormat {
 
    public static class MapperClass extends Mapper<LongWritable, Text, Text, Text> {
 
        public void map(LongWritable key, Text value, Context context) throws IOException,
                InterruptedException {
            System.out.println("key:\t " + key);
            System.out.println("value:\t " + value);
            System.out.println("-------------------------");
        }
    }
 
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
         Path outPath = new Path("/hive/11");
         FileSystem.get(conf).delete(outPath, true);
        Job job = new Job(conf, "TestMyInputFormat");
        job.setInputFormatClass(TrackInputFormat.class);
        job.setJarByClass(TestMyInputFormat.class);
        job.setMapperClass(TestMyInputFormat.MapperClass.class);
        job.setNumReduceTasks(0);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
 
        FileInputFormat.addInputPath(job, new Path(args[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, outPath);
 
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
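
Two details worth noting in this driver: setNumReduceTasks(0) makes it a map-only job, so the System.out.println output appears in the map tasks' stdout logs (or directly on the console when run in local mode) rather than in /hive/11; and on Hadoop 2.x the new Job(conf, name) constructor is deprecated in favor of Job.getInstance(conf, name), as used in the delimiter sketch above.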



3. Test data:
  cookieId    time     url                 cookieOverFlag

The last line of each cookieId group carries the cookieOverFlag followed by the "END" marker; "END" plus the trailing newline is exactly the record separator that TrackRecordReader looks for.
1       a        1_hao123
1       a        1_baidu
1       b        1_google       2END
2       c        2_google
2       c        2_hao123
2       c        2_google       1END
3       a        3_baidu
3       a        3_sougou
3       b        3_soso         2END



4. Results:
 
key:     0
value:   1  a   1_hao123   
1   a    1_baidu   
1   b    1_google   2
-------------------------
key:     47
value:   2  c    2_google  
2   c    2_hao123  
2   c    2_google   1
-------------------------
key:     96
value:   3  a    3_baidu   
3   a    3_sougou  
3   b    3_soso 2
-------------------------
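
Each call to nextKeyValue therefore delivers one whole cookieId group: the key is the byte offset at which that record starts in the file (0, 47, 96), and the value is everything up to the next "END\n" separator, with the separator itself stripped off.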



REF:
Custom Hadoop map/reduce InputFormat for splitting input files
http://hi.baidu.com/lzpsky/item/0d9d84c05afb43ba0c0a7b27
Advanced MapReduce programming: a custom InputFormat
http://datamining.xmu.edu.cn/bbs/home.php?mod=space&uid=91&do=blog&id=190
http://irwenqiang.iyunv.com/blog/1448164
http://my.oschina.net/leejun2005/blog/133424
