public class CombineAvroInputFormat
extends org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<org.apache.hadoop.io.NullWritable,co.cask.cdap.api.data.format.StructuredRecord>
| Modifier and Type | Class and Description |
|---|---|
| static class | CombineAvroInputFormat.WrapperReader: A wrapper class that's responsible for delegating to a corresponding RecordReader in PathTrackingInputFormat. |
| Constructor and Description |
|---|
| CombineAvroInputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,co.cask.cdap.api.data.format.StructuredRecord> | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context): Creates a RecordReader that delegates to some other RecordReader for each path in the input split. |
Methods inherited from class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat: createPool, createPool, getFileBlockLocations, getSplits, isSplitable, setMaxSplitSize, setMinSplitSizeNode, setMinSplitSizeRack

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat: addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize

public org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,co.cask.cdap.api.data.format.StructuredRecord> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException

Creates a RecordReader that delegates to some other RecordReader for each path in the input split.

Overrides: createRecordReader in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<org.apache.hadoop.io.NullWritable,co.cask.cdap.api.data.format.StructuredRecord>

Throws: IOException

Copyright © 2018 Cask Data, Inc. Licensed under the Apache License, Version 2.0.
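A minimal job-driver sketch showing how an input format like this is typically wired into a MapReduce job. This page does not show the package of CombineAvroInputFormat, so the class is assumed to be imported from its actual package; the input path, job name, and split-size value are illustrative, and the static split-size helper is the setMaxInputSplitSize method inherited from FileInputFormat listed above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import co.cask.cdap.api.data.format.StructuredRecord;
// Assumption: CombineAvroInputFormat is imported from its actual package,
// which is not shown on this page.

public class CombineAvroJobDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "combine-avro-read");

    // Use the combining input format so many small Avro files are
    // grouped into fewer splits, and therefore fewer map tasks.
    job.setInputFormatClass(CombineAvroInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    // Cap the combined split size (illustrative value); this helper is
    // inherited from FileInputFormat.
    FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);

    // The RecordReader returned by createRecordReader emits
    // NullWritable keys and StructuredRecord values.
    job.setMapOutputKeyClass(NullWritable.class);
    job.setMapOutputValueClass(StructuredRecord.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The driver only configures the job; the per-path delegation described above happens inside the WrapperReader that createRecordReader returns for each combined split.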