MultiResourceItemReader reads all files sequentially.
I want it to call the processor/writer once one file has been read completely; it should not start reading the next file.
Since the file content is not constant, I can't go with a fixed chunk size.
Any idea on a chunk policy to decide the end of the file content?
I think you should write a step that reads/processes/writes only one file, using a "single file item reader" (like FlatFileItemReader), and repeat that step while there are files remaining.
Spring Batch gives you the features to do so: conditional flows, and in particular the programmatic flow decision (a JobExecutionDecider), which gives you a clean way to decide when to stop looping between steps (i.e. when there are no files left).
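As a rough illustration (a sketch, not your code; the directory path, file pattern, and the "CONTINUE" status name are assumptions), such a decider could look like this:
public class RemainingFilesDecider implements JobExecutionDecider {
    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Hypothetical check: are there still unprocessed files in the input directory?
        File[] remaining = new File("/path/to/input").listFiles((dir, name) -> name.endsWith(".csv"));
        return (remaining != null && remaining.length > 0)
                ? new FlowExecutionStatus("CONTINUE")
                : FlowExecutionStatus.COMPLETED;
    }
}
The job definition would then loop on that status, roughly like:
@Bean
public Job processFilesJob(JobBuilderFactory jobs, Step singleFileStep, RemainingFilesDecider decider) {
    return jobs.get("processFilesJob")
            .start(singleFileStep)
            .next(decider).on("CONTINUE").to(singleFileStep)
            .from(decider).on("*").end()
            .end()
            .build();
}
Note that when you loop back to the same step you will probably also need allowStartIfComplete(true) on that step so Spring Batch agrees to run it again.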
And since you will not be able to give a constant input file name to your reader, you should also have a look at the late binding section of the documentation.
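For example, a step-scoped reader whose resource is resolved at runtime could look like this (a sketch only; the jobExecutionContext key 'currentFile' is an assumption and would have to be populated by the decider or a previous tasklet):
@Bean
@StepScope
public FlatFileItemReader<String> singleFileReader(
        @Value("#{jobExecutionContext['currentFile']}") String currentFile) {
    return new FlatFileItemReaderBuilder<String>()
            .name("singleFileReader")
            .resource(new FileSystemResource(currentFile))
            .lineMapper(new PassThroughLineMapper())
            .build();
}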
I hope this is enough to help you. Please comment if you need more details.
Use MultiResourceItemReader and assign it the multiple file resources.
Use a custom file reader as the delegate, which reads one file completely.
To read a whole file as a single item, the delegate's read() collects every line of the current file, as in the code below:
@Bean
public MultiResourceItemReader<SimpleFileBean> simpleReader()
{
Resource[] resourceList = getFileResources();
if(resourceList == null) {
System.out.println("No input files available");
}
MultiResourceItemReader<SimpleFileBean> resourceItemReader = new MultiResourceItemReader<SimpleFileBean>();
resourceItemReader.setResources(resourceList);
resourceItemReader.setDelegate(simpleFileReader());
return resourceItemReader;
}
@Bean
SimpleInboundReader simpleFileReader() {
return new SimpleInboundReader(customSimpleFileReader());
}
@Bean
public FlatFileItemReader<String> customSimpleFileReader() {
return new FlatFileItemReaderBuilder<String>()
.name("customFileItemReader")
.lineMapper(new PassThroughLineMapper())
.build();
}
public class SimpleInboundReader implements ResourceAwareItemReaderItemStream<SimpleFileBean> {
private Object currentItem = null;
private FileModel fileModel = null;
private String fileName = null;
private boolean fileRead = false;
private ResourceAwareItemReaderItemStream<String> delegate;
public SimpleInboundReader(ResourceAwareItemReaderItemStream<String> delegate) {
this.delegate = delegate;
}
@Override
public void open(ExecutionContext executionContext) throws ItemStreamException {
delegate.open(executionContext);
}
@Override
public void update(ExecutionContext executionContext) throws ItemStreamException {
delegate.update(executionContext);
}
@Override
public void close() throws ItemStreamException {
delegate.close();
}
@Override
public void setResource(Resource resource) {
fileName = resource.getFilename();
this.delegate.setResource(resource);
}
String getNextLine() throws UnexpectedInputException, ParseException, NonTransientResourceException, Exception {
return delegate.read();
}
@Override
public SimpleFileBean read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
SimpleFileBean simpleFileBean = null;
String currentLine = null;
currentLine=delegate.read();
if(currentLine != null) {
simpleFileBean = new SimpleFileBean();
simpleFileBean.getLines().add(currentLine);
while ((currentLine = getNextLine()) != null) {
simpleFileBean.getLines().add(currentLine);
}
return simpleFileBean;
}
return null;
}
}
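For completeness, here is a rough sketch of how this reader could be wired into a step (the step name and simpleFileWriter() are hypothetical). Because each SimpleFileBean holds the lines of one whole file, a chunk size of 1 means every file is passed downstream to the processor/writer as soon as it has been read, before the next file is opened:
@Bean
public Step simpleFileStep(StepBuilderFactory steps) {
    return steps.get("simpleFileStep")
            .<SimpleFileBean, SimpleFileBean>chunk(1)
            .reader(simpleReader())
            .writer(simpleFileWriter()) // hypothetical ItemWriter<SimpleFileBean>
            .build();
}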
I am using Hadoop 2.7 and I have an issue when using a custom Writable, "TextPair" (page 104 of the Definitive Guide). Basically, my program works fine when I use just Text, whereas it outputs "test.TextTuple#3b86249a test.TextTuple#63cd18fd" when I use the TextPair.
Any idea what is wrong with my code (below)?
============
Mapper1:
public class KWMapper extends Mapper<LongWritable, Text, TextTuple, TextTuple> {
@Override
public void map(LongWritable k, Text v, Mapper.Context c) throws IOException, InterruptedException {
String keywordRelRecord[] = v.toString().split(",");
String subTopicID = keywordRelRecord[0];
String paperID = keywordRelRecord[1];
//set the KEY
TextTuple key = new TextTuple();
key.setNaturalKey(new Text(subTopicID));
key.setSecondaryKey(new Text("K"));
//set the VALUE
TextTuple value = new TextTuple();
value.setNaturalKey(new Text(paperID));
value.setSecondaryKey(new Text("K"));
c.write(key, value);
}
}
Mapper2:
public class TDMapper extends Mapper<LongWritable, Text, TextTuple, TextTuple> {
@Override
public void map(LongWritable k, Text v, Mapper.Context c) throws IOException, InterruptedException {
String topicRecord[] = v.toString().split(",");
String superTopicID = topicRecord[0];
String subTopicID = topicRecord[1].substring(1, topicRecord[1].length() - 1);
TextTuple key = new TextTuple();
key.setNaturalKey(new Text(subTopicID));
key.setSecondaryKey(new Text("T"));
TextTuple value = new TextTuple();
value.setNaturalKey(new Text(superTopicID));
value.setSecondaryKey(new Text("T"));
c.write(key, value);
}
}
REDUCER:
public class TDKRReducer extends Reducer<TextTuple, TextTuple, Text, Text>{
public void reduce(TextTuple k, Iterable<TextTuple> values, Reducer.Context c) throws IOException, InterruptedException{
for (TextTuple val : values) {
c.write(k.getNaturalKey(), val.getNaturalKey());
}
}
}
DRIVER:
public class TDDriver {
public static void main(String args[]) throws IOException, InterruptedException, ClassNotFoundException {
// This class support the user for the configuration of the execution;
Configuration confStage1 = new Configuration();
Job job1 = new Job(confStage1, "TopDecKeywordRel");
// Setting the driver class
job1.setJarByClass(TDDriver.class);
// Setting the input Files and processing them using the corresponding mapper class
MultipleInputs.addInputPath(job1, new Path(args[0]), TextInputFormat.class, TDMapper.class);
MultipleInputs.addInputPath(job1, new Path(args[1]), TextInputFormat.class, KWMapper.class);
job1.setMapOutputKeyClass(TextTuple.class);
job1.setMapOutputValueClass(TextTuple.class);
// Setting the Reducer Class;
job1.setReducerClass(TDKRReducer.class);
// Setting the output class for the Key-value pairs
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(Text.class);
// Setting the output file
Path outputPA = new Path(args[2]);
FileOutputFormat.setOutputPath(job1, outputPA);
// Submitting the Job Monitoring the execution of the Job
System.exit(job1.waitForCompletion(true) ? 0 : 1);
//conf.setPartitionerClass(CustomPartitioner.class);
}
}
CUSTOM WRITABLE:
public class TextTuple implements Writable, WritableComparable<TextTuple> {
private Text naturalKey;
private Text secondaryKey;
public TextTuple() {
this.naturalKey = new Text();
this.secondaryKey = new Text();
}
public void setNaturalKey(Text naturalKey) {
this.naturalKey = naturalKey;
}
public void setSecondaryKey(Text secondaryKey) {
this.secondaryKey = secondaryKey;
}
public Text getNaturalKey() {
return naturalKey;
}
public Text getSecondaryKey() {
return secondaryKey;
}
@Override
public void write(DataOutput out) throws IOException {
naturalKey.write(out);
secondaryKey.write(out);
}
@Override
public void readFields(DataInput in) throws IOException {
naturalKey.readFields(in);
secondaryKey.readFields(in);
}
//This comparator controls the sort order of the keys.
@Override
public int compareTo(TextTuple o) {
// comparing the naturalKey
int compareValue = this.naturalKey.compareTo(o.naturalKey);
if (compareValue == 0) {
compareValue = this.secondaryKey.compareTo(o.secondaryKey);
}
return -1 * compareValue;
}
}
I have the following classes for an MR job, but when I run the job it fails with the exception shown after the code. Kindly suggest what is wrong.
public class MongoKey implements WritableComparable<MongoKey> {
...
private Text name;
private Text place;
public MongoKey() {
this.name = new Text();
this.place = new Text();
}
public MongoKey(Text name, Text place) {
this.name = name;
this.place = place;
}
public void readFields(DataInput in) throws IOException {
name.readFields(in);
place.readFields(in);
}
public void write(DataOutput out) throws IOException {
name.write(out);
place.write(out);
}
public int compareTo(MongoKey o) {
MongoKey other = (MongoKey)o;
int cmp = name.compareTo(other.name);
if(cmp != 0){
return cmp;
}
return place.compareTo(other.place);
}
}
public class MongoValue implements Writable {
...
public void readFields(DataInput in) throws IOException {
profession.readFields(in);
}
public void write(DataOutput out) throws IOException {
profession.write(out);
}
}
public class MongoReducer extends Reducer<MongoKey, MongoValue, MongoKey, BSONWritable> {
...
context.write(key, new BSONWritable(output)); // line 41
}
public class MongoHadoopJobRunner extends Configured implements Tool {
public int run(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("usage: [input] [output]");
System.exit(-1);
}
Configuration conf = getConf();
for (String arg : args)
System.out.println(arg);
GenericOptionsParser parser = new GenericOptionsParser(conf, args);
conf.set("mongo.output.uri", "mongodb://localhost/demo.logs_aggregate");
MongoConfigUtil.setOutputURI(conf, "mongodb://localhost/demo.logs_aggregate");
MongoConfigUtil.setOutputFormat(conf, MongoOutputFormat.class);
final Job job = new Job(conf, "mongo_hadoop");
job.setOutputFormatClass(MongoOutputFormat.class);
// Job job = new Job();
job.setJarByClass(MongoHadoopJobRunner.class);
// job.setJobName("mongo_hadoop");
job.setNumReduceTasks(1);
job.setMapperClass(MongoMapper.class);
job.setReducerClass(MongoReducer.class);
job.setMapOutputKeyClass(MongoKey.class);
job.setMapOutputValueClass(MongoValue.class);
job.setOutputKeyClass(MongoKey.class);
job.setOutputValueClass(BSONWritable.class);
job.setInputFormatClass(MongoInputFormat.class);
for (String arg2 : parser.getRemainingArgs()) {
System.out.println("remaining: " + arg2);
}
Path inPath = new Path(parser.getRemainingArgs()[0]);
MongoInputFormat.addInputPath(job, inPath);
job.waitForCompletion(true);
return 0;
}
public static void main(String[] pArgs) throws Exception {
Configuration conf = new Configuration();
for (String arg : pArgs) {
System.out.println(arg);
}
GenericOptionsParser parser = new GenericOptionsParser(conf, pArgs);
for (String arg2 : parser.getRemainingArgs()) {
System.out.println("ree" + arg2);
}
System.exit(ToolRunner.run(conf, new MongoHadoopJobRunner(), parser
.getRemainingArgs()));
}
}
This is the exception:
java.lang.Exception: java.lang.IllegalArgumentException: can't serialize class com.name.custom.MongoKey
...
...
at com.mongodb.hadoop.output.MongoRecordWriter.write(MongoRecordWriter.java:93)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at com.name.custom.MongoReducer.reduce(MongoReducer.java:41)
at com.name.custom.MongoReducer.reduce(MongoReducer.java:11)
It seems there should not be any issue with the code, but I am totally clueless as to why it is unable to serialize the fields.
Thanks very much in advance.
As far as I can see from the MongoRecordWriter source code, it does not support an arbitrary WritableComparable object as the key. You can use one of these classes as the key: BSONWritable, BSONObject, Text, UTF8, or simple wrappers like IntWritable. I also think you can use a Serializable object as the key. So I can suggest two workarounds:
Make your MongoKey serializable (implement Serializable and its writeObject/readObject methods).
Use one of the supported classes as the key; for example, you can use Text: Text key = new Text(name.toString() + "\t" + place.toString());
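A rough sketch of that second workaround applied to the reducer above (assuming MongoKey exposes getters for its name/place fields and that the BSON output object is built as before; both are elided in the snippets):
public class MongoReducer extends Reducer<MongoKey, MongoValue, Text, BSONWritable> {
    @Override
    public void reduce(MongoKey key, Iterable<MongoValue> values, Context context)
            throws IOException, InterruptedException {
        BasicBSONObject output = new BasicBSONObject();
        // ... populate "output" from the values as before ...
        Text outKey = new Text(key.getName().toString() + "\t" + key.getPlace().toString());
        context.write(outKey, new BSONWritable(output)); // Text is a supported key type
    }
}
Remember to change job.setOutputKeyClass(MongoKey.class) to job.setOutputKeyClass(Text.class) in the driver as well.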
This exception:
java.lang.Exception: java.lang.IllegalArgumentException: can't serialize class com.name.custom.MongoKey
is raised because MongoKey doesn't implement java.io.Serializable.
Add Serializable to your class declaration.
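That is, only the declaration line changes (a sketch of what is suggested; the existing Writable methods stay exactly as they are):
public class MongoKey implements WritableComparable<MongoKey>, java.io.Serializable {
    // existing fields, constructors and readFields/write/compareTo unchanged
}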
I have this Hadoop MapReduce code that works on graph data (in adjacency-list form) and is similar in spirit to an in-adjacency-list to out-adjacency-list transformation. The main MapReduce task code is the following:
public class TestTask extends Configured
implements Tool {
public static class TTMapper extends MapReduceBase
implements Mapper<Text, TextArrayWritable, Text, NeighborWritable> {
@Override
public void map(Text key,
TextArrayWritable value,
OutputCollector<Text, NeighborWritable> output,
Reporter reporter) throws IOException {
int numNeighbors = value.get().length;
double weight = (double)1 / numNeighbors;
Text[] neighbors = (Text[]) value.toArray();
NeighborWritable me = new NeighborWritable(key, new DoubleWritable(weight));
for (int i = 0; i < neighbors.length; i++) {
output.collect(neighbors[i], me);
}
}
}
public static class TTReducer extends MapReduceBase
implements Reducer<Text, NeighborWritable, Text, Text> {
@Override
public void reduce(Text key,
Iterator<NeighborWritable> values,
OutputCollector<Text, Text> output,
Reporter arg3)
throws IOException {
ArrayList<NeighborWritable> neighborList = new ArrayList<NeighborWritable>();
while(values.hasNext()) {
neighborList.add(values.next());
}
NeighborArrayWritable neighbors = new NeighborArrayWritable
(neighborList.toArray(new NeighborWritable[0]));
Text out = new Text(neighbors.toString());
output.collect(key, out);
}
}
@Override
public int run(String[] arg0) throws Exception {
JobConf conf = Util.getMapRedJobConf("testJob",
SequenceFileInputFormat.class,
TTMapper.class,
Text.class,
NeighborWritable.class,
1,
TTReducer.class,
Text.class,
Text.class,
TextOutputFormat.class,
"test/in",
"test/out");
JobClient.runJob(conf);
return 0;
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new TestTask(), args);
System.exit(res);
}
}
The auxiliary code is the following:
TextArrayWritable:
public class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() {
super(Text.class);
}
public TextArrayWritable(Text[] values) {
super(Text.class, values);
}
}
NeighborWritable:
public class NeighborWritable implements Writable {
private Text nodeId;
private DoubleWritable weight;
public NeighborWritable(Text nodeId, DoubleWritable weight) {
this.nodeId = nodeId;
this.weight = weight;
}
public NeighborWritable () { }
public Text getNodeId() {
return nodeId;
}
public DoubleWritable getWeight() {
return weight;
}
public void setNodeId(Text nodeId) {
this.nodeId = nodeId;
}
public void setWeight(DoubleWritable weight) {
this.weight = weight;
}
@Override
public void readFields(DataInput in) throws IOException {
nodeId = new Text();
nodeId.readFields(in);
weight = new DoubleWritable();
weight.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
nodeId.write(out);
weight.write(out);
}
public String toString() {
return "NW[nodeId=" + (nodeId != null ? nodeId.toString() : "(null)") +
",weight=" + (weight != null ? weight.toString() : "(null)") + "]";
}
public boolean equals(Object o) {
if (!(o instanceof NeighborWritable)) {
return false;
}
NeighborWritable that = (NeighborWritable)o;
return (nodeId.equals(that.getNodeId()) && (weight.equals(that.getWeight())));
}
}
and the Util class:
public class Util {
public static JobConf getMapRedJobConf(String jobName,
Class<? extends InputFormat> inputFormatClass,
Class<? extends Mapper> mapperClass,
Class<?> mapOutputKeyClass,
Class<?> mapOutputValueClass,
int numReducer,
Class<? extends Reducer> reducerClass,
Class<?> outputKeyClass,
Class<?> outputValueClass,
Class<? extends OutputFormat> outputFormatClass,
String inputDir,
String outputDir) throws IOException {
JobConf conf = new JobConf();
if (jobName != null)
conf.setJobName(jobName);
conf.setInputFormat(inputFormatClass);
conf.setMapperClass(mapperClass);
if (numReducer == 0) {
conf.setNumReduceTasks(0);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
} else {
// may set actual number of reducers
// conf.setNumReduceTasks(numReducer);
conf.setMapOutputKeyClass(mapOutputKeyClass);
conf.setMapOutputValueClass(mapOutputValueClass);
conf.setReducerClass(reducerClass);
conf.setOutputKeyClass(outputKeyClass);
conf.setOutputValueClass(outputValueClass);
conf.setOutputFormat(outputFormatClass);
}
// delete the existing target output folder
FileSystem fs = FileSystem.get(conf);
fs.delete(new Path(outputDir), true);
// specify input and output DIRECTORIES (not files)
FileInputFormat.addInputPath(conf, new Path(inputDir));
FileOutputFormat.setOutputPath(conf, new Path(outputDir));
return conf;
}
}
My input is the following graph (stored in binary format; here I am showing the text form):
1 2
2 1,3,5
3 2,4
4 3,5
5 2,4
According to the logic of the code, the output should be:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],NW[nodeId=1,weight=1.0],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=3,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=4,weight=0.5],}]
But the actual output is:
1 NWArray[size=1,{NW[nodeId=2,weight=0.3333333333333333],}]
2 NWArray[size=3,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
3 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
4 NWArray[size=2,{NW[nodeId=5,weight=0.5],NW[nodeId=5,weight=0.5],}]
5 NWArray[size=2,{NW[nodeId=2,weight=0.3333333333333333],NW[nodeId=2,weight=0.3333333333333333],}]
I cannot understand why the expected output is not being produced. Any help will be appreciated.
Thanks.
You're falling foul of object re-use:
while(values.hasNext()) {
neighborList.add(values.next());
}
values.next() will return the same object reference each time, but the underlying contents of that object change on each iteration (its readFields method is called to re-populate the contents).
I suggest you amend it to the following (you'll need to obtain the Configuration conf variable from a setup method, unless you can get it from the Reporter or OutputCollector; sorry, I don't use the old API):
while (values.hasNext()) {
neighborList.add(
ReflectionUtils.copy(conf, values.next(), new NeighborWritable()));
}
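In the old API, one way to get hold of that Configuration is to override configure(JobConf) in the reducer, which MapReduceBase already provides as a no-op (a sketch; the conf field name is arbitrary). JobConf extends Configuration, so it can be passed straight to ReflectionUtils.copy:
public static class TTReducer extends MapReduceBase
        implements Reducer<Text, NeighborWritable, Text, Text> {
    private JobConf conf;
    @Override
    public void configure(JobConf job) {
        this.conf = job; // stash the job configuration for use in reduce()
    }
    // ... reduce() as above, copying each value with
    // ReflectionUtils.copy(conf, values.next(), new NeighborWritable())
}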
But I still can't understand why my unit test passed, then. Here is the code:
public class UWLTInitReducerTest {
private Text key;
private Iterator<NeighborWritable> values;
private NeighborArrayWritable nodeData;
private TTReducer reducer;
/**
* Set up the states for calling the map function
*/
@Before
public void setUp() throws Exception {
key = new Text("1001");
NeighborWritable[] neighbors = new NeighborWritable[4];
for (int i = 0; i < 4; i++) {
neighbors[i] = new NeighborWritable(new Text("300" + i), new DoubleWritable((double) 1 / (1 + i)));
}
values = Arrays.asList(neighbors).iterator();
nodeData = new NeighborArrayWritable(neighbors);
reducer = new TTReducer();
}
/**
* Test method for InitModelMapper#map - valid input
*/
@Test
public void testMapValid() {
// mock the output object
OutputCollector<Text, UWLTNodeData> output = mock(OutputCollector.class);
try {
// call the API
reducer.reduce(key, values, output, null);
// in order (sequential) verification of the calls to output.collect()
verify(output).collect(key, nodeData);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Why didn't this code catch the bug?
I have a situation in which the mapper emits, as its key, an object of a custom type.
It has two fields: an IntWritable ID and an IntArrayWritable data array.
The implementation is as follows.
import java.io.*;
import org.apache.hadoop.io.*;
public class PairDocIdPerm implements WritableComparable<PairDocIdPerm> {
public PairDocIdPerm(){
this.permId = new IntWritable(-1);
this.SignaturePerm = new IntArrayWritable();
}
public IntWritable getPermId() {
return permId;
}
public void setPermId(IntWritable permId) {
this.permId = permId;
}
public IntArrayWritable getSignaturePerm() {
return SignaturePerm;
}
public void setSignaturePerm(IntArrayWritable signaturePerm) {
SignaturePerm = signaturePerm;
}
private IntWritable permId;
private IntArrayWritable SignaturePerm;
public PairDocIdPerm(IntWritable permId,IntArrayWritable SignaturePerm) {
this.permId = permId;
this.SignaturePerm = SignaturePerm;
}
@Override
public void write(DataOutput out) throws IOException {
permId.write(out);
SignaturePerm.write(out);
}
@Override
public void readFields(DataInput in) throws IOException {
permId.readFields(in);
SignaturePerm.readFields(in);
}
@Override
public int hashCode() { // same permId must go to the same reducer, therefore hash only on permId
return permId.get();//.hashCode();
}
@Override
public boolean equals(Object o) {
if (o instanceof PairDocIdPerm) {
PairDocIdPerm tp = (PairDocIdPerm) o;
return permId.equals(tp.permId) && SignaturePerm.equals(tp.SignaturePerm);
}
return false;
}
@Override
public String toString() {
return permId + "\t" +SignaturePerm.toString();
}
@Override
public int compareTo(PairDocIdPerm tp) {
int cmp = permId.compareTo(tp.permId);
Writable[] ar, other;
ar = this.SignaturePerm.get();
other = tp.SignaturePerm.get();
if (cmp == 0) {
for(int i=0;i<ar.length;i++){
if(((IntWritable)ar[i]).get() == ((IntWritable)other[i]).get()){cmp= 0;continue;}
else if(((IntWritable)ar[i]).get() < ((IntWritable)other[i]).get()){ return -1;}
else if(((IntWritable)ar[i]).get() > ((IntWritable)other[i]).get()){return 1;}
}
}
return cmp;
//return 1;
}
}
I require keys with the same ID to go to the same reducer, with their sort order as coded in the compareTo method.
However, when I use this, my job execution status is always map 100% reduce 0%.
The reduce never runs to completion. Is there anything wrong in this implementation?
In general, what is the likely problem if the reducer status is always 0%?
I think this might be a possible NullPointerException in the readFields method:
@Override
public void readFields(DataInput in) throws IOException {
permId.readFields(in);
SignaturePerm.readFields(in);
}
permId is null in this case.
So what you have to do is initialize it:
IntWritable permId = new IntWritable();
either in the field initializer or right before the read.
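In other words, something along these lines (a sketch of what is suggested above; the rest of the class stays as it is):
@Override
public void readFields(DataInput in) throws IOException {
    if (permId == null) {
        permId = new IntWritable();
    }
    if (SignaturePerm == null) {
        SignaturePerm = new IntArrayWritable();
    }
    permId.readFields(in);
    SignaturePerm.readFields(in);
}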
However, your code is horrible to read.