I am new to Hadoop. I have this working piece of MapReduce code that finds the "parts of a country with the most 'Data Engineer' jobs for each year" (for example, if the data in the format (Year,Region,Count(Jobs)) is "2016,'XYZ',35", "2016,'ABC',25" and "2015,'sdf',14", the answer would be "2016,'XYZ',35" and "2015,'sdf',14"), but I am unable to understand the following part of the reducer:
if (Top5DataEngineer.size() > 1)
Top5DataEngineer.remove(Top5DataEngineer.firstKey());
}//Ignore this bracket for the time being.
protected void cleanup(Context context) throws IOException,
InterruptedException {
for (Text t : Top5DataEngineer.descendingMap().values())
context.write(NullWritable.get(), t);
}
This is the full code:-
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.Partitioner;
import java.util.TreeMap;
import org.apache.hadoop.mapreduce.Reducer;
public class Q_002a {
public static class Q_002a_Mapper extends
Mapper<LongWritable, Text, Text, LongWritable> {
LongWritable one = new LongWritable(1);
public void map(LongWritable key, Text values, Context context)
throws IOException, InterruptedException {
try {
if (key.get() > 0)
{
String[] token = values.toString().split("\t");
if (token[4].equals("DATA ENGINEER")) {
Text answer = new Text(token[8] + "\t" + token[7]);
context.write(answer, one);
}
}
} catch (ArrayIndexOutOfBoundsException e) {
System.out.println(e.getMessage());
} catch (ArithmeticException e1) {
System.out.println(e1.getMessage());
}
}
}
public static class Q_002a_Partitioner extends Partitioner<Text, LongWritable> {
@Override
public int getPartition(Text key, LongWritable value, int numReduceTasks) {
String[] str = key.toString().split("\t");
if (str[1].equals("2011"))
return 0;
if (str[1].equals("2012"))
return 1;
if (str[1].equals("2013"))
return 2;
if (str[1].equals("2014"))
return 3;
if (str[1].equals("2015"))
return 4;
if (str[1].equals("2016"))
return 5;
else
return 6;
}
}
public static class Q_002a_Reducer extends
Reducer<Text, LongWritable, NullWritable, Text> {
private TreeMap<LongWritable, Text> Top5DataEngineer = new TreeMap<LongWritable, Text>();
long sum = 0;
public void reduce(Text key, Iterable<LongWritable> values,
Context context) throws IOException, InterruptedException {
sum = 0;
for (LongWritable val : values) {
sum += val.get();
}
Top5DataEngineer.put(new LongWritable(sum), new Text(key + ","
+ sum));
if (Top5DataEngineer.size() > 1)
Top5DataEngineer.remove(Top5DataEngineer.firstKey());
}
protected void cleanup(Context context) throws IOException,
InterruptedException {
for (Text t : Top5DataEngineer.descendingMap().values())
context.write(NullWritable.get(), t);
}
}
public static void main(String args[]) throws IOException,
InterruptedException, ClassNotFoundException {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "Top 5 Data Engineer in a worksite");
job.setJarByClass(Q_002a.class);
job.setMapperClass(Q_002a_Mapper.class);
job.setPartitionerClass(Q_002a_Partitioner.class);
job.setReducerClass(Q_002a_Reducer.class);
job.setNumReduceTasks(6);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(LongWritable.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
This is the output I am getting:-
EDIT: I tried moving the code from the cleanup() method into the reduce() method, but it did not work as expected. It only ran fine when it was in the cleanup() method. Any help regarding this would be appreciated.
The cleanup() method is called when the processing stage is completed, and it is called only once per reducer task.
In your example, the reduce() method "searches" for the biggest sum of Data Engineer jobs by city within each 'years' partition. The Top5DataEngineer TreeMap stores its keys in sorted (ascending) order, and on each call it simply deletes the first (smallest) key whenever it holds more than one entry. In other words, after all the Iterable<LongWritable> values have been processed, you are left with the city that has the biggest number of jobs in that 'years' partition.
When the reduce phase is finished, the cleanup() method simply writes out the result of the processed partition (the single, biggest key-value pair left in the Top5DataEngineer map).
The cleanup() method is called once per reducer task, i.e. once for every 'years' partition.
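For intuition, here is a minimal standalone sketch (not part of your job, using made-up sample values) of the same TreeMap trick, generalized so you could keep the top N entries instead of just one:
import java.util.TreeMap;

public class TopNDemo {
    public static void main(String[] args) {
        int n = 1; // your reducer effectively keeps only the single largest entry
        TreeMap<Long, String> topN = new TreeMap<>();
        String[] cities = { "XYZ", "ABC", "sdf" };
        long[] sums = { 35, 25, 14 };
        for (int i = 0; i < cities.length; i++) {
            topN.put(sums[i], cities[i] + "," + sums[i]); // keyed by the job count
            if (topN.size() > n) {
                topN.remove(topN.firstKey()); // evict the smallest key
            }
        }
        // Only the largest entry survives: prints [XYZ,35]
        System.out.println(topN.descendingMap().values());
    }
}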
Hope it will help you.
I have this in Main...
job.setMapperClass(AverageIntMapper.class);
job.setCombinerClass(AverageIntCombiner.class);
job.setReducerClass(AverageIntReducer.class);
The Combiner has different code, but it is being completely ignored: the output the Reducer receives is the output from the Mapper.
I understand that a Combiner may not always be used, but I thought that only applied when the Combiner is the same as the Reducer. I don't really understand the point of being able to create a custom Combiner if the system can still skip it.
If that's not supposed to happen, what could be a reason that the Combiner is not being used?
Code...
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class AverageInt {
public static class AverageIntMapper extends Mapper<LongWritable, Text, Text, Text> {
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String n_string = value.toString();
context.write(new Text("Value"), new Text(n_string));
}
}
public static class AverageIntCombiner extends Reducer<Text, Text, Text, Text> {
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
int sum = 0;
int count = 0;
for(IntWritable value : values) {
int temp = Integer.parseInt(value.toString());
sum += value.get();
count += 1;
}
String sum_count = Integer.toString(sum) + "," + Integer.toString(count);
context.write(key, new Text(sum_count));
}
}
public static class AverageIntReducer extends Reducer<Text, Text, Text, Text> {
public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
int total = 0;
int count = 0;
for(Text value : values) {
String temp = value.toString();
String[] split = temp.split(",");
total += Integer.parseInt(split[0]);
count += Integer.parseInt(split[1]);
}
Double average = (double)total/count;
context.write(key, new Text(average.toString()));
}
}
public static void main(String[] args) throws Exception {
if(args.length != 2) {
System.err.println("Usage: AverageInt <input path> <output path>");
System.exit(-1);
}
Job job = new Job();
job.setJarByClass(AverageInt.class);
job.setJobName("Average");
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapperClass(AverageIntMapper.class);
job.setCombinerClass(AverageIntCombiner.class);
job.setReducerClass(AverageIntReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
If you look at what your mapper is emitting:
public void map(LongWritable key, Text value, Context context)
It's sending two Text objects, but while you've declared the combiner class itself correctly, its reduce method has:
public void reduce(Text key, Iterable<IntWritable> values, Context context)
It should be:
public void reduce(Text key, Iterable<Text> values, Context context)
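Because the parameter type doesn't match, your method is an overload rather than an override, so the base Reducer's identity implementation runs and the combiner effectively does nothing; an @Override annotation would have made the compiler flag this. For reference, a corrected combiner would look roughly like this (a sketch, assuming the mapper's values are plain numeric strings):
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AverageIntCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        int count = 0;
        for (Text value : values) {
            // the mapper emits each number as text, so parse it here
            sum += Integer.parseInt(value.toString().trim());
            count += 1;
        }
        // emit "sum,count" so the reducer can compute a correct global average
        context.write(key, new Text(sum + "," + count));
    }
}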
I'm following the tutorial at http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html and this is my code
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
import java.util.StringTokenizer;
import java.util.Iterator;
public class WordCount {
public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
private Text word = new Text();
private final IntWritable one = new IntWritable(1);
@Override
public void map(Object key, Text val, Context context) throws IOException, InterruptedException {
String line = val.toString();
StringTokenizer tokenizer = new StringTokenizer(line.toLowerCase());
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
context.write(word, one);
}
}
}
public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterator<IntWritable> value, Context context) throws IOException, InterruptedException {
int sum = 0;
while (value.hasNext()) {
IntWritable val = (IntWritable) value.next();
sum += val.get();
}
context.write(key, new IntWritable(sum));
}
}
public static void main(String[] args) throws Exception {
Configuration config = new Configuration();
Job job = Job.getInstance(config, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(WordCountMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setCombinerClass(WordCountReducer.class);
job.setReducerClass(WordCountReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path("/user/Icarus/words.txt"));
FileOutputFormat.setOutputPath(job, new Path("/user/Icarus/words.out"));
job.waitForCompletion(true);
}
}
But when I run it, instead of calculating the word frequencies, I get this:
bye 1
goodbye 1
hadoop 1
hadoop 1
hello 1
hello 1
hello 1
world 1
I must have missed something very trivial, but I can't figure out what. Help please.
The root cause of this problem is that you are not declaring reduce() with the exact signature Hadoop requires in order to call it. The signature should be as below (reference here):
protected void reduce(KEYIN key, Iterable<VALUEIN> values, org.apache.hadoop.mapreduce.Reducer.Context context)
throws IOException, InterruptedException
Since your reduce() does not match this signature, Hadoop falls back to the default identity implementation in the base Reducer class, which writes out its input unchanged.
That is why you are getting the map output, unchanged, as the reduce output.
For this problem, I can suggest two solutions.
First: try the code below:
public static class WordCountReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
Second: the second solution is quite simple.
Instead of defining the reduce class manually, just set the reducer class to IntSumReducer or LongSumReducer, which will do the same as the code above.
So don't define the WordCountReducer class; add the following instead,
job.setReducerClass(LongSumReducer.class); or
job.setReducerClass(IntSumReducer.class);
based on the count type you want.
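If you go that route, here is a minimal sketch of the relevant driver lines (assuming these sum reducers, which ship in the org.apache.hadoop.mapreduce.lib.reduce package, and that you keep IntWritable as the map output value type):
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

// ...inside main(), replacing the custom reducer wiring:
job.setCombinerClass(IntSumReducer.class); // sums are associative, so combining is safe
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);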
Hope it helps!
One file contains data like this:
robert 10,20,30
john 10,30,20
Another file contains data like this:
surya 10|20|30
sumanth 30|40|10
I have 10 files like this, and I want to get the output from this comma-separated and pipe-separated data using MapReduce.
Here's the code to replace the comma delimiter with a pipe and combine all the lists for the same name into one:
package my.reader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import java.io.IOException;
public class ReadRows {
public static class Map extends Mapper<LongWritable, Text, Text, Text> {
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String[] res = value.toString().split("\t");
if (res[1].contains(",")) {
res[1] = res[1].replace(',','|');
}
context.write(new Text(res[0]), new Text(res[1]));
}
}
public static class Reduce extends Reducer<Text,Text,Text,Text> {
public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException{
String res = "";
for(Text val : values) {
res += "|" + val.toString();
}
context.write(key, new Text(res.substring(1)));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
if (args.length != 2) {
System.err.println("Usage: my.reader.ReadRows <in> <out>");
System.exit(2);
}
Job job = new Job(conf, "ReadRows");
job.setJarByClass(ReadRows.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
And here's the code just to parse them and calculate max:
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String[] res = value.toString().split("\t");
String[] sal;
if (res[1].contains(",")) {
sal = res[1].split(",");
} else {
sal = res[1].split("\\|");
}
Integer maxSal = 0;
for ( String s : sal ) {
maxSal = Math.max(Integer.valueOf(s), maxSal);
}
context.write(new Text(res[0]), new IntWritable(maxSal));
}
}
public static class Reduce extends Reducer<Text,IntWritable,Text,IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException{
Integer maxSal = 0;
for(IntWritable val : values) {
maxSal = Math.max(val.get(), maxSal);
}
context.write(key, new IntWritable(maxSal));
}
}
I have written a custom input format and configured it in the job. Still, that input format is not getting invoked. I have put some System.out.println statements (SOPs) in the code, but none of them are printed when I run it. Even when I comment out the custom input format in the driver class, the output remains the same. Where am I going wrong?
DriverClass
public class TestDriver {
public static void main(String args[]) throws IOException, InterruptedException, ClassNotFoundException{
Configuration conf = new Configuration();
Job job = new Job(conf,"Custom Format");
job.setMapperClass(CustomInputFormatmapper.class);
job.setReducerClass(CustomInputFormatReducer.class);
job.setInputFormatClass(CustomInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(LongWritable.class);
job.getConfiguration().set("fs.file.impl", "com.learn.WinLocalFileSystem");
String inputPath="In\\VISA_Details.csv";
Path inPath=new Path(inputPath);
String outputPath = "C:\\Users\\Desktop\\Hadoop learning\\output\\run1";
Path outPath=new Path(outputPath);
FileInputFormat.setInputPaths(job, inPath );
FileOutputFormat.setOutputPath(job, outPath);
System.out.println(job.waitForCompletion(true));
}
}
CUSTOM INPUTFORMAT
import org.apache.hadoop.mapred.TaskAttemptContext;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
public class CustomInputFormat extends TextInputFormat{
public RecordReader createRecordReader(InputSplit split, TaskAttemptContext context)
{
System.out.println(" ------------ INSIDE createRecordReader()--------------");
return new CustomRecordReader();
}
}
CUSTOM RECORDREADER
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.util.LineReader;
public class CustomRecordReader extends RecordReader {
private CompressionCodecFactory compressionCodecs;
private final int NLINESTOPROCESS = 3;
private long start;
private long pos;
private long end;
private LineReader in;
private int maxLineLength;
private LongWritable key;
private Text value;
@Override
public void close() throws IOException {
// TODO Auto-generated method stub
}
@Override
public Object getCurrentKey() throws IOException, InterruptedException {
// TODO Auto-generated method stub
return null;
}
@Override
public Object getCurrentValue() throws IOException, InterruptedException {
// TODO Auto-generated method stub
return null;
}
@Override
public float getProgress() throws IOException, InterruptedException {
// TODO Auto-generated method stub
return 0;
}
@Override
public void initialize(InputSplit inputsplit,TaskAttemptContext taskattemptcontext)
throws IOException, InterruptedException {
System.out.println(" ---------- INSIDE INITILISE: THIS IS NOT PRINTING----------");
FileSplit split = (FileSplit)inputsplit;
Configuration job = taskattemptcontext.getConfiguration();
maxLineLength = job.getInt("mapred.linerecordreader.maxlength", 2147483647);
start = split.getStart();
end = start + split.getLength();
Path file = split.getPath();
compressionCodecs = new CompressionCodecFactory(job);
CompressionCodec codec = compressionCodecs.getCodec(file);
FileSystem fs = file.getFileSystem(job);
FSDataInputStream fileIn = fs.open(split.getPath());
boolean skipFirstLine = false;
if(codec != null)
{
in = new LineReader(codec.createInputStream(fileIn), job);
end = 9223372036854775807L;
} else
{
if(start != 0L)
{
skipFirstLine = true;
start--;
fileIn.seek(start);
}
in = new LineReader(fileIn, job);
}
if(skipFirstLine)
start += in.readLine(new Text(), 0, (int)Math.min(2147483647L, end - start));
pos = start;
}
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
System.out.println(" ---------- INSIDE nextKeyValue()------------");
if(key==null){
key = new LongWritable();
}
if(value==null){
value = new Text();
}
key.set(pos);
value.clear();
final Text newLine = new Text("\n");
Text newVal = new Text();
int newSize = 0;
for(int i =0;i<NLINESTOPROCESS;i++){
Text v = new Text();
while(pos<end){
newSize = in.readLine(v, maxLineLength,Math.max((int)Math.min(Integer.MAX_VALUE, end-pos),maxLineLength));
value.append(v.getBytes(),0, v.getLength());
value.append(newLine.getBytes(),0, newLine.getLength());
if (newSize == 0) {
break;
}
pos += newSize;
if (newSize < maxLineLength) {
break;
}
}
}
return false;
}
}
MAPPER CLASS
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class CustomInputFormatmapper extends Mapper<LongWritable, Text, LongWritable, LongWritable> {
public void map(LongWritable key, Text val, Context context)throws IOException, InterruptedException{
String value = val.toString();
String[] totalRows = value.split("\n");
int count =totalRows.length;
context.write(new LongWritable(Long.valueOf(count)), new LongWritable(1L));
}
}
REDUCER CLASS
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Reducer;
public class CustomInputFormatReducer extends Reducer<LongWritable, LongWritable, LongWritable, LongWritable> {
public void reduce(LongWritable key, Iterable<LongWritable> val, Context context) throws IOException, InterruptedException{
System.out.println(" --------REDUCER--------");
long count =0;
for(LongWritable vals: val){
count++;
}
context.write(key, new LongWritable(count));
}
}
I am answering my own question, as it may help others who run into the same problem I faced. There was a problem with the packages I was importing.
These are the mistakes I made:
CUSTOMINPUTFORMAT CLASS
1) Missed the @Override annotation
2) Imported org.apache.hadoop.mapred.InputSplit instead of org.apache.hadoop.mapreduce.InputSplit
CUSTOMRECORDREADER
1) Imports were done from org.apache.hadoop.mapred.* instead of org.apache.hadoop.mapreduce.*
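For reference, the corrected input format would look roughly like this (a sketch, using the new-API mapreduce imports and the @Override annotation that lets the compiler catch a non-matching signature; CustomRecordReader must likewise extend the mapreduce RecordReader<LongWritable, Text>):
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;          // mapreduce, not mapred
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;  // mapreduce, not mapred
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class CustomInputFormat extends TextInputFormat {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                               TaskAttemptContext context) {
        // With matching parameter types this genuinely overrides TextInputFormat's
        // method, so the framework calls it and the custom reader is used.
        System.out.println(" ------------ INSIDE createRecordReader()--------------");
        return new CustomRecordReader();
    }
}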
I have written MapReduce code to run on a CDH4 cluster. My requirement was to read the complete file as the value and the file name as the key. For that I wrote custom InputFormat and RecordReader classes.
Custom input format class: FullFileInputFormat.java
import java.io.*;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import FullFileRecordReader;
public class FullFileInputFormat extends FileInputFormat<Text, Text> {
@Override
public RecordReader<Text, Text> getRecordReader(InputSplit split, JobConf jobConf, Reporter reporter) throws IOException {
reporter.setStatus(split.toString());
return new FullFileRecordReader((FileSplit) split, jobConf);
}
}
And the custom RecordReader class: FullFileRecordReader.java
import java.io.BufferedReader;
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
public class FullFileRecordReader implements RecordReader<Text, Text> {
private BufferedReader in;
private boolean processed = false;
private int processedBytes = 0;
private FileSplit fileSplit;
private JobConf conf;
public FullFileRecordReader(FileSplit fileSplit, JobConf conf) {
this.fileSplit = fileSplit;
this.conf = conf;
}
@Override
public void close() throws IOException {
if (in != null) {
in.close();
}
}
@Override
public Text createKey() {
return new Text("");
}
@Override
public Text createValue() {
return new Text("");
}
@Override
public long getPos() throws IOException {
return processedBytes;
}
@Override
public boolean next(Text key, Text value) throws IOException {
Path filePath = fileSplit.getPath();
if (!processed) {
key = new Text(filePath.getName());
value = new Text("");
FileSystem fs = filePath.getFileSystem(conf);
FSDataInputStream fileIn = fs.open(filePath);
byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = fileIn.read(b)) > 0) {
value.append(b, 0, numBytes);
processedBytes += numBytes;
}
processed = true;
return true;
}
return false;
}
@Override
public float getProgress() throws IOException {
return 0;
}
}
Whenever I print the key and value inside the RecordReader class, I see their values, but when I print the same in the mapper class, they are blank. I am unable to understand why the Mapper class is not getting any data for the keys and values.
Currently I have only a Map job and no reduce job. The code is:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import FullFileInputFormat;
public class Source {
public static class Map extends MapReduceBase implements Mapper<Text, Text, Text, Text> {
public void map(Text key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws java.io.IOException {
System.out.println("Processing " + key.toString());
System.out.println("Value: " + value.toString());
}
}
public static void main(String[] args) throws Exception {
JobConf job = new JobConf(Source.class);
job.setJobName("Source");
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setJarByClass(Source.class);
job.setInputFormat(FullFileInputFormat.class);
job.setMapperClass(Map.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
JobClient.runJob(job);
}
}
You're creating new instances in your next method. Hadoop re-uses objects, so you are expected to populate the ones passed in. It should be as simple as amending it as follows:
@Override
public boolean next(Text key, Text value) throws IOException {
Path filePath = fileSplit.getPath();
if (!processed) {
// key = new Text(filePath.getName());
key.set(filePath.getName());
// value = new Text("");
value.clear();
}
I would also recommend pre-sizing the value to avoid the 'growing' pains of the value's underlying byte array. Text has a private method called setCapacity, so you unfortunately can't call it, but if you used a BytesWritable to buffer the file input, you could call setCapacity inside your next method, passing the fileSplit length (note this may still be wrong if your file is compressed, as the split length is the compressed size).
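A rough sketch of that idea, assuming you switch the reader's value type to BytesWritable (so FullFileInputFormat would extend FileInputFormat<Text, BytesWritable> and createValue() would return a BytesWritable); this is not your exact code, just an illustration of reading into a buffer allocated up front:
@Override
public boolean next(Text key, BytesWritable value) throws IOException {
    if (processed) {
        return false;
    }
    Path filePath = fileSplit.getPath();
    key.set(filePath.getName());

    // Allocate the full buffer once; for an uncompressed file the split
    // length equals the file length, so nothing is grown while reading.
    byte[] contents = new byte[(int) fileSplit.getLength()];
    FileSystem fs = filePath.getFileSystem(conf);
    FSDataInputStream fileIn = null;
    try {
        fileIn = fs.open(filePath);
        org.apache.hadoop.io.IOUtils.readFully(fileIn, contents, 0, contents.length);
        value.set(contents, 0, contents.length); // populate the reused value object
        processedBytes = contents.length;
    } finally {
        org.apache.hadoop.io.IOUtils.closeStream(fileIn);
    }
    processed = true;
    return true;
}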