What exactly is the output of the mapper and reducer functions - hadoop

This is a follow-up question to Extracting rows containing specific value using mapReduce and hadoop
Mapper function
public static class MapForWordCount extends Mapper<Object, Text, Text, IntWritable>{
    private IntWritable saleValue = new IntWritable();
    private Text rangeValue = new Text();
    public void map(Object key, Text value, Context con) throws IOException, InterruptedException
    {
        String line = value.toString();
        String[] words = line.split(",");
        for(String word : words)
        {
            if(words[3].equals("40")){
                saleValue.set(Integer.parseInt(words[0]));
                rangeValue.set(words[3]);
                con.write(rangeValue, saleValue);
            }
        }
    }
}
Reducer function
public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable>
{
    private IntWritable result = new IntWritable();
    public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException
    {
        for(IntWritable value : values)
        {
            result.set(value.get());
            con.write(word, result);
        }
    }
}
Output obtained is
40 105
40 105
40 105
40 105
EDIT 1:
But the expected output is
40 102
40 104
40 105
What am I doing wrong ?
What exactly is happening here in mapper and reducer function ?

In the context of the original question - you don't need the loop in either the mapper or the reducer, as it is what duplicates the entries:
public static class MapForWordCount extends Mapper<Object, Text, Text, IntWritable>{
    private IntWritable saleValue = new IntWritable();
    private Text rangeValue = new Text();
    public void map(Object key, Text value, Context con) throws IOException, InterruptedException
    {
        String line = value.toString();
        String[] words = line.split(",");
        if(words[3].equals("40")){
            saleValue.set(Integer.parseInt(words[0]));
            rangeValue.set(words[3]);
            con.write(rangeValue, saleValue);
        }
    }
}
And in the reducer, as suggested by @Serhiy in the original question, you need only one line of code:
public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable>
{
    private IntWritable result = new IntWritable();
    public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException
    {
        con.write(word, null);
    }
}
Regrading "Edit 1" - I will leave it a trivial practice :)

What exactly is happening
You are consuming lines of comma-delimited text, splitting on the commas, and filtering out some values. con.write() should only be called once per line if all you are doing is extracting those values.
The shuffle phase will group all the "40" keys that you output and form a list of all the values that were written with that key. That is what the reducer iterates over.
You should probably try this for your map function.
// Set the values to write
saleValue.set(Integer.parseInt(words[0]));
rangeValue.set(words[3]);
// Filter out only the 40s
if(words[3].equals("40")) {
    // Write out "(40, saleValue)" words.length times
    for(String word : words)
    {
        con.write(rangeValue, saleValue);
    }
}
If you don't want the value duplicated once for every token of the split line, then get rid of the for loop.
All your reducer is doing is printing out what it received from the mapper.

Mapper output would be something like this:
<word, count>
Reducer output would be like this:
<unique word, its total count>
E.g.: a line is read and all the words in it are counted and put into <key, value> pairs:
<40,1>
<140,1>
<50,1>
<40,1> ..
Here 40, 50, 140, ... are all keys and the value is the count of occurrences of that key in a line. This happens in the mapper.
Then these key-value pairs are sent to the reducer, where identical keys are all reduced to a single key and all the values associated with that key are summed to give the value of the key-value pair. So the result of the reducer would be something like:
<40,10>
<50,5>
...
In your case, the reducer isn't doing anything. The unique values/words found by the mapper are just given out as the output.
Ideally, you are supposed to reduce and get an output like: "40,150" was found 5 times on the same line.
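For reference, the canonical word-count mapper and reducer that this description corresponds to look roughly like this (a sketch, assuming comma-separated tokens and the usual org.apache.hadoop.io and org.apache.hadoop.mapreduce imports):
public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context con) throws IOException, InterruptedException {
        // Emit <token, 1> for every token in the line
        for (String token : value.toString().split(",")) {
            word.set(token);
            con.write(word, one);
        }
    }
}

public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException {
        // Sum all counts emitted for this key, producing e.g. <40, 10>
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        con.write(key, result);
    }
}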

Related

Get Top N items from mapper output - Mapreduce

My Mapper task returns the following output:
2 c
2 g
3 a
3 b
6 r
I have written the reducer code and a KeyComparator that produce the correctly sorted output, but how do I get only the top 3 (top N by count) out of the mapper output:
public static class WLReducer2 extends
        Reducer<IntWritable, Text, Text, IntWritable> {
    @Override
    protected void reduce(IntWritable key, Iterable<Text> values,
            Context context) throws IOException, InterruptedException {
        for (Text x : values) {
            context.write(new Text(x), key);
        }
    }
}
public static class KeyComparator extends WritableComparator {
    protected KeyComparator() {
        super(IntWritable.class, true);
    }

    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        // Sort keys in descending order of count
        IntWritable ip1 = (IntWritable) w1;
        IntWritable ip2 = (IntWritable) w2;
        int cmp = -1 * ip1.compareTo(ip2);
        return cmp;
    }
}
This is the reducer output:
r 6
b 3
a 3
g 2
c 2
The expected output from the reducer is the top 3 by count, which is:
r 6
b 3
a 3
Restrict your output from the reducer. Something like this:
public static class WLReducer2 extends
        Reducer<IntWritable, Text, Text, IntWritable> {
    int count = 0;

    @Override
    protected void reduce(IntWritable key, Iterable<Text> values,
            Context context) throws IOException, InterruptedException {
        for (Text x : values) {
            // Only the first three records (already sorted descending) are written
            if (count < 3)
                context.write(new Text(x), key);
            count++;
        }
    }
}
Set the number of reducers to 1: job.setNumReduceTasks(1).
If your Top-N elements fit in memory and your process can be aggregated using only one reducer, you can use a TreeMap to store the Top-N elements.
Instantiate an instance-variable TreeMap in the setup() method of your reducer.
Inside your reduce() method you should aggregate all the values for the key group and then compare the result with the first (lowest) key in the tree, map.firstKey(). If your current value is bigger than the lowest value in the tree, insert the current value into the TreeMap, map.put(value, item), and then delete the lowest value from the tree, map.remove(value).
In the reducer's cleanup() method, write all the TreeMap's elements to the output in the required order.
Note: The value you compare your records by must be the key in your TreeMap, and the value of your TreeMap should be the description, tag, letter, etc. associated with that number.
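A sketch of that approach, assuming a single reducer, N = 3, java.util.TreeMap, and that the reducer's input key is already the count (the class and field names are illustrative; ties on the count overwrite each other here and would need a TreeMap<Integer, List<String>> to be preserved):
public static class TopNReducer extends Reducer<IntWritable, Text, Text, IntWritable> {
    private static final int N = 3;
    // Sorted by count; firstKey() is always the smallest count currently retained
    private TreeMap<Integer, String> topN;

    @Override
    protected void setup(Context context) {
        topN = new TreeMap<Integer, String>();
    }

    @Override
    protected void reduce(IntWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            topN.put(key.get(), value.toString());
            // Once more than N entries are held, drop the one with the smallest count
            if (topN.size() > N) {
                topN.remove(topN.firstKey());
            }
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Emit the retained entries in descending order of count
        for (Map.Entry<Integer, String> entry : topN.descendingMap().entrySet()) {
            context.write(new Text(entry.getValue()), new IntWritable(entry.getKey()));
        }
    }
}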

Hadoop not all values get assembled for one key

I have some data that I would like to aggregate by key using Mapper code and then perform something on all values that belong to a key using Reducer code. For example if I have:
key = 1, val = 1,
key = 1, val = 2,
key = 1, val = 3
I would like to get key=1, val=[1,2,3] in my Reducer.
The thing is, I get something like
key = 1, val=[1,2]
key = 1, val=[3]
Why is that so?
I thought that all the values for one specific key would be assembled in one reducer, but now it seems that there can be more than one (key, val[]) pair, since there can be multiple reducers. Is that so?
Should I set number of reducers to be 1?
I'm new to Hadoop so this confuses me.
Here's the code
public class SomeJob {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException
    {
        Job job = new Job();
        job.setJarByClass(SomeJob.class);
        FileInputFormat.addInputPath(job, new Path("/home/pera/data/input/some.csv"));
        FileOutputFormat.setOutputPath(job, new Path("/home/pera/data/output"));
        job.setMapperClass(SomeMapper.class);
        job.setReducerClass(SomeReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.waitForCompletion(true);
    }
}
public class SomeMapper extends Mapper<LongWritable, Text, Text, Text>{
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String parts[] = line.split(";");
        context.write(new Text(parts[0]), new Text(parts[4]));
    }
}
public class SomeReducer extends Reducer<Text, Text, Text, Text>{
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        String properties = "";
        for(Text value : values)
        {
            properties += value + " ";
        }
        context.write(key, new Text(properties));
    }
}

Read values wrapped in Hadoop ArrayWritable

I am new to Hadoop and Java. My mapper outputs Text and ArrayWritable. I am having trouble reading the ArrayWritable values: I am unable to cast the .get() values to int. The mapper and reducer code are attached. Can someone please help me correct my reducer code so it can read the ArrayWritable values?
public static class Temp2Mapper extends Mapper<LongWritable, Text, Text, ArrayWritable>{
    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException{
        String line = value.toString();
        String date = line.substring(07,14);
        int maxTemp, minTemp, avgTemp;
        IntArrayWritable carrier = new IntArrayWritable();
        IntWritable innercarrier[] = new IntWritable[3];
        maxTemp = Integer.parseInt(line.substring(39,45));
        minTemp = Integer.parseInt(line.substring(47,53));
        avgTemp = Integer.parseInt(line.substring(63,69));
        if (maxTemp != MISSING)
            innercarrier[0] = new IntWritable(maxTemp); // maximum temperature
        if (minTemp != MISSING)
            innercarrier[1] = new IntWritable(minTemp); // minimum temperature
        if (avgTemp != MISSING)
            innercarrier[2] = new IntWritable(avgTemp); // average temperature of 24 hours
        carrier.set(innercarrier);
        context.write(new Text(date), carrier); // output Text and ArrayWritable
    }
}
public static class Temp2Reducer
        extends Reducer<Text, ArrayWritable, Text, IntWritable>{

    @Override
    public void reduce(Text key, Iterable<ArrayWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        int[] arr = new int[3];
        for (ArrayWritable val : values) {
            arr = (Int) val.get(); // Error: cannot cast Writable to int
            max = Math.max(max, arr[0]);
        }
        context.write(key, new IntWritable(max));
    }
}
The ArrayWritable#get method returns an array of Writable.
You can't cast an array of Writable to int. What you can do is:
iterate over this array
cast each item (which will be of type Writable) of the array to IntWritable
use IntWritable#get method to get the int value.
for (ArrayWritable val : values) {
    for (Writable writable : val.get()) { // iterate
        IntWritable intWritable = (IntWritable) writable; // cast
        int value = intWritable.get(); // get
        // do your thing with the int value
    }
}

Writing a value to file without moving to reducer

I have an input of records like this,
a|1|Y,
b|0|N,
c|1|N,
d|2|Y,
e|1|Y
Now, in the mapper, I have to check the value of the third column. If it is 'Y', that record has to be written directly to the output file without going to the reducer; otherwise, i.e. for 'N' value records, it has to go to the reducer for further processing.
So,
a|1|Y,
d|2|Y,
e|1|Y
should not go to reducer but
b|0|N,
c|1|N
should go to the reducer and then to the output file.
How can I do this?
What you can probably do is use MultipleOutputs to separate out records of the 'Y' and 'N' types into two different files from the mappers.
Next, you run separate jobs for the two newly generated 'Y' and 'N' type data sets.
For the 'Y' type, set the number of reducers to 0 so that no reducers are used. And for the 'N' type, do it the way you want using reducers.
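A rough sketch of the MultipleOutputs idea (the named outputs "yes" and "no" are illustrative, and this assumes one pipe-delimited record per line):
public static class SplitMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
    private MultipleOutputs<NullWritable, Text> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<NullWritable, Text>(context);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] parts = value.toString().split("\\|");
        if (parts[2].equals("Y")) {
            // 'Y' records go straight to a separate named output
            mos.write("yes", NullWritable.get(), value);
        } else {
            // 'N' records go to a second named output (or to context.write() for a reducer pass)
            mos.write("no", NullWritable.get(), value);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close();
    }
}
In the driver, each named output has to be registered, e.g. MultipleOutputs.addNamedOutput(job, "yes", TextOutputFormat.class, NullWritable.class, Text.class), and job.setNumReduceTasks(0) would make this splitting job map-only.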
Hope this helps.
See if this works:
public class Xxxx {

    public static class MyMapper extends
            Mapper<LongWritable, Text, LongWritable, Text> {

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            FileSystem fs = FileSystem.get(context.getConfiguration());
            Random r = new Random();
            FileSplit split = (FileSplit) context.getInputSplit();
            String fileName = split.getPath().getName();
            FSDataOutputStream out = fs.create(new Path(fileName + "-m-" + r.nextInt()));
            String parts[];
            String line = value.toString();
            String[] splits = line.split(",");
            for (String s : splits) {
                parts = s.split("\\|");
                if (parts[2].equals("Y")) {
                    out.writeBytes(line);
                } else {
                    context.write(key, value);
                }
            }
            out.close();
            fs.close();
        }
    }

    public static class MyReducer extends
            Reducer<LongWritable, Text, LongWritable, Text> {

        public void reduce(LongWritable key, Iterable<Text> values,
                Context context) throws IOException, InterruptedException {
            for (Text t : values) {
                context.write(key, t);
            }
        }
    }

    /**
     * @param args
     * @throws IOException
     * @throws InterruptedException
     * @throws ClassNotFoundException
     */
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");
        conf.set("mapred.job.tracker", "localhost:9001");
        Job job = new Job(conf, "Xxxx");
        job.setJarByClass(Xxxx.class);
        Path outPath = new Path("/output_path");
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        FileInputFormat.addInputPath(job, new Path("/input.txt"));
        FileOutputFormat.setOutputPath(job, outPath);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
In your map function, you will get the input line by line. Split it using | as the delimiter (with the String.split() method, to be exact). Note that | has to be escaped, because split() takes a regular expression:
String[] line = value.toString().split("\\|");
Access the third element of this array with line[2].
Then, using a simple if-else statement, emit the records with the 'N' value for further processing.

Duplicate "keys" in map-reduce output?

As we all know, either this
public static class SReducer extends MapReduceBase implements Reducer<Text, Text, Text, Text>
{
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException
    {
        StringBuilder sb = new StringBuilder();
        while (values.hasNext())
        {
            sb.append(values.next().toString());
        }
        output.collect(key, new Text(sb.toString()));
    }
}
or
public static class Reduce extends MapReduceBase implements Reducer<Text, Text, Text, Text>
{
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException
    {
        boolean start = true;
        StringBuilder sb = new StringBuilder();
        while (values.hasNext())
        {
            String value = values.next().toString();
            if(!start)
            {
                sb.append(value);
            }
            start = false;
        }
        output.collect(key, new Text(sb.toString()));
    }
}
this is the kind of reducer function we use to eliminate duplicate "values" in the output. But what should I do to eliminate duplicate "keys"? Any ideas?
Thanks.
PS (more info): In my <key, values> pairs, the keys contain links and the values contain words. In my output each word occurs only once, but I get many duplicate links.
In the Reducer, there will be one call to reduce() for each unique key that the Reducer receives. It will receive all values for that key. But if you only care about the keys, and only about unique keys, just ignore the values entirely. You will get exactly one reduce() call per key; do whatever you want with that (non-duplicated) key.
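In code, using the same old-API style as the question, that reducer can be as small as this (the empty Text value is just a placeholder):
public static class UniqueKeyReducer extends MapReduceBase implements Reducer<Text, Text, Text, Text>
{
    public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException
    {
        // reduce() is called exactly once per unique key, so each key is written exactly once
        output.collect(key, new Text(""));
    }
}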
