Method vs. class-level variables in Hadoop MapReduce - performance

This is a question regarding the performance of writable variables and allocation within a map reduce step. Here is a reducer:
static public class MyReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text val : values) {
            context.write(key, new Text(val)); // allocates a new Text for every value
        }
    }
}
Or is this better performance-wise:
static public class MyReducer extends Reducer<Text, Text, Text, Text> {
    private Text myText = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text val : values) {
            myText.set(val); // reuses a single Text instance
            context.write(key, myText);
        }
    }
}
In Hadoop: The Definitive Guide all the examples use the first form, but I'm not sure whether that is just to keep the samples short or because it is more idiomatic.

The book may use the first form because it is more concise. It is, however, less efficient: for large input files, that approach creates a very large number of short-lived objects, and the resulting allocation and garbage-collection overhead slows things down. Performance-wise, the second approach is preferable.
Some references that discuss this issue:
Tip 7 here,
On Hadoop object re-use, and
This JIRA.

Yes, the second approach is preferable if the reducer has a lot of data to process. The first approach keeps allocating new objects and relies on the garbage collector to clean them up.

Related

Could the reducer class not be launched by any chance? Can't see System.out.println statements in the reducer logs

I have a driver class, a mapper class and a reducer class. The MapReduce job runs fine, but the desired output is not produced. I put System.out.println statements in the reducer and looked at the logs of both the mapper and the reducer. The statements I put in the mapper show up in the logs, but the println statements in the reducer do not. Could it be that the reducer is not launched at all?
This is the log file from the reducer.
I assume this question is based on the code in your earlier question: mapreduce composite Key sample - doesn't show the desired output
public class CompositeKeyReducer extends Reducer<Country, IntWritable, Country, IntWritable> {
    public void reduce(Country key, Iterator<IntWritable> values, Context context) throws IOException, InterruptedException {
    }
}
The reduce isn't running because the reduce method signature is wrong. You have:
public void reduce(Country key, Iterator<IntWritable> values, Context context)
It should be:
public void reduce(Country key, Iterable<IntWritable> values, Context context)
To make sure this doesn't happen again, add the @Override annotation to the reduce method. The compiler will then tell you if you've got the signature wrong.
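As an illustration, a minimal sketch of the corrected reducer (the summing body is an assumption about what the original code did; only the signature and the @Override annotation matter here):
public class CompositeKeyReducer extends Reducer<Country, IntWritable, Country, IntWritable> {

    @Override // the compiler now rejects any method that does not actually override Reducer.reduce
    protected void reduce(Country key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}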
No change in the code. It works now.
All I did was restart my Hadoop Cloudera image, and it works now. I can't believe this happened.

Are member variables of Hadoop Reducer class thread-safe?

I'm a newbie to the Hadoop ecosystem.
What I want to ask is: are member variables of a Reducer class thread-safe?
1. The Mapper passes data to the Reducer with a unique key.
2. A collection (ConcurrentLinkedQueue) is a member variable of the Reducer class.
3. The collection is initialized in the setup(Context) method of the Reducer class.
4. Some Query objects (jOOQ) are created and appended to the collection in the reduce(...) method of the Reducer class.
5. jooq.batch(collection).execute() is called in the last line of the reduce(...) method whenever a specified threshold (e.g. 1000) is reached, and the collection is then cleared with clear().
6. Whatever remains in the collection from step 4 is processed in the cleanup(Context) method, the same way as in step 5.
Question: Do I need to synchronize step 5?
Code:
public class SomeReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
    private Queue<Query> queries;

    @Override
    protected void setup(Context context) {
        ...
        queries = new ConcurrentLinkedQueue<>();
    }

    @Override
    protected void cleanup(Context context) {
        if (!queries.isEmpty()) db.batch(queries).execute();
        ...
    }

    @Override
    public void reduce(Text key, Iterable<Session> sessions, Context context) {
        for (...iteration...) { queries.add(...create Query object...); }

        // Does the block below need to be synchronized?
        if (queries.size() >= 1000) {
            db.batch(queries).execute();
            queries.clear();
        }
    }
}
A Reducer is thread-safe. You will most likely have multiple Reducers running in parallel, but they are completely isolated from each other and only see their own data and instance variables.
So to answer your question: you do not need to synchronize your code, or even use a ConcurrentLinkedQueue; it could just be a normal ArrayList.
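A minimal sketch of that simplification, keeping the db and Query placeholders from the question (this is not a complete, compilable example; the elided parts of the original code are left as comments):
public class SomeReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
    private List<Query> queries; // a plain list is fine: each reducer task runs single-threaded

    @Override
    protected void setup(Context context) {
        queries = new ArrayList<>();
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) {
        // ... build Query objects from the values and add them to the list ...
        if (queries.size() >= 1000) { // flush in batches, no synchronization needed
            db.batch(queries).execute();
            queries.clear();
        }
    }

    @Override
    protected void cleanup(Context context) {
        if (!queries.isEmpty()) db.batch(queries).execute();
    }
}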

Hashmap in each mapper should be used in a single reducer

In one of my classes I'm using a HashMap, and I'm calling that class inside my mapper, so each mapper has its own HashMap. The key of the HashMap is my filename and the value is a Set, so each HashMap contains a filename and a Set. Now I want to take all the HashMaps containing the same filename, merge all the values (the Sets), and then write that HashMap into my HDFS file. Can I combine all the HashMaps in a single reducer?
Yes, you can do that. If your mapper emits its output in the form of a HashMap, you can use Hadoop's MapWritable as the mapper's output value.
For example:
public class MyMapper extends Mapper<LongWritable, Text, Text, MapWritable>
You have to convert your HashMap into MapWritable format:
MapWritable mapWritable = new MapWritable();
for (Map.Entry<String, String> entry : yourHashMap.entrySet()) {
    if (null != entry.getKey() && null != entry.getValue()) {
        mapWritable.put(new Text(entry.getKey()), new Text(entry.getValue()));
    }
}
Then write the MapWritable to your context:
ctx.write(new Text("my_key"), mapWritable);
In the Reducer class you then take MapWritable as your input value:
public class MyReducer extends Reducer<Text, MapWritable, Text, Text>
public void reduce(Text key, Iterable<MapWritable> values, Context ctx) throws IOException, InterruptedException
Then iterate through the map and extract the values the way you want. For example:
for (MapWritable entry : values) {
    for (Entry<Writable, Writable> extractData : entry.entrySet()) {
        // your logic for the data goes here
    }
}
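For the specific case in the question (merging the per-mapper Sets for the same filename), a rough sketch of such a reducer could look like the following. It assumes the mapper emits the filename as the key and the set elements as the values of the MapWritable, which is an interpretation of the description above rather than something stated in it:
public static class FileSetReducer extends Reducer<Text, MapWritable, Text, Text> {

    @Override
    protected void reduce(Text filename, Iterable<MapWritable> values, Context ctx)
            throws IOException, InterruptedException {
        // union of the set elements produced by all mappers for this filename
        Set<String> merged = new HashSet<>();
        for (MapWritable map : values) {
            for (Writable element : map.values()) {
                merged.add(element.toString());
            }
        }
        ctx.write(filename, new Text(merged.toString()));
    }
}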

Use two Mappers on same file simultaneously in Hadoop

Assume there is a file and two different, independent mappers to be executed on that file in parallel. To do that, it seems we would need a copy of the file for each mapper.
What I want to know is: is it possible to use the same file for both mappers? That would reduce resource utilization and make the system more time efficient.
Is there any research in this area, or any existing tool in Hadoop, that can help with this?
Assuming that both Mappers have the same K,V signature, you could use a delegating mapper and then call the map method of your two mappers:
public class DelegatingMapper extends Mapper<LongWritable, Text, Text, Text> {
    public Mapper<LongWritable, Text, Text, Text> mapper1;
    public Mapper<LongWritable, Text, Text, Text> mapper2;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        mapper1 = new MyMapper1<LongWritable, Text, Text, Text>();
        mapper1.setup(context);
        mapper2 = new MyMapper2<LongWritable, Text, Text, Text>();
        mapper2.setup(context);
    }

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // the setup, map and cleanup methods will need to be public in each delegate class
        mapper1.map(key, value, context);
        mapper2.map(key, value, context);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mapper1.cleanup(context);
        mapper2.cleanup(context);
    }
}
At a high level, there are two scenarios I can imagine for the question at hand.
Case 1:
If you are trying to write the SAME implementation in both Mapper classes to process the same input file with the sole aim of efficient resource utilization, this probably isn't the correct approach, because when a file is saved in the cluster it is divided into blocks and replicated across data nodes.
This basically gives you the most efficient resource utilization as all the data blocks for the same input file are processed in PARALLEL.
Case 2:
If you are trying to write two DIFFERENT Mapper implementations (each with its own business logic) for some particular workflow you want to execute based on your business requirements, then yes, you can pass the same input file to two different mappers using the MultipleInputs class:
MultipleInputs.addInputPath(job, file1, TextInputFormat.class, Mapper1.class);
MultipleInputs.addInputPath(job, file1, TextInputFormat.class, Mapper2.class);
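For completeness, a minimal driver sketch around those two calls might look like the following; MyDriver, Mapper1, Mapper2, CommonReducer, file1 and outputDir are illustrative placeholders, not names from the question:
Job job = Job.getInstance(new Configuration(), "two mappers, one input");
job.setJarByClass(MyDriver.class);

// feed the same input path to both mapper implementations
MultipleInputs.addInputPath(job, file1, TextInputFormat.class, Mapper1.class);
MultipleInputs.addInputPath(job, file1, TextInputFormat.class, Mapper2.class);

// both mappers must emit the same key/value types for the common reducer
job.setReducerClass(CommonReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
FileOutputFormat.setOutputPath(job, outputDir);

System.exit(job.waitForCompletion(true) ? 0 : 1);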
This may only be a workaround, depending on what you want to implement.
Thanks.

Hadoop MapReduce: how to combine the first reducer's output and the first mapper's input as input for a second mapper?

I need to implement a piece of functionality using MapReduce.
The requirement is described below.
The input for the mapper is a file containing two columns: productId and salesCount.
The reducer's output is the sum of salesCount.
The requirement is that I need to calculate salesCount / sum(salesCount).
For this I am planning to use nested MapReduce jobs.
But for the second mapper I need to use the first reducer's output and the first mapper's input.
How can I implement this? Or is there an alternative way?
Regards
Vinu
You can use ChainMapper and ChainReducer to pipe Mappers and Reducers the way you want. Please have a look here.
The following is similar to the code snippet you would need to implement:
JobConf conf = new JobConf(); // main job configuration
JobConf mapBConf = new JobConf(false);
JobConf reduceConf = new JobConf(false);
ChainMapper.addMapper(conf, FirstMapper.class, FirstMapperInputKey.class, FirstMapperInputValue.class,
FirstMapperOutputKey.class, FirstMapperOutputValue.class, false, mapBConf);
ChainReducer.setReducer(conf, FirstReducer.class, FirstMapperOutputKey.class, FirstMapperOutputValue.class,
FirstReducerOutputKey.class, FirstReducerOutputValue.class, true, reduceConf);
ChainReducer.addMapper(conf, SecondMapper.class, FirstReducerOutputKey.class, FirstReducerOutputValue.class,
SecondMapperOutputKey.class, SecondMapperOutputValue.class, false, null);
ChainReducer.setReducer(conf, SecondReducer.class, SecondMapperOutputKey.class, SecondMapperOutputValue.class, SecondReducerOutputKey.class, SecondReducerOutputValue.class, true, reduceConf);
Or, if you don't want to use multiple Mappers and Reducers, you can do the following:
public static class ProductIndexerMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, LongWritable> {
    private static Text productId = new Text();
    private static LongWritable salesCount = new LongWritable();

    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        String[] values = value.toString().split("\t");
        productId.set(values[0]);
        salesCount.set(Long.parseLong(values[1]));
        output.collect(productId, salesCount);
    }
}

public static class ProductIndexerReducer extends MapReduceBase implements Reducer<Text, LongWritable, Text, LongWritable> {
    private static LongWritable productWritable = new LongWritable();

    @Override
    public void reduce(Text key, Iterator<LongWritable> values,
            OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        // buffer the values so they can be iterated a second time once the total is known
        List<LongWritable> items = new ArrayList<LongWritable>();
        long total = 0;
        LongWritable item = null;
        while (values.hasNext()) {
            item = values.next();
            total += item.get();
            items.add(new LongWritable(item.get())); // copy: Hadoop reuses the same Writable instance during iteration
        }
        Iterator<LongWritable> newValues = items.iterator();
        while (newValues.hasNext()) {
            // note: this is integer division; emit a DoubleWritable instead if a fractional ratio is needed
            productWritable.set(newValues.next().get() / total);
            output.collect(key, productWritable);
        }
    }
}
With the use case in hand, I believe we don't need two different mappers/MapReduce jobs to achieve this. (This is an extension of the answer given in the comments above.)
Let's assume you have a very large input file split into multiple blocks in HDFS. When you trigger a MapReduce job with this file as input, multiple mappers (equal to the number of input blocks) will start executing in parallel.
In your mapper implementation, read each line from the input and write the productId as the key and the saleCount as the value to the context. This data is passed to the Reducer.
We know that in an MR job all the data with the same key is passed to the same reducer. Now, in your reducer implementation you can calculate the sum of all saleCounts for a particular productId.
Note: I'm not sure what the 'salesCount' in your numerator refers to.
Assuming it is the number of occurrences of a particular product, use a counter to accumulate that total in the same loop where you are calculating SUM(saleCount). So we have:
totalCount -> number of occurrences of a product
sumSaleCount -> sum of the saleCount values for that product
Now, you can directly divide the above values: totalCount/sumSaleCount.
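A rough sketch of such a reducer, written against the newer org.apache.hadoop.mapreduce API and emitting the ratio as a double (the class and field names are illustrative, not taken from the question):
public static class RatioReducer extends Reducer<Text, LongWritable, Text, DoubleWritable> {

    @Override
    protected void reduce(Text productId, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long totalCount = 0;   // number of occurrences of this product
        long sumSaleCount = 0; // sum of the saleCount values for this product
        for (LongWritable val : values) {
            totalCount++;
            sumSaleCount += val.get();
        }
        context.write(productId, new DoubleWritable((double) totalCount / sumSaleCount));
    }
}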
Hope this helps! Please let me know if you have a different use case in mind.
