Multiple mappers in Hadoop

I am trying to run two independent mappers on the same input file in a single Hadoop job, with the output of both mappers going into a single reducer. I am using the MultipleInputs class. It was working fine with both mappers, but yesterday I noticed that only one map function runs: the second MultipleInputs statement seems to overwrite the first one. I can't find any change in the code that would explain this sudden difference in behavior. Please help me with this. The main function is:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "mapper accepting whole file at once");
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(IntWritable.class);
    job.setJarByClass(TestMultipleInputs.class);
    job.setMapperClass(Map2.class);
    job.setMapperClass(Map1.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(NLinesInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    job.setMapOutputKeyClass(IntWritable.class);
    MultipleInputs.addInputPath(job, new Path("Rec"), NLinesInputFormat.class, Map1.class);
    MultipleInputs.addInputPath(job, new Path("Rec"), NLinesInputFormat.class, Map2.class);
    FileOutputFormat.setOutputPath(job, new Path("testMulinput"));
    job.waitForCompletion(true);
}
Whichever Map class is used in the last MultipleInputs statement gets executed; here, that means Map2.class.

You won't be able to read from the same file at the same time with two separate Mappers (at least not without some devilishly hackish trickery that you should probably avoid).
In any case, you can't have two Mapper classes set for the same job: the later call to setMapperClass(class) always overwrites the earlier one. If you need two Mappers to run simultaneously, you'll need to create two separate jobs and ensure that there are enough map slots available on your cluster to run them both at once (if there aren't any available after the first job starts, the second job will have to wait for it to finish, running sequentially rather than simultaneously). A sketch of the two-job approach follows.
However, since there is no guarantee that the Mappers will run concurrently, make sure the functionality of your MapReduce jobs does not rely on their concurrent execution.
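If you go the two-job route, a minimal sketch (assuming job1 and job2 are fully configured Job instances, each with its own Mapper and its own output path) is to submit both without blocking, since Job.submit() returns immediately while waitForCompletion() blocks:

// Assumes job1 and job2 are configured Jobs with distinct output paths.
job1.submit();   // returns immediately
job2.submit();   // both jobs are now queued/running

// Block until both have finished before inspecting results.
while (!job1.isComplete() || !job2.isComplete()) {
    Thread.sleep(5000);   // poll every 5 seconds
}
System.out.println("job1 succeeded: " + job1.isSuccessful());
System.out.println("job2 succeeded: " + job2.isSuccessful());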

Both mappers can't read the same file at the same time.
Solution (workaround):
Create a duplicate of the input file (in this case, let the duplicate of the rec file be rec1). Then feed mapper1 with rec and mapper2 with rec1.
Both mappers are executed in parallel, so you don't need to worry about the reducer output: the outputs of both mappers are shuffled so that equal keys from both files go to the same reducer, and the final output is what you want. A sketch of the driver change is below.
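A minimal sketch of that driver change, assuming the input has been duplicated to a second path (Rec and Rec1 stand for the original file and its copy):

// Each input path gets its own Mapper. The paths must differ:
// registering two mappers for the same path keeps only the second one.
MultipleInputs.addInputPath(job, new Path("Rec"), NLinesInputFormat.class, Map1.class);
MultipleInputs.addInputPath(job, new Path("Rec1"), NLinesInputFormat.class, Map2.class);
FileOutputFormat.setOutputPath(job, new Path("testMulinput"));
job.waitForCompletion(true);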
Hope this helps others who are facing a similar issue.

Chaining Map Reduce Program

I have a situation: during a POC I want to create a nested MapReduce flow within one job. For example, Map M1's output goes to Reducer R1, then R1's output goes to M2, and the final output comes either from M2 or from an R2 run on M2's output.
Single job ID: M1 -> R1 -> M2 -> R2... The final output will be in a single output file.
Can we do it without Oozie?
You can chain multiple jobs in your Driver class. First, create the job for the first MapReduce by defining all the required configuration, then start it as usual by calling:
job1.waitForCompletion(true);
This waits until the job is finished. Once it returns, check the final status of the first job, failed or succeeded, to decide on the appropriate next action.
If the first job completed successfully, launch the next MapReduce in the same way. First define the required parameters, then launch the job with:
job2.waitForCompletion(true);
The important thing is that the output path of the first job must be the input path of the second job. This is serial (sequential) job chaining, because the jobs run one after another. A sketch follows.
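A minimal sketch of this pattern (M1, R1, M2, R2 are the hypothetical Mapper/Reducer classes from the question; key/value class setters are elided):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);
        Path intermediate = new Path(args[1] + "/intermediate"); // handoff dir between jobs
        Path output = new Path(args[1] + "/final");

        // Job 1: M1 -> R1
        Job job1 = new Job(conf, "stage 1");
        job1.setJarByClass(ChainDriver.class);
        job1.setMapperClass(M1.class);
        job1.setReducerClass(R1.class);
        // set map/output key and value classes as M1/R1 require
        FileInputFormat.addInputPath(job1, input);
        FileOutputFormat.setOutputPath(job1, intermediate);

        // Launch job 2 only if job 1 succeeded.
        if (!job1.waitForCompletion(true)) {
            System.exit(1);
        }

        // Job 2: M2 -> R2, reading job 1's output.
        Job job2 = new Job(conf, "stage 2");
        job2.setJarByClass(ChainDriver.class);
        job2.setMapperClass(M2.class);
        job2.setReducerClass(R2.class);
        FileInputFormat.addInputPath(job2, intermediate);
        FileOutputFormat.setOutputPath(job2, output);

        System.exit(job2.waitForCompletion(true) ? 0 : 1);
    }
}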
You can also make use of JobControl, wherein you can execute a number of MapReduce jobs in a sequence. In your case there are two mappers and one or two reducers. You can have two MapReduce jobs, and for the second job you can set the number of reducers to zero if you don't require a reducer; a sketch is below.
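A minimal sketch of the JobControl route, assuming job1 and job2 are configured Job instances and job2 reads job1's output (ControlledJob and JobControl live in org.apache.hadoop.mapreduce.lib.jobcontrol):

ControlledJob cJob1 = new ControlledJob(job1.getConfiguration());
cJob1.setJob(job1);
ControlledJob cJob2 = new ControlledJob(job2.getConfiguration());
cJob2.setJob(job2);
cJob2.addDependingJob(cJob1);   // job2 starts only after job1 succeeds

JobControl control = new JobControl("chain");
control.addJob(cJob1);
control.addJob(cJob2);

// JobControl is a Runnable; drive it from its own thread and poll.
Thread t = new Thread(control);
t.start();
while (!control.allFinished()) {
    Thread.sleep(5000);
}
control.stop();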

How to implement multiple reducers in a single MapReduce Job

I have a huge data set and I need to perform different functions on the same data.
I would like to have four output files. Since the four operations are different, can I use four partitioners and four reducers to implement this? Is that possible, or should I write four jobs to perform it? Please help me!
First Approach
I think you should implement the code in a single reduce method and emit n keys depending on the process performed. For example, say you implement techniques A, B, C, and D; then in your mapper you could do this (pseudo-code):
dataA = processA(key, value);
context.write(new Text("A"), dataA);
dataB = processB(key, value);
context.write(new Text("B"), dataB);
dataC = processC(key, value);
context.write(new Text("C"), dataC);
dataD = processD(key, value);
context.write(new Text("D"), dataD);
Be careful about the data types of the output; also, the output key could be more complex.
Second Approach
You could generate N MapReduce applications in the same Java project, reuse the Mapper, and develop N reducers.
In each main class you set the corresponding Reducer with job.setReducerClass; the Mapper stays the same. See the sketch below.
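A minimal sketch of such a shared driver helper (Driver, SharedMapper, and the ReducerA/ReducerB variants are hypothetical names; each main class would call it with a different reducer and output path):

// One helper reused by N main classes; only the reducer varies.
public static Job buildJob(Configuration conf,
                           Class<? extends Reducer> reducerClass,
                           Path in, Path out) throws IOException {
    Job job = new Job(conf, "variant-" + reducerClass.getSimpleName());
    job.setJarByClass(Driver.class);
    job.setMapperClass(SharedMapper.class); // identical in every variant
    job.setReducerClass(reducerClass);      // ReducerA, ReducerB, ...
    // set output key/value classes to match your reducers here
    FileInputFormat.addInputPath(job, in);
    FileOutputFormat.setOutputPath(job, out);
    return job;
}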
You just need to specify the number of reducers in your MapReduce job config. The default partitioner will distribute data to reducers based on the hash of the key modulo the number of specified reducers.
To override the behavior of the default partitioner, you can implement your own custom partitioner specifying how your data should get across to the reducers.
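For reference, Hadoop's default HashPartitioner boils down to this:

public int getPartition(K key, V value, int numReduceTasks) {
    // mask the sign bit so the result is never negative
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}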
---Edit to answer questions in the comments section---
How can I specify more than one reducer class in the MapReduce driver?
To set the number of reducers, you can set it in the job conf like below:
int numReducers = /*number of reducers you want*/;
job.setNumReduceTasks(numReducers);
Should I write four different jobs for this, or can I do it with a single job?
Hadoop MR jobs are I/O intensive; in your MR job design you should work on minimizing the I/O and maximizing parallelism as much as possible.
If your reducers need the same input to generate all four outputs, it is better to keep a single job. But another consideration is the skewness of the data for each output: for example, output1 may have more processing time, and most of the incoming data may need to be processed for output1.
If the time taken to process output1 is much higher than the total time taken to process output2 + output3 + output4, then you should consider splitting the processing of output1 into multiple steps.
However, if all four outputs have more or less equal processing times and consume the same data throughout, it is better to have some conditional processing logic in the reducer and let your custom partitioner decide which data goes to which reducer.
Your custom partitioner can have a check like: this incoming record qualifies as contributing to "GC content", so let it go to Reducer 3.
But if your incoming data needs to be processed for more than one output/distribution, use conditional processing, and to write multiple output files from the same reducer use MultipleOutputs.
You can google it and find usage examples; it lets you write output to multiple folders/files at the same time from within a Mapper or Reducer.
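A minimal sketch of MultipleOutputs in a reducer (the named outputs "outputA"/"outputB", the routing rule, and the key/value types are illustrative assumptions):

// Driver: declare the named outputs up front.
MultipleOutputs.addNamedOutput(job, "outputA", TextOutputFormat.class, Text.class, IntWritable.class);
MultipleOutputs.addNamedOutput(job, "outputB", TextOutputFormat.class, Text.class, IntWritable.class);

// Reducer: route each record to one of the named outputs.
public class MultiOutReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs<Text, IntWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) sum += v.get();
        if (key.toString().startsWith("A")) {      // illustrative routing rule
            mos.write("outputA", key, new IntWritable(sum));
        } else {
            mos.write("outputB", key, new IntWritable(sum));
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close();   // flush every named output
    }
}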
Hadoop lets you specify the number of reduce tasks from the job driver with job.setNumReduceTasks(num_reducers);. Since you want four outputs, you would set int num_reducers = 4;. Here's an example driver class:
public class run {
    public static void main(String[] args) throws Exception {
        int num_reducers = 4;   // one reduce task per desired output
        Configuration conf = new Configuration();
        Job job = new Job(conf, "Run NB Count");
        job.setJarByClass(NB_train_hadoop.class);
        // set mappers, reducers, other stuff
        job.setNumReduceTasks(num_reducers);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
While this is handy, you have to understand that there is an optimal number of reducers, and it depends on the number of nodes in your cluster.
For example, running 4 Amazon m3.xlarge instances (1 master, 3 slaves, and 4 cores per instance), I measured the relationship between wall time and the number of reducer tasks used in the MapReduce job. More isn't necessarily better, and if you use too many, well, you might as well crunch your data with your mother's hair curler because it would be faster that way.
Hope this is helpful!!

Set result from previous Reducer as configuration parameter

As part of the calculation logic in a MapReduce workflow, I need to take the result from one reducer as a parameter for the next reducer in the chain.
Path plc = new Path(args[1] + "/3");      // output path from the previous reducer
Configuration c4 = new Configuration();
c4.set("denom", GetLineC.extCount(plc));   // GetLineC.extCount is a function that returns a value
ControlledJob cJob4 = new ControlledJob(c4);
I'm using JobControl to create the dependency between the jobs and all the configuration. When the program is executed it gives "No such file or directory". By the time control reaches this part of the flow the file will be present at this location, but since the configuration is instantiated at the beginning, the error shows up.
Is there a way to set the single-line output from the previous reducer as a parameter directly?
Well, I think you mean the previous job rather than the previous reducer. If you're executing the two jobs from the same driver class, you already know the output path of the last job, which is a directory. Since you're using only one reducer, it will write its output to a part-r-00000 file inside that output path. To set it as a configuration parameter for the next job, you will have to read this file manually, after the first job has actually finished.
Are you considering that in GetLineC.extCount(Path path)?
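For what it's worth, a minimal sketch of doing that read (the /3 subdirectory and the "denom" key mirror the question; error handling is elided):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadReducerOutput {
    // Reads the first line of a single-reducer output on HDFS.
    // Call only after the producing job has completed.
    public static String readSingleLine(Configuration conf, Path outputDir) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        Path result = new Path(outputDir, "part-r-00000");
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(fs.open(result)))) {
            return reader.readLine().trim();
        }
    }
}

Then, once the previous job has finished: c4.set("denom", ReadReducerOutput.readSingleLine(c4, new Path(args[1] + "/3")));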

How to run Hadoop multithreaded in a single JVM?

I have a 4-core desktop and want to use all my cores for local data processing with Hadoop
(i.e., sometimes I have enough power to process data locally, and sometimes I submit the same jobs to a cluster).
By default, Hadoop local mode runs only one mapper and one reducer, so my local jobs are really slow.
I do not want to set up a cluster on a single machine: first because of the "painful" configuration, and second because I would have to create a jar each time. So the perfect solution is to run embedded Hadoop on a single machine.
P.S.: Pseudo-distributed mode is a bad option, since it creates a cluster with a single node, so I would still get only one mapper and would have to spend time on additional configuration.
You need to use MultithreadedMapRunner: set it in JobConf's setMapRunnerClass method, and don't forget to set mapred.map.multithreadedrunner.threads to the desired concurrency level.
Also, there is another way with the new API; you should:
set MultithreadedMapper as your mapper class in the Job-typed object
call MultithreadedMapper.setMapperClass with your actual mapper class
call MultithreadedMapper.setNumberOfThreads with the desired concurrency level
But be careful: your mapper class must be thread-safe, and its setup and cleanup methods will be called several times, so it isn't a smart idea to mix MultithreadedMapper with MultipleOutputs unless you implement your own MultithreadedMapper-inspired class.
Hadoop purposely does not run more than one task at the same time in one JVM for isolation purposes. And in stand-alone (local) mode, only one JVM is ever used. If you want to make use of your four cores, you should run in pseudo-distributed mode, and increase the max number of concurrent tasks to four. You can do this with the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties.
Configuration conf = new Configuration();
Job job = new Job(conf, "SolerRandomHit");
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(MultithreadedMapper.class);
// Point the wrapper at the real (thread-safe) mapper and pick a thread
// count; without these calls it has nothing to delegate to.
// YourMapper is a placeholder for your actual mapper class.
MultithreadedMapper.setMapperClass(job, YourMapper.class);
MultithreadedMapper.setNumberOfThreads(job, 4);

Why are all the reduce tasks ending up on a single machine?

I wrote a relatively simple MapReduce program on the Hadoop platform (Cloudera distribution). Each Map and Reduce task writes some diagnostic information to standard output, besides the regular MapReduce processing.
However, when I look at these log files, I find that the Map tasks are relatively evenly distributed among the nodes (I have 8 nodes), while the reduce tasks' standard output logs can only be found on one single machine.
I guess that means all the reduce tasks ended up executing on a single machine, and that's problematic and confusing.
Does anybody have any idea what's happening here? Is it a configuration problem?
How can I make the reduce jobs also distribute evenly?
If the outputs from your mappers all have the same key, they will be put into a single reducer.
If your job has multiple reducers but they all queue up on a single machine, then you have a configuration issue.
Use the web interface (http://MACHINE_NAME:50030) to monitor the job and see which reducers it has, as well as which machines are running them. There is other information that can be drilled into that should help in figuring out the issue.
A couple of questions about your configuration:
How many reducers are running for the job?
How many reducers are available on each node?
Is the node running the reducer better hardware than the other nodes?
Hadoop decides which Reducer will process which output keys through the use of a Partitioner.
If you are only outputting a few keys and want an even distribution across your reducers, you may be better off implementing a custom Partitioner for your output data, e.g.:
public class MyCustomPartitioner extends Partitioner<KEY, VALUE> {
    @Override
    public int getPartition(KEY key, VALUE value, int numPartitions) {
        // Decide, based on the key and/or value, which partition
        // (and hence which reducer) this record should go to.
        // Placeholder: spread keys evenly by hash.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
You can then set this custom partitioner in the job configuration with
Job job = new Job(conf, "My Job Name");
job.setPartitionerClass(MyCustomPartitioner.class);
You can also implement the Configurable interface in your custom Partitioner if you want to do any further configuration based on job settings.
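A minimal sketch of that variant (the "my.partition.threshold" setting and the routing rule are hypothetical):

public class ConfiguredPartitioner extends Partitioner<Text, IntWritable>
        implements Configurable {
    private Configuration conf;
    private int threshold;

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
        // Read a job setting once, when the framework injects the config.
        this.threshold = conf.getInt("my.partition.threshold", 10);
    }

    @Override
    public Configuration getConf() {
        return conf;
    }

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Send values above the configured threshold to the last partition,
        // hash everything else across the rest (assumes numPartitions > 1).
        if (numPartitions > 1 && value.get() > threshold) {
            return numPartitions - 1;
        }
        return (key.hashCode() & Integer.MAX_VALUE)
                % (numPartitions > 1 ? numPartitions - 1 : 1);
    }
}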
Also, check that you haven't set the number of reduce tasks to 1 anywhere in the configuration (look for "mapred.reduce.tasks"), or in code, eg
job.setNumReduceTasks(1);
