Use cases/examples of map-only jobs - Hadoop

Are there any good real-life use cases/examples of jobs that involve only map tasks and no reducers, i.e. a job that triggers only mappers, with no reducers set?

I have done many map-only jobs. Here are a few examples.
You have a classification model that you build every day, and you need to classify all your data with that classifier. There is no need for a reduce: you just load the classifier from the distributed cache (or from a remote resource like a DB), do the classification inside your mapper's map() function, and write the result somewhere.
Performing data cleanup on something like an HBase table. Read each row in your mapper, and if it matches some condition, delete it. No need for a reduce here.
Basically, if you don't need to combine or aggregate data and just need to perform a repetitive process on each piece of data, you usually don't need a reducer. I would also say that if you don't need a reducer, you might ask yourself whether you would be better off with something like Apache Storm or another processing model with less overhead.
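A minimal sketch of the classification use case above: the Classifier type and its load()/classify() methods are hypothetical placeholders standing in for whatever model library you use; only the Hadoop Mapper wiring is the real API.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map-only classification: load the model once per task, classify each record.
public class ClassifyMapper extends Mapper<LongWritable, Text, Text, Text> {

    private Classifier classifier;  // hypothetical model class, not a Hadoop API

    @Override
    protected void setup(Context context) throws IOException {
        // e.g. deserialize the model shipped via the distributed cache
        classifier = Classifier.load("model.bin");  // hypothetical helper
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String label = classifier.classify(value.toString());  // hypothetical call
        context.write(new Text(label), value);  // write the classified record; no reducer
    }
}

The driver just sets this mapper and zero reduce tasks (job.setNumReduceTasks(0)), so the map output goes straight to HDFS.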

Sure!
Imagine that instead of the famous word-count problem, you simply replace each word by its length.
Doing so, you map each word to its length and you never reduce anything!
"Hello map reduce" would become "5 3 6".
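A minimal mapper sketch for that word-length idea, assuming the usual line-oriented TextInputFormat input and a job configured with zero reducers:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// "Hello map reduce" -> (Hello, 5), (map, 3), (reduce, 6); nothing to reduce.
public class WordLengthMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String word : line.toString().split("\\s+")) {
            if (!word.isEmpty()) {
                context.write(new Text(word), new IntWritable(word.length()));
            }
        }
    }
}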


What is the benefit of the reducers in Hadoop?

I don't see the value of reducers in Hadoop in the following scenarios:
The map tasks generate unique keys (because we can then merge the map and reduce functionality together)
The output size of the map tasks is too big (this will exhaust the memory if we wait for the reducers to begin the work)
If we have any functionality that doesn't need grouping and sorting of the keys
Please correct me if I am wrong.
And if someone could give me a real example of the benefits of reducers and when they should be used, I would appreciate it.
A reducer is beneficial (or required) when you need to do operations like aggregation or grouping.
FYI: the reducer is meant for grouping the different values for a key that come from different mappers. So for a use case which does not require grouping/aggregation, there is no point in using a reducer (you can set the number of reducers to zero, meaning a map-only job).
One quick use case I can think of: you want to randomly split a big file into multiple part files. In this case you supply the big file (let's say 100 GB) to a map-only job. Each map task reads a chunk of the file and writes it out as a part file.
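A minimal driver sketch for such a map-only split job; the pass-through mapper just re-emits each line, and with zero reducers each map task writes its own part file (the input/output paths are placeholders taken from the command line).

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitFileJob {

    // Drop the byte-offset key so the output lines match the input lines exactly.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            context.write(NullWritable.get(), line);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-file");
        job.setJarByClass(SplitFileJob.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0);                 // zero reducers => map-only job
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // the big input file
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // one part file per map task
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}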

Bypassing the shuffling stage of a MapReduce job in Hadoop?

I am trying to implement an algorithm where only a single reducer is required and the MapReduce job executes iteratively. The result of each mapper in a particular iteration is to be added in the reducer and then processed, and the output of the reducer is passed as input to the mappers in the next iteration. I want to execute the job in an asynchronous manner, i.e. as soon as a pre-defined number of mappers have executed, pass their output directly to the reducer, avoiding the shuffling and sorting since it only creates overhead for my algorithm. Is that even possible? If not, what can be done at the implementation level for asynchronous execution of a MapReduce job? I went through a number of research papers but was unable to get any idea from them.
Thanks.
You have to code up your own custom solution for this. I did a similar thing in a project recently.
It requires a bit of code, so I can only outline the steps here :)
Set mapreduce.job.reduce.slowstart.completedmaps to 0.0 so that the reducer comes up before the mappers finish (this alone will give you a speedup right away, by the way; try it out before going ahead with the steps below, maybe it's enough)
Implement your own org.apache.hadoop.mapred.MapOutputCollector that writes the shuffle output to a socket instead of to the standard shuffle path (this is the mapper side)
Implement your own org.apache.hadoop.mapred.ShuffleConsumerPlugin that waits for connections from mappers and reads key/value pairs from the network (this is the reducer side)
Things you will need to do:
Synchronize the mappers so that they do not start before the reducer is actually listening (ZooKeeper is what I used here)
Adjust your job configs to use the custom mapper and reducer components
Further reading: https://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html
Definitely doable, but it requires some effort :)
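To make the driver-side wiring for those steps concrete, here is a sketch; the two com.example class names are hypothetical stand-ins for your custom collector and consumer plugin, while the three property keys are the ones documented on the pluggable shuffle page linked above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class AsyncIterationDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Step 1: let the reducer start as soon as the job does (0% of maps completed).
        conf.set("mapreduce.job.reduce.slowstart.completedmaps", "0.0");

        // Steps 2 and 3: plug in the custom shuffle components (hypothetical class names).
        conf.set("mapreduce.job.map.output.collector.class",
                 "com.example.shuffle.SocketMapOutputCollector");
        conf.set("mapreduce.job.reduce.shuffle.consumer.plugin.class",
                 "com.example.shuffle.SocketShuffleConsumerPlugin");

        Job job = Job.getInstance(conf, "async-iteration");
        // ... set mapper, reducer, input and output paths as usual, then:
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}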

How do I limit the number of records sent to the reducer in a map reduce job?

I have a file with over 300,000 lines as input to a MapReduce job, and I want the job to process only the first 1000 lines of this file. Is there a good way to limit the number of records sent to the reducer?
A simple identity reducer is all I need to write out my output. Currently, the reducer writes out as many lines as there are in the input.
First, make sure your mapreduce program is set to only use one reducer. It has to be explicitly set, otherwise Hadoop might choose some other number, and then there's no good way to coordinate between reduce tasks to make sure they don't emit more than 1000 total. Then you can simply maintain an instance variable in your Reducer class that counts how many records it has seen, and stops emitting them after 1000.
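A minimal sketch of that counting reducer, assuming Text keys and values and a hard-coded limit of 1000 (both are placeholders to adapt); it only behaves correctly together with job.setNumReduceTasks(1).

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Identity-style reducer that stops emitting after the first 1000 records.
public class LimitReducer extends Reducer<Text, Text, Text, Text> {

    private static final int LIMIT = 1000;
    private int emitted = 0;

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            if (emitted >= LIMIT) {
                return;  // the first 1000 records are already written; drop the rest
            }
            context.write(key, value);
            emitted++;
        }
    }
}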
The other, probably simpler, way to do it would be to shorten your input file. Just delete the lines you don't need.
It's also worth noting that Hive and Pig are both frameworks that will do this type of thing for you. Writing "raw" MapReduce code is rare in practice; most people use one of those two.

Query related to Hadoop's map-reduce

Scenario:
I have a subset of a database and a data warehouse. I have brought both of them onto HDFS.
I want to analyse the results based on the subset and the data warehouse.
(In short, for each record in the subset I have to scan every record in the data warehouse.)
Question:
I want to do this task using the MapReduce paradigm, but I do not understand how to take both files as input to the mapper, nor how to handle both files in the map phase.
Please suggest an approach so that I can do this.
Check Section 3.5 (Relational Joins) in Data-Intensive Text Processing with MapReduce for map-side joins, reduce-side joins and memory-backed joins. In any case the MultipleInputs class is used to have multiple mappers process different files in a single job.
FYI, you could use Apache Sqoop to import the DB into HDFS.
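A driver-side sketch of the MultipleInputs approach; SubsetMapper, WarehouseMapper, JoinReducer and the paths are hypothetical names, while the MultipleInputs call itself is the real API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubsetWarehouseJoin {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "subset-warehouse-join");
        job.setJarByClass(SubsetWarehouseJoin.class);
        // Each input path gets its own mapper; both mappers emit the shared join key.
        MultipleInputs.addInputPath(job, new Path("/data/subset"),
                TextInputFormat.class, SubsetMapper.class);
        MultipleInputs.addInputPath(job, new Path("/data/warehouse"),
                TextInputFormat.class, WarehouseMapper.class);
        job.setReducerClass(JoinReducer.class);   // reduce-side join on the shared key
        // ... plus setOutputKeyClass/setOutputValueClass for JoinReducer's output types
        FileOutputFormat.setOutputPath(job, new Path("/data/joined"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}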
Some time ago I wrote a Hadoop MapReduce job for one of my classes. I was scanning several IMDB databases and producing merged information about actors (basically the name, biography and films each actor acted in were spread across different databases). I think you can use the same approach I used for my homework:
I wrote a separate MapReduce job turning every database file into the same format, just placing a two-letter prefix in front of every row it produced so I could tell them apart: 'BI' (biography), 'MV' (movies) and so on. Then I used all these produced files as input for my last MapReduce job, which processed them and grouped them in the desired way.
I am not even sure that you need so much work if you are really going to scan every line of the data warehouse. Maybe in this case you can just do the scan either in the map or the reduce phase (based on what additional processing you want to do), but my suggestion assumes that you actually need to filter the data warehouse based on the subset. If that is the case, my suggestion might work for you.

Hadoop one Map and multiple Reduce

We have a large dataset to analyze with multiple reduce functions.
All the reduce algorithms work on the same dataset generated by the same map function. Reading the large dataset costs too much to do every time; it would be better to read it only once and pass the mapped data to multiple reduce functions.
Can I do this with Hadoop? I've searched the examples and the web but could not find any solution.
Maybe a simple solution would be to write a job that doesn't have a reduce function. So you would pass all the mapped data directly to the output of the job. You just set the number of reducers to zero for the job.
Then you would write a job for each different reduce function that works on that data. This would mean storing all the mapped data on the HDFS though.
Another alternative might be to combine all your reduce functions into a single Reducer which outputs to multiple files, using a different output for each different function. Multiple outputs are mentioned in this article for Hadoop 0.19. I'm pretty sure this feature is broken in the new mapreduce API released with 0.20.1, but you can still use it in the older mapred API.
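For what it's worth, later Hadoop releases do ship a working MultipleOutputs in the new API (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs). Here is a minimal sketch of a single reducer writing each function's result to its own named output; the names "sum" and "max" are placeholders that you register in the driver with MultipleOutputs.addNamedOutput(...).

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// One physical reducer, two logical "reduce functions" writing to separate files.
public class MultiFunctionReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private MultipleOutputs<Text, IntWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        int max = Integer.MIN_VALUE;
        for (IntWritable v : values) {
            sum += v.get();
            max = Math.max(max, v.get());
        }
        mos.write("sum", key, new IntWritable(sum));  // first reduce function
        mos.write("max", key, new IntWritable(max));  // second reduce function
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close();
    }
}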
Are you expecting every reducer to work on exactly the same mapped data? At the very least the "key" should be different, since it decides which reducer the data goes to.
You can write the output multiple times in the mapper, emitting each record once per reducer with a composite key made of $i and $key (where $i identifies the i-th reducer and $key is your original key). You then need to add a Partitioner to make sure these n records are distributed across the reducers based on $i, and use a GroupingComparator to group records by the original $key.
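A sketch of such a partitioner, assuming the mapper emits composite keys in the textual form "$i:$key" (e.g. "0:foo", "1:foo"); the reducer index before the colon decides the partition, and this key format is only an assumption for illustration.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes "$i:$key" composite keys to reducer $i; wire up with job.setPartitionerClass(...).
public class ReducerIndexPartitioner extends Partitioner<Text, Text> {
    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        String composite = key.toString();
        int i = Integer.parseInt(composite.substring(0, composite.indexOf(':')));
        return i % numPartitions;
    }
}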
It's possible to do that, but not in a trivial way in one MR job.
You may use composite keys. Let's say you need two kinds of reducers, 'R1' and 'R2'. Add their ids as a prefix to your output keys in the mapper. So, in the mapper, a key 'K' now becomes 'R1:K' or 'R2:K'.
Then, in the reducer, pass values to implementations of R1 or R2 based on the prefix.
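A sketch of that prefixing idea; extractKey(), reduceLikeR1() and reduceLikeR2() are placeholders for your own key extraction and for the two reduce implementations, and the partitioner sketch above can handle the routing.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emit every record twice, once per logical reducer, with a prefixed key.
class TaggingMapper extends Mapper<Object, Text, Text, Text> {
    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String k = extractKey(value);                 // placeholder for your key logic
        context.write(new Text("R1:" + k), value);
        context.write(new Text("R2:" + k), value);
    }

    private String extractKey(Text value) {
        return value.toString().split("\t")[0];       // assumption: tab-separated input
    }
}

// Reducer: dispatch on the prefix, then strip it from the output key.
class DispatchingReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String composite = key.toString();
        Text original = new Text(composite.substring(3));   // drop "R1:" / "R2:"
        if (composite.startsWith("R1:")) {
            context.write(original, new Text(reduceLikeR1(values)));
        } else {
            context.write(original, new Text(reduceLikeR2(values)));
        }
    }

    private String reduceLikeR1(Iterable<Text> values) { return "r1-result"; }  // placeholder
    private String reduceLikeR2(Iterable<Text> values) { return "r2-result"; }  // placeholder
}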
I guess you want to run different reducers in a chain. In Hadoop, 'multiple reducers' means running multiple instances of the same reducer. I would propose you run one reducer at a time, providing a trivial map function for all of them except the first one. To minimize data transfer time, you can use compression.
Of course you can define multiple reducers. For the Job (Hadoop 0.20) just add:
job.setNumReduceTasks(<number>);
But your infrastructure has to support multiple reducers, meaning that you have to
have more than one CPU available
adjust mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml accordingly
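For reference, the mapred-site.xml entry for that (MRv1-era) property might look like the following; the value of 4 is just an example to size to your hardware.

<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>   <!-- max reduce tasks run in parallel per TaskTracker -->
</property>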
And of course your job has to match some specifications. Without knowing exactly what you want to do, I can only give broad tips:
the map-output keys either have to be partitionable by % numReducers, OR you have to define your own partitioner:
job.setPartitionerClass(...)
for example with a random-partitioner ...
the data must be reduce-able in the partitioned format ... (references needed?)
You'll get multiple output files, one for each reducer. If you want a sorted output, you have to add another job reading all files (multiple map-tasks this time ...) and writing them sorted with only one reducer ...
Have a look too at the Combiner class, which acts as a local reducer. It means that you can already aggregate (reduce) in memory over partial data emitted by the map.
A very nice example is the WordCount example. The map emits each word as key with a count of 1: (word, 1). The combiner gets partial data from the map and emits (word, partial count) locally. The reducer does exactly the same, but now some of the (combined) word counts are already >1. Saves bandwidth.
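A minimal sketch of that WordCount combiner wiring; the summing reducer class can double as the combiner because summing partial counts is associative and commutative.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts for a word; usable as combiner (partial sums) and as reducer (final sums).
public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable c : counts) {
            sum += c.get();
        }
        context.write(word, new IntWritable(sum));
    }
}

// In the driver:
// job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on the map side
// job.setReducerClass(IntSumReducer.class);   // final aggregation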
I still don't quite get your problem, but you can use the following sequence:
database --> map --> reduce (use cat as an identity reducer, or no reducer at all, depending on the requirement)
then store the data representation you have extracted.
If you are saying that it is small enough to fit in memory, then storing it on disk shouldn't be an issue.
Also, your use of the MapReduce paradigm for the given problem is incorrect: using a single map function and multiple "different" reduce functions makes no sense. It shows that you are just using map to pass data out to different machines to do different things; you don't require Hadoop or any other special architecture for that.
