Can already-partitioned input data improve Hadoop processing?

I know that during the intermediate steps between mapper and reducer, Hadoop will sort and partition the data on its way to the reducer.
Since the input to my mapper is already partitioned, is there a way to take advantage of this and possibly speed up the intermediate processing, so that no further sorting or grouping-by takes place?
Adding some details:
As I store data on S3, let's say I only have two files in my bucket. The first file stores records for the lower half of the user IDs, the other stores records for the upper half of the user IDs. The data in each file is not necessarily sorted, but it is guaranteed that all data pertaining to a user is located in the same file.
Such as:
/mybucket/file1
/mybucket/file2
File1 content:
User1,ValueX
User3,ValueY
User1,ValueZ
User1,ValueAZ
File2 content:
User9,ValueD
User7,ValueB
User7,ValueD
User8,ValueB
From what I read, I can use a streaming job with two mappers, and each mapper will consume one of the two files in its entirety. Is this true?
Next,
Let's say the mapper outputs each unique key just once, with the associated value being the number of occurrences of that key (which I realize is more of a reducer's responsibility, but just for this example).
Can the sorting and partitioning of the mapper's output keys be disabled so that they flow straight to the reducer(s)?
Or to give another example:
Imagine all my input data contains just one line for each unique key, and I don't need the data to be sorted in the final output of the reducer. I just want to hash the value for each key. Can I disable the sorting and partitioning step before the reducer?

Although for the files shown above you'll get two mappers, this can't always be guaranteed. The number of mappers depends on the number of InputSplits created from the input data. If your files are big, you might have more than one mapper per file.
Partitioning is merely a way of telling which key/value pair goes to which reducer. If you disable it, you either need some other way to do this or you'll end up with performance degradation, as the inputs to the reducers will be uneven. A particular reducer might get all of the input, or a particular reducer might get no input at all. I can't see any performance gain here. Of course, if you think a custom partitioner fits your situation better, you can definitely write one. But skipping partitioning altogether doesn't sound logical to me. The default partitioning behavior is based on a hash: after a mapper emits its output, each key is hashed to decide which reducer its key/value pairs go to.
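For reference, a custom partitioner is just a small class registered on the job. Here is a minimal sketch for the new mapreduce API, assuming Text keys and IntWritable values; the class name and routing rule are illustrative, not taken from your data:
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class UserRangePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Route keys however your data layout suggests; a plain hash is shown here,
        // which is the same idea as the default HashPartitioner.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
You would register it on the job with job.setPartitionerClass(UserRangePartitioner.class).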
And if your data is already sorted and you want to skip the sorting phase in your MR job, you might find the patch provided in response to this JIRA useful. The issue is not closed yet, but the patch should help you get started.
HTH

Related

What is the exact MapReduce workflow?

A summary from Tom White's book "Hadoop: The Definitive Guide" is:
All the logic between the user's map function and the user's reduce function is called the shuffle, so the shuffle spans both the map and reduce sides. After the user's map() function, the output sits in an in-memory circular buffer. When the buffer is 80% full, a background thread starts spilling the buffer's contents to a spill file. The spill file is partitioned by key, and within each partition the key/value pairs are sorted by key. After sorting, the combiner function is called, if one is configured. All spill files are merged into one MapOutputFile, and every map task's MapOutputFile is collected over the network by the reduce tasks. Each reduce task performs another sort (a merge), and then the user's reduce function is called.
So the questions are:
1.) According to the above summary, this is the flow:
Mapper--Partitioner--Sort--Combiner--Shuffle--Sort--Reducer--Output
1a.) Is this the flow or is it something else?
1b.) Can you explain the above flow with an example, say the word count example (the ones I found online weren't very elaborate)?
2.) So the map phase's output is one big file (MapOutputFile)? And is it this one big file that is broken up so that the key-value pairs are passed on to the respective reducers?
3.) Why does the sorting happen a second time, when the data is already sorted and combined before being passed on to the respective reducers?
4.) If mapper1 runs on datanode1, is it necessary for reducer1 to run on datanode1 as well, or can it run on any datanode?
Answering this question fully would be like retelling the whole history. A lot of your doubts have to do with operating-system concepts rather than MapReduce.
The mapper's data is written to the local file system. The data is partitioned based on the number of reducers, and within each partition there can be multiple files, depending on how many times spills have happened.
Each small file in a given partition is sorted, because an in-memory sort is done before the file is written.
Why does the data need to be sorted on the mapper side?
a. The data is sorted and merged on the mapper side to decrease the number of files.
b. The files are sorted because it would otherwise be impossible for the reducer to gather all the values for a given key.
After gathering data on the reducer, the number of files on the system first needs to be decreased (remember that ulimit puts a fixed limit on open files for every user, in this case the hdfs user).
The reducer then just maintains a file pointer into a small set of sorted files and merges them.
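Since question 1b asks for a word-count walk-through, here is a minimal sketch of the standard word-count mapper and reducer (new mapreduce API, class names illustrative), annotated with where the partition, sort, and merge steps described above happen:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emits (word, 1) for every token. These pairs go into the in-memory buffer,
// are partitioned by key, sorted within each partition, and spilled to local disk.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);   // e.g. ("the", 1)
        }
    }
}

// Reducer: after the reduce-side merge of the sorted map outputs, it receives
// ("the", [1, 1, 1]) and emits ("the", 3).
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable c : counts) {
            sum += c.get();
        }
        context.write(word, new IntWritable(sum));
    }
}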
For more interesting ideas, please refer to:
http://bytepadding.com/big-data/map-reduce/understanding-map-reduce-the-missing-guide/

Hadoop-2.4.1 custom partitioner to balance reducers

As we know, during Hadoop's shuffle phase each reducer reads data from all of the mappers' output (intermediate data).
We also know that, by default, hash partitioning is used to assign keys to reducers.
My question is: how would we implement a different algorithm, e.g. a locality-aware one?
In short, you should not do it.
First, you have no control over where the mappers and reducers are executed on the cluster, so even if the complete output of a single mapper goes to a single reducer, there is a high probability that they will run on different hosts and the data will be transferred over the network.
Second, to make a reducer process the whole output of one mapper, you first have to make each mapper process the right part of the data. That means pre-partitioning the input and then running a single mapper and a single reducer per partition, and this preprocessing itself would take so many resources that it is mostly pointless.
And finally, why do you need it? The main concept of map-reduce is manipulating key-value pairs, and a reducer should, in general, aggregate the list of values emitted by the mappers for the same keys. That's why hash partitioning is used: to distribute N keys among K reducers. Needing a different type of partitioner is quite rare. If you need data locality, you might prefer to work with an MPP database rather than Hadoop, for example.
If you really need a custom partitioner, here's an example of how one can be implemented: http://hadooptutorial.wikispaces.com/Custom+partitioner. Nothing special: just return a reducer number based on the key, the value, and the number of reducers. Taking the hash code of the host name modulo (%) the number of reducers would make the whole output of a single mapper go to a single reducer. You could also use the process PID % the number of reducers. But before doing this, check whether you really need this behavior.
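For what it's worth, here is a minimal sketch of that host-name idea in the new mapreduce API (the class name is illustrative, and the caveat above about uneven reducer load applies):
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// The partitioner runs inside the map task's JVM, so hashing the local host name
// sends everything a given mapper emits to the same reducer.
public class HostHashPartitioner extends Partitioner<Text, Text> {
    private String host;

    @Override
    public int getPartition(Text key, Text value, int numReducers) {
        if (host == null) {
            try {
                host = InetAddress.getLocalHost().getHostName();
            } catch (UnknownHostException e) {
                host = "unknown-host";  // fall back to a constant; all pairs still co-locate
            }
        }
        return (host.hashCode() & Integer.MAX_VALUE) % numReducers;
    }
}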

What is the purpose of the shuffling and sorting phase in the reducer in MapReduce programming?

In MapReduce programming, the reduce phase has shuffling, sorting, and reduce as its sub-parts. Sorting is a costly affair.
What is the purpose of the shuffling and sorting phase in the reducer in MapReduce programming?
First of all, shuffling is the process of transferring data from the mappers to the reducers, so I think it is obvious that it is necessary for the reducers, since otherwise they wouldn't have any input (or input from every mapper). Shuffling can start even before the map phase has finished, to save some time. That's why you can see a reduce status greater than 0% (but less than 33%) when the map status is not yet 100%.
Sorting saves time for the reducer by helping it easily distinguish when a new reduce call should start. To put it simply, it starts a new call to reduce() when the next key in the sorted input data is different from the previous one. Each reduce task takes a list of key-value pairs, but it has to call the reduce() method, which takes a key / list-of-values input, so it has to group values by key. That is easy to do if the input data is pre-sorted (locally) in the map phase and simply merge-sorted in the reduce phase (since the reducers get data from many mappers).
Partitioning, that you mentioned in one of the answers, is a different process. It determines in which reducer a (key, value) pair, output of the map phase, will be sent. The default Partitioner uses a hashing on the keys to distribute them to the reduce tasks, but you can override it and use your own custom Partitioner.
A great source of information for these steps is this Yahoo tutorial (archived).
A nice graphical representation of this is the following (shuffle is called "copy" in this figure):
Note that shuffling and sorting are not performed at all if you specify zero reducers (setNumReduceTasks(0)). Then, the MapReduce job stops at the map phase, and the map phase does not include any kind of sorting (so even the map phase is faster).
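A minimal sketch of such a map-only job (new mapreduce API; the identity Mapper is used here as a stand-in for your own mapper class):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only");
        job.setJarByClass(MapOnlyJob.class);
        job.setMapperClass(Mapper.class);   // identity mapper; substitute your own
        job.setNumReduceTasks(0);           // zero reducers: no partitioning, sorting, or shuffling
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
With zero reducers, the map output is written straight to the output path on HDFS.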
UPDATE: Since you are looking for something more official, you can also read Tom White's book "Hadoop: The Definitive Guide". Here is the interesting part for your question.
Tom White has been an Apache Hadoop committer since February 2007, and is a member of the Apache Software Foundation, so I guess it is pretty credible and official...
Let's revisit the key phases of a MapReduce program.
The map phase is done by mappers. Mappers run on unsorted input key/value pairs. Each mapper emits zero, one, or multiple output key/value pairs for each input key/value pair.
The combine phase is done by combiners. The combiner should combine key/value pairs that have the same key. Each combiner may run zero, one, or multiple times.
The shuffle and sort phase is done by the framework. Data from all mappers are grouped by the key, split among reducers and sorted by the key. Each reducer obtains all values associated with the same key. The programmer may supply custom compare functions for sorting and a partitioner for data split.
The partitioner decides which reducer will get a particular key value pair.
The reducer obtains sorted key/[values list] pairs, sorted by the key. The value list contains all values with the same key produced by mappers. Each reducer emits zero, one or multiple output key/value pairs for each input key/value pair.
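If you need to customize the sort order, the grouping, or the partitioner mentioned above, the job-level hooks look roughly like this (Hadoop 2.x mapreduce API; the three class names are hypothetical placeholders for your own implementations):
job.setPartitionerClass(MyPartitioner.class);                // which reducer gets each key
job.setSortComparatorClass(MySortComparator.class);          // order of keys within a partition
job.setGroupingComparatorClass(MyGroupingComparator.class);  // which keys share one reduce() call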
Have a look at this javacodegeeks article by Maria Jurcovicova and this mssqltips article by Datta for a better understanding.
Below is the image from safaribooksonline article
I thought of just adding some points missing in the above answers. This diagram, taken from here, clearly shows what's really going on.
To restate the real purpose of each step:
Split: Improves the parallel processing by distributing the processing load across different nodes (Mappers), which would save the overall processing time.
Combine: Shrinks the output of each Mapper. It would save the time spending for moving the data from one node to another.
Sort (Shuffle & Sort): Makes it easy for the runtime to schedule (spawn/start) new reduce calls: while going through the sorted item list, whenever the current key differs from the previous one, a new reduce call can be spawned.
Some data processing requirements don't need sorting at all. Syncsort has made the sorting in Hadoop pluggable. Here is a nice blog post from them on sorting. The process of moving the data from the mappers to the reducers is called shuffling; check this article for more information on it.
I've always assumed this was necessary as the output from the mapper is the input for the reducer, so it was sorted based on the keyspace and then split into buckets for each reducer input. You want to ensure all the same values of a Key end up in the same bucket going to the reducer so they are reduced together. There is no point sending K1,V2 and K1,V4 to different reducers as they need to be together in order to be reduced.
Tried explaining it as simply as possible
Shuffling is the process by which intermediate data from the mappers is transferred to zero, one, or more reducers. Each reducer receives one or more keys and their associated values, depending on the number of reducers (for a balanced load). Furthermore, the values associated with each key are locally sorted.
Because of its size, a distributed dataset is usually stored in partitions, with each partition holding a group of rows. This also improves parallelism for operations like a map or filter. A shuffle is any operation over a dataset that requires redistributing data across its partitions. Examples include sorting and grouping by key.
A common method for shuffling a large dataset is to split the execution into a map and a reduce phase. The data is then shuffled between the map and reduce tasks. For example, suppose we want to sort a dataset with 4 partitions, where each partition is a group of 4 blocks. The goal is to produce another dataset with 4 partitions, but this time sorted by key.
In a sort operation, for example, each square is a sorted subpartition with keys in a distinct range. Each reduce task then merge-sorts subpartitions of the same shade.
The above diagram shows this process. Initially, the unsorted dataset is grouped by color (blue, purple, green, orange). The goal of the shuffle is to regroup the blocks by shade (light to dark). This regrouping requires an all-to-all communication: each map task (a colored circle) produces one intermediate output (a square) for each shade, and these intermediate outputs are shuffled to their respective reduce task (a gray circle).
The text and image were largely taken from here.
There are only two things that MapReduce does natively: sort and (implemented via sort) scalable group-by.
Most applications and design patterns over MapReduce are built on these two operations, which are provided by shuffle and sort.
This is a good read; hope it helps. As for the sorting you are asking about, I think it is there for the merge operation in the last step on the map side. When the map operation is done and the results need to be written to local disk, a multi-way merge is performed over the spills generated from the buffer, and for a merge operation, having each partition sorted in advance is helpful.
Well,
In MapReduce there are two important phases, called map and reduce. Both matter, but the reducer is optional in some programs. Now, to your question.
Shuffling and sorting are two important operations in MapReduce. First, the Hadoop framework takes structured/unstructured data and separates it into key/value pairs.
The mapper program then arranges the data into the keys and values to be processed, generating key2/value2 pairs. These values have to be processed and rearranged into the proper order to get the desired result. This shuffle and sort is done on the local system (the framework takes care of it), and after processing, the framework cleans up the intermediate data on the local system.
We also use a combiner and a partitioner to optimize this shuffle and sort process. After proper arrangement, the key/value pairs are passed to the reducer, which finally produces the desired output for the client.
(K1, V1) -> (K2, V2) (the mapper we write) -> (K2, list(V2)) (the framework shuffles and sorts the data) -> (K3, V3) (the reducer we write generates the output).
Please note that all these steps are logical operations only; they do not change the original data.
Your question: What is the purpose of the shuffling and sorting phase in the reducer in MapReduce programming?
Short answer: to process the data so as to get the desired output. Shuffling aggregates the data; reduce gets the expected output.

Input Sampler in Hadoop

My understanding of the InputSampler is that it gets data from the record reader, samples keys, and then creates a partition file in HDFS.
I have a few queries about this sampler:
1) Is this sampling task a map task ?
2) My data is on HDFS (distributed across nodes of my cluster). Will this sampler run on nodes which has the data to be sampled?
3) Will this consume my map slots?
4) Will the sampling run simultaneously with the map tasks of my MR job? I want to know whether it will affect the time taken by the mappers by reducing the number of available slots.
I found that the InputSampler makes a seriously flawed assumption and is therefore not very helpful.
The idea is that it samples key values from the mapper input and then uses the resulting statistics to evenly partition the mapper output. The assumption then is that the key type and value distribution are the same for the mapper input and output. In my experience the mapper almost never sends the same key value types to the reducer as it reads in. So the InputSampler is useless.
In the few times where I had to sample in order to partition effectively, I ended up doing the sampling as part of the mapper (since only then did I know what keys were being produced) and writing the results out in the mapper's close() method to a directory (one set of stats per mapper). My partitioner then had to perform lazy initialization on its first call to read the mapper-written files, assimilate the stats into some useful structure and then to partition subsequent keys accordingly.
Your only other real option is to guess at development time how the key values are distributed and hard-code that assumption into your partitioner.
Not very clean but it was the best I could figure.
This question was asked a long time ago, and many questions were left unanswered.
The only (and most upvoted) answer, by Chris, does not really answer the questions, but gives an interesting point of view, though one a bit too pessimistic and misleading in my opinion, so I'll discuss it here as well.
Answers to the original questions
The sampling task is done in the call to InputSampler.writePartitionFile(job, sampler). The call to this method is blocking, during which the sampling is done, in the same thread.
That's why you don't need to call job.waitForCompletion(). It's not a MapReduce job; it simply runs in your client's process. Besides, a MapReduce job needs at least 20 seconds just to start, whereas sampling a small file only takes a couple of seconds.
Thus, the answer to all of your questions is simply "No".
More details from reading the code
If you look at the code of writePartitionFile(), you will find that it calls sampler.getSample(), which calls inputformat.getSplits() to get a list of all the input splits to be sampled.
These input splits are then read sequentially to extract the samples. Each input split is read by a new record reader created within the same method. This means that your client is doing the reading and the sampling.
Your other nodes are not running any "map" or other processes; they are simply serving (via HDFS) the block data your client needs for the input splits being sampled.
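For reference, the client-side setup looks roughly like this (a sketch assuming Text keys read via KeyValueTextInputFormat and the Hadoop 2.x mapreduce API; the partition-file path and sampling parameters are illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class TotalOrderSetup {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "total-order-sort");
        job.setJarByClass(TotalOrderSetup.class);
        // The input must be configured before sampling, because the sampler reads the input splits.
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
                new Path("/tmp/partitions.lst"));   // illustrative location for the partition file
        // Sample roughly 10% of records, at most 10000 samples over at most 10 splits.
        InputSampler.Sampler<Text, Text> sampler =
                new InputSampler.RandomSampler<>(0.1, 10000, 10);
        // Blocking call: the sampling runs right here in the client process, not as map tasks.
        InputSampler.writePartitionFile(job, sampler);
        // ... set the mapper, reducer, output key/value classes, then job.waitForCompletion(true)
    }
}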
Using Different key types between Map input and output
Now, to discuss the answer given by Chris. I agree that the InputSampler and TotalOrderPartitioner are probably flawed in some ways, since they are really not easy to understand and use ... But they do not require the key types to be the same between map input and output.
The InputSampler uses the job's InputFormat (and its RecordReader) keys to create the partition file containing all sampled keys. This file is then used by the TotalOrderPartitioner during the partitioning phase at the end of the Mapper's process to create partitions.
The easiest solution is to create a custom RecordReader, for the InputSampler only, which performs the same key transformation as your Mapper.
To illustrate this, let's say your dataset contains pairs of (char, int), and that your mapper transforms them into (int, int), by taking the character's ascii value. For example 'a' becomes 97.
If you want to perform total order partitioning of this job, your InputSampler would sample letters 'a', 'b', 'c'. Then during the partitioning phase, your mapper output keys will be integer values like 102 or 107, which wouldn't be comparable to 'a', 'g' or 't' from the partition-file for partition distribution. This is not consistent, and this is why it looks like the input and output key types are assumed to be the same, when using the same InputFormat for sampling and your mapreduce job.
So the solution is to write a custom InputFormat and its RecordReader, used only for the sampling client-side job, which reads your input file and does the same transformation from char to int before returning each record. This way the InputSampler will write the integer ASCII values from the custom record reader directly to the partition-file, which maintains the same distribution and will be usable with your mapper's output.
It's not easy to grasp this in a few lines of text, but anybody interested in fully understanding how the InputSampler and TotalOrderPartitioner work should check out this page: http://blog.ditullio.fr/2016/01/04/hadoop-basics-total-order-sorting-mapreduce/
It explains in detail how to use them in different cases.

Hadoop one Map and multiple Reduce

We have a large dataset to analyze with multiple reduce functions.
All the reduce algorithms work on the same dataset generated by the same map function. Reading the large dataset costs too much to do it every time; it would be better to read it only once and pass the mapped data to multiple reduce functions.
Can I do this with Hadoop? I've searched the examples and the intarweb but I could not find any solutions.
Maybe a simple solution would be to write a job that doesn't have a reduce function. So you would pass all the mapped data directly to the output of the job. You just set the number of reducers to zero for the job.
Then you would write a job for each different reduce function that works on that data. This would mean storing all the mapped data on the HDFS though.
Another alternative might be to combine all your reduce functions into a single Reducer which outputs to multiple files, using a different output for each different function. Multiple outputs are mentioned in this article for hadoop 0.19. I'm pretty sure that this feature is broken in the new mapreduce API released with 0.20.1, but you can still use it in the older mapred API.
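In later Hadoop versions, the new mapreduce API does ship a working org.apache.hadoop.mapreduce.lib.output.MultipleOutputs. Here is a sketch of one reducer computing two different aggregations and writing each to its own named output (the output names "sum" and "max" and the aggregations are illustrative):
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class MultiFunctionReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs<Text, IntWritable> outputs;

    @Override
    protected void setup(Context context) {
        outputs = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0, max = Integer.MIN_VALUE;
        for (IntWritable v : values) {   // one pass over the values feeds both "reduce functions"
            sum += v.get();
            max = Math.max(max, v.get());
        }
        outputs.write("sum", key, new IntWritable(sum));
        outputs.write("max", key, new IntWritable(max));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        outputs.close();
    }
}
The driver would register each named output, e.g. MultipleOutputs.addNamedOutput(job, "sum", TextOutputFormat.class, Text.class, IntWritable.class), and likewise for "max".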
Are you expecting every reducer to work on exactly the same mapped data? At the very least the "key" should differ, since it decides which reducer the data goes to.
You can write the output multiple times in the mapper, using a composite key built from $i and your original key (where $i identifies the i-th reducer and $key is your original key). Then you need to add a Partitioner to make sure these n records are distributed across reducers based on $i, and a GroupingComparator to group records by the original $key.
It's possible to do that, but not in a trivial way in one MR job.
You may use composite keys. Let's say you need two kinds of reducers, 'R1' and 'R2'. Add their ids as a prefix to your output keys in the mapper, so a key 'K' becomes 'R1:K' or 'R2:K'.
Then, in the reducer, pass values to implementations of R1 or R2 based on the prefix.
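A sketch of how that dispatch could look on the reduce side (new mapreduce API; the class name and the 'R1:'/'R2:' prefixes follow the convention described above, and the aggregation bodies are left as placeholders):
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// The mapper would emit every record twice, as ("R1:" + key, value) and ("R2:" + key, value);
// this reducer then branches on the prefix.
public class PrefixDispatchReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String composite = key.toString();
        String originalKey = composite.substring(3);   // strip the "R1:" / "R2:" prefix
        if (composite.startsWith("R1:")) {
            // ... apply the first reduce function to (originalKey, values)
        } else {
            // ... apply the second reduce function to (originalKey, values)
        }
    }
}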
I guess you want to run different reducers in a chain. In Hadoop, 'multiple reducers' means running multiple instances of the same reducer. I would propose you run one reducer at a time, providing a trivial map function for all of them except the first one. To minimize data-transfer time, you can use compression.
Of course you can define multiple reducers. For the Job (Hadoop 0.20) just add:
job.setNumReduceTasks(<number>);
But. Your infrastructure has to support the multiple reducers, meaning that you have to
have more than one cpu available
adjust mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml accordingly
And of course your job has to match some specifications. Without knowing exactly what you want to do, I can only give broad tips:
the map-output keys either have to be partitionable by % numReducers, OR you have to define your own partitioner:
job.setPartitionerClass(...)
for example with a random-partitioner ...
the data must be reduce-able in the partitioned format ... (references needed?)
You'll get multiple output files, one for each reducer. If you want a sorted output, you have to add another job reading all files (multiple map-tasks this time ...) and writing them sorted with only one reducer ...
Have a look too at the Combiner-Class, which is the local Reducer. It means that you can aggregate (reduce) already in memory over partial data emitted by map.
A very nice example is the WordCount example. The map emits each word as a key with a count of 1: (word, 1). The combiner gets partial data from the map and emits (word, partial count) locally. The reducer does exactly the same, but now some of the (combined) word counts are already > 1. Saves bandwidth.
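In code, enabling the combiner is a one-line change on the driver; a sketch (WordCountMapper and WordCountReducer stand for the usual word-count classes and are hypothetical names here):
job.setMapperClass(WordCountMapper.class);
job.setCombinerClass(WordCountReducer.class);   // local reduce over each map task's output before the shuffle
job.setReducerClass(WordCountReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
Reusing the reducer as the combiner is only valid here because summing counts is associative and commutative.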
I still don't fully get your problem, but you can use the following sequence:
database --> map --> reduce (use cat or none, depending on the requirement)
then store the data representation you have extracted.
If you are saying that it is small enough to fit in memory, then storing it on disk shouldn't be an issue.
Also, your use of the MapReduce paradigm for the given problem is incorrect: using a single map function and multiple "different" reduce functions makes no sense. It shows that you are just using map to hand data out to different machines to do different things; you don't require Hadoop or any other special architecture for that.
