Meaning of map time or reduce time in JobHistoryServer - hadoop

I want to know the exact meaning of the notations in the picture below, which comes from the JobHistoryServer web UI. I definitely know the meaning of Elapsed, but I am not sure about the others. Where can I find a clear definition of these values, or does anyone know what they mean?
What I want to know is the map time, reduce time, shuffle time and merge time separately. I would expect the sum of the four times to be very similar (or equal) to the elapsed time, but the 'Average' keyword confuses me.
There are 396 map tasks and 1 reduce task.

As you probably already know, there are three phases to a MapReduce job:
Map is the 1st phase, where each Map task is provided with an input split, which is a small portion of the total input data. The Map tasks process data from the input split & output intermediate data which needs to go to the reducers.
Shuffle phase is the next step, where the intermediate data that was generated by Map tasks is directed to the correct reducers. Reducers usually handle a subset of the total number of keys generated by the Map task. The Shuffle phase assigns keys to reducers & sends all values pertaining to a key to the assigned reducer. Sorting (or Merging) is also a part of this phase, where values of a given key are sorted and sent to the reducer. As you may realize, the shuffle phase involves transfer of data across the network from Map -> Reduce tasks.
Reduce is the last step of the MapReduce job. The Reduce tasks process all values pertaining to a key & output their results to the desired location (HDFS/Hive/HBase).
Now coming to the average times, you said there were 396 map tasks. Each Map task is essentially doing exactly the same processing job, but on different chunks of data. So the Average Map time is basically the average of time taken by all 396 map tasks to complete.
Average Map Time = Total time taken by all Map tasks / Number of Map Tasks
Similarly,
Average Reduce Time = Total time taken by all Reduce tasks / Number of Reduce Tasks
Now, why is the average time significant? It is because most, if not all, of your map tasks & reduce tasks will be running in parallel (depending on your cluster capacity, number of slots per node, etc.). So calculating the average time of all map tasks & reduce tasks gives you good insight into the completion time of the Map or Reduce phase as a whole.
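If you want to compute these averages yourself for a completed job, a small client-side sketch like the one below (using the Job/TaskReport API; the job id is only a placeholder) shows the arithmetic:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Cluster;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.JobID;
    import org.apache.hadoop.mapreduce.TaskReport;
    import org.apache.hadoop.mapreduce.TaskType;

    public class AverageMapTime {
        public static void main(String[] args) throws Exception {
            Cluster cluster = new Cluster(new Configuration());
            // "job_1234567890123_0001" is a placeholder job id
            Job job = cluster.getJob(JobID.forName("job_1234567890123_0001"));

            long total = 0;
            TaskReport[] maps = job.getTaskReports(TaskType.MAP);
            for (TaskReport report : maps) {
                // Wall-clock time of each map task, in milliseconds
                total += report.getFinishTime() - report.getStartTime();
            }
            System.out.println("Average map time (ms): " + total / maps.length);
        }
    }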
Another observation from your screenshot is that your Shuffle phase took 40 minutes. There can be several reasons for this.
You have 396 map tasks, each generating intermediate data. The shuffle phase had to pass all this data across the network to just 1 reducer, causing a lot of network traffic & hence increasing transfer time. Maybe you can optimize performance by increasing the number of reducers.
The network itself has very low bandwidth, and cannot efficiently handle large amounts of data transfer. In this case, consider deploying a combiner, which will effectively reduce the amount of data flowing through your network between the map and reduce phases.
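If your reduce operation is associative & commutative (a sum, a count, a max), plugging in a combiner is usually a one-line change in the driver. A sketch, where WordCountMapper and WordCountReducer are placeholders for your own classes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // ... inside your driver ...
    Job job = Job.getInstance(new Configuration(), "combiner-example");
    job.setMapperClass(WordCountMapper.class);     // placeholder mapper
    job.setCombinerClass(WordCountReducer.class);  // runs map-side, pre-aggregates map output
    job.setReducerClass(WordCountReducer.class);   // runs reduce-side on the combined data

The combiner runs on each map node before the shuffle, so only the pre-aggregated records cross the network.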
There are also some hidden costs of execution, such as job setup time, the time required by the JobTracker to contact TaskTrackers & assign map/reduce tasks, the time taken by slave nodes to send heartbeat signals to the JobTracker, the time taken by the NameNode to assign storage blocks & create input splits, etc., which all go into the total elapsed time.
Hope this helps.

Related

How is the number of map and reduce tasks determined?

When running a certain file on Hadoop using MapReduce, sometimes it creates 1 map task and 1 reduce task, while another file can use 4 map tasks and 1 reduce task.
My question is: on what basis is the number of map and reduce tasks decided?
Is there a certain map/reduce size after which a new map/reduce task is created?
Many Thanks Folks.
From the official doc:
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
The ideal reducers should be the optimal value that gets them closest to:
A multiple of the block size
A task time between 5 and 15 minutes
Creates the fewest files possible
Anything other than that means there is a good chance your reducers are less than great. There is a tremendous tendency for users to use a REALLY high value ("More parallelism means faster!") or a REALLY low value ("I don't want to blow my namespace quota!"). Both are equally dangerous, resulting in one or more of:
Terrible performance on the next phase of the workflow
Terrible performance due to the shuffle
Terrible overall performance because you've overloaded the namenode with objects that are ultimately useless
Destroying disk IO for no really sane reason
Lots of network transfers
The number of Mappers is equal to the number of HDFS blocks of the input file that will be processed.
The number of reducers ideally should be about 10% of your total mappers. Say you have 100 mappers; then ideally the number of reducers should be somewhere around 10.
However, it is possible to specify the number of reducers explicitly in your MapReduce job.
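For example, with the newer mapreduce API you can set it in the driver, or per run from the command line (the value 10 and the class/jar names are only illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // ... inside the driver ...
    Job job = Job.getInstance(new Configuration(), "my-job");  // placeholder job name
    job.setNumReduceTasks(10);  // e.g. ~10% of 100 mappers

    // Or per run, without recompiling (when the driver uses ToolRunner):
    // hadoop jar my.jar MyDriver -D mapreduce.job.reduces=10 <input> <output>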

Why the time of Hadoop job decreases significantly when reducers reach certain number

I am testing the scalability of a MapReduce-based algorithm with an increasing number of reducers. It looks fine in general (the time decreases with more reducers). But the time of the job always drops significantly when the number of reducers reaches a certain value (30 in my Hadoop cluster), instead of decreasing gradually. What are the possible causes?
Something about My Hadoop Job:
(1) Light Map phase. Only a few hundred lines of input. Each line generates around five thousand key-value pairs. The whole map phase won't take more than 2 minutes.
(2) Heavy Reduce phase. Each key in the reduce function will match 1-2 thousand values, and the algorithm in the reduce phase is very compute-intensive. Generally the reduce phase takes around 30 minutes to finish.
Time performance plot:
It should be because of the high number of key-value pairs. At a specific number of reducers they get distributed to the reducers almost equally, which results in all reducers finishing their work at nearly the same time. Otherwise it might be the case that the job keeps waiting for 1 or 2 heavily loaded reducers to finish their work.
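For reference, the default HashPartitioner assigns each key to a reducer roughly like this, so how evenly the load spreads depends on how the key hashes map onto the chosen reducer count (a sketch of the default behaviour, not code from the job in question):

    // Every occurrence of a key goes to the same reducer, chosen from its hash code.
    int partition = (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;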
IMHO it could be that, with a sufficient number of reducers available, the network IO (to transfer intermediate results) to each reducer decreases.
Since network IO is usually the bottleneck in most MapReduce programs, this decrease in required network IO gives a significant improvement.

Hadoop shuffle/merge time: average vs. total

Hadoop outputs the following statistics:
average map time
average reduce time
average shuffle time
average merge time
The total map and reduce time can be obtained by multiplying the number of completed maps/reduces with these averages. But how can the total shuffle/merge time be obtained? Or: how is the average shuffle time calculated?
Average Map Time = Total time taken by all Map tasks / Count of Map Tasks
Average Reduce Time = Total time taken by all Reduce tasks / Count of Reduce Tasks
Average Merge time = Average of (attempt.sortFinishTime - attempt.shuffleFinishTime)
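The shuffle average appears to be derived the same way from the per-attempt timestamps; roughly, mirroring the field names above (a sketch, not the exact JobHistoryServer source):

    // Per successful reduce attempt (all values in milliseconds):
    long shuffleTime = attempt.shuffleFinishTime - attempt.launchTime;        // copy phase
    long mergeTime   = attempt.sortFinishTime   - attempt.shuffleFinishTime;  // merge/sort
    long reduceTime  = attempt.finishTime       - attempt.sortFinishTime;     // reduce proper

    // Each "Average" is one of these intervals summed over all reduce attempts
    // and divided by their count, so a total can be recovered by multiplying
    // the average by the number of reduce tasks.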
In the Shuffle phase, the intermediate data generated by the Map tasks is directed to the right reducers. The Shuffle phase assigns keys to reducers & sends all values of a particular key to the assigned reducer.
Sorting also happens in this phase before the values are handed to the Reducer.
The shuffle phase involves transfer of data across the network from the Map nodes.
From the Apache documentation:
Shuffle
Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.
Sort
The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage.
The shuffle and sort phases occur simultaneously; while map-outputs are being fetched they are merged.
The Hadoop framework will execute these two phases: shuffling & sorting.

number of map and reduce task does not change in M/R program

I have a question. I have a MapReduce program that gets its input from Cassandra. My input is a little big, about 100000000 records. My problem is that my program takes too long to process, but I think MapReduce is good and fast for large volumes of data, so maybe the problem is in the number of map and reduce tasks. I set the number of map and reduce tasks with JobConf, with Job, and also in conf/mapred-site.xml, but I don't see any change. In my logs, at first there is map 0% reduce 0%, and after about 2 hours of working it shows map 1% reduce 0%!!
What should I do? Please help me, I am really confused...
Please consider these points to check where the bottleneck might be:
Merely configuring a higher number of map or reduce tasks won't do; you need hardware to support it. Hadoop is fast, but to process a huge file, as you have mentioned, you need more parallel map and reduce tasks running. To achieve that you need more processors, and to get more processors you need more machines (nodes). For example, if you have 2 machines with 8 processors each, you get a total processing power of around 16. So 16 map and reduce tasks can run in parallel, and the next set of tasks comes in as soon as slots become unoccupied out of the 16 slots you have. Now, when you add one more machine with 8 processors, you have 24.
The algorithms you used for map and reduce. Even though you have the processing power, that doesn't mean your Hadoop application will perform unless your algorithm performs. It might be the case that a single map task takes forever to complete.

When do reduce tasks start in Hadoop?

In Hadoop when do reduce tasks start? Do they start after a certain percentage (threshold) of mappers complete? If so, is this threshold fixed? What kind of threshold is typically used?
The reduce phase has 3 steps: shuffle, sort, reduce. Shuffle is where the data is collected by the reducer from each mapper. This can happen while mappers are generating data since it is only a data transfer. On the other hand, sort and reduce can only start once all the mappers are done. You can tell which one MapReduce is doing by looking at the reducer completion percentage: 0-33% means it's doing the shuffle, 34-66% is sort, 67-100% is reduce. This is why your reducers will sometimes seem "stuck" at 33%: they are waiting for mappers to finish.
Reducers start shuffling based on a threshold of percentage of mappers that have finished. You can change the parameter to get reducers to start sooner or later.
Why is starting the reducers early a good thing? Because it spreads out the data transfer from the mappers to the reducers over time, which is a good thing if your network is the bottleneck.
Why is starting the reducers early a bad thing? Because they "hog up" reduce slots while only copying data and waiting for mappers to finish. Another job that starts later that will actually use the reduce slots now can't use them.
You can customize when the reducers start up by changing the default value of mapred.reduce.slowstart.completed.maps in mapred-site.xml. A value of 1.00 will wait for all the mappers to finish before starting the reducers. A value of 0.0 will start the reducers right away. A value of 0.5 will start the reducers when half of the mappers are complete. You can also change mapred.reduce.slowstart.completed.maps on a job-by-job basis. In new versions of Hadoop (at least 2.4.1) the parameter is called mapreduce.job.reduce.slowstart.completedmaps (thanks, user yegor256).
Typically, I like to keep mapred.reduce.slowstart.completed.maps above 0.9 if the system ever has multiple jobs running at once. This way the job doesn't hog up reducers when they aren't doing anything but copying data. If you only ever have one job running at a time, doing 0.1 would probably be appropriate.
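For example, assuming the newer property name, a per-job override in the driver might look like this (0.90 is just the illustrative value discussed above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Don't start this job's reducers until 90% of its map tasks have completed.
    // Older releases use the name "mapred.reduce.slowstart.completed.maps".
    Configuration conf = new Configuration();
    conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.90f);
    Job job = Job.getInstance(conf, "my-job");  // "my-job" is a placeholder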
The reduce phase can start long before a reducer is called. As soon as "a" mapper finishes its job, the generated data undergoes some sorting and shuffling (which includes calls to the combiner and partitioner). The reducer "phase" kicks in the moment this post-mapper data processing starts. As this processing is done, you will see progress in the reducers' percentage; however, none of the reducers have been called yet. Depending on the number of processors available/used, the nature of the data and the number of expected reducers, you may want to change the parameter as described by Donald Miner above.
As far as I understand, the reduce phase starts with the map phase and keeps consuming records from the maps. However, since there are sort and shuffle phases after the map phase, all the outputs have to be sorted and sent to the reducer. So logically you can imagine that the reduce phase starts only after the map phase, but for performance reasons the reducers are also initialized along with the mappers.
The percentage shown for the reduce phase is actually about the amount of data copied from the maps' output to the reducers' input directories.
When does this copying start? It is a configuration you can set, as Donald showed above. Once all the data is copied to the reducers (i.e. 100% reduce), that's when the reducers start working, and hence the job might seem to freeze at "100% reduce" if your reducer code is I/O or CPU intensive.
Reduce starts only after all the mappers have finished their tasks. The reducer has to communicate with all the mappers, so it has to wait until the last mapper has finished its task; however, a mapper starts transferring data the moment it has completed its task.
Consider a WordCount example in order to better understand how the MapReduce task works. Suppose we have a large file, say a novel, and our task is to find the number of times each word occurs in the file. Since the file is large, it might be divided into different blocks and replicated on different worker nodes. The word count job is composed of map and reduce tasks. The map task takes each block as input and produces intermediate key-value pairs. In this example, since we are counting the number of occurrences of words, the mapper, while processing a block, produces intermediate results of the form (word1,count1), (word2,count2), etc. The intermediate results of all the mappers are passed through a shuffle phase, which reorders the intermediate results.
Assume that our map output from different mappers is of the following form:
Map 1:-
(is,24)
(was,32)
(and,12)
Map 2:-
(my,12)
(is,23)
(was,30)
The map outputs are sorted in such a manner that the same key values are given to the same reducer. Here it would mean that the keys corresponding to is, was, etc. go to the same reducer. It is the reducer which produces the final output, which in this case would be:
(and,12)(is,47)(my,12)(was,62)
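A minimal version of the mapper and reducer described above looks roughly like this (a sketch using the new mapreduce API; class names are just illustrative):

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {

        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Emit (word, 1) for every word in the input line
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                // Sum the counts emitted for this word by all the mappers
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }
    }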
Reducer tasks start only after the completion of all the mappers.
But the data transfer happens after each map.
Actually it is a pull operation.
That means each reducer keeps asking every map task whether it has some data to retrieve; if it finds that a mapper has completed its task, the reducer pulls the intermediate data.
The intermediate data from a mapper is stored on disk.
And the data transfer from mapper to reducer happens over the network (data locality is not preserved in the reduce phase).
When the mapper finishes its task, then the reducer starts its job of reducing the data; this is the MapReduce job.
