How to check that a MapReduce job is running in parallel? - hadoop

I submitted a MapReduce job and checked the log.
In the log I see that there are many mappers, each mapper processes one split, and the processing details of each mapper are logged sequentially in time.
However, I would like to check whether my job is actually running in parallel, and I want to see how many mappers are running concurrently.
I don't know where to find this information.
Please help me, thanks!

Use the following JobTracker web UI and drill down to the running MapReduce job:
http://<Jobtracker-HostName>:50030/
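If you prefer to pull the same information from code instead of the web UI, below is a minimal, untested Java sketch against the classic JobClient API. The class name is mine, and the job ID (the job_... string shown in the JobTracker UI and in your client log) is expected as the first argument:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TIPStatus;
import org.apache.hadoop.mapred.TaskReport;

public class RunningMapCounter {
    public static void main(String[] args) throws Exception {
        // Job ID is passed on the command line, e.g. the job_... id from your log
        JobID jobId = JobID.forName(args[0]);

        // Picks up the cluster configuration (mapred-site.xml etc.) from the classpath
        JobClient client = new JobClient(new JobConf());

        int running = 0;
        for (TaskReport report : client.getMapTaskReports(jobId)) {
            if (report.getCurrentStatus() == TIPStatus.RUNNING) {
                running++;
            }
        }
        System.out.println("Map tasks currently running in parallel: " + running);
    }
}

These are the same numbers the JobTracker UI shows on the job page; the code route is only useful if you want to poll or log them while the job runs.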

Related

Map-reduce via Oozie

If I am using Oozie to run a MapReduce job, is there a specific number of mappers that will be started?
Is it:
one for Oozie and one for the MapReduce job, or
one for Oozie and one mapper for every 64 MB block (the default block size)?
The above answers focus on how many maps and reduces a MapReduce job needs. However, as you specifically ask about Oozie, I will share my experience with MapReduce (in Pig) via Oozie.
Explanation
When you kick off an Oozie workflow, you need one YARN application for it. I am not sure what the exact logic is, but it appears that these applications usually require one map task, and occasionally two.
Besides the above, you still need the same number of mappers and reducers to do the actual work as if you did not use Oozie. (If you see a different number than you expected, this may be because you passed specific map or reduce properties when calling the script.)
Warning
The above means that if you have 100 available containers and kick off 100 workflows (for example by starting a daily job with a start date 100 days in the past), it is likely that the workflows take up all available containers and the actual work is suspended indefinitely.
Short answer: Oozie launches a MapReduce job by submitting a map-only job to the cluster, called the Oozie launcher. I agree with @Dennis Jaheruddin.
Detailed answer after my research: Oozie's execution model
Oozie’s execution model is different from
the default approach users take to run Hadoop jobs. When a user
invokes the Hadoop, Hive, or Pig CLI tool from a Hadoop edge node, the
corresponding client executable runs on that node which is configured
to contact and submit jobs to the Hadoop cluster. When the same jobs
are defined and submitted via an Oozie workflow action, things work
differently.
Let’s say you are submitting a workflow job using the Oozie CLI on the
edge node. The Oozie client actually submits the workflow to the Oozie
server, which typically runs on a different node. Regardless of where
it runs, it’s the Oozie server’s responsibility to submit and run the
underlying MapReduce jobs on the Hadoop cluster. Oozie doesn’t do so
by using the standard client tools installed locally on the Oozie
server node. Instead, it first submits a MapReduce job called the
“launcher job,” which in turn runs the Hadoop, Hive, or Pig job using
the appropriate client APIs.
Important note: The Oozie launcher is basically a map-only job running a single mapper
on the Hadoop cluster. This map job knows what to do for the specific
action it’s supposed to run and does the appropriate thing by using
the libraries for Hadoop, Pig, etc. This will result in other
MapReduce jobs being spun up as required. These Oozie jobs are called
“asynchronous actions” in Oozie parlance. Oozie doesn’t run these
actions in its own server, but kicks them off on the Hadoop cluster
using a launcher job. The reason Oozie server “outsources” the
launcher to the Hadoop cluster is to protect itself from unexpected
workloads and also to isolate user code from its own services. After
all, Oozie has access to an awesome distributed system in the form of
a Hadoop cluster.
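To make the "the Oozie client just submits the workflow to the Oozie server" part concrete, here is a hedged Java sketch using the Oozie client API. The server URL, HDFS paths and the nameNode/jobTracker properties are placeholders for whatever your own workflow parameterizes; only OozieClient.APP_PATH is an actual constant of the API:

import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        // Placeholder Oozie server URL
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = oozie.createConfiguration();
        // Placeholder workflow location and workflow parameters
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/joe/workflow");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "jobtracker-host:8021");

        // run() only hands the workflow over to the Oozie server; the server
        // then submits the single-mapper launcher job to the cluster itself.
        String jobId = oozie.run(conf);
        WorkflowJob.Status status = oozie.getJobInfo(jobId).getStatus();
        System.out.println(jobId + " -> " + status);
    }
}

Note that nothing in this sketch invokes the Hadoop, Hive, or Pig client tools locally, which is exactly the point of the execution model described above.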
Coming to MapReduce actions: you can set the number of map tasks, but there is no guarantee it will be honored; the actual number depends on the factors described below.
The number of maps is usually driven by the total size of the inputs,
that is, the total number of blocks of the input files.
Setting the number of maps is only a suggestion (the actual number is driven by the input splits).
Setting the number of reducers is a demand that is honored exactly.
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files, which causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although we have taken it up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
The number of mappers depends on the number of logical input splits; it does not depend on the number of blocks. You can control the number of input splits from your program.
Refer to https://hadoopi.wordpress.com/2013/05/27/understand-recordreader-inputsplit/ for more information about how input splits affect the number of mappers and how to create input splits.
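To show the suggestion-versus-demand distinction in code, here is a minimal new-API Java driver sketch. The split sizes and the reducer count are arbitrary example values, and the Mapper/Reducer classes are left at the framework defaults just to keep it short:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitTuning {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split tuning example");
        job.setJarByClass(SplitTuning.class);

        // Reducers: a demand, honored exactly as requested.
        job.setNumReduceTasks(4);

        // Mappers: cannot be set directly; you only influence the count by
        // changing the split size. One map task is launched per input split.
        FileInputFormat.setMinInputSplitSize(job, 128 * 1024 * 1024L); // 128 MB
        FileInputFormat.setMaxInputSplitSize(job, 256 * 1024 * 1024L); // 256 MB

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

In the old mapred API there is also JobConf.setNumMapTasks(int), but as the quote above says, that is only a hint to the framework.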

Check and verify the number of nodes in Hadoop MapReduce?

How can I know, after the job has completed, how many nodes actually ran that job, how many map tasks there were, and how many reduce tasks?
Thanks....
You could use the JobTracker UI for this. It runs on port 50030 by default, and the URL looks like http://myhost:50030/.
Once you go there, you can see how many mappers and how many reducers were used by your job. You can drill down further by clicking on the job link itself.
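If you want the same numbers programmatically after the job has finished, here is a rough Java sketch using the Hadoop 2.x client API. It assumes the job is still retrievable (recent enough to be served by the JobTracker or the job history server) and that the job ID is passed as an argument:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobCounter;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TaskCompletionEvent;

public class JobStats {
    public static void main(String[] args) throws Exception {
        Cluster cluster = new Cluster(new Configuration());
        Job job = cluster.getJob(JobID.forName(args[0])); // your job_... id

        Counters counters = job.getCounters();
        System.out.println("Launched map tasks:    "
                + counters.findCounter(JobCounter.TOTAL_LAUNCHED_MAPS).getValue());
        System.out.println("Launched reduce tasks: "
                + counters.findCounter(JobCounter.TOTAL_LAUNCHED_REDUCES).getValue());

        // Distinct tracker addresses that ran at least one task attempt.
        Set<String> hosts = new HashSet<String>();
        for (TaskCompletionEvent event : job.getTaskCompletionEvents(0, 10000)) {
            hosts.add(event.getTaskTrackerHttp());
        }
        System.out.println("Nodes involved (approx.): " + hosts.size());
        cluster.close();
    }
}

Counting distinct tracker addresses only approximates the number of nodes, since it reflects where task attempts actually ran rather than the cluster size.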

MapReduce dataflow internals

I tried to understand the anatomy of MapReduce from various books and blogs, but I am not getting a clear idea.
What happens when I submit a job to the cluster using this command?
(The files are already loaded into HDFS.)
bin/hadoop jar /usr/joe/wordcount.jar org.myorg.WordCount /usr/joe/wordcount/input /usr/joe/wordcount/output
Can anyone explain the sequence of operations that happens, right from the client through to the cluster?
The process goes like this:
1- The client configures and sets up the job via Job and submits it to the JobTracker (a minimal driver sketch is shown after this list).
2- Once the job has been submitted, the JobTracker assigns a job ID to it.
3- Then the output specification of the job is verified. For example, if the output directory has not been specified or it already exists, the job is not submitted and an error is thrown to the MapReduce program.
4- Once this is done, InputSplits for the job are created (based on the InputFormat you are using). If the splits cannot be computed, for example because the input paths don't exist, the job is not submitted and an error is thrown to the MapReduce program.
5- Based on the number of InputSplits, map tasks are created, and each InputSplit gets processed by one map task.
6- Then the resources required to run the job, such as the job JAR file and the configuration file, are copied across the cluster. The job JAR is copied with a high replication factor (which defaults to 10) so that there are plenty of copies across the cluster for the TaskTrackers to access when they run tasks for the job.
7- Then, based on the location of the data blocks to be processed, the JobTracker directs TaskTrackers to run map tasks on the same DataNodes where those blocks reside. If there are no free CPU slots on such a DataNode, the map task is scheduled on a nearby node with free slots and the block is read over the network, so processing continues without having to wait.
8- Once the map phase starts, individual records (key-value pairs) from each InputSplit are processed by the Mapper one by one until the entire InputSplit has been consumed.
9- Once the map phase is over, the output undergoes shuffle, sort, and combine. After this the reduce phase starts, giving you the final output.
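To ground step 1, here is roughly what the driver behind a command like the one above (org.myorg.WordCount) looks like in the new Java API. This is the standard WordCount example rather than the asker's exact code; everything from step 2 through step 7 is triggered by the single waitForCompletion() call:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE); // one (word, 1) pair per token
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        // Step 1 of the list above: the client builds the job description...
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // ...and this call performs the submission: ID assignment, output check,
        // split computation and JAR distribution all happen behind it.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}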
Also, I would suggest you go through this link.
HTH

How to check the overall progress of a Pig job

A Pig script can be translated into multiple MR jobs, and I am wondering whether there is an interface or a way to see the progress of the overall Pig script, such as how many jobs are scheduled, how many have executed, and so on.
We had the same problem at Twitter, as some of our Pig scripts spin up dozens of MapReduce jobs and it's sometimes hard to tell which of them is doing what, reason about the efficiency of the plan, understand how many will run in parallel, and so on.
So we created Twitter Ambrose: https://github.com/twitter/ambrose
It spins up a little Jetty server which gives you a nice web UI that shows the job DAG, colors the nodes as the jobs complete, gives you stats about the jobs, and tells you which relations each job is trying to calculate.
There is an illustrate command, but it throws an exception on my deployment, so I use another approach.
You can get the number of MR jobs that will be scheduled by using the explain command and looking at the MapReduce plan section at the end of the explain report. To get the number of MR jobs for the script, I do the following:
./pig -e 'explain -script ./script_name.pig' > ./explain.txt
grep MapReduce ./explain.txt | wc -l
Now we have the number of MR jobs planned. To monitor the script's execution: before you run it, open Hadoop's JobTracker page (via "http://(IP_or_node_name):50030/jobtracker.jsp") and write down the name of the last job in the Completed Jobs section. Then submit the script, refresh the JobTracker page, and count how many jobs are running and how many have completed after the one you noted. This gives you an idea of how many jobs are left to execute.
Click on each job to see its statistics and progress.
A much simpler approach is to run the script on a small dataset and note down the number of jobs; it is displayed in the console output after the script finishes. Since Pig does not change its execution plan, the number will be the same with the big dataset. By looking at the stats of each job on Hadoop's JobTracker page (via "http://(IP_or_node_name):50030/jobtracker.jsp") you can get an idea of the proportion of time each MR job takes, and then use that to roughly extrapolate the execution time on the large dataset. If you have skewed data or Cartesian products, predicting the execution time can become tricky.

A tool showing a breakdown of completion times and source machine names for each and every mapper and reducer?

I know the job tasks page (in the JobTracker UI) already shows the start and end time of every map and reduce task, but I would like to see something more, such as source machine names, the number of spills, and so on. I guess I could try to write such a tool using the JobTracker class, but before embarking on that I would like to know whether such a tool already exists.
Does the hadoop job -history all output-dir command give you enough information to parse / process?
http://hadoop.apache.org/common/docs/r1.0.3/cluster_setup.html - Search for the above command
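If the history output turns out to be awkward to parse, a small Java program against the JobClient API can pull per-task start/finish times and the TaskTracker each attempt ran on. This is an untested sketch and the class name is mine:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskCompletionEvent;
import org.apache.hadoop.mapred.TaskReport;

public class TaskTimings {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        JobID jobId = JobID.forName(args[0]); // your job_... id

        // Elapsed time of every map and reduce task.
        for (TaskReport report : client.getMapTaskReports(jobId)) {
            System.out.println("map    " + report.getTaskID() + "  "
                    + (report.getFinishTime() - report.getStartTime()) + " ms");
        }
        for (TaskReport report : client.getReduceTaskReports(jobId)) {
            System.out.println("reduce " + report.getTaskID() + "  "
                    + (report.getFinishTime() - report.getStartTime()) + " ms");
        }

        // Which TaskTracker each attempt ran on (the host is part of the URL).
        RunningJob job = client.getJob(jobId);
        for (TaskCompletionEvent event : job.getTaskCompletionEvents(0)) {
            System.out.println(event.getTaskAttemptId() + "  " + event.getTaskTrackerHttp());
        }
    }
}

The number of spills is not in the task reports; for that you would have to read the task-level counters (e.g. SPILLED_RECORDS) via the counters API or from the job history.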
