Check and verify the number of nodes in Hadoop MapReduce? - hadoop

How can I know, after a job has completed, how many nodes actually ran that job, how many map tasks there were, and how many reduce tasks?
Thanks....

You could use the JobTracker UI for this. It runs on port 50030 by default, so the URL would look like http://myhost:50030/.
Once there, you can see how many mappers and how many reducers were used by your job. You can explore further by clicking on the job link itself.
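For a programmatic check once the job has finished, a minimal sketch along these lines could read the framework counters (assuming the newer org.apache.hadoop.mapreduce API and that your driver still holds the submitted Job object; the counters give task counts, while the set of nodes used is easiest to read from the task pages of the UI):
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobCounter;

public class JobTaskCounts {
    // Call this after job.waitForCompletion(true) has returned in your driver.
    public static void printTaskCounts(Job job) throws Exception {
        Counters counters = job.getCounters();
        long maps = counters.findCounter(JobCounter.TOTAL_LAUNCHED_MAPS).getValue();
        long reduces = counters.findCounter(JobCounter.TOTAL_LAUNCHED_REDUCES).getValue();
        System.out.println("Map tasks launched:    " + maps);
        System.out.println("Reduce tasks launched: " + reduces);
    }
}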

Related

Map-reduce via Oozie

If I am using Oozie to run a MapReduce job, is there a specific rule for how many mappers will be started?
Is it:
one for Oozie and one for the map-reduce job, or
one for Oozie and one mapper for every 64 MB block (the default block size)?
The other answers focus on how many maps and reduces a MapReduce job needs. However, as you specifically ask about Oozie, I will share my experience running MapReduce (via Pig) through Oozie.
Explanation
When you kick off an Oozie workflow, one YARN application is needed for the workflow itself. I am not sure of the exact logic, but it appears that these applications usually require 1 map task, and occasionally 2.
Besides the above, you still need the same number of mappers and reducers to do the actual work as if you did not use Oozie. (If you see a different number than you expected, this may be because you passed specific map or reduce properties when calling the script.)
Warning
The above means that if you have 100 available containers and kick off 100 workflows (for example, by starting a daily job with a start date 100 days in the past), it is likely that the workflows take up all available containers, and the actual work is suspended indefinitely.
Short answer: Oozie launches a MapReduce job by submitting a single map-only job, called the Oozie launcher, to the cluster. I agree with @Dennis Jaheruddin.
Detailed answer after my research: Oozie's execution model
Oozie’s execution model is different from
the default approach users take to run Hadoop jobs. When a user
invokes the Hadoop, Hive, or Pig CLI tool from a Hadoop edge node, the
corresponding client executable runs on that node which is configured
to contact and submit jobs to the Hadoop cluster. When the same jobs
are defined and submitted via an Oozie workflow action, things work
differently.
Let’s say you are submitting a workflow job using the Oozie CLI on the
edge node. The Oozie client actually submits the workflow to the Oozie
server, which typically runs on a different node. Regardless of where
it runs, it’s the Oozie server’s responsibility to submit and run the
underlying MapReduce jobs on the Hadoop cluster. Oozie doesn’t do so
by using the standard client tools installed locally on the Oozie
server node. Instead, it first submits a MapReduce job called the
“launcher job,” which in turn runs the Hadoop, Hive, or Pig job using
the appropriate client APIs.
Important note: The Oozie launcher is basically a map-only job running a single mapper
on the Hadoop cluster. This map job knows what to do for the specific
action it’s supposed to run and does the appropriate thing by using
the libraries for Hadoop, Pig, etc. This will result in other
MapReduce jobs being spun up as required. These Oozie jobs are called
“asynchronous actions” in Oozie parlance. Oozie doesn’t run these
actions in its own server, but kicks them off on the Hadoop cluster
using a launcher job. The reason Oozie server “outsources” the
launcher to the Hadoop cluster is to protect itself from unexpected
workloads and also to isolate user code from its own services. After
all, Oozie has access to an awesome distributed system in the form of
a Hadoop cluster.
Coming to MapReduce actions: you can set the number of map tasks, but there is no guarantee it will be honored; it depends on the input, as described below.
The number of maps is usually driven by the total size of the inputs,
that is, the total number of blocks of the input files.
Setting the number of maps - a suggestion only (the actual number is driven by the input splits)
Setting the number of reducers - a demand (the requested number is used)
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size in order to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
The number of mappers depends on the number of logical input splits; it does not depend directly on the number of blocks. You can control the number of input splits from your program.
Refer to https://hadoopi.wordpress.com/2013/05/27/understand-recordreader-inputsplit/ for more information about how input splits affect the number of mappers and how to create input splits.
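To make the distinction concrete, here is a minimal driver sketch; the class name SplitControlDriver, the split-size bounds, and the reducer count are illustrative assumptions. It relies on FileInputFormat's min/max split size hints to influence the mapper count and on job.setNumReduceTasks for the reducer count:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitControlDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split control example");
        job.setJarByClass(SplitControlDriver.class);

        // The number of maps is only a suggestion: it follows the input splits.
        // Bounding the split size changes how many splits (and therefore
        // mappers) are created for a given input.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // 128 MB

        // The number of reducers, by contrast, is used exactly as requested.
        job.setNumReduceTasks(4);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}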

How to check that MapReduce is running in parallel?

I submitted a MapReduce job and checked the log.
In the log I see that there are many mappers, each mapper processes one split, and the processing details of each mapper are logged sequentially in time.
However, I would like to check whether my job is running in parallel, and I want to see how many mappers are running concurrently.
I don't know where to find this information.
Please help me, thanks!
Use the following JobTracker web UI and drill down to the executing MapReduce job:
http://<Jobtracker-HostName>:50030/
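If you would rather check from code while the job is in flight, a rough sketch like this could count how many map attempts are currently in the RUNNING state (assuming the newer mapreduce API and that your driver still has the Job object it submitted; the class and method names here are made up for illustration):
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TIPStatus;
import org.apache.hadoop.mapreduce.TaskReport;
import org.apache.hadoop.mapreduce.TaskType;

public class RunningMapperCount {
    // Poll this periodically after job.submit() while the job is running.
    public static int countRunningMappers(Job job) throws Exception {
        int running = 0;
        for (TaskReport report : job.getTaskReports(TaskType.MAP)) {
            if (report.getCurrentStatus() == TIPStatus.RUNNING) {
                running++;
            }
        }
        return running;
    }
}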

Does it matter where I submit hadoop jobs from?

Does it have any measurable effect on resources whether I submit a bunch of hadoop jobs from different client servers or all from the same one? I would think not since all the work is done in the cluster. Is this correct?
The only resource-intensive thing on the client submitting to the Hadoop cluster is the calculation of the input splits. When the input data is huge, or when too many jobs are submitted from the same client, the input split calculations can make job submission a bit slow.
I am not able to recall the exact Hadoop release or parameter, but a configurable parameter was added to move the calculation of the input splits from the submitting client to the Hadoop cluster.
It really shouldn't matter where you submit your jobs from. The client itself doesn't do much; it uses the RPC protocol to contact the services and then just sits idle until the job is finished.
Also, what matters most is the kind of scheduler you use to allocate resources, since that is going to make the most significant difference and decide which resources to allocate to which job. More on job scheduling here.
I don't think you can move the input split calculation into the JobTracker in the "classic" (MRv1) version. In YARN, you can move it using
"yarn.app.mapreduce.am.compute-splits-in-cluster"
I am guessing the Hadoop developers didn't want to overload the JobTracker with input split creation, similar to the design decision of not assigning too much work to the NameNode in HDFS.
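For reference, a minimal sketch of flipping that switch from a driver; this assumes the property name is unchanged in your Hadoop release and it only applies to YARN/MRv2:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SplitsInClusterExample {
    public static Job buildJob() throws Exception {
        Configuration conf = new Configuration();
        // Ask the MapReduce ApplicationMaster to compute the input splits in
        // the cluster instead of on the submitting client.
        conf.setBoolean("yarn.app.mapreduce.am.compute-splits-in-cluster", true);
        return Job.getInstance(conf, "splits computed in cluster");
    }
}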
In YARN, every job gets its own ApplicationMaster, so there are no worries about overloading a SPOF/bottleneck master like the JobTracker.
In reference to the original question, the client would have to reach out to the NameNode to get the block locations (I have seen parts of the code in the block storage class calling the DataNode for some metadata... not sure whether these calls happen during input split creation or on the TaskTracker node). This can become an issue if you are handling a lot of jobs on the same client node.
If you are using YARN, there would be a slight performance increase if all these communications happen inside the cluster.
Need to check how Oozie handles this issue.
Hopefully, this helps!
Arun

A tool showing a breakdown of completion times and source machine names for each and every mapper and reducer?

I know the job tasks page (in the JobTracker UI) already shows the start time and end time of every map and reduce task, but I would like to see something more, like source machine names, number of spills, and so on. I guess I could try to write such a tool using the JobTracker class? But before embarking on that, I would like to see if such a tool already exists.
Does the hadoop job -history all output-dir command give you enough information to parse / process?
http://hadoop.apache.org/common/docs/r1.0.3/cluster_setup.html - Search for the above command
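If the command output is not detailed enough, a rough sketch along these lines could pull per-attempt host names and run times out of a finished job's history file (this assumes a Hadoop release that ships the JobHistoryParser API and that you pass the path of the job's history file on HDFS; the output format is just an example):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo;

public class TaskBreakdown {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path historyFile = new Path(args[0]); // path to the job's history file on HDFS

        JobInfo jobInfo = new JobHistoryParser(fs, historyFile).parse();
        for (TaskInfo task : jobInfo.getAllTasks().values()) {
            for (TaskAttemptInfo attempt : task.getAllTaskAttempts().values()) {
                // Print attempt id, task type, host it ran on, and elapsed time.
                System.out.println(attempt.getAttemptId()
                        + "\t" + attempt.getTaskType()
                        + "\t" + attempt.getHostname()
                        + "\t" + (attempt.getFinishTime() - attempt.getStartTime()) + " ms");
            }
        }
    }
}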

Why are all the reduce tasks ending up on a single machine?

I wrote a relatively simple map-reduce program on the Hadoop platform (Cloudera distribution). Each map and reduce task writes some diagnostic information to standard output in addition to its regular work.
However, when looking at these log files, I found that the map tasks are relatively evenly distributed among the nodes (I have 8 nodes), but the reduce task standard output logs can only be found on one single machine.
I guess that means all the reduce tasks ended up executing on a single machine, which is problematic and confusing.
Does anybody have any idea what's happening here? Is it a configuration problem?
How can I make the reduce tasks also distribute evenly?
If the output records from your mappers all have the same key, they will be sent to a single reducer.
If your job has multiple reducers, but they all queue up on a single machine, then you have a configuration issue.
Use the web interface (http://MACHINE_NAME:50030) to monitor the job and see how many reducers it has as well as which machines are running them. Other information can be drilled into there that should be helpful in figuring out the issue.
A couple of questions about your configuration:
How many reducers are running for the job?
How many reducers are available on each node?
Does the node running the reducer have better hardware than the other nodes?
Hadoop decides which reducer will process which output keys through the use of a Partitioner.
If you are only outputting a few keys and want an even distribution across your reducers, you may be better off implementing a custom Partitioner for your output data, e.g.:
public class MyCustomPartitioner<KEY, VALUE> extends Partitioner<KEY, VALUE>
{
    @Override
    public int getPartition(KEY key, VALUE value, int numPartitions) {
        // Do something based on the key or value to determine which
        // partition we want it to go to; here, hash the key and fold it
        // into the number of partitions.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
You can then set this custom partitioner in the job configuration with
Job job = new Job(conf, "My Job Name");
job.setPartitionerClass(MyCustomPartitioner.class);
You can also implement the Configurable interface in your custom Partitioner if you want to do any further configuration based on job settings.
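For example, here is a sketch of a partitioner that also implements Configurable so it can read a job property; the property name my.partitioner.buckets and the Text/IntWritable types are illustrative assumptions:
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ConfigurablePartitioner extends Partitioner<Text, IntWritable>
        implements Configurable {

    private Configuration conf;
    private int buckets;

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
        // Read a custom job setting; "my.partitioner.buckets" is only an example name.
        this.buckets = conf.getInt("my.partitioner.buckets", 10);
    }

    @Override
    public Configuration getConf() {
        return conf;
    }

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Spread keys over the configured number of buckets, folded into the
        // number of partitions (reducers) the job actually has.
        int bucket = (key.hashCode() & Integer.MAX_VALUE) % buckets;
        return bucket % numPartitions;
    }
}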
Also, check that you haven't set the number of reduce tasks to 1 anywhere in the configuration (look for "mapred.reduce.tasks") or in code, e.g.
job.setNumReduceTasks(1);
