Hadoop MapReduce detailed task status queries

I want to write a third-party frontend to Hadoop MapReduce which needs to query MapReduce for some information and statistics.
Right now I'm able to use hadoop job to query jobs and their map and reduce completion percentages, along with counters, e.g.:
# hadoop job -status job_201212170023_0127
Job: job_201212170023_0127
map() completion: 0.6342382
reduce() completion: 0.0
Counters: 28
Job Counters
SLOTS_MILLIS_MAPS=4537
...
What I would also like are the per-task numbers, as shown in the task visualisation in the JobTracker web UI.
I am able to list all the mappers...
# hadoop job -list-attempt-ids job_201212170023_0127 map running
attempt_201212170023_0127_m_000000_0
attempt_201212170023_0127_m_000001_0
attempt_201212170023_0127_m_000002_0
...
...but how would I get the completion percentage of each of these tasks? Ideally I would want something like this:
# hadoop job -task-status attempt_201212170023_0127_m_000000_0
completion: 0.6342382
start: 2012-12-18T12:23:34Z
... etc.
The current fallback would be to scrape the web interface, but I'd rather avoid that if the command-line output can provide this at all.
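For a programmatic route (rather than parsing CLI output), the Hadoop 1.x Java client API that matches these JobTracker-era job IDs exposes per-task reports directly. A minimal sketch, assuming the classic org.apache.hadoop.mapred API; the class name TaskProgress is just a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

public class TaskProgress {
    public static void main(String[] args) throws Exception {
        // Connects to the JobTracker configured in the classpath's *-site.xml files
        JobClient client = new JobClient(new JobConf(new Configuration()));
        JobID jobId = JobID.forName("job_201212170023_0127");
        // One TaskReport per map task: progress (0.0-1.0), start time, state, counters
        for (TaskReport report : client.getMapTaskReports(jobId)) {
            System.out.println(report.getTaskID()
                    + " completion: " + report.getProgress()
                    + " start: " + report.getStartTime());
        }
    }
}

getReduceTaskReports(jobId) gives the same information for the reduce side.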

Related

Spark streaming jobs duration in program

How do I get, from within my program (which is running the Spark streaming job), the time taken for each RDD job? For example:
val streamrdd = KafkaUtils.createDirectStream[String, String, StringDecoder,StringDecoder](ssc, kafkaParams, topicsSet)
val processrdd = streamrdd.map(some operations...).savetoxyz
In the above code, a job is run for the map and save operations of each micro-batch RDD.
I want to get the time taken by each streaming job. I can see the jobs in the UI on port 4040, but I want to get this in the Spark code itself.
Pardon me if my question is not clear.
You can use a StreamingListener in your Spark app. This interface provides a method, onBatchCompleted, that gives you the total time taken by each batch's jobs.
context.addStreamingListener(new StatusListenerImpl());
StatusListenerImpl is the implementation class that you have to write against the StreamingListener interface.
There are other methods available in the listener as well; you should explore them too.
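A minimal sketch of what StatusListenerImpl might look like, assuming a Spark build compiled against Scala 2.12+ so that the trait's no-op default methods are visible from Java (on older builds you would also override the remaining callbacks with empty bodies); the printed fields are only an example:

import org.apache.spark.streaming.scheduler.BatchInfo;
import org.apache.spark.streaming.scheduler.StreamingListener;
import org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted;

public class StatusListenerImpl implements StreamingListener {
    @Override
    public void onBatchCompleted(StreamingListenerBatchCompleted batchCompleted) {
        BatchInfo info = batchCompleted.batchInfo();
        // processingDelay()/totalDelay() are scala.Option values holding milliseconds
        if (info.totalDelay().isDefined()) {
            System.out.println("Batch " + info.batchTime()
                    + ": processing " + info.processingDelay().get() + " ms"
                    + ", total " + info.totalDelay().get() + " ms");
        }
    }
}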

Hadoop passing variables from reducer to main

I am working on a MapReduce program. I'm trying to pass parameters to the context configuration in the reduce method using the setLong method, and then read them in main after the job completes.
In the reducer:
context.getConfiguration().setLong(key, someLong);
In main, after the job completes, I try to read it using:
long val = job.getConfiguration().getLong(key, -1);
but I always get -1.
When I try reading inside the reducer I see that the value is set and I get the correct answer.
Am I missing something?
Thank you
You can use counters: set and update their values in the reducers, and then you can access them in your client application (main).
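A minimal sketch of the counter round trip; the class, group and counter names are made up for illustration:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class CounterExample {

    public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            // Counter increments are aggregated by the framework across all reduce tasks
            context.getCounter("MY_STATS", "TOTAL_SUM").increment(sum);
            context.write(key, new LongWritable(sum));
        }
    }

    // Client side (main): read the aggregated value back after the job finishes
    static long readTotal(Job job) throws Exception {
        job.waitForCompletion(true);
        return job.getCounters().findCounter("MY_STATS", "TOTAL_SUM").getValue();
    }
}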
You can pass configuration from main to the map or reduce tasks, but you cannot pass it back. The way configuration is propagated is:
A configuration file is generated on the MapReduce client from the configuration you set in main, and it is pushed to an HDFS path shared only by that job. The file is read-only.
When a map or reduce task is launched, the configuration file is pulled from that HDFS path, and the task initializes its configuration from that file.
If you want to pass something back, you can use another HDFS file: write it from the reducer and read it after the job completes, as sketched below.
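A minimal sketch of that approach; the HDFS path is made up, and it assumes a single reducer writes the value (with several reducers, each would need its own file):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSideChannel {
    private static final Path RESULT = new Path("/tmp/myjob/result");

    // Call from Reducer#cleanup(), passing context.getConfiguration()
    static void writeResult(Configuration conf, long value) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        try (FSDataOutputStream out = fs.create(RESULT, true)) { // overwrite = true
            out.writeLong(value);
        }
    }

    // Call from main after job.waitForCompletion(true)
    static long readResult(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        try (FSDataInputStream in = fs.open(RESULT)) {
            return in.readLong();
        }
    }
}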

Hadoop performance modeling

I am working on Hadoop performance modeling. Hadoop has 200+ parameters, so setting them manually is not possible. So we often run our Hadoop jobs with default parameter values (e.g. the defaults for io.sort.mb, io.sort.record.percent, mapred.output.compress, etc.). But using the default parameter values gives us sub-optimal performance. There is some work done in this area by Herodotos Herodotou (http://www.cs.duke.edu/starfish/files/vldb11-job-optimization.pdf) to improve performance. But I have the following doubts about their work:
They fix the parameter values at job start time (according to a proportionality assumption about the data) for all the phases (read, map, collect, etc.) of a MapReduce job. Can we set different values of these parameters for each phase at run time according to the run-time environment (cluster configuration, underlying file system, etc.), by changing the Hadoop configuration files of a particular node, to get optimal performance from that node?
They use a white-box model for the Hadoop core; is it still applicable to current Hadoop (http://arxiv.org/pdf/1106.0940.pdf)?
No, you can't dynamically change MapReduce parameters per node for a single job.
Configuring a set of nodes
What you can do instead is change the configuration parameters statically per node in its configuration files (generally located in /etc/hadoop/conf), so that you can get the most out of a cluster with different hardware configurations.
Example: Assume you have 20 worker nodes with different hardware configurations like:
10 with configuration of 128GB RAM, 24 Cores
10 with configuration of 64GB RAM, 12 Cores
In that case you would want to configure each set of identical servers to get the most out of its hardware; for example, you would run more child tasks (mappers & reducers) on the worker nodes with more RAM and cores:
Nodes with 128GB RAM, 24 Cores => 36 worker tasks (mappers + reducers), JVM heap for each worker task would be around 3GB.
Nodes with 64GB RAM, 12 Cores => 18 worker tasks (mappers + reducers), JVM heap for each worker task would be around 3GB.
So, you would configure each set of nodes with the appropriate parameters.
Using ToolRunner to pass configuration parameters dynamically to a Job:
Also, you can change MapReduce configuration parameters dynamically per job, but those parameters apply to that whole job wherever it runs, not just to a particular set of nodes, provided your MapReduce driver implements the Tool interface and is launched through ToolRunner.
ToolRunner allows you to parse the generic Hadoop command-line arguments, so you'll be able to pass MapReduce configuration parameters using -D property.name=property.value (a minimal driver sketch follows the terasort example below).
You can pass almost any Hadoop parameter dynamically to a job, but the MapReduce configuration parameters most commonly passed this way are:
mapreduce.task.io.sort.mb
mapreduce.map.speculative
mapreduce.job.reduces
mapreduce.task.io.sort.factor
mapreduce.map.output.compress
mapreduce.map.output.compress.codec
mapreduce.reduce.memory.mb
mapreduce.map.memory.mb
Here is an example terasort job passing lots of parameters dynamically per job:
hadoop jar hadoop-mapreduce-examples.jar terasort \
-Ddfs.replication=1 -Dmapreduce.task.io.sort.mb=500 \
-Dmapreduce.map.sort.spill.percent=0.9 \
-Dmapreduce.reduce.shuffle.parallelcopies=10 \
-Dmapreduce.reduce.shuffle.memory.limit.percent=0.1 \
-Dmapreduce.reduce.shuffle.input.buffer.percent=0.95 \
-Dmapreduce.reduce.input.buffer.percent=0.95 \
-Dmapreduce.reduce.shuffle.merge.percent=0.95 \
-Dmapreduce.reduce.merge.inmem.threshold=0 \
-Dmapreduce.job.speculative.speculativecap=0.05 \
-Dmapreduce.map.speculative=false \
-Dmapreduce.reduce.speculative=false \
-Dmapreduce.job.jvm.numtasks=-1 \
-Dmapreduce.job.reduces=84 \
-Dmapreduce.task.io.sort.factor=100 \
-Dmapreduce.map.output.compress=true \
-Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
-Dmapreduce.job.reduce.slowstart.completedmaps=0.4 \
-Dmapreduce.reduce.merge.memtomem.enabled=false \
-Dmapreduce.reduce.memory.totalbytes=12348030976 \
-Dmapreduce.reduce.memory.mb=12288 \
-Dmapreduce.reduce.java.opts="-Xms11776m -Xmx11776m -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=4" \
-Dmapreduce.map.memory.mb=4096 \
-Dmapreduce.map.java.opts="-Xmx1356m" \
/terasort-input /terasort-output
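For completeness, a minimal sketch of a driver wired up for ToolRunner (the class, mapper and reducer names are placeholders): GenericOptionsParser strips the -D options shown above and folds them into the Configuration returned by getConf(), so they reach the job without any extra parsing code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();          // already contains the -D overrides
        Job job = Job.getInstance(conf, "my-job");
        job.setJarByClass(MyJobDriver.class);
        // job.setMapperClass(MyMapper.class);   // placeholders for your own classes
        // job.setReducerClass(MyReducer.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic options (-D, -conf, -files, ...) before calling run()
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}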

Find Job Status by Job name or id for a hadoop mapreduce job

I'm very new to Hadoop and have a question.
I'm submitting (or creating) MapReduce jobs using the Hadoop Job API v2 (i.e. the mapreduce namespace rather than the old mapred one).
We submit MR jobs based on our own jobs, and we maintain the Hadoop job name in a table of ours.
I want to track the submitted jobs' progress (and thus completion) so that we can mark our own jobs as complete.
All of the job status APIs require a Job object, whereas our 'Job Monitoring' module does not have any Job object available.
Can you please help us with any way to get a job's status given a job name? We make sure job names are unique.
I googled quite a bit and only found the code below. Is this the way to go? Is there no other way in the v2 (.mapreduce., not .mapred.) API to get a job's status given the JobID?
Configuration conf = new Configuration();
JobClient jobClient = new JobClient(new JobConf(conf)); // deprecation WARN
JobID jobID = JobID.forName(jobIdString); // jobIdString is the job ID string; deprecation WARN
RunningJob runningJob = jobClient.getJob(jobID);
Field field = runningJob.getClass().getDeclaredField("status"); // reflection !!!
field.setAccessible(true);
JobStatus jobStatus = (JobStatus) field.get(runningJob);
http://blog.erdemagaoglu.com/post/9407457968/hadoop-mapreduce-job-statistics-a-fraction-of-them
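For what it's worth, the v2 API does offer a reflection-free route through org.apache.hadoop.mapreduce.Cluster; a minimal sketch (the job name check and the class name are placeholders), looking a job up both by ID and by scanning all job statuses for a matching name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.JobStatus;

public class JobStatusLookup {
    public static void main(String[] args) throws Exception {
        Cluster cluster = new Cluster(new Configuration());

        // Lookup by JobID (the ID string could come from your own tracking table)
        Job job = cluster.getJob(JobID.forName(args[0]));
        if (job != null) {
            System.out.println(job.getJobName() + " state: " + job.getStatus().getState()
                    + " map: " + job.mapProgress() + " reduce: " + job.reduceProgress());
        }

        // Lookup by (unique) job name
        for (JobStatus status : cluster.getAllJobStatuses()) {
            if ("my-unique-job-name".equals(status.getJobName())) {  // placeholder name
                System.out.println(status.getJobID() + " -> " + status.getState());
            }
        }
    }
}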

How RecommenderJob(org.apache.mahout.cf.taste.hadoop.item.RecommenderJob) will call my custom mappers and reducers?

I am running the Mahout in Action example for chapter 6 using the command:
"hadoop jar target/mia-0.1-job.jar org.apache.mahout.cf.taste.hadoop.item.RecommenderJob -Dmapred.input.dir=input/input.txt -Dmapred.output.dir=output --usersFile input/users.txt --booleanData"
But the mappers and reducers from the chapter 6 example are not being run?
You have to change the code to use the custom Mapper and Reducer classes you have in mind. Otherwise, yes, of course it runs the ones that are currently in the code. Add them, change the caller, recompile, and run it all on Hadoop. I am not sure what you are referring to that is not working.
