What are the hidden features of Hadoop MapReduce that every developer should be aware of?
One hidden feature per answer, please.
Here are some tips and tricks http://allthingshadoop.com/2010/04/28/map-reduce-tips-tricks-your-first-real-cluster/
One item from there specifically that every developer should be aware of:
In your Java code there is a little trick to help the job be “aware” within the cluster of tasks that are not dead but just working hard. During execution of a task there is no built-in reporting that the job is running as expected unless it is writing output. So if your tasks spend a lot of time doing work without emitting anything, it is possible the cluster will see those tasks as failed (based on the mapred.task.tracker.expiry.interval setting).
Have no fear, there is a way to tell the cluster that your task is doing just fine. You have two ways to do this: you can either report the status or increment a counter. Either of these will let the task tracker know the task is OK, and that in turn gets seen by the jobtracker. Both of these options are explained in the JavaDoc http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/Reporter.html
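For illustration, here is a minimal sketch of both techniques using the old org.apache.hadoop.mapred API; the SlowMapper class and expensiveWork method are made-up names standing in for your own long-running logic:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical mapper that spends a long time on each record without
    // writing output, and uses the Reporter to stay "alive".
    public class SlowMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, LongWritable> output,
                      Reporter reporter) throws IOException {
        for (String token : value.toString().split("\\s+")) {
          expensiveWork(token); // placeholder for the slow part of your task

          // Either of these tells the task tracker the task is still alive:
          reporter.setStatus("still processing " + token);       // report status
          reporter.incrCounter("MyApp", "TOKENS_PROCESSED", 1);  // or bump a counter
          // reporter.progress();  // a plain heartbeat also works
        }
      }

      private void expensiveWork(String token) {
        // ... long-running logic that produces no output for a while ...
      }
    }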
So I'm in this situation where I'm modifying mapred-site.xml and the specific configuration files of different schedulers for Hadoop, and I just want to make sure that the modifications I have made to the default scheduler (FIFO) have actually taken effect.
How can I check which scheduler is applied to a job, or to a queue of jobs already submitted to Hadoop, using the job ID?
Sorry if this doesn't make that much sense, but I've looked around quite extensively to wrap my head around it and read a lot of documentation, and I still cannot seem to find this fundamental piece of information.
I'm simply running word count as a job, changing scheduler settings in mapred-site.xml and yarn-site.xml.
For instance, I'm changing the property "yarn.resourcemanager.scheduler.class" to "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler" based on this link: see this
I'm also moving appropriate jar files specific to the schedulers to the correct directory.
For your reference, I'm using the "yarn" runtime mode, and Cloudera and Hadoop 2.
Thanks a ton for your help
I'm running an Oozie job with multiple actions and there's a part I could not get to work. In the process of troubleshooting I'm overwhelmed with lots of logs.
In YARN UI (yarn.resourcemanager.webapp.address in yarn-site.xml, normally on port 8088), there's the application_<app_id> logs.
In Job History Server (yarn.log.server.url in yarn-site.xml, ours on port 19888), there's the job_<job_id> logs. (These job logs should also show up on Hue's Job Browser, right?)
In Hue's Oozie workflow editor, there are the task and task_attempt logs (not sure if they're the same, everything's a mixed-up soup to me already), which redirect to the Job Browser if you click here and there.
Can someone explain the difference between these things from a Hadoop/Oozie architectural standpoint?
P.S.
I've seen container_<container_id> in the logs as well. You might as well include this in your explanation in relation to the things above.
In terms of YARN, the programs that are being run on a cluster are called applications. In terms of MapReduce they are called jobs. So, if you are running MapReduce on YARN, job and application are the same thing (if you take a close look, job ids and application ids are the same).
A MapReduce job consists of several tasks (they can be either map or reduce tasks). If a task fails, it is launched again on another node; those are task attempts.
Container is a YARN term. It is a unit of resource allocation. For example, a MapReduce task would run in a single container.
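As a small illustration of the job/application correspondence, the sketch below builds both IDs from the same timestamp and sequence number. The ID values are made up, and parsing the cluster timestamp out of getJtIdentifier() is an assumption that holds when MapReduce runs on YARN:

    import org.apache.hadoop.mapreduce.JobID;
    import org.apache.hadoop.yarn.api.records.ApplicationId;

    public class IdDemo {
      public static void main(String[] args) {
        // A MapReduce job id and its YARN application id share the same
        // cluster timestamp and sequence number; only the prefix differs.
        JobID jobId = JobID.forName("job_1410450250506_0001"); // made-up id

        ApplicationId appId = ApplicationId.newInstance(
            Long.parseLong(jobId.getJtIdentifier()), jobId.getId());

        System.out.println(jobId);  // job_1410450250506_0001
        System.out.println(appId);  // application_1410450250506_0001
      }
    }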
For the last few weeks we have been using Hadoop Streaming to calculate some reports every day. Recently we made a change to our program: if the input size is smaller than 10MB, we set mapred.job.tracker=local in the JobConf, and then the job runs locally.
But last night, many jobs failed, with status 3 returned by runningJob.getJobState().
I don't know why, and there is nothing in stderr.
I couldn't find anything related to this by googling, so I'm wondering whether I should use mapred.job.tracker=local in production at all. Maybe it's just a debugging option for development supplied by Hadoop.
Does anyone know anything about this? Any information is appreciated. Thank you.
I believe setting mapred.job.tracker=local has nothing to do with your error, as local is the default value.
This config parameter defines the host and port that the MapReduce job tracker runs at. If it is set to be "local", then jobs are run in-process as a single map and reduce task.
Refer here.
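For context, here is a minimal sketch of the pattern described in the question, using the old mapred API; the input-size check and 10MB threshold are placeholders for however you compute them:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobStatus;
    import org.apache.hadoop.mapred.RunningJob;

    public class MaybeLocalSubmit {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // ... set mapper, reducer, input/output paths, etc. ...

        long inputSizeBytes = 5L * 1024 * 1024; // placeholder: computed elsewhere
        if (inputSizeBytes < 10L * 1024 * 1024) {
          // Run in-process as a single map and reduce task instead of on the cluster.
          conf.set("mapred.job.tracker", "local");
        }

        JobClient client = new JobClient(conf);
        RunningJob running = client.submitJob(conf);
        running.waitForCompletion();

        if (running.getJobState() == JobStatus.FAILED) { // JobStatus.FAILED == 3
          System.err.println("Job " + running.getID() + " failed");
        }
      }
    }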
I want to find out the IPs of the slave nodes where the map and reduce tasks of a given job are currently running or about to run.
Is there any method to do this?
Thanks in advance.
For any job, you can view the list of running tasks through the JobTracker web UI; this will detail the nodes on which each task is running.
As for where tasks are about to run: this is not necessarily decided in advance. As slots become available on a node, the job scheduler (there are a number of them, which behave differently depending on your needs) identifies a task which will run on that node (based on a number of criteria, hopefully honoring data locality where it can) and instructs the task tracker on that node to run that specific task.
Programmatically, look at the javadocs for the JobClient class; it should be able to acquire information about the running tasks and their node names (you'll probably need to do a DNS lookup to get the actual IPs, I imagine).
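Along those lines, here is a rough sketch using the old mapred API. TaskNodes is a made-up class name, and note that task completion events only cover attempts that have already finished, so this lists where work has been placed rather than predicting future placements:

    import java.net.InetAddress;
    import java.net.URL;

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.TaskCompletionEvent;

    public class TaskNodes {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        RunningJob job = client.getJob(JobID.forName(args[0])); // e.g. job_201303221234_0001

        // Task completion events carry the HTTP address of the task tracker
        // that ran each task attempt.
        int from = 0;
        TaskCompletionEvent[] events;
        do {
          events = job.getTaskCompletionEvents(from);
          for (TaskCompletionEvent event : events) {
            String host = new URL(event.getTaskTrackerHttp()).getHost();
            String ip = InetAddress.getByName(host).getHostAddress(); // DNS lookup
            System.out.println(event.getTaskAttemptId() + " -> " + host + " (" + ip + ")");
          }
          from += events.length;
        } while (events.length > 0);
      }
    }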
Hadoop comes with several web interfaces which are by default (see conf/hadoop-default.xml) available at these locations:
http://localhost:50030/ – web UI for MapReduce job tracker(s)
http://localhost:50060/ – web UI for task tracker(s)
http://localhost:50070/ – web UI for HDFS name node(s)
Thanks to @Chris:
Programmatically, look at the javadocs for the JobClient class; it should be able to acquire information about the running tasks and their node names.
I am a beginner to Hadoop.
As per my understanding, the Hadoop framework runs jobs in FIFO order (the default scheduling).
Is there any way to tell the framework to run the job at a particular time?
i.e. is there any way to configure it to run the job daily at 3 PM, something like that?
Any inputs on this greatly appreciated.
Thanks, R
What about calling the job from an external Java scheduling framework, like Quartz? Then you can run the job whenever you want.
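A minimal sketch of what that could look like with Quartz 2.x; the class and trigger names are made up, and the body of runWordCount() is left as a placeholder for your own MapReduce driver:

    import org.quartz.CronScheduleBuilder;
    import org.quartz.Job;
    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;
    import org.quartz.impl.StdSchedulerFactory;

    public class DailyHadoopJob implements Job {

      // Quartz calls this each time the trigger fires.
      public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
          runWordCount(); // submit your MapReduce job here
        } catch (Exception e) {
          throw new JobExecutionException(e);
        }
      }

      private void runWordCount() throws Exception {
        // e.g. build an org.apache.hadoop.mapreduce.Job and call waitForCompletion(true)
      }

      public static void main(String[] args) throws Exception {
        JobDetail job = JobBuilder.newJob(DailyHadoopJob.class)
            .withIdentity("daily-wordcount").build();
        Trigger trigger = TriggerBuilder.newTrigger()
            .withSchedule(CronScheduleBuilder.dailyAtHourAndMinute(15, 0)) // 3 PM daily
            .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
      }
    }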
You might consider using Oozie (http://yahoo.github.com/oozie/). Among other things, it allows:
Frequency execution: Oozie workflow specification supports both data and time triggers. Users can specify execution frequency and can wait for data arrival to trigger an action in the workflow.
It is independent of any other Hadoop schedulers and should work with any of them, so probably nothing in your Hadoop configuration will change.
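For a rough idea of how a time-triggered Oozie job is kicked off from Java, here's a sketch using Oozie's client API; the server URL, HDFS path, and user are placeholders, and the actual daily schedule would be defined in the coordinator.xml you deploy at that path:

    import java.util.Properties;

    import org.apache.oozie.client.OozieClient;

    public class SubmitCoordinator {
      public static void main(String[] args) throws Exception {
        // Server URL, HDFS path and user are placeholders; substitute your own.
        OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.COORDINATOR_APP_PATH,
            "hdfs://namenode:8020/user/me/coord-app"); // coordinator.xml lives here
        conf.setProperty("user.name", "me");

        // The coordinator.xml at that path defines the time/data triggers,
        // e.g. frequency="${coord:days(1)}" with a fixed start time.
        String jobId = oozie.run(conf);
        System.out.println("Coordinator job id: " + jobId);
      }
    }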
How about having a script to execute your Hadoop job and then using the at command to execute it at some specified time? If you want the job to run regularly, you could set up a cron job to execute your script.
I'd use a commercial scheduling app if cron does not cut it, and/or a custom workflow solution. We use a solution called JAMS, but keep in mind it's .NET-oriented.