I would like to know if there is a way to stop a job (and leave it in a FAILED or KILLED state) when I detect something wrong within a map or reduce task, without Hadoop retrying the task.
If possible, I would like to keep the behavior where YARN restarts the task on "normal" failures.
Currently I am throwing an exception, but Hadoop just retries the task.
My code is Scala/Spark, but an answer for Java/Hadoop may be useful too.
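For concreteness, here is a minimal sketch of the only blunt work-around I know of (the app name and the failure condition are made up): setting spark.task.maxFailures to 1 makes the first exception fatal to the whole job, but it also disables retries for "normal" failures, which is exactly what I want to keep.

import org.apache.spark.{SparkConf, SparkContext}

object FailFastExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("fail-fast-example")
      // Spark normally retries a failed task (4 attempts by default);
      // 1 means the first task failure fails the whole job.
      // The MapReduce-side equivalents are mapreduce.map.maxattempts
      // and mapreduce.reduce.maxattempts.
      .set("spark.task.maxFailures", "1")
    val sc = new SparkContext(conf)

    sc.parallelize(1 to 100).foreach { x =>
      // With maxFailures = 1 this exception kills the job immediately
      // instead of being retried on another executor.
      if (x == 42)
        throw new IllegalStateException(s"Fatal condition for record $x")
    }
    sc.stop()
  }
}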
Thanks
Does anyone have any idea what the maximum number of Oozie workflows that can execute in parallel is?
I'm running 35 workflows in parallel (at least, that's what the Oozie UI says: they all got started in parallel). All the subworkflows ingest files from the local filesystem into HDFS and then run some validation checks on each file's metadata. Simple as that.
However, I see some subworkflows fail during execution; the step in which they fail tries to put the files into the HDFS location, i.e., the process wasn't able to execute the hdfs dfs -put command. Yet when I rerun these subworkflows, they run successfully.
I'm not sure what caused them to fail on hdfs dfs -put in the first place.
Any clues/suggestions on what could be happening?
The first limitation does not depend on Oozie but on the resources available in YARN to execute Oozie actions, since each action is executed in one map task. This limit will not fail your workflows, though: they will just wait for resources.
The major limit we've faced, and the one that led to trouble, was on the callable queue of Oozie services. Sometimes, under heavy load created by plenty of coordinators submitting plenty of workflows, Oozie was losing more time processing its internal callable queue than running workflows :/
Check the oozie.service.CallableQueueService settings for information about this; the relevant properties are sketched below.
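For illustration, these are the standard CallableQueueService knobs in oozie-site.xml (the values shown should be the stock defaults, not a recommendation; tune them for your load):

<!-- oozie-site.xml: illustrative values only -->
<property>
  <!-- Maximum number of callables that can wait in Oozie's internal queue -->
  <name>oozie.service.CallableQueueService.queue.size</name>
  <value>10000</value>
</property>
<property>
  <!-- Number of threads draining the callable queue -->
  <name>oozie.service.CallableQueueService.threads</name>
  <value>10</value>
</property>
<property>
  <!-- Maximum concurrent callables of the same type -->
  <name>oozie.service.CallableQueueService.callable.concurrency</name>
  <value>3</value>
</property>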
I am trying to run PySpark on YARN with Oozie. After submitting the workflow, there are two jobs in the Hadoop job queue: one is the Oozie launcher job, with application type "MAPREDUCE", and the other is the job it triggers, with application type "SPARK". While the first job is running, the second job remains in ACCEPTED status. Here comes the problem: the first job is waiting for the second job to finish before it proceeds, while the second is waiting for the first one to finish before it can run, so I seem to be stuck in a deadlock. How can I get out of this trouble? Is there any way the Hadoop job with application type "MAPREDUCE" can run in parallel with jobs of a different application type?
Any advice is appreciated, thanks!
Please check the value of the property below in the YARN scheduler configuration. I guess you need to increase it to something like 0.9 or so.
Property: yarn.scheduler.capacity.maximum-am-resource-percent
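A minimal sketch of what that could look like in capacity-scheduler.xml (0.9 is just the illustrative figure from above, not a universally correct value):

<!-- capacity-scheduler.xml -->
<property>
  <!-- Fraction of cluster resources that ApplicationMasters may occupy.
       If set too low, the Oozie launcher's AM can exhaust the budget,
       and the Spark application's AM never leaves ACCEPTED. -->
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.9</value>
</property>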
You would need to restart YARN, MapReduce and Oozie after updating the property.
More info: Setting Application Limits.
I'm running an Oozie job with multiple actions, and there's a part I could not make work. In the process of troubleshooting, I'm overwhelmed by the sheer number of logs.
In the YARN UI (yarn.resourcemanager.webapp.address in yarn-site.xml, normally on port 8088), there are the application_<app_id> logs.
In the Job History Server (yarn.log.server.url in yarn-site.xml, ours on port 19888), there are the job_<job_id> logs. (These job logs should also show up in Hue's Job Browser, right?)
In Hue's Oozie workflow editor, there are the task and task_attempt logs (not sure if they're the same; everything's a mixed-up soup to me already), which redirect to the Job Browser if you click here and there.
Can someone explain the difference between these things from a Hadoop/Oozie architectural standpoint?
P.S.
I've seen container_<container_id> in the logs as well. You might as well include it in your explanation, in relation to the things above.
In terms of YARN, the programs being run on a cluster are called applications. In terms of MapReduce, they are called jobs. So, if you are running MapReduce on YARN, job and application are the same thing (if you take a close look, job IDs and application IDs match).
A MapReduce job consists of several tasks (either map or reduce tasks). If a task fails, it is launched again on another node; those launches are the task attempts.
Container is a YARN term: it is the unit of resource allocation. For example, a MapReduce task attempt runs in a single container.
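To illustrate how the identifiers line up (the numbers below are made up; the formats are the usual Hadoop 2.x ones):

application_1439330624449_0001            YARN application (cluster timestamp + sequence number)
job_1439330624449_0001                    the same unit of work, in MapReduce terms
task_1439330624449_0001_m_000010          map task #10 of that job
attempt_1439330624449_0001_m_000010_0     first attempt of that task
container_1439330624449_0001_01_000002    YARN container the attempt runs in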
Running Spark 1.3.1 on YARN on EMR. When I run the spark-shell, everything looks normal until I start seeing messages like INFO yarn.Client: Application report for application_1439330624449_1561 (state: ACCEPTED). These messages are generated endlessly, once per second. Meanwhile, I am unable to use the Spark shell.
I don't understand why this is happening.
Seeing (near) endless ACCEPTED messages from YARN has always been a sure sign for me that there were not enough cluster resources to allocate to my Spark jobs / shell. YARN will continue trying to schedule your Spark application, but will eventually time out if enough resources do not become available within a certain amount of time.
Are you providing any command line options to spark-shell that override the defaults? When I ask for too many executors/cores/too much memory, YARN will accept my request but never transition to a running ApplicationMaster.
Try running spark-shell with no options (other than perhaps --master yarn) and see if it gets past ACCEPTED; a couple of example invocations follow.
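For example (the sizing flags and their values are only illustrative; start small and grow from there):

spark-shell --master yarn

spark-shell --master yarn \
  --num-executors 2 \
  --executor-cores 1 \
  --executor-memory 1g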
I realized there were a couple of streaming jobs I had killed in the terminal, but I guess they were somehow still running; I was able to find them in the UI showing all running applications on YARN (I wasn't able to execute Hive queries either). Once I killed the jobs using the command below, the spark-shell started as usual.
yarn application -kill application_1428487296152_25597
I guess that YARN does not have enough resources to run the jobs.
Please check https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html for calculating how many resources you can provide to YARN.
Also check the number of cores and the amount of RAM, which are controlled by the following variables:
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
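For instance, in yarn-site.xml on each NodeManager (the values below are purely illustrative, for a worker offering 8 cores and 16 GB to YARN):

<!-- yarn-site.xml: illustrative values only -->
<property>
  <!-- vcores this node offers to containers -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <!-- memory in MB this node offers to containers -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>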
Current setup:
- Hadoop 0.20.2-cdh3u3
- Hbase Version 0.90.4-cdh3u3
- Jetty-6.1.14
- Running on VM (Debian Squeeze)
The problem appears during a MapReduce job over an HBase table. In the reduce phase it crashes every time at the very same point, with these logs in tasktracker.log:
ERROR org.apache.hadoop.mapred.TaskTracker: getMapOutput(attempt_201205290717_0001_m_000010_0,3) failed:
org.mortbay.jetty.EofException
WARN org.mortbay.log: Committed before 410 getMapOutput(attempt_201205290717_0001_m_000010_0,3) failed :
org.mortbay.jetty.EofException
ERROR org.mortbay.log: /mapOutput
java.lang.IllegalStateException: Committed
I'm hoping someone has faced the same or a similar problem before; I'm looking for a solution.
I am facing the same issue here.
On my cluster this happens on all slaves (datanodes & tasktrackers) except one. As a result, the overall reduce phase first progresses very slowly, and at a certain point the reduce progress made so far is rolled back due to some error; the reduce phase then starts all over again, so the job never finishes.
There is an open major issue in the bugtracker. See https://issues.apache.org/jira/browse/MAPREDUCE-5
Let us hope it will be fixed some day, but at the moment I cannot use my Hadoop program with huge files (> 3 GB) at all. In my case I hope I can work around it with additional data cleaning and more efficient data structures (Trove, fastutil), so that the problem doesn't occur at all, but honestly this feels like the wrong approach. Not having to do those smaller tweaks was the main reason for starting with Hadoop in the first place.
The Jetty EofException is observed when the reduce task prematurely closes its connection to the Jetty server. Restart the tasktrackers and run the job again, and see if that works for you.
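On a CDH3 install like the one described above, restarting typically means something like this on each slave (the service name is the CDH3 packaged one; adjust to however your daemons are managed):

# on each slave node, as root
service hadoop-0.20-tasktracker restart

# or, with the stock Hadoop scripts:
hadoop-daemon.sh stop tasktracker
hadoop-daemon.sh start tasktracker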