I am running a Pig job that loads around 8 million rows from HBase (several columns) using HBaseStorage. The job finishes successfully and seems to produce the right results, but the job tracker's job details show that 50 map tasks were created, of which 28 were successful and 22 were killed. The reduce ran fine. Looking at the logs of the killed map tasks, there is nothing obvious to me as to why they were killed; in fact the logs of successful and killed tasks are practically identical, and both take a reasonable amount of time. Why are all these map tasks created and then killed? Is it normal, or is it a sign of a problem?
This sounds like Speculative Execution in Hadoop. It runs the same task on several nodes in parallel and kills the redundant attempts once one of them completes. See the explanation in this book: https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-6/task-execution
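If the duplicate attempts are a concern (for example, because each attempt opens its own HBase scanners), speculative execution can be switched off. Below is a minimal driver-side sketch, assuming the classic property names; on Hadoop 2.x the equivalents are mapreduce.map.speculative and mapreduce.reduce.speculative, and from a Pig script the same properties can be passed with SET. The job name and setup are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NoSpeculationDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Disable duplicate (speculative) attempts for both map and reduce tasks.
        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        Job job = Job.getInstance(conf, "no-speculation-example");
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Note that the killed attempts are usually harmless; disabling speculation mainly trades a bit of job latency for less wasted cluster work.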
Related
I run multiple instances of the same PIG script with different parameters.
When the cluster is under heavy load, the MapReduce job logs show lots of reducers being preempted, which takes time:
Reducer preempted to make room for pending map attempts
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
How can I avoid such preemptions to minimize script execution time?
Have a look at this link.
In short, you can use one of the commands below
SET mapred.reduce.slowstart.completed.maps 1.00;
SET mapreduce.job.reduce.slowstart.completedmaps 1.00;
depending on your Hadoop version (the latter is for Hadoop 2.4.1+). It starts the reducers only once all the mappers are done, so reduce containers are not held, and then preempted, while maps are still running.
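If you are running plain MapReduce jobs rather than Pig, the same property can be set on the job configuration. Here is a minimal driver-side sketch, assuming the Hadoop 2.x property name from the second command above; the job name and setup are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SlowstartDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Do not launch any reducer until 100% of the map tasks have finished,
        // so reduce containers are not held (and later preempted) during the map phase.
        conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 1.0f);
        Job job = Job.getInstance(conf, "slowstart-example");
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}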
I am not sure if this is something that has been fixed for newer releases of Hadoop, but I'm currently locked into running Hadoop 0.20 (legacy code).
Here's the issue: when I launch a Hadoop job, there is a "Job setup" task that needs to run first, and Hadoop seems to randomly pick whether this task runs as a map task or a reduce task.
We have more capacity configured for map tasks than for reduce tasks, so whenever I get unlucky and the setup runs as a reduce task, it takes a very long time for my job to even start running. Any ideas how to overcome this?
A Hadoop job first completes all of your mapper tasks. Only once the mappers are done does the output get shuffled and sorted across the network, and only then do your reducer tasks start processing. So I guess there could possibly be some other reason for this delay.
A Pig script can be translated into multiple MR jobs, and I am wondering if there is an interface or a way to see the progress of the overall Pig script: how many jobs are scheduled, how many have executed, and so on.
We had the same problem at Twitter, as some of our Pig scripts spin up dozens of Map-Reduce jobs and it's sometimes hard to tell which of them is doing what, reason about efficiency of the plan, understand how many will run in parallel, etc.
So we created Twitter Ambrose: https://github.com/twitter/ambrose
It spins up a little Jetty server which gives you a nice web UI that shows the job DAG, colors the nodes as the jobs complete, gives you stats about the jobs, and tells you which relations each job is trying to calculate.
There is an illustrate command, but it throws an exception on my deployment, so I use another approach.
You can find out how many MR jobs are scheduled by using the explain command and looking at the Map Reduce Plan section at the end of the explain report. To get the number of MR jobs for a script I do the following:
./pig -e 'explain -script ./script_name.pig' > ./explain.txt
grep MapReduce ./explain.txt | wc -l
Now we have the number of MR jobs planned. To monitor script execution, before you run it, open Hadoop's jobtracker page (via "http://(IP_or_node_name):50030/jobtracker.jsp") and write down the name of the last job in the Completed Jobs section. Submit the script, then refresh the jobtracker page and count how many jobs are running and how many have completed after the one you noted. This gives you an idea of how many jobs are left to be executed.
Click on each job and see its statistics and progress.
A much simpler approach is to run the script on a small dataset and note down the number of jobs; it is displayed in the console output after the script finishes. Since Pig does not change its execution plan, it will be the same for the big dataset. By looking at the stats of each job on Hadoop's jobtracker page (via "http://(IP_or_node_name):50030/jobtracker.jsp") you can get an idea of the proportion of time each MR job takes, and then use it to roughly extrapolate the execution time on the large dataset. If you have skewed data or some Cartesian products, the execution-time prediction can become tricky.
When I run my hadoop job I get the following error:
Request received to kill task 'attempt_201202230353_23186_r_000004_0' by user
Task has been KILLED_UNCLEAN by the user
The logs appear to be clean. I run 28 reducers, and this doesn't happen for all of them; it happens for a select few, and those reducers start again. I fail to understand this. Another thing I have noticed is that I rarely see this error with a small dataset!
There are three things to try:
Setting a counter. If Hadoop sees a counter for the job progressing then it won't kill it (see Arockiaraj Durairaj's answer). This seems to be the most elegant option, as it can also give you more insight into long-running jobs and where the hangups may be.
Longer task timeouts. Hadoop tasks time out after 10 minutes by default. Changing the timeout is somewhat brute force, but it could work. Imagine analyzing audio files that are generally 5MB (songs), but with a few 50MB files (entire albums). Hadoop stores each file in its own block(s), so if your HDFS block size is 64MB then a 5MB file and a 50MB file each occupy one block (see here http://blog.cloudera.com/blog/2009/02/the-small-files-problem/, and here Small files and HDFS blocks.) However, the 5MB job would run faster than the 50MB job. The task timeout can be increased in the job configuration (mapred.task.timeout) per the answers to this similar question: How to fix "Task attempt_201104251139_0295_r_000006_0 failed to report status for 600 seconds."
Increase task attempts. Configure Hadoop to make more than the default 4 attempts (see Pradeep Gollakota's answer). This is the most brute-force method of the three: Hadoop will retry the task more times, but you could be masking an underlying issue (small servers, large data blocks, etc.). Both this setting and the task timeout appear in the configuration sketch after this list.
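For the timeout and retry settings, both knobs are ordinary job configuration properties. Here is a minimal driver-side sketch, assuming the classic (Hadoop 1.x) property names; newer releases use mapreduce.task.timeout and mapreduce.reduce.maxattempts, and the values shown (30 minutes, 8 attempts) are arbitrary examples.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TimeoutAndRetriesDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Allow 30 minutes without a progress report before a task attempt is killed
        // (the default is 600000 ms, i.e. 10 minutes).
        conf.setLong("mapred.task.timeout", 30 * 60 * 1000L);
        // Allow up to 8 attempts per reduce task instead of the default 4.
        conf.setInt("mapred.reduce.max.attempts", 8);
        Job job = Job.getInstance(conf, "timeout-and-retries-example");
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}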
Can you try using a counter (a Hadoop counter) in your reduce logic? It looks like Hadoop is not able to determine whether your reduce program is running or hanging: it waits for the task timeout (10 minutes by default) and then kills the attempt, even though your logic may still be executing.
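Here is a minimal sketch of what that can look like in a reducer, assuming the new org.apache.hadoop.mapreduce API and Text/LongWritable types; the counter group and name are arbitrary, and calling context.progress() would work just as well to signal liveness.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ProgressReportingReducer
        extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
            // Incrementing a counter (or calling context.progress()) tells the framework
            // the attempt is still alive, so it is not killed after mapred.task.timeout.
            context.getCounter("MyJob", "VALUES_PROCESSED").increment(1);
        }
        context.write(key, new LongWritable(sum));
    }
}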
I have a Hadoop job that processes log files and reports some statistics. This job died about halfway through because it ran out of file handles. I have fixed the issue with the file handles and am wondering if it is possible to restart a "killed" job.
As it turns out, there is not a good way to do this; once a job has been killed there is no way to re-instantiate it and resume processing from just before the first failure. There are likely some really good reasons for this, but I'm not qualified to speak to them.
In my own case, I was processing a large set of log files and loading them into an index, while at the same time creating a report on the contents of those files. In order to make the job more tolerant of failures on the indexing side (a side effect; this isn't related to Hadoop at all), I altered my job to instead create many smaller jobs, each processing a chunk of the log files. When one of these jobs finishes, it renames the processed log files so that they are not processed again. Each job waits for the previous job to complete before running.
Chaining multiple MapReduce jobs in Hadoop
When one job fails, all of the subsequent jobs quickly fail afterward. Simply fixing whatever the issue was and then re-submitting my job will, roughly, pick up processing where it left off. In the worst-case scenario, where a job was 99% complete at the time of its failure, that one job will be erroneously and wastefully re-processed.
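Here is a minimal sketch of that chunked pattern, assuming hypothetical HDFS paths (/logs/todo, /logs/done, /logs/out) and eliding the mapper/reducer setup; each chunk runs as its own job and is renamed once it completes, so a re-submitted run only re-processes the chunk that was in flight when the failure happened.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChunkedLogDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical layout: one directory per chunk of log files under /logs/todo.
        for (FileStatus chunk : fs.listStatus(new Path("/logs/todo"))) {
            Job job = Job.getInstance(conf, "index-" + chunk.getPath().getName());
            job.setJarByClass(ChunkedLogDriver.class);
            // ... set mapper, reducer and key/value classes here ...
            FileInputFormat.addInputPath(job, chunk.getPath());
            FileOutputFormat.setOutputPath(job, new Path("/logs/out/" + chunk.getPath().getName()));

            // Run the chunks sequentially; stop at the first failure so the
            // remaining chunks stay in /logs/todo and are retried next time.
            if (!job.waitForCompletion(true)) {
                System.exit(1);
            }
            // Rename the processed chunk so a re-submitted run skips it.
            fs.rename(chunk.getPath(), new Path("/logs/done/" + chunk.getPath().getName()));
        }
    }
}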