Oozie for multiple mapreduce jobs - hadoop

I have a sequence of MapReduce jobs that need to be run. I was wondering if there is any advantage to using Oozie for that, instead of having "one big driver" that runs the sequence.
I know that Oozie can run multiple actions of different types, e.g. a Pig script, a shell script, or an MR job, but I'm specifically interested in whether I should split my two jobs and run them with Oozie, or have a single jar do it.

Oozie is a scheduler - crude, poorly documented, but a scheduler.
If you don't need scheduling per se, or if cron on an edge node is sufficient;
if you want to handle your workflow logic yourself (e.g. conditional
branching, parallel executions with waiting for stragglers, calling
generic sub-workflows with ad hoc parameters, e-mail alerts on errors,
<insert your pet feature here>) or don't need any fancy logic;
if you handle your execution logs and state history yourself, or don't care about history;
... well, don't use a scheduler.
PS: you also have Luigi (Spotify) and Azkaban (LinkedIn) as alternative Hadoop schedulers.
[edit] An extra point to consider: if your "driver" crashes for whatever reason, you may not have a chance to send an alert; but if it is run from Oozie, the crash will be detected eventually (it may take as much as 30 min in a corner case, e.g. AM self-destruction due to a YARN RM failover).
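For comparison, the "one big driver" alternative from the question is just a plain Java main() that chains the jobs with the standard MapReduce API. A minimal sketch, assuming two already-written jobs; FirstMapper/FirstReducer, SecondMapper/SecondReducer and the path arguments are hypothetical placeholders:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job first = Job.getInstance(conf, "step-1");
        first.setJarByClass(ChainDriver.class);
        first.setMapperClass(FirstMapper.class);       // hypothetical mapper
        first.setReducerClass(FirstReducer.class);     // hypothetical reducer
        FileInputFormat.addInputPath(first, new Path(args[0]));
        FileOutputFormat.setOutputPath(first, new Path(args[1]));
        if (!first.waitForCompletion(true)) {
            System.exit(1);                            // stop the chain on failure
        }

        Job second = Job.getInstance(conf, "step-2");
        second.setJarByClass(ChainDriver.class);
        second.setMapperClass(SecondMapper.class);     // hypothetical mapper
        second.setReducerClass(SecondReducer.class);   // hypothetical reducer
        FileInputFormat.addInputPath(second, new Path(args[1]));   // output of step 1 is input of step 2
        FileOutputFormat.setOutputPath(second, new Path(args[2]));
        System.exit(second.waitForCompletion(true) ? 0 : 2);
    }
}
Everything Oozie would add on top of this (retries, alerts, run history) you would have to hand-roll around those two waitForCompletion calls.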

Related

Map-reduce via Oozie

If I am using Oozie to run a MapReduce job, is there a specific number for how many mappers will be started?
Is it:
one for Oozie and one for the map-reduce job, or
one for Oozie and one mapper for every 64 MB block (the default block size)?
The above answers focus on how many maps and reduces a MapReduce job needs. However, as you specifically ask about Oozie, I will share my experience with MapReduce (in Pig) via Oozie.
Explanation
When you kick off an Oozie workflow, you need one YARN application for it. I am not sure what the exact logic is, but it appears that these applications usually require 1 map, and occasionally 2.
Besides the above, you still need the same number of mappers and reducers to do the actual work as if you did not use Oozie. (If you see a different number than you expected, this may be because you passed specific map or reduce properties when calling the script.)
Warning
The above means that if you had 100 available containers and kicked off 100 workflows (for example, by starting a daily job with a start date 100 days in the past), it is likely that the workflows take up all available containers and the actual work is suspended indefinitely.
Short answer: Oozie launches a MapReduce job by submitting one map-only job to the cluster, called the Oozie launcher. I agree with @Dennis Jaheruddin.
Detailed answer, after my research: Oozie's execution model
Oozie’s execution model is different from
the default approach users take to run Hadoop jobs. When a user
invokes the Hadoop, Hive, or Pig CLI tool from a Hadoop edge node, the
corresponding client executable runs on that node which is configured
to contact and submit jobs to the Hadoop cluster. When the same jobs
are defined and submitted via an Oozie workflow action, things work
differently.
Let’s say you are submitting a workflow job using the Oozie CLI on the
edge node. The Oozie client actually submits the workflow to the Oozie
server, which typically runs on a different node. Regardless of where
it runs, it’s the Oozie server’s responsibility to submit and run the
underlying MapReduce jobs on the Hadoop cluster. Oozie doesn’t do so
by using the standard client tools installed locally on the Oozie
server node. Instead, it first submits a MapReduce job called the
“launcher job,” which in turn runs the Hadoop, Hive, or Pig job using
the appropriate client APIs.
Important note: The Oozie launcher is basically a map-only job running a single mapper
on the Hadoop cluster. This map job knows what to do for the specific
action it’s supposed to run and does the appropriate thing by using
the libraries for Hadoop, Pig, etc. This will result in other
MapReduce jobs being spun up as required. These Oozie jobs are called
“asynchronous actions” in Oozie parlance. Oozie doesn’t run these
actions in its own server, but kicks them off on the Hadoop cluster
using a launcher job. The reason Oozie server “outsources” the
launcher to the Hadoop cluster is to protect itself from unexpected
workloads and also to isolate user code from its own services. After
all, Oozie has access to an awesome distributed system in the form of
a Hadoop cluster.
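To make the client-to-server submission described above concrete, here is a rough sketch using the Oozie Java client (OozieClient), which talks to the Oozie server over its REST interface; the server URL, application path and workflow parameters below are placeholders:
import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        // Talk to the Oozie server (placeholder URL); the server, not this client,
        // will submit the launcher job and the real MapReduce jobs to the cluster.
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode/user/me/my-wf"); // placeholder workflow path
        conf.setProperty("nameNode", "hdfs://namenode:8020");                    // typical workflow parameters,
        conf.setProperty("jobTracker", "resourcemanager:8032");                  // placeholders here

        String jobId = oozie.run(conf);              // returns immediately with the workflow id
        WorkflowJob wf = oozie.getJobInfo(jobId);    // poll the server for status
        System.out.println(jobId + " -> " + wf.getStatus());
    }
}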
Coming to MapReduce actions: you can set the number of map tasks, but there is no guarantee it will be honored; it depends on the input, as described below.
The number of maps is usually driven by the total size of the inputs,
that is, the total number of blocks of the input files.
Setting the number of maps is only a suggestion (the actual number is based on input splits).
Setting the number of reducers is a demand and will be honored.
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
The number of mappers depends on the number of logical input splits; it does not depend directly on the number of blocks. You can control the number of input splits from your program.
Refer to https://hadoopi.wordpress.com/2013/05/27/understand-recordreader-inputsplit/ for more information about how input splits affect the number of mappers and how to create custom input splits.
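If you want to experiment with this from the driver, the split size (and therefore the mapper count) can be nudged with the standard FileInputFormat knobs; a minimal sketch with placeholder paths and sizes, showing only the relevant calls (the rest of the job setup is omitted):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-tuning-demo");
        job.setJarByClass(SplitTuning.class);
        FileInputFormat.addInputPath(job, new Path("/data/input"));    // placeholder path

        // Bigger minimum split => fewer mappers; smaller maximum split => more mappers.
        FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024); // 256 MB lower bound
        FileInputFormat.setMaxInputSplitSize(job, 512L * 1024 * 1024); // 512 MB upper bound

        // Equivalent configuration keys, if you prefer -D options or XML:
        // mapreduce.input.fileinputformat.split.minsize / mapreduce.input.fileinputformat.split.maxsize
    }
}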

Exporting jobs listed in Oozie Web Console

Apologies if this question sounds basic, I'm totally new to Hadoop environment.
What am I looking for?
In my case, there are jobs scheduled to run every day, and I would like to export the list of failed jobs to an Excel sheet each day.
How do I view the workflow jobs?
Currently I use the Oozie web console to view the jobs, and I don't see an option to export. Also,
I was not able to find this information in the Oozie documentation.
However, I found that jobs can be listed using commands like
$ oozie jobs -oozie http://localhost:8080/oozie -localtime -len 2 -filter status=RUNNING
Where am I stuck?
I want to filter the failed jobs for a given date and would want to export it as csv/excel data.
@YoungHobbit was right to point at that post, which is very similar to this one; his answer was dead on target when it comes to extracting the entire list of jobs that ran on a specific day with the Oozie CLI (command-line interface).
Just don't forget to request an "unbounded" reply, e.g. -len 999999999, to avoid side effects (the default is to show only the first 100 matches, which may be way too low if you run a lot of frequent jobs).
The trick is that you can make a more complex filter such as
"startCreatedTime=2016-06-28T00:00Z;endcreatedtime=2016-06-28T10:00Z;status=FAILED"
... but you cannot request jobs that have FAILED or have been KILLED or have been SUSPENDED (which may result from a temporary YARN or HDFS outage) or are still suspiciously RUNNING (because a sub-workflow is SUSPENDED for instance).
So your best choice is to get the whole list, then filter out all jobs that have SUCCEEDED, with a plain old grep -- as suggested in another answer.
Then you will also need a complex sed or awk script to break down the ugly CLI output into a well-formed CSV. Ouch!
Now, you have an alternative to the Oozie CLI: the Oozie REST API (covered in an old Cloudera tutorial and in the reference documentation for Oozie V4.2) lets you query the Oozie server with any programming language that provides...
an HTTP client
and a way to parse JSON messages (using plain old regular expressions, if nothing else is available)
The logic would be basically the same -- fetch the list of all jobs in the desired time window, ignore SUCCEEDED jobs, parse the others to generate a CSV record, dump into a CSV file.
But your program would be more robust, since it would be based on structured JSON input.
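If Java happens to be available, the Oozie Java client (a thin wrapper over that same REST API) even spares you the JSON parsing; a rough sketch, assuming the client API described in the Oozie docs, with the server URL and output file name as placeholders, and the time filter copied from the example above:
import java.io.PrintWriter;
import java.util.List;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class FailedJobsReport {
    public static void main(String[] args) throws Exception {
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");   // placeholder URL

        // Same filter syntax as the CLI; large 'len' to avoid the default 100-row cap.
        String filter = "startCreatedTime=2016-06-28T00:00Z;endcreatedtime=2016-06-28T10:00Z";
        List<WorkflowJob> jobs = oozie.getJobsInfo(filter, 1, 999999999);

        try (PrintWriter csv = new PrintWriter("failed_jobs.csv")) {            // placeholder file name
            csv.println("id,appName,status,createdTime,endTime");
            for (WorkflowJob wf : jobs) {
                if (wf.getStatus() == WorkflowJob.Status.SUCCEEDED) continue;   // keep only non-successful jobs
                csv.println(wf.getId() + "," + wf.getAppName() + "," + wf.getStatus()
                        + "," + wf.getCreatedTime() + "," + wf.getEndTime());
            }
        }
    }
}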
One more thing: if you are familiar with Microsoft VBA, you can even use an Excel macro to build the report dynamically, in a self-service way. No need to bother with an intermediate CSV file.

Run non-blocking series of jobs

A certain number of jobs needs to be executed in a sequence, such that the result of one job is the input to another. There is also a loop in one part of the job chain. Currently I run this sequence using waitForCompletion, but I am going to start the sequence from a web service, so I don't want to get stuck waiting for the response. I want to start the sequence and return.
How can I do that, considering that the jobs depend on each other?
The typical approach I follow is to use an Oozie workflow to chain the sequence of jobs, passing the dependent inputs to them accordingly.
I use a shell script to invoke the Oozie job.
I am not sure about loops within an Oozie workflow, but the link below describes a way to implement them with sub-workflows. I hope it helps you.
http://zapone.org/bernadette/2015/01/05/how-to-loop-in-oozie-using-sub-workflow/
Apart from this, the JobControl class is also a good option if the jobs need to run in sequence, and it requires less effort to implement. It is also easy to do the loop, since everything is plain Java code (see the sketch after the links below).
http://gandhigeet.blogspot.com/2012/12/hadoop-mapreduce-chaining.html
https://cloudcelebrity.wordpress.com/2012/03/30/how-to-chain-multiple-mapreduce-jobs-in-hadoop/
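To make the JobControl suggestion concrete, here is a rough sketch; jobA and jobB are assumed to be fully configured org.apache.hadoop.mapreduce.Job instances, with jobB reading jobA's output:
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class NonBlockingChain {
    // jobA and jobB are assumed to be configured elsewhere (mappers, input/output paths, etc.).
    public static void runChain(Job jobA, Job jobB) throws Exception {
        ControlledJob stepA = new ControlledJob(jobA.getConfiguration());
        stepA.setJob(jobA);
        ControlledJob stepB = new ControlledJob(jobB.getConfiguration());
        stepB.setJob(jobB);
        stepB.addDependingJob(stepA);              // B starts only after A succeeds

        JobControl control = new JobControl("my-chain");
        control.addJob(stepA);
        control.addJob(stepB);

        // JobControl is a Runnable: run it on its own thread and return immediately,
        // which is exactly the non-blocking behaviour the question asks for.
        new Thread(control).start();

        // Optionally poll later: control.allFinished(), control.getFailedJobList(), control.stop().
    }
}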

What is the difference between job.submit and job.waitForCompletion in Apache Hadoop?

I have read the documentation so I know the difference.
My question, however, is: is there any risk in using .submit instead of .waitForCompletion if I want to run several Hadoop jobs on a cluster in parallel?
I mostly use Elastic Map Reduce.
When I tried doing so, I noticed that only the first job was being executed.
If your aim is to run jobs in parallel, then there is certainly no risk in using job.submit(). The main reason job.waitForCompletion exists is that the call returns only when the job has finished, and it returns the job's success or failure status, which can be used to decide whether further steps should run.
Now, getting back to your observation that only the first job is executed: this is because, by default, Hadoop schedules jobs in FIFO order. You can certainly change this behaviour by configuring a different scheduler, such as the Fair Scheduler or the Capacity Scheduler.
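Here is a hedged sketch of the submit-then-poll pattern, assuming the jobs list contains fully configured Job instances built elsewhere:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.mapreduce.Job;

public class ParallelSubmit {
    public static void runAll(List<Job> jobs) throws Exception {
        // Fire off every job without blocking; whether they actually run in
        // parallel depends on the cluster scheduler (FIFO vs Fair/Capacity).
        for (Job job : jobs) {
            job.submit();
        }

        // Poll until every job is done, then check the outcomes.
        List<Job> pending = new ArrayList<>(jobs);
        while (!pending.isEmpty()) {
            pending.removeIf(j -> {
                try {
                    return j.isComplete();         // true once the job finished, successfully or not
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            if (!pending.isEmpty()) {
                Thread.sleep(5000);                // poll every few seconds
            }
        }
        for (Job job : jobs) {
            System.out.println(job.getJobName() + " succeeded? " + job.isSuccessful());
        }
    }
}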

How to check the overall progress of PIG job

A Pig script can be translated into multiple MR jobs, and I am wondering if there is an interface or a way to see the progress of the overall Pig script, such as how many jobs are scheduled, executed, and so on.
We had the same problem at Twitter, as some of our Pig scripts spin up dozens of Map-Reduce jobs and it's sometimes hard to tell which of them is doing what, reason about efficiency of the plan, understand how many will run in parallel, etc.
So we created Twitter Ambrose: https://github.com/twitter/ambrose
It spins up a little Jetty server which gives you a nice web UI that shows the job DAG, colors the nodes as the jobs complete, gives you stats about the jobs, and tells you which relations each job is trying to calculate.
There is an illustrate command, but it throws an exception on my deployment, so I use another approach.
You can get the information on how many MR jobs are scheduled by using the explain command and looking at the Physical Plan section at the end of the explain report. To get the number of MR jobs for the script, I do the following:
./pig -e 'explain -script ./script_name.pig' > ./explain.txt
grep MapReduce ./explain.txt | wc -l
Now we have the number of MR jobs planned. To monitor script execution, before you run it, open Hadoop's JobTracker page (at "http://(IP_or_node_name):50030/jobtracker.jsp") and write down the name of the last job in the Completed Jobs section. Submit the script. Refresh the JobTracker page and count how many jobs are running and how many have completed after the one you noted. Now you have an idea of how many jobs are left to execute.
Click on each job and see its statistics and progress.
A much simpler approach is to run the script on a small dataset and note down the number of jobs; it is displayed in the console output after the script finishes. Since Pig does not change its execution plan, the number will be the same with the big dataset. By looking at the stats of each job on Hadoop's JobTracker page (via "http://(IP_or_node_name):50030/jobtracker.jsp") you can get an idea of the proportion of time each MR job takes, and then use that to roughly extrapolate the execution time on the large dataset. If you have skewed data and some Cartesian products, execution time prediction can become tricky.
