How do I deploy and run an Oozie job? - hadoop

I'm trying to run a simple job using Oozie: a single Pig action.
I have a file, FirstScript.pig, containing:
dual = LOAD 'default.dual' USING org.apache.hcatalog.pig.HCatLoader();
store dual into 'dummy_file.txt' using PigStorage();
and a workflow.xml containing:
<workflow-app name="FirstWorkFlow" xmlns="uri:oozie:workflow:0.2">
    <start to="FirstJob"/>
    <action name="FirstJob">
        <pig>
            <job-tracker>hadoop:50300</job-tracker>
            <name-node>hdfs://hadoop:8020</name-node>
            <script>/FirstScript.pig</script>
        </pig>
        <ok to="okjob"/>
        <error to="errorjob"/>
    </action>
    <ok name='okjob'>
        <message>job OK, message[${wf:errorMessage()}]</message>
    </ok>
    <error name='errorjob'>
        <message>job error, error message[${wf:errorMessage()}]</message>
    </error>
</workflow-app>
I have created this structure:
FirstScript
|- lib
|---FirstScript.pig
|- workflow.xml
And now what?
How do I deploy it and run it with Oozie?
Can anyone more experienced help?
Regards
Pawel

I do it like this:
hadoop fs -put workflow.xml some_dir/
oozie job --oozie http://your_host:11000/oozie -config cluster_conf.xml -run
and my cluster_conf.xml looks like this (please check your ports first; they depend on your Hadoop distribution):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration>
    <property>
        <name>nameNode</name>
        <value>hdfs://my_nn:8020</value>
    </property>
    <property>
        <name>jobTracker</name>
        <value>my_jt:8050</value>
    </property>
    <property>
        <name>oozie.wf.application.path</name>
        <value>/user/my_user/some_dir/workflow.xml</value>
    </property>
</configuration>
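
Putting it together, a minimal end-to-end deployment might look like this (a sketch; host and directory names are placeholders, and the lib/ step assumes your script sits there as in the question's layout):

# Create the application directory on HDFS and upload the workflow and script
hadoop fs -mkdir some_dir
hadoop fs -mkdir some_dir/lib
hadoop fs -put workflow.xml some_dir/
hadoop fs -put FirstScript.pig some_dir/lib/

# Submit and start the job, then check its status with the returned job id
oozie job --oozie http://your_host:11000/oozie -config cluster_conf.xml -run
oozie job --oozie http://your_host:11000/oozie -info <job-id>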

The config file should point to job.properties instead of an XML file, since job.properties contains the path to workflow.xml:
oozie job --oozie http://your_host:11000/oozie -config job.properties -run
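
For reference, a minimal job.properties for this layout might look like the following (a sketch; the host names and HDFS path are placeholders). Note that oozie.wf.application.path may point either at the application directory or directly at the workflow.xml inside it:

nameNode=hdfs://my_nn:8020
jobTracker=my_jt:8050
oozie.wf.application.path=${nameNode}/user/my_user/some_dir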

Related

Scheduling a sqoop job in oozie through Shell script using Hue

I am able to run a Sqoop command in Oozie using Hue. But when I try to run the same Sqoop command by placing it in a shell script, I get an error like the one below:
Stdoutput 2016-05-20 10:52:13,241 ERROR [main] sqoop.Sqoop (Sqoop.java:runSqoop(181)) - Got exception running Sqoop:
java.lang.RuntimeException: Could not load db driver class: oracle.jdbc.OracleDriver
I have included the JDBC jar file like I did while running the Sqoop command directly. I don't understand why it is not working for the shell script.
Here is the workflow generated by Hue
<workflow-app name="My_Workflow" xmlns="uri:oozie:workflow:0.5">
    <start to="shell-ca31"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="shell-ca31">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>default</value>
                </property>
                <property>
                    <name>oozie.use.system.libpath</name>
                    <value>true</value>
                </property>
                <property>
                    <name>oozie.libpath</name>
                    <value>/user/oozie/libext</value>
                </property>
            </configuration>
            <exec>sqoopoozie.sh</exec>
            <file>/user/yxr6907/sqoopoozie.sh#sqoopoozie.sh</file>
            <archive>/user/oozie/libext/ojdbc7.jar#ojdbc7.jar</archive>
            <capture-output/>
        </shell>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
When you use a shell action, the Sqoop jars are not imported into the classpath.
I was able to solve it by adding the jar to the classpath and exporting HADOOP_CLASSPATH; then Sqoop works.
Use the following:
1. Put the jar ojdbc7.jar in the action's Files list.
2. Use the following command inside the shell script: export HADOOP_CLASSPATH=${PWD}/ojdbc7.jar
Instead of step 1, you can use the following properties to load the jar into the classpath:
oozie.use.system.libpath=true
oozie.libpath=/path/to/jars
Exporting HADOOP_CLASSPATH is required either way.
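
For illustration, a sqoopoozie.sh along these lines should then work; this is a sketch, and the connection string, credentials, and table name are placeholders (the jar appears in the working directory because it is shipped with the action):

#!/bin/bash
# ojdbc7.jar is shipped with the action, so it lands in the container's working directory
export HADOOP_CLASSPATH=${PWD}/ojdbc7.jar

# Placeholder Sqoop import; replace the connection details with your own
sqoop import \
  --connect jdbc:oracle:thin:@//db_host:1521/my_service \
  --username my_user \
  --password my_password \
  --table MY_TABLE \
  --target-dir /user/yxr6907/my_table_import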

Oozie Pig action launching error

I am trying to run a very basic Oozie workflow.
I am getting the below error when I run the command:
user@ubuntu:~/surender$ oozie job -oozie http://localhost:11000/oozie -config /home/user/surender/oozie_demo/job.properties -run
Error:
Error: E0501 : E0501: Could not perform authorization operation, Failed on local exception: java.io.EOFException; Host Details : local host is: "ubuntu/127.0.0.1"; destination host is: "localhost":8020;
My Oozie version is 4.0.0, and I checked that the Oozie web console is enabled.
This is how I created the Oozie workflow:
I created a directory called oozie_demo and inside it I created two files:
1. workflow.xml
2. job.properties
I also created a lib directory and placed the Pig script inside it.
workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2" name="pig-wf">
    <start to="pig-node"/>
    <action name="pig-node">
        <pig>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/output/pig/simple_load"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapred.compress.map.output</name>
                    <value>true</value>
                </property>
            </configuration>
            <script>simple_load.pig</script>
            <param>INPUT=/user/${wf:user()}/inputfiles/records.txt</param>
            <param>OUTPUT=/user/${wf:user()}//output/pig/simple_load</param>
        </pig>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Pig failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
job.properties
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
oozie_demo=oozie_demo
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/user/oozie_demo
My Pig script:
records = load '/user/user/inputfiles/records.txt' USING PigStorage(',');
store records into '/user/user/output/pig/simple_load' using PigStorage(',');
Could somebody help me with this? I would like to know what went wrong and how I can resolve it.
Could you check whether the NameNode is up and running at port 8020?
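If it is, a couple of quick checks can narrow this down (a sketch; the conf path is a typical location and may differ on your distribution):

# Can the client reach HDFS at the address used in job.properties?
hadoop fs -fs hdfs://localhost:8020 -ls /

# Which address is the NameNode actually configured with?
grep -A1 'fs.default' /etc/hadoop/conf/core-site.xml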

Hive-oozie action error

Here is my workflow.xml
<action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
            <delete path="${nameNode}/user/${wf:user()}/${wfeRoot}/output-data/hive"/>
            <mkdir path="${nameNode}/user/${wf:user()}/${wfeRoot}/output-data"/>
        </prepare>
        <job-xml>hive-site.xml</job-xml>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
            <property>
                <name>oozie.log.hive.level</name>
                <value>DEBUG</value>
            </property>
            <property>
                <name>oozie.hive.defaults</name>
                <value>hive-default.xml</value>
            </property>
        </configuration>
        <script>script.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
</action>
<kill name="fail">
    <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
my job.properties file
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
wfeRoot=wfe
oozie.use.system.libpath=true
oozie.libpath=/user/oozie/share/lib/hive
oozie.wf.application.path=${nameNode}/user/${user.name}/${wfeRoot}/hiveoozie
Script
create table brundesh(name string,lname string) row format delimited fields terminated by ',';
I copied hive-site.xml, script.hql, and hive-default.xml into the Oozie app directory. I am using CDH3.
Error details:
Error code: JA018
Error message: Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [9]
I copied the required jar files to the sharelib directory in HDFS; I copied all the jar files present in oozie.sharelib.tar.gz from $OOZIE_HOME.
I googled the error but had no luck. Please help me figure out where I am going wrong.
As mentioned by Ben, please check the Hive log, which is present on the respective node, or check the console URL for the details of the logs.
I would also suggest the following steps:
1. Take a backup of the shared lib jars from the DFS location.
2. Upload the same jars from the local Hive lib location to the DFS shared location as the Oozie user (see the sketch after this list).
3. Make sure there are no duplicate Hive jars in any local location other than the Hive lib path.
4. All nodes should have the same jars.
5. If you are using Pig as well, perform steps 1, 2, and 3 for Pig too.
6. Check whether the Hadoop classpath has been set properly.
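
For steps 1 and 2, something along these lines might work (a sketch; the paths are common defaults and may differ on your cluster, and the commands should be run as the oozie user):

# 1. Back up the current Hive sharelib jars from HDFS
hadoop fs -get /user/oozie/share/lib/hive ./sharelib_hive_backup

# 2. Replace them with the jars from the local Hive lib directory
hadoop fs -rm /user/oozie/share/lib/hive/*.jar
hadoop fs -put /usr/lib/hive/lib/*.jar /user/oozie/share/lib/hive/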

Hadoop streaming workflow with multiple files

I am trying to write a workflow with a Hadoop streaming action which executes an awk program. Below is my scenario.
The Hadoop streaming command works fine from the client. However, when executed as an Oozie workflow it does not work, because it is not able to find the second file. Please note that the awk script is in my local home directory (which is mounted on Hadoop as well) and the input paths are on HDFS.
In sample.awk (code attached below) I am passing two variables, $1 and $2, which should get data from file1 and file2.
Below is the command that works from the CLI; I have also attached the streaming workflow, which I configured from Hue and which is not working as expected.
/usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.3.0-mr1-cdh5.1.0.jar -D mapreduce.job.reduces=0 -D mapred.reduce.tasks=0 -input /user/cloudera/input/file1 /user/cloudera/input/file2 -output /user/cloudera/awk/ouput -mapper /home/cloudera/diff_files/op_code/sample.awk -file /home/cloudera/diff_files/op_code/sample.awk
Workflow.xml
------------------
<workflow-app name="awk" xmlns="uri:oozie:workflow:0.4">
    <global>
        <configuration>
            <property>
                <name></name>
                <value></value>
            </property>
        </configuration>
    </global>
    <start to="awk-streaming"/>
    <action name="awk-streaming" cred="">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <streaming>
                <mapper>/home/clouderasample.awk</mapper>
                <reducer>/home/clouderasample.awk</reducer>
            </streaming>
            <configuration>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/user/cloudera/awk/output</value>
                </property>
                <property>
                    <name>oozie.use.system.libpath</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/user/cloudera/awk/input</value>
                </property>
            </configuration>
            <file>/user/cloudera/awk/input/file1#file1</file>
            <file>/user/cloudera/awk/input/file2#file2</file>
        </map-reduce>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
Kindly see this link for more details: http://wiki.apache.org/hadoop/JobConfFile
mapred.input.dir takes a comma-separated list of input directories, so list both files explicitly:
<property>
    <name>mapred.input.dir</name>
    <value>/user/cloudera/awk/input/file1,/user/cloudera/awk/input/file2</value>
    <description>A comma separated list of input directories.</description>
</property>
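
For comparison, the same thing is expressed on the CLI by repeating -input, one flag per path (a sketch reusing the jar and paths from the question):

/usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.3.0-mr1-cdh5.1.0.jar \
  -D mapred.reduce.tasks=0 \
  -input /user/cloudera/input/file1 \
  -input /user/cloudera/input/file2 \
  -output /user/cloudera/awk/output \
  -mapper sample.awk \
  -file /home/cloudera/diff_files/op_code/sample.awk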

How do I pass arguments to an Oozie action using oozie.launcher.action.main.class?

Oozie has a config property called oozie.launcher.action.main.class where you can pass in the name of a "main class" for a map-reduce action (or a shell action), like so:
<configuration>
    <property>
        <name>oozie.launcher.action.main.class</name>
        <value>com.company.MyCascadingClass</value>
    </property>
</configuration>
But I need to pass arguments to my main class and can't see a way to do it. Any ideas?
I'm asking because I'm trying to launch a Cascading class/flow from within Oozie and all options I've tried so far have failed. If anyone has gotten Cascading to work from Oozie, let me know and I'll post another question asking that in particular.
As of Oozie 3 (I haven't tried Oozie 4 yet), the answer to my main question is: you can't. There is (strangely) no facility for specifying any arguments to the main class defined with the oozie.launcher.action.main.class property.
@Dmitry's suggestion in the comments to just use the Oozie java action works for a Cascading job (or any Hadoop-dependent job) because Oozie puts all the Hadoop jars in the classpath when it launches the job.
I've documented a working example of launching a Cascading job from Oozie at my blog here: http://thornydev.blogspot.com/2013/10/launching-cascading-job-from-apache.html
Here is the workflow.xml file that worked for me:
<workflow-app xmlns='uri:oozie:workflow:0.2' name='cascading-wf'>
    <start to='stage1'/>
    <action name='stage1'>
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <main-class>com.mycompany.MyCascade</main-class>
            <java-opts></java-opts>
            <arg>/user/myuser/dir1/dir2</arg>
            <arg>my-arg-2</arg>
            <arg>my-arg-3</arg>
            <file>lib/${EXEC}#${EXEC}</file>
            <capture-output/>
        </java>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>FAIL: Oh, the huge manatee!</message>
    </kill>
    <end name="end"/>
</workflow-app>
In the job.properties file that accompanies the workflow.xml, the EXEC property is defined as:
EXEC=mybig-shaded-0.0.1-SNAPSHOT.jar
and the job jar is put into the lib directory below where these two definition files are.
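
For completeness, a job.properties to accompany this workflow might look like the following (a sketch; the host names and application path are placeholders):

nameNode=hdfs://my_nn:8020
jobTracker=my_jt:8021
queueName=default
EXEC=mybig-shaded-0.0.1-SNAPSHOT.jar
oozie.wf.application.path=${nameNode}/user/myuser/cascading-wf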
