Scheduling a Sqoop job in Oozie through a shell script using Hue

I am able to run a Sqoop command in Oozie using Hue. But when I try to run the same Sqoop command from a shell script, I get the error below:
Stdoutput 2016-05-20 10:52:13,241 ERROR [main] sqoop.Sqoop (Sqoop.java:runSqoop(181)) - Got exception running Sqoop:
java.lang.RuntimeException: Could not load db driver class: oracle.jdbc.OracleDriver
I have included the JDBC jar file just as I did when running the Sqoop command directly. I don't understand why it does not work from a shell script.
Here is the workflow generated by Hue:
<workflow-app name="My_Workflow" xmlns="uri:oozie:workflow:0.5">
<start to="shell-ca31"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="shell-ca31">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
<property>
<name>oozie.use.system.libpath</name>
<value>true</value>
</property>
<property>
<name>oozie.libpath</name>
<value>/user/oozie/libext</value>
</property>
</configuration>
<exec>sqoopoozie.sh</exec>
<file>/user/yxr6907/sqoopoozie.sh#sqoopoozie.sh</file>
<archive>/user/oozie/libext/ojdbc7.jar#ojdbc7.jar</archive>
<capture-output/>
</shell>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>

When you use a shell action, the Sqoop jars are not imported into the classpath.
I was able to solve it by adding the driver jar to the classpath: export HADOOP_CLASSPATH inside the script, and Sqoop works.
Do the following:
1. Put the jar ojdbc7.jar in the action's files (a <file> element), so it is localized into the working directory.
2. Use the following command inside the shell script: export HADOOP_CLASSPATH=${PWD}/ojdbc7.jar
Instead of step 1, you can use the following properties to load the jar into the classpath:
oozie.use.system.libpath=true
oozie.libpath=/path/to/jars
Exporting HADOOP_CLASSPATH is required either way.
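A minimal sketch of what sqoopoozie.sh could look like with this fix, assuming ojdbc7.jar is shipped through a <file> element so it lands in the action's working directory (the connection string, credentials, and table name below are placeholders, not from the post):

#!/bin/bash
# Make the Oracle JDBC driver visible to the Sqoop child JVMs.
export HADOOP_CLASSPATH=${PWD}/ojdbc7.jar

# Placeholder connection details; substitute your own.
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott \
  --password-file /user/yxr6907/.oracle_password \
  --table MY_TABLE \
  --target-dir /user/yxr6907/my_table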

Related

Oozie Pig action launching error

I am trying to run a very basic Oozie workflow.
I am getting the below error when I give the command:
user#ubuntu:~/surender$ oozie job -oozie http://localhost:11000/oozie /home/user/surender/oozie_demo/job.properties -run
Error:
Error: E0501 : E0501: Could not perform authorization operation, Failed on local exception: java.io.EOFException; Host Details : local host is: "ubuntu/127.0.0.1"; destination host is: "localhost":8020;
My Oozie version is 4.0.0, and I checked that the Oozie web console is enabled.
This is how I created the Oozie workflow:
I created a directory called oozie_demo and inside it created two files:
1. workflow.xml
2. job.properties
I also created a lib directory and placed the pig script inside it.
workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2" name="pig-wf">
<start to="pig-node"/>
<action name="pig-node">
<pig>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/${wf:user()}/output/pig/simple_load"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
</configuration>
<script>simple_load.pig</script>
<param>INPUT=/user/${wf:user()}/inputfiles/records.txt</param>
<param>OUTPUT=/user/${wf:user()}/output/pig/simple_load</param>
</pig>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Pig failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
job.properties
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
oozie_demo=oozie_demo
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/user/oozie_demo
My pig script:
records = load '/user/user/inputfiles/records.txt' USING PigStorage(',');
store records into '/user/user/output/pig/simple_load' using PigStorage(',');
Could somebody help me with this? I would like to know what went wrong and how I can resolve it.
Could you check whether the NameNode is up and running at port 8020? A couple of quick checks are sketched below.
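These are hedged checks from the client host (adjust host and port to your setup):

# Is anything listening on port 8020 on the NameNode host?
netstat -tlnp | grep 8020
# Can an HDFS client actually reach the NameNode RPC port?
hadoop fs -ls hdfs://localhost:8020/

An EOFException like this one often means the client is talking to the wrong port, or to a server running an incompatible Hadoop RPC version.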

library packages not working with oozie

Hi, I am running Oozie with a shell script. In that shell script I am using SparkR jobs. Whenever I run the Oozie job, I get an error with the library.
Here is my error:
Stdoutput Running /opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/spark/bin/spark-submit --class edu.berkeley.cs.amplab.sparkr.SparkRRunner --files pi.R --master yarn-client /SparkR-pkg/lib/SparkR/sparkr-assembly-0.1.jar pi.R yarn-client 4
Stdoutput Error in library(SparkR) : there is no package called ‘SparkR’
Stdoutput Execution halted
Exit code of the Shell command 1
<<< Invocation of Shell command completed <<<
My job.properties file:
nameNode=hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020
jobTracker=ip-172-31-41-199.us-west-2.compute.internal:8032
queueName=default
oozie.libpath=hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/SparkR-pkg/lib/
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
oozieProjectRoot=shell_example
oozie.wf.application.path=${oozieProjectRoot}/apps/shell
My workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.1" name="Test">
<start to="shell-node"/>
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>script.sh</exec>
<file>oozie-oozi/script.sh#script.sh</file>
<file>/user/karun/examples/pi.R</file>
<capture-output/>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Incorrect output</message>
</kill>
<end name="end"/>
</workflow-app>
My shell script file:
export SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/spark
export YARN_CONF_DIR=/etc/hadoop/conf
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export HADOOP_CMD=/usr/bin/hadoop
/SparkR-pkg/lib/SparkR/sparkR-submit --master yarn-client pi.R yarn-client 4
I don't know how to resolve the issue. Any help will be appreciated.

Oozie running a Sqoop command in a shell script

Can I write a sqoop import command in a script and execute it in Oozie as a coordinator workflow?
I have tried to do so and got an error saying sqoop command not found, even when I give the absolute path for sqoop.
script.sh is as follows:
sqoop import --connect 'jdbc:sqlserver://xx.xx.xx.xx' --username sa --password --table materials --fields-terminated-by '^' --target-dir /user/hadoop/CFFC/oozie_materials -- --schema dbo
I have placed the file in HDFS and gave Oozie its path. The workflow is as follows:
<workflow-app xmlns='uri:oozie:workflow:0.3' name='shell-wf'>
<start to='shell1' />
<action name='shell1'>
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>script.sh</exec>
<file>script.sh#script.sh</file>
</shell>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Script failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end' />
</workflow-app>
Oozie returns a sqoop command not found error in the MapReduce log.
So is this a good practice?
Thanks
The shell action runs as a mapper task, as you have observed. The sqoop command needs to be present on each datanode where the mapper may run. If you make sure the sqoop command line is there and has the proper permissions for the user who submitted the job, it should work.
One way to verify, as sketched below:
1. ssh to a datanode as the specific user
2. run the sqoop command line to see if it works
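A short sketch of that verification, with placeholder host and user names:

# Log in to a datanode as the user who submits the workflow.
ssh myuser@datanode01
# Is a sqoop client installed, and is it on the PATH the shell action will see?
which sqoop
sqoop version
# Optionally, dry-run the exact import command from script.sh here.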
Try adding the sqljdbc41.jar SQL Server driver to HDFS, add an archive tag in your workflow.xml as below, and then run the Oozie workflow:
<archive>${HDFSAPATH}/sqljdbc41.jar#sqljdbc41.jar</archive>
If the problem persists, add a hive-site.xml with the properties below:
javax.jdo.option.ConnectionURL
hive.metastore.uris
Keep hive-site.xml in HDFS, add a file tag for it in workflow.xml, and rerun the Oozie workflow. A sketch of staging these files follows.
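A hedged sketch of staging both files in HDFS before referencing them from workflow.xml (the paths are placeholders):

# Upload the SQL Server JDBC driver.
hadoop fs -mkdir /user/hadoop/libs
hadoop fs -put sqljdbc41.jar /user/hadoop/libs/
# Upload the Hive client configuration alongside it.
hadoop fs -put /etc/hive/conf/hive-site.xml /user/hadoop/libs/

The <archive> tag would then reference /user/hadoop/libs/sqljdbc41.jar, and the <file> tag /user/hadoop/libs/hive-site.xml.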

Oozie flow not able to run hive-ql having UDF ADD commands

I am creating an Oozie workflow where I call Hive SQL scripts sequentially.
The first SQL script has simple transformation logic, while the second creates a temporary function and adds lookup files. I use this UDF later in the SQL.
ADD JAR **;
CREATE TEMPORARY FUNCTION XXXXX AS ...;
ADD FILE *;
<workflow-app xmlns="uri:oozie:workflow:0.4" name="hive-wf">
<credentials>
<credential name="hive_credentials" type="hcat">
<property>
<name>hcat.metastore.uri</name>
<value>XXXXXXXX</value>
</property>
<property>
<name>hcat.metastore.principal</name>
<value>XXXXXXXX</value>
</property>
</credential>
</credentials>
<start to="hive-1" />
<action name="hive-1" cred="hive_credentials">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>XXXXXXX</job-tracker>
<name-node>XXXXXXX</name-node>
<job-xml>/XXXXXX/oozie/oozie-hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
</configuration>
<script>/XXXXXXX/hive_1.sql</script>
</hive>
<ok to="hive-2" />
<error to="fail" />
</action>
<action name="hive-2" cred="hive_credentials">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>XXXXXXXX</job-tracker>
<name-node>XXXXXXXX</name-node>
<job-xml>/XXXXXX/oozie/oozie-hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
</configuration>
<script>/XXXXXXX/hive_2.sql</script>
</hive>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end" />
The first HQL script executes successfully. The workflow is killed while executing the second HQL script, giving the error below:
JOB[0000044-140317190624992-oozie-oozi-W] ACTION[-] E1100: Command precondition does not hold before execution, [, coord action is null], Error Code: E1100
It throws the error while executing the UDF setup commands (ADD JAR, CREATE TEMPORARY FUNCTION, ADD FILE).
I searched on this error and found links suggesting to ignore it!
But then my actual SQL in the second HQL script, the one that uses the Hive UDF, is never executed.
Can you please help?
The ADD JAR path is a local machine path. Oozie actions run on datanodes, so it is possible the jar cannot be found on the datanode, which produces the error (check the MapReduce logs; you will find the reason there).
If this is the issue, one workaround is to put the file in HDFS and copy it to the local filesystem on the datanode during execution, as sketched below.
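A minimal sketch of that workaround, assuming a shell wrapper drives the Hive script on the datanode (the jar name and HDFS path are placeholders):

# One-time upload of the UDF jar to HDFS.
hadoop fs -put myudf.jar /user/me/udfs/myudf.jar
# At execution time, localize it onto the datanode...
hadoop fs -get /user/me/udfs/myudf.jar .
# ...so the Hive script can reference a local path:
#   ADD JAR ./myudf.jar;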

Error while running Hive Action in Oozie

I'm trying to run a hive action through Oozie. My workflow.xml is as follows:
<workflow-app name='edu-apollogrp-dfe' xmlns="uri:oozie:workflow:0.1">
<start to="HiveEvent"/>
<action name="HiveEvent">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>oozie.hive.defaults</name>
<value>${hiveConfigDefaultXml}</value>
</property>
</configuration>
<script>${hiveQuery}</script>
<param>OUTPUT=${StagingDir}</param>
</hive>
<ok to="end"/>
<error to="end"/>
</action>
<kill name='kill'>
<message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end'/>
</workflow-app>
And here is my job.properties file:
oozie.wf.application.path=${nameNode}/user/${user.name}/hiveQuery
oozie.libpath=${nameNode}/user/${user.name}/hiveQuery/lib
queueName=interactive
#QA
nameNode=hdfs://hdfs.bravo.hadoop.apollogrp.edu
jobTracker=mapred.bravo.hadoop.apollogrp.edu:8021
# Hive
hiveConfigDefaultXml=/etc/hive/conf/hive-default.xml
hiveQuery=hiveQuery.hql
StagingDir=${nameNode}/user/${user.name}/hiveQuery/Output
When I run this workflow, I end up with this error:
ACTION[0126944-130726213131121-oozie-oozi-W#HiveEvent] Launcher exception: org/apache/hadoop/hive/cli/CliDriver
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/cli/CliDriver
Error Code: JA018
Error Message: org/apache/hadoop/hive/cli/CliDriver
I'm not sure what this error means. Where am I going wrong?
EDIT
This link says error code JA018 is an output directory exists error in a workflow map-reduce action. But in my case the output directory does not exist, which makes it all the more confusing.
I figured out what was going wrong!
The class org/apache/hadoop/hive/cli/CliDriver is required for the execution of a Hive action. This much is obvious from the error message. This class is within the jar file hive-cli-0.7.1-cdh3u5.jar (cdh3u5 is my Cloudera version).
Oozie looks for this jar in the ShareLib directory. The location of that directory is usually configured in oozie-site.xml, under the property oozie.service.WorkflowAppService.system.libpath, so Oozie should find the jar easily.
But in my case this property was not set, so Oozie didn't know where to look for the jar, hence the java.lang.NoClassDefFoundError.
To resolve this, I had to add a parameter to my job.properties file to point Oozie to the location of the ShareLib directory, as follows:
oozie.libpath=${nameNode}/user/oozie/share/lib
(The exact path depends on where the ShareLib directory is configured on your cluster.)
This got rid of the error!
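If you are not sure where the ShareLib lives on your cluster, a quick check (the path below is a common default, not a guarantee):

# List the ShareLib contents; the hive subdirectory should contain hive-cli-*.jar.
hadoop fs -ls /user/oozie/share/lib
hadoop fs -ls /user/oozie/share/lib/hive
# Then point job.properties at it:
#   oozie.libpath=${nameNode}/user/oozie/share/lib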
