I am running an Oozie job with a workflow.xml file which begins:
<!-- <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1"> -->
<start to="imagesCreateSequenceFile"/>
<action name="imagesCreateSequenceFile">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
This gives me the following error:
Error: E0701 : E0701: XML schema error, cvc-elt.1.a: Cannot find the declaration of element 'start'.
I am pretty sure my job is pointing to the correct workflow.xml file on HDFS, so that is not the problem.
Any help is appreciated in working out why my job cannot see 'start'.
TIA!
PS I have tried a different workflow.xml and it gives the same error.
I needed to remove the comment marks around the first line, i.e. make it:
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
I have an Oozie workflow that invokes a shell script, and the shell script in turn invokes the driver class of a MapReduce job. Now I want to map my Oozie job ID to the MapReduce job ID for later processing. Is there any way to get the Oozie job ID in the workflow file so that I can pass it as an argument to my driver class for the mapping?
Following is my sample workflow.xml file
<workflow-app xmlns="uri:oozie:workflow:0.4" name="test">
<start to="start-test" />
<action name='start-test'>
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${jobScript}</exec>
<argument>${fileLocation}</argument>
<argument>${nameNode}</argument>
<argument>${jobId}</argument> <!-- this is how I wanted to pass the Oozie job ID -->
<file>${jobScriptWithPath}#${jobScript}</file>
</shell>
<ok to="end" />
<error to="kill" />
</action>
<kill name="kill">
<message>test job failed
failed:[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end" />
</workflow-app>
Following is my shell script.
hadoop jar testProject.jar testProject.MrDriver $1 $2 $3
Try to use ${wf:id()}:
String wf:id()
It returns the workflow job ID for the current workflow job.
More info here.
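For example, a minimal tweak to the shell action above, replacing the ${jobId} argument with the EL function (a sketch based on the workflow in the question):
<exec>${jobScript}</exec>
<argument>${fileLocation}</argument>
<argument>${nameNode}</argument>
<argument>${wf:id()}</argument> <!-- the current Oozie workflow job ID -->
<file>${jobScriptWithPath}#${jobScript}</file>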
Oozie drops an XML file in the CWD of the YARN container running the shell (the "launcher" container), and also sets an environment variable pointing to that XML (I cannot remember its name, though).
That XML contains a lot of information: the workflow name, the action name, the IDs of both, the run attempt number, etc.
So you can grep/sed that information back out in the shell script itself.
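A rough shell sketch of that approach; the environment variable name and the property name below are assumptions (they may differ by Oozie version), so treat this as illustrative only:
# Assumption: $OOZIE_ACTION_CONF_XML points at the action configuration XML
# and the workflow ID is stored under a property such as oozie.job.id.
WF_ID=$(grep -A1 '<name>oozie.job.id</name>' "$OOZIE_ACTION_CONF_XML" \
        | grep '<value>' | sed -e 's/.*<value>//' -e 's/<\/value>.*//')
echo "Workflow ID: $WF_ID"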
Of course, passing the ID explicitly (as suggested by Alexei) would be cleaner, but sometimes "clean" is not the best way, especially if you are concerned about whether it's the first run or not...
Can I write a sqoop import command in a script and execute it in Oozie as a coordinator workflow?
I have tried to do so and got an error saying sqoop command not found, even when I give the absolute path for sqoop to execute.
script.sh is as follows
sqoop import --connect 'jdbc:sqlserver://xx.xx.xx.xx' -username=sa -password -table materials --fields-terminated-by '^' -- --schema dbo -target-dir /user/hadoop/CFFC/oozie_materials
I have placed the file in HDFS and gave Oozie its path. The workflow is as follows:
<workflow-app xmlns='uri:oozie:workflow:0.3' name='shell-wf'>
<start to='shell1' />
<action name='shell1'>
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>script.sh</exec>
<file>script.sh#script.sh</file>
</shell>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>Script failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end' />
</workflow-app>
Oozie returns a "sqoop command not found" error in the MapReduce log.
So is this a good practice?
Thanks
The shell action will be running as a mapper task, as you have observed. The sqoop command needs to be present on every data node where the mapper might run. If you make sure the sqoop command line is there and has the proper permissions for the user who submitted the job, it should work.
The way to verify could be:
ssh to the data node as that specific user
run the sqoop command line to see if it works
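A quick check along those lines (a sketch; the host and user names are placeholders):
# Log in to a data node as the user who submits the Oozie job and confirm
# that sqoop is on the PATH and runnable:
ssh submitting_user@datanode01 'which sqoop && sqoop version'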
Try adding the sqljdbc41.jar SQL Server driver to your HDFS, add an archive tag to your workflow.xml as below, and then run the Oozie workflow again:
<archive>${HDFSAPATH}/sqljdbc41.jar#sqljdbc41.jar</archive>
If the problem persists, add a hive-site.xml with the properties below:
javax.jdo.option.ConnectionURL
hive.metastore.uris
Keep hive-site.xml in HDFS, add a file tag for it in workflow.xml, and rerun the Oozie workflow.
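A minimal sketch of that file tag, assuming ${HDFSAPATH} points at the HDFS directory holding the file (as in the archive example above):
<file>${HDFSAPATH}/hive-site.xml#hive-site.xml</file>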
I am trying to successfully run a sqoop-action in Oozie using a Hadoop Cluster.
Whenever I check on the jobs status, Oozie returns with the following status update:
Actions
ID Status Ext ID Ext Status Err Code
0000037-140930230740727-oozie-oozi-W#:start: OK - OK -
0000037-140930230740727-oozie-oozi-W#sqoop-load ERROR job_1412278758569_0002 FAILED/KILLED JA018
0000037-140930230740727-oozie-oozi-W#sqoop-load-fail OK - OK E0729
This leads me to believe that there is nothing wrong with my workflow itself, and that I am instead missing some permission.
My jobs.properties config:
nameNode=hdfs://mynamenode.demo.com:8020
jobTracker=mysnamenode.demo.com:8050
queueName=default
workingRoot=working_dir
jobOutput=/user/test/out
oozie.use.system.libpath=true
oozie.libpath=/user/oozie/share/lib
oozie.wf.application.path=${nameNode}/user/test/${workingRoot}
MyWorkFlow.xml :
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns='uri:oozie:workflow:0.4' name='sqoop-workflow'>
<start to='sqoop-load' />
<action name="sqoop-load">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/test/${workingRoot}/out-data/sqoop" />
<mkdir path="${nameNode}/user/test/${workingRoot}/out-data"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<command>import --connect jdbc:oracle:thin:#10.100.50.102:1521/db --username myID --password myPass --table SomeTable -target-dir /user/test/${workingRoot}/out-data/sqoop </command>
</sqoop>
<ok to="end"/>
<error to="sqoop-load-fail"/>
</action>
<kill name="sqoop-load-fail">
<message>Sqoop export failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end' />
</workflow-app>
Steps I have taken:
Looking up the error... I didn't find much beyond what I mentioned previously.
Checking that the required ojdbc.jar file was executable and that the /user/oozie/share/lib/sqoop directory is accessible on HDFS (a quick check is sketched after this list).
Checking to see if I have any pre-existing directories that might be causing a problem.
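One way to do the directory check mentioned above (a sketch; the path is the one from this question):
hdfs dfs -ls /user/oozie/share/lib/sqoop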
I have been searching the internet and my log files for an answer... any help provided would be much appreciated.
Update:
OK... so I added ALL of the jars within /usr/lib/sqoop/lib to /user/oozie/share/lib/sqoop. I am still getting the same errors. Checking the job log, there is something I did not post previously:
2014-10-03 11:16:35,586 WARN CoordActionUpdateXCommand:542 - USER[ambari-qa] GROUP[-] TOKEN[] APP[sqoop-workflow] JOB[0000015-141002171510902-oozie-oozi-W] ACTION[-] E1100: Command precondition does not hold before execution, [, coord action is null], Error Code: E1100
As you can see, I am running the job as "Super User"... and the error is exactly the same, so it cannot be a permission issue. I am thinking there is a jar required beyond those in the /user/oozie/share/lib/sqoop directory... perhaps I need to copy the MapReduce jars into /user/oozie/share/lib/mapreduce?
Ok...problem solved.
Apparently EVERY component of the Oozie workflow/job must have its corresponding *.jar dependencies uploaded to the Oozie ShareLib (/user/oozie/share/lib/) directories corresponding to those components.
I copied ALL the *.jars in /usr/lib/sqoop/lib into /user/oozie/share/lib.
I copied ALL the *.jars in /usr/lib/oozie/lib into /user/oozie/share/lib/oozie.
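For reference, a sketch of those two copy steps (local and HDFS paths as described above; adjust for your distribution):
hdfs dfs -put /usr/lib/sqoop/lib/*.jar /user/oozie/share/lib/
hdfs dfs -put /usr/lib/oozie/lib/*.jar /user/oozie/share/lib/oozie/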
After running the job again, the workflow stalled, and the error given was different from the last one: this time the workflow was trying to create a directory on HDFS that already existed, so I removed that directory and ran the job again...
SUCCESS!
Side note: people really need to write better exception messages. If this were just an issue a few people were having, then fine, but that is simply not the case. This particular error is giving more than a few people fits, if the requests for help online are any indication.
I faced the same problem. Just adding a
<archive>path/in/hdfs/ojdbc6.jar#ojdbc6.jar</archive>
to my workflow.xml within the <sqoop> </sqoop> tags worked for me. Got the reference here.
The following is my workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.3" name="import-job">
<start to="createtimelinetable" />
<action name="createtimelinetable">
<sqoop xmlns="uri:oozie:sqoop-action:0.3">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
</configuration>
<command>import --connect jdbc:mysql://10.65.220.75:3306/automation --table ABC --username root</command>
</sqoop>
<ok to="end"/>
<error to="end"/>
</action>
<end name="end"/>
</workflow-app>
Getting the following error on trying to submit the job:
Error: E0701 : E0701: XML schema error, cvc-elt.1.a: Cannot find the declaration of element 'action'.
However, oozie validate workflow.xml returns:
Valid worflow-app
Has anyone faced and resolved a similar issue in the past?
Confirm that you have copied your workflow.xml to HDFS. You need not copy job.properties to HDFS, but you do have to copy all the other files and libraries there.
For those who reached here by googling the error message, below is the general way to resolve Oozie schema issues.
Once your workflow.xml is complete, it is best practice to validate it against the Oozie XSD schema file rather than submitting the Oozie job and facing the issue later.
A note on what an XSD schema is: it is a kind of validation file which describes
a. the sequence of tags,
b. whether a tag must be present or not,
c. what the valid sub-tags of a tag are, etc.
How to validate the workflow XML against the XSD?
a. Find the specific XSD; it is named in the xmlns (XML namespace) attribute:
<workflow-app name='FooBarWorkFlow' xmlns="uri:oozie:workflow:0.4">
In this case it is uri:oozie:workflow:0.4. Find the XSD file for uri:oozie:workflow:0.4 (get it from the appendix of the official Oozie site, or it can be found easily by googling).
b. There are numerous XML validation sites (for example https://www.liquid-technologies.com/online-xsd-validator); provide your workflow XML file and the XSD file, and validate.
Errors in the workflow XML file will be listed with line and column info. Rectify these, then use the valid workflow XML file to avoid schema validation errors in Oozie.
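You can also validate locally from the command line; a sketch, assuming you have downloaded the matching schema file (here named oozie-workflow-0.4.xsd) from the Oozie documentation:
# xmllint checks the workflow against the downloaded XSD and reports any
# schema violations with line numbers:
xmllint --noout --schema oozie-workflow-0.4.xsd workflow.xml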
oozie validate some_workflow.xml
Tells you line numbers and is much easier to understand than logging output.
I'm trying to run a hive action through Oozie. My workflow.xml is as follows:
<workflow-app name='edu-apollogrp-dfe' xmlns="uri:oozie:workflow:0.1">
<start to="HiveEvent"/>
<action name="HiveEvent">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>oozie.hive.defaults</name>
<value>${hiveConfigDefaultXml}</value>
</property>
</configuration>
<script>${hiveQuery}</script>
<param>OUTPUT=${StagingDir}</param>
</hive>
<ok to="end"/>
<error to="end"/>
</action>
<kill name='kill'>
<message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end'/>
</workflow-app>
And here is my job.properties file:
oozie.wf.application.path=${nameNode}/user/${user.name}/hiveQuery
oozie.libpath=${nameNode}/user/${user.name}/hiveQuery/lib
queueName=interactive
#QA
nameNode=hdfs://hdfs.bravo.hadoop.apollogrp.edu
jobTracker=mapred.bravo.hadoop.apollogrp.edu:8021
# Hive
hiveConfigDefaultXml=/etc/hive/conf/hive-default.xml
hiveQuery=hiveQuery.hql
StagingDir=${nameNode}/user/${user.name}/hiveQuery/Output
When I run this workflow, I end up with this error:
ACTION[0126944-130726213131121-oozie-oozi-W#HiveEvent] Launcher exception: org/apache/hadoop/hive/cli/CliDriver
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/cli/CliDriver
Error Code: JA018
Error Message: org/apache/hadoop/hive/cli/CliDriver
I'm not sure what this error means. Where am I going wrong?
EDIT
This link says error code JA018 is an "output directory exists" error in a workflow map-reduce action. But in my case the output directory does not exist, which makes it all the more confusing.
I figured out what was going wrong!
The class org/apache/hadoop/hive/cli/CliDriver is required for execution of a Hive action; this much is obvious from the error message. The class lives in the jar file hive-cli-0.7.1-cdh3u5.jar (cdh3u5 being my Cloudera version).
Oozie checks for this jar in the ShareLib directory. The location of this directory is usually configured in hive-site.xml, under the property name oozie.service.WorkflowAppService.system.libpath, so Oozie should find the jar easily.
But in my case, hive-site.xml did not include this property, so Oozie didn't know where to look for the jar, hence the java.lang.NoClassDefFoundError.
To resolve this, I had to include a parameter in my job.properties file to point Oozie to the location of the ShareLib directory, as follows:
oozie.libpath=${nameNode}/user/oozie/share/lib (the exact path depends on where the ShareLib directory is configured on your cluster).
This got rid of the error!
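For reference, a minimal sketch of the relevant job.properties entries (the path is an example; oozie.use.system.libpath=true is the related switch that tells Oozie to use the system ShareLib):
# Point Oozie at the shared libraries so hive-cli and friends are on the classpath
oozie.use.system.libpath=true
oozie.libpath=${nameNode}/user/oozie/share/lib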