Can't run a shell script via Oozie

I'm trying to run a shell script that just creates a directory via the mkdir command, through an Oozie workflow.
I'm using HDP 2.6.5 | Oozie 4.2.0
The error message is always:
java.io.IOException: Cannot run program "test.sh" (in directory
"/hadoop/yarn/local/usercache/whorchani/appcache/application_1547225966242_3390/container_e111_1547225966242_3390_01_000002"):
error=2, No such file or directory

It's hard to answer your question if you don't at least post your Oozie XML for the relevant action. My guess is that you haven't used the <file> tag.
From Apache Oozie by Mohammad Kamrul Islam and Aravind Srinivasan
Because the shell command runs on any Hadoop node, you need to be aware of the path of the binary on these nodes. The executable has to be either available on the node or copied by the action via the distributed cache using the <file> tag. For the binaries on the node that are not copied via the cache, it’s perhaps safer and easier to debug if you always use an absolute path.
Here is a simple example of a shell action:
<action name="shell_action">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>sh</exec>
        <argument>my_shell_script.sh</argument>
        <file>/full/hdfs/path/to/your/script/my_shell_script.sh</file>
    </shell>
    <ok to="action2"/>
    <error to="fail"/>
</action>
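For the original error, the usual fix is to ship test.sh with the action via a <file> tag and call it as above. A minimal sketch of the deployment steps, assuming test.sh sits next to workflow.xml in HDFS (the paths and the Oozie server URL are hypothetical):

# Upload the script next to the workflow definition (paths are illustrative)
hdfs dfs -put -f test.sh /user/whorchani/apps/mkdir-wf/test.sh

# Submit the workflow; job.properties points oozie.wf.application.path at that same directory
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run

With <file>test.sh</file> (a path relative to the workflow directory), Oozie copies the script into the container's working directory, and the <exec>sh</exec> / <argument> pattern above finds it there, which avoids the "No such file or directory" error.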

Related

How to check whether a file exists in an HDFS location, using Oozie?

How can I check whether a file exists in an HDFS location, using Oozie?
In my HDFS location I get a file like test_08_01_2016.csv at 11 PM, on a daily basis.
I want to check whether this file exists after 11:15 PM. I can schedule the batch using an Oozie coordinator job.
But how can I validate that the file exists in HDFS?
You can use an EL expression in Oozie, like:
<decision name="CheckFile">
    <switch>
        <case to="nextOozieTask">
            ${fs:exists('/path/test_08_01_2016.csv')} <!-- note that the path must be quoted in '' -->
        </case>
        <default to="MailActionFileMissing" />
    </switch>
</decision>
You can also build the name of the file with a simple shell action, using capture output.
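For the capture-output route, the shell action runs a small script that prints key=value pairs to stdout; a minimal sketch (the script name, property name, and date pattern are assumptions):

#!/bin/bash
# build_name.sh - run from a shell action that declares <capture-output/>.
# Prints key=value pairs to stdout; later nodes read them via wf:actionData().
# Adjust the date pattern to match files like test_08_01_2016.csv.
echo "csv_file=/path/test_$(date +%m_%d_%Y).csv"

A decision node could then test something like ${fs:exists(wf:actionData('build-name')['csv_file'])}, where build-name is whatever you named the shell action.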

Set hadoop user to launch spark-submit via oozie shell action

I want to set the Hadoop user for a spark-submit launched from an Oozie shell action: the Oozie MR launcher (which runs the shell script) should run as user A, but spark-submit (started from the shell script) should run as user B.
I tried setting user.name=A (in job.properties) together with 'export HADOOP_USER_NAME=B' (in the shell script), but it doesn't work unless A=B.
Can anyone help?
P.S. I'm using Oozie 4.0.0 with CDH 5.3.1 and Spark 1.2.0 on YARN.
I'm surprised exporting the HADOOP_USER_NAME in the shell script isn't working, but you might try adding a
<shell ...>
...
<env-var>HADOOP_USER_NAME=B</env-var>
...
</shell>
to the shell action in the XML.
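For completeness, a sketch of what the shell script side might look like (the class and jar names are placeholders; yarn-cluster is the Spark 1.x cluster-mode master):

#!/bin/bash
# submit_spark.sh - launched by the Oozie shell action running as user A.
# HADOOP_USER_NAME can come from the <env-var> above or be exported here.
export HADOOP_USER_NAME=B
echo "Submitting as HADOOP_USER_NAME=${HADOOP_USER_NAME}"

# Placeholder class and jar names
spark-submit --master yarn-cluster \
    --class com.example.Main \
    my-spark-app.jar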

Execute a sub-workflow for each line of a file

I'm using Oozie Sqoop Action to import data in the Datalake.
I need an HDFS folder for each table of the source database. I have more than 300 tables.
I could have all the 300 Sqoop Actions hardcoded in a Workflow but then the Workflow would be too big for the Oozie configuration.
Error submitting job /user/me/workflow.xml
E0736: Workflow definition length [107,123] exceeded maximum allowed length [100,000]
Having a big file like that isn't a good idea, because it slows down the system (it is saved in the Oozie database) and it's hard to maintain.
The question is: how do I call a sub-workflow for each table name?
Equivalent shell script would be something like:
while read TABLE; do
    sqoop import --connect ${CONNECT} --username ${USERNAME} --password ${PASSWORD} --table ${TABLE} --target-dir ${HDFS_LOCATION}/${TABLE} --num-mappers ${NUM-MAPPERS}
done < tables.data
Where tables.data contains a list of table names, a subset of the source database's table names. For example:
TABLE_ONE
TABLE_TWO
TABLE_SIX
TABLE_TEN
And here is the sub-workflow I want to call for each table:
<workflow-app name="sub-workflow-import-table" xmlns="uri:oozie:workflow:0.5">
    <start to="sqoop-import"/>
    <action name="sqoop-import">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>sqoop import --connect ${CONNECT} --username ${USERNAME} --password ${PASSWORD} --table ${TABLE} --target-dir ${HDFS_LOCATION}/${TABLE} --num-mappers ${NUM-MAPPERS}</command>
        </sqoop>
        <ok to="end"/>
        <error to="log-and-kill"/>
    </action>
    <end name="end"/>
    <kill name="log-and-kill">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
</workflow-app>
Let me know if you need more precision.
Thanks!
David
There's sadly no way to do this nicely in Oozie - you'd need to hardcode all 300 Sqoop actions into an Oozie XML. This is because Oozie deals with directed acyclic graphs, which means loops (like your shell script) don't have an Oozie equivalent.
However, I don't think Oozie is the right tool here. Oozie requires one container per action to use as a launcher, which means your cluster will need to allocate 300 additional containers over the course of a single run. This can effectively deadlock a cluster, as you end up in situations where launchers prevent the actual jobs from running! I've worked on a large cluster with > 1000 tables, and we used Bash there to avoid this issue.
If you do want to go ahead with this in Oozie, you can't avoid generating a workflow with 300 actions. I would do it as 300 actions rather than 300 calls to sub-workflows which each call one action, else you're going to generate even more overhead. You can either create this file manually, or preferably write some code to generate the Oozie workflow XML file given a list of tables. The latter is more flexible as it allows tables to be included or excluded on a per-run basis.
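If you do go the generated-workflow route, here is a minimal sketch of a generator in Bash, assuming the tables.data file from the question. The workflow name, kill node, and output file name are placeholders; note that an Oozie Sqoop action's <command> takes the arguments without the leading "sqoop", and NUM-MAPPERS is written as NUM_MAPPERS here since a hyphen inside ${...} tends to be parsed as subtraction by EL.

#!/bin/bash
# generate_workflow.sh - emit one Sqoop action per line of tables.data.
set -euo pipefail

mapfile -t TABLES < tables.data

{
  echo '<workflow-app name="sqoop-import-all" xmlns="uri:oozie:workflow:0.5">'
  echo "  <start to=\"import-${TABLES[0]}\"/>"

  for i in "${!TABLES[@]}"; do
    TABLE=${TABLES[$i]}
    NEXT=${TABLES[$((i + 1))]:-}      # empty for the last table
    TO=${NEXT:+import-$NEXT}
    TO=${TO:-end}

    cat <<EOF
  <action name="import-$TABLE">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
      <job-tracker>\${jobTracker}</job-tracker>
      <name-node>\${nameNode}</name-node>
      <command>import --connect \${CONNECT} --username \${USERNAME} --password \${PASSWORD} --table $TABLE --target-dir \${HDFS_LOCATION}/$TABLE --num-mappers \${NUM_MAPPERS}</command>
    </sqoop>
    <ok to="$TO"/>
    <error to="log-and-kill"/>
  </action>
EOF
  done

  echo '  <kill name="log-and-kill"><message>Import failed: ${wf:errorMessage(wf:lastErrorNode())}</message></kill>'
  echo '  <end name="end"/>'
  echo '</workflow-app>'
} > generated-workflow.xml

With 300 actions the generated file may still exceed the definition-length limit from the E0736 error above, so you might also need to raise that limit on the Oozie server or split the output into a few workflows.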
But as I initially said, I'd stick to Bash for this one unless you have a very very good reason.
My suggestion would be to create one workflow per 50 table imports, so you have 6 of them. Call all 6 workflows as sub-workflows from a master (parent) workflow. This way you have control at a single point, and it is easy to schedule a single workflow.

Oozie shell script action

I am exploring the capabilities of Oozie for managing Hadoop workflows. I am trying to set up a shell action which invokes some hive commands. My shell script hive.sh looks like:
#!/bin/bash
hive -f hivescript
Where the hive script (which has been tested independently) creates some tables and so on. My question is where to keep the hivescript and then how to reference it from the shell script.
I've tried two ways, first using a local path, like hive -f /local/path/to/file, and using a relative path like above, hive -f hivescript, in which case I keep my hivescript in the oozie app path directory (same as hive.sh and workflow.xml) and set it to go to the distributed cache via the workflow.xml.
With both methods I get the error message:
"Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]" on the oozie web console. Additionally I've tried using hdfs paths in shell scripts and this does not work as far as I know.
My job.properties file:
nameNode=hdfs://sandbox:8020
jobTracker=hdfs://sandbox:50300
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozieProjectRoot=${nameNode}/user/sandbox/poc1
appPath=${oozieProjectRoot}/testwf
oozie.wf.application.path=${appPath}
And workflow.xml:
<shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
        </property>
    </configuration>
    <exec>${appPath}/hive.sh</exec>
    <file>${appPath}/hive.sh</file>
    <file>${appPath}/hive_pill</file>
</shell>
<ok to="end"/>
<error to="end"/>
</action>
<end name="end"/>
My objective is to use Oozie to call a Hive script through a shell script; please give your suggestions.
One thing that has always been tricky about Oozie workflows is the execution of bash scripts.
Hadoop is created to be massively parallel so the architecture acts very different than you would think.
When an Oozie workflow executes a shell action, it receives resources from your JobTracker or YARN on any of the nodes in your cluster. This means that using a local path for your file will not work, since that local storage exists only on your edge node. If the job happens to spawn on your edge node it would work, but any other time it would fail, and that placement is random.
To get around this, I found it best to have the files I needed (including the sh scripts) in hdfs in either a lib space or the same location as my workflow.
Here is a good way to approach what you are trying to achieve.
<shell xmlns="uri:oozie:shell-action:0.1">
    <exec>hive.sh</exec>
    <file>/user/lib/hive.sh#hive.sh</file>
    <file>ETL_file1.hql#hivescript</file>
</shell>
One thing you will notice is that the exec is just hive.sh, since we are assuming the file will be placed in the working directory where the shell action runs.
To make sure that last note holds, you must include the file's HDFS path; this forces Oozie to distribute that file with the action. In your case, the Hive script launcher only needs to be coded once and simply fed different files. Since we have a one-to-many relationship, hive.sh should be kept in a lib directory and not distributed with every workflow.
Lastly you see the line:
<file>ETL_file1.hql#hivescript</file>
This line does two things. Before the # we have the location of the file. It is just the file name, since we distribute our distinct Hive files alongside our workflows:
user/directory/workflow.xml
user/directory/ETL_file1.hql
and the node running the sh script will have this distributed to it automagically. Lastly, the part after the # is the name the file is given (as a symlink) in the working directory, which is what we refer to inside the sh script. This gives you the ability to reuse the same script over and over and simply feed it different files.
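To tie this back to the question, hive.sh then simply refers to the alias created after the #:

#!/bin/bash
# hive.sh - runs in the container's working directory, where Oozie has
# symlinked ETL_file1.hql to the name "hivescript" via <file>ETL_file1.hql#hivescript</file>
hive -f hivescript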
HDFS directory notes:
If the file is nested inside the same directory as the workflow, then you only need to specify relative child paths:
user/directory/workflow.xml
user/directory/hive/ETL_file1.hql
Would yield:
<file>hive/ETL_file1.hql#hivescript</file>
But if the path is outside of the workflow directory you will need the full path:
user/directory/workflow.xml
user/lib/hive.sh
would yield:
<file>/user/lib/hive.sh#hive.sh</file>
I hope this helps everyone.
From
http://oozie.apache.org/docs/3.3.0/DG_ShellActionExtension.html#Shell_Action_Schema_Version_0.2
If you keep both your shell script and your Hive script in a folder inside the workflow directory, then you can execute them.
See the command in the sample:
<exec>${EXEC}</exec>
<argument>A</argument>
<argument>B</argument>
<file>${EXEC}#${EXEC}</file> <!--Copy the executable to compute node's current working directory -->
You can write whatever commands you want in the file.
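A sketch of what the script behind ${EXEC} might look like; the <argument> values arrive as positional parameters (the commands themselves are placeholders):

#!/bin/bash
# Script referenced by ${EXEC}; the two <argument> values arrive as $1 and $2.
echo "first argument:  $1"
echo "second argument: $2"
# Any command available on the compute node can follow, e.g.
# hive -f my_script.hql    (assuming my_script.hql was also shipped with a <file> tag)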
You can also use the Hive action directly:
http://oozie.apache.org/docs/3.3.0/DG_HiveActionExtension.html

Can I rename the oozie job name dynamically

We have a Hadoop service in which we have multiple applications. We need to process the data for each application by re-executing the same workflow. These are scheduled to execute at the same time of day. The issue is that when these jobs are running, it's hard to know which application a job is running/failed/succeeded for. Of course, I can open the job configuration and find out, but that takes time, since there are tens of applications running under that service.
Is there any option in Oozie to dynamically pass the name of the workflow (or part of it) when executing the job, such as:
oozie job -run -config <filename> -name "<NameIWishToGive>"
OR
oozie job -run -config <filename> -nameSuffix "<MyApplicationNameUnderTheService>"
Also, we don't wish to create multiple job folders to execute separately, as that would be too much copy-paste.
Please suggest.
It looks to me like you should be able to just use properties set in the job config.
I was able to get a dynamic name by doing the following.
Here's an example of my workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf-${environment}">
...
</workflow-app>
And in my job.properties I had:
...
environment=test
...
The name ended up being: "map-reduce-wf-test"
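Building on that, you can vary the name per run without editing job.properties by overriding the property on the command line (the Oozie server URL is a placeholder):

# Override the 'environment' property at submit time so the same workflow
# gets a different display name per run
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -Denvironment=appA -run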
You will find a whole bunch of Oozie command lines in the Apache docs. I'm not sure which one exactly you are looking for, so I thought I'd just paste the link. Hope this helps!
I couldn't find anything in Oozie to do that. Here is a script that does a find/replace of #{appName} and #{frequency} in *.xml files and uploads all the files to HDFS. Values are taken from the properties file passed to the script as the 3rd argument.
Gist - https://gist.github.com/epishkin/5952522
Example:
./upload.sh simple_reports namenode01 simple_reports/coordinator_script-1.properties
where 'simple_reports' is a folder with workflow.xml and coordinator.xml files.
workflow.xml:
<workflow-app name="#{appName}" xmlns="uri:oozie:workflow:0.3">
...
</workflow-app>
coordinator.xml:
<coordinator-app name="#{appName}-coord" xmlns="uri:oozie:coordinator:0.2"
                 frequency="#{frequency}"
                 start="${start}"
                 end="${end}"
                 timezone="America/New_York">
    ...
</coordinator-app>
coordinator_script-1.properties:
appName=multi_network
frequency=${coord:days(7)}
...
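The core of such a script is just a placeholder-substitution pass over the templates above plus an HDFS upload; here is a minimal sketch (the linked gist is the full version, and the target HDFS path is illustrative):

#!/bin/bash
# upload.sh <app_folder> <namenode> <properties_file> - minimal sketch
set -euo pipefail
APP_DIR=$1
NAMENODE=$2
PROPS=$3

TMP=$(mktemp -d)
cp "$APP_DIR"/*.xml "$TMP"/

# Replace every #{key} placeholder with the value from the properties file
while IFS='=' read -r key value; do
  [[ -z "$key" || "$key" == \#* ]] && continue
  sed -i "s|#{$key}|$value|g" "$TMP"/*.xml
done < "$PROPS"

# Upload the rendered XML files (target path is illustrative)
hdfs dfs -put -f "$TMP"/*.xml "hdfs://$NAMENODE/user/$(whoami)/$APP_DIR/"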
Hope this helps.
I recently faced this issue as well: all the tables use the same workflow, but the name of the Oozie application should reflect the table it is processing.
Pass the table name as a parameter from job.properties (as shown above), and the name of the Oozie application will follow the pattern dataload_<tablename>.
