Can I rename the oozie job name dynamically - hadoop

We have a Hadoop service under which we have multiple applications. We need to process the data for each of the applications by re-executing the same workflow. These are scheduled to execute at the same time of day. The issue is that when these jobs are running, it's hard to know which application a given job is running/failed/succeeded for. Of course, I can open the job configuration and find out, but that takes time since there are tens of applications running under that service.
Is there any option in Oozie to dynamically pass the name of the workflow (or part of it) when executing the job, such as:
oozie job -run -config <filename> -name "<NameIWishToGive>"
OR
oozie job -run -config <filename> -nameSuffix "<MyApplicationNameUnderTheService>"
Also, we don't wish to create multiple job folders to execute separately, as that would be too much copy-paste.
Please suggest.

It looks to me like you should be able to just use properties set in the job config.
I was able to get a dynamic name by doing the following.
Here's an example of my workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf-${environment}">
...
</workflow-app>
And in my job.properties I had:
...
environment=test
...
The name ended up being: "map-reduce-wf-test"
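If you don't want to edit job.properties for every run, you should also be able to override the same property on the command line when submitting the job. As a rough sketch (assuming your Oozie CLI supports the -D option, and reusing the 'environment' property from the example above):
oozie job -run -config job.properties -D environment=myAppName
The workflow would then show up as "map-reduce-wf-myAppName" without needing a separate job folder per application.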

You will find a whole bunch of Oozie command-line options in the Apache docs. I'm not sure which one exactly you are looking for, so I thought I'd just paste the link. Hope this helps!

I couldn't find anything in Oozie to do that. Here is a script that does a find/replace of #{appName} and #{frequency} in the *.xml files and then uploads all the files to HDFS. The values are taken from the properties file passed to the script as the 3rd argument.
Gist - https://gist.github.com/epishkin/5952522
Example:
./upload.sh simple_reports namenode01 simple_reports/coordinator_script-1.properties
where 'simple_reports' is a folder with workflow.xml and coordinator.xml files.
workflow.xml:
<workflow-app name="#{appName}" xmlns="uri:oozie:workflow:0.3">
...
</workflow-app>
coordinator.xml:
<coordinator-app name="#{appName}-coord" xmlns="uri:oozie:coordinator:0.2"
frequency="#{frequency}"
start="${start}"
end="${end}"
timezone="America/New_York">
...
</coordinator-app>
coordinator_script-1.properties:
appName=multi_network
frequency=${coord:days(7)}
...
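In case the gist ever goes away, the core idea is roughly the sketch below (simplified; the paths, variable handling, and target HDFS directory are my assumptions, so check the gist for the real script):
#!/bin/bash
# Render #{...} placeholders and upload the app to HDFS.
# Usage: ./upload.sh <app_folder> <namenode_host> <properties_file>
APP_DIR=$1
NAMENODE=$2
PROPS=$3

# Pull the values out of the properties file
APP_NAME=$(grep '^appName=' "$PROPS" | cut -d= -f2)
FREQUENCY=$(grep '^frequency=' "$PROPS" | cut -d= -f2-)

# Substitute the #{...} placeholders in copies of the xml files
TMP_DIR=$(mktemp -d)
cp "$APP_DIR"/*.xml "$TMP_DIR"
sed -i "s|#{appName}|$APP_NAME|g; s|#{frequency}|$FREQUENCY|g" "$TMP_DIR"/*.xml

# Upload the rendered files to HDFS
hdfs dfs -mkdir -p "hdfs://$NAMENODE/user/$USER/$APP_NAME"
hdfs dfs -put -f "$TMP_DIR"/*.xml "hdfs://$NAMENODE/user/$USER/$APP_NAME/"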
Hope this helps.

I recently faced this issue as well: all the tables use the same workflow, but the name of the Oozie application should reflect the name of the table it is processing.
Parameterize the workflow name with the table name and pass that parameter from job.properties; the name of the Oozie application will then come out as dataload_tablename.
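To make that concrete (a sketch; 'table' and the 'dataload_' prefix are just the names implied above):
<workflow-app xmlns="uri:oozie:workflow:0.3" name="dataload_${table}">
...
</workflow-app>
with, in job.properties:
table=customer
which makes the running application show up as "dataload_customer".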

Related

Unable to deploy Spark jobs using Oozie

I need to keep a Spark job running 24/7, and for this I am using Oozie. To do this I have written workflow.xml and job.properties files containing the information needed to invoke it.
However, when I try to submit the Oozie job using this:
oozie job –config /home/oozie/tst/job.properties -run
I get the following error message, which is very clear:
java.io.IOException: configuration is not specified
at org.apache.oozie.cli.OozieCLI.getConfiguration(OozieCLI.java:816)
at org.apache.oozie.cli.OozieCLI.jobCommand(OozieCLI.java:1055)
at org.apache.oozie.cli.OozieCLI.processCommand(OozieCLI.java:686)
at org.apache.oozie.cli.OozieCLI.run(OozieCLI.java:639)
at org.apache.oozie.cli.OozieCLI.main(OozieCLI.java:225)
configuration is not specified
The odd thing is that the configuration file (job.properties) does exist locally at the path specified. I have also put the directory containing both files and the .jar into HDFS.
Any idea why is this failing?
Is Oozie the best tool for this task I have?
The -config parameter takes a local path, not an HDFS path. Check that job.properties is present at /home/oozie/tst/job.properties.
Check that job.properties contains oozie.wf.application.path=<HDFS path of the directory where workflow.xml is present>.
Also, the dash (–) in front of the config parameter is different from the dash (-) in front of the run parameter: it is an en-dash, which is why the CLI reports that the configuration is not specified.
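For reference, a minimal job.properties along those lines might look like this (host names and paths below are placeholders, not taken from the question):
nameNode=hdfs://your_namenode:8020
jobTracker=your_resourcemanager:8032
oozie.use.system.libpath=true
# HDFS directory that contains workflow.xml (with the Spark jar under its lib/ folder)
oozie.wf.application.path=${nameNode}/user/oozie/tst/app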
Specify the host in your command
oozie job --oozie http://your_host:11000/oozie -config /home/oozie/tst/job.properties -run
11000 is the default port.

Oozie - Run workflow by command line with configuration file in HDFS

As a newbie with Oozie, I tried to run some tutorials from the command line. My step-by-step:
upload my Oozie project (workflow XML file, job.properties file, jar and data) to HDFS via the HUE interface. In my job.properties file, I've filled in all the information such as the name node, the path to my application, ...
running via the HUE interface is simple: I tick the checkbox of the workflow XML file and submit
I would like to run my Oozie project by command line:
With the job.properties file stored locally, I run:
oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run
How can I run my Oozie project with the configuration file stored in HDFS instead of the local job.properties file?
Thanks for any suggestion and feel free to comment if my question is not clear!
I don't know if there is a direct way, but you certainly could do something like
oozie job -oozie http://localhost:11000/oozie -config <(hdfs dfs -cat examples/apps/map-reduce/job.properties) -run
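If your shell doesn't support process substitution, another workaround (not an Oozie feature, just plain HDFS commands) is to copy the properties file to a local temp location first:
hdfs dfs -get examples/apps/map-reduce/job.properties /tmp/job.properties
oozie job -oozie http://localhost:11000/oozie -config /tmp/job.properties -run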

Why MR2 map task is running under 'yarn' user and not under user I ran hadoop job?

I'm trying to run a mapreduce job on MR2, Hadoop ver. 2.6.0-cdh5.8.0. The job uses a relative path to a directory that holds a lot of files to be compressed based on some criteria (not really relevant to this question). I'm running my job as follows:
sudo -u my_user hadoop jar my_jar.jar com.example.Main
There is a folder on HDFS under the path /user/my_user/ with the files. But when I run my job I get the following exception:
java.io.FileNotFoundException: File /user/yarn/<path_from_job> does not exist.
I'm migrating this job from MR1, where it works correctly. My guess is that this is happening because of YARN, since each container is started under the 'yarn' user. In my job configuration I've tried to set mapreduce.job.user.name="my_user", but this didn't help.
I've found ${user.home} used in my job configuration, but I don't know where it is set or whether it is possible to change it.
The only solution I have found so far is to provide an absolute path to the folder. Is there any other way around this? I feel like this is not the correct approach.
Thank you
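For what it's worth, the behaviour can be reproduced with the HDFS shell: relative paths are resolved against the current user's home directory, so the same relative path points at different locations depending on who runs the job (a quick illustration with made-up directory names):
hdfs dfs -ls some_dir                  # resolves to /user/<current user>/some_dir, e.g. /user/yarn/some_dir
hdfs dfs -ls /user/my_user/some_dir    # absolute path, independent of the submitting user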

Oozie shell script action

I am exploring the capabilities of Oozie for managing Hadoop workflows. I am trying to set up a shell action which invokes some hive commands. My shell script hive.sh looks like:
#!/bin/bash
hive -f hivescript
Where the hive script (which has been tested independently) creates some tables and so on. My question is where to keep the hivescript and then how to reference it from the shell script.
I've tried two ways: first using a local path, like hive -f /local/path/to/file, and second using a relative path as above, hive -f hivescript, in which case I keep my hivescript in the Oozie app path directory (same as hive.sh and workflow.xml) and set it to go to the distributed cache via the workflow.xml.
With both methods I get the error message:
"Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]" on the oozie web console. Additionally I've tried using hdfs paths in shell scripts and this does not work as far as I know.
My job.properties file:
nameNode=hdfs://sandbox:8020
jobTracker=hdfs://sandbox:50300
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozieProjectRoot=${nameNode}/user/sandbox/poc1
appPath=${oozieProjectRoot}/testwf
oozie.wf.application.path=${appPath}
And workflow.xml:
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>${appPath}/hive.sh</exec>
<file>${appPath}/hive.sh</file>
<file>${appPath}/hive_pill</file>
</shell>
<ok to="end"/>
<error to="end"/>
</action>
<end name="end"/>
My objective is to use Oozie to call a Hive script through a shell script; please give your suggestions.
One thing that has always been tricky about Oozie workflows is the execution of bash scripts.
Hadoop is designed to be massively parallel, so the architecture behaves very differently than you might expect.
When an Oozie workflow executes a shell action, it receives resources from your JobTracker or YARN on any of the nodes in your cluster. This means that using a local path for your file will not work, since the local storage exists only on your edge node. If the job happened to spawn on your edge node it would work, but any other time it would fail, and that placement is random.
To get around this, I found it best to keep the files I needed (including the sh scripts) in HDFS, in either a lib space or the same location as my workflow.
Here is a good way to approach what you are trying to achieve.
<shell xmlns="uri:oozie:shell-action:0.1">
<exec>hive.sh</exec>
<file>/user/lib/hive.sh#hive.sh</file>
<file>ETL_file1.hql#hivescript</file>
</shell>
One thing you will notice is that the exec is just hive.sh, since we are assuming that the file will be moved to the working directory where the shell action executes.
To make sure that last note is true, you must include the file's HDFS path; this forces Oozie to distribute that file with the action. In your case, the hive script launcher should only be coded once and simply fed different files. Since we have a one-to-many relationship, hive.sh should be kept in a lib and not distributed with every workflow.
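If you go with that layout, uploading the shared launcher once is enough (a sketch; /user/lib is simply the example path used in the snippet above):
hdfs dfs -mkdir -p /user/lib
hdfs dfs -put -f hive.sh /user/lib/hive.sh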
Lastly you see the line:
<file>ETL_file1.hql#hivescript</file>
This line does two things. Before the # we have the location of the file. It is just the file name, since we distribute our distinct hive files with our workflows:
user/directory/workflow.xml
user/directory/ETL_file1.hql
and the node running the sh will have this distributed to it automagically. Lastly, the part after the # is the name the file is given in the action's working directory, which is what the sh script refers to. This gives you the ability to reuse the same script over and over and simply feed it different files.
HDFS directory notes:
if the file is nested inside the same directory as the workflow, then you only need to specify child paths:
user/directory/workflow.xml
user/directory/hive/ETL_file1.hql
Would yield:
<file>hive/ETL_file1.hql#hivescript</file>
But if the path is outside of the workflow directory you will need the full path:
user/directory/workflow.xml
user/lib/hive.sh
would yield:
<file>/user/lib/hive.sh#hive.sh</file>
I hope this helps everyone.
From
http://oozie.apache.org/docs/3.3.0/DG_ShellActionExtension.html#Shell_Action_Schema_Version_0.2
If you keep your shell script and hive script both in some folder inside the workflow directory, then you can execute them.
See the commands in the sample:
<exec>${EXEC}</exec>
<argument>A</argument>
<argument>B</argument>
<file>${EXEC}#${EXEC}</file> <!--Copy the executable to compute node's current working directory -->
You can write whatever commands you want in the file.
You can also use the Hive action directly:
http://oozie.apache.org/docs/3.3.0/DG_HiveActionExtension.html
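For completeness, a minimal Hive action along the lines of that page might look like the sketch below (the node name and script name are placeholders):
<action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <script>hivescript.hql</script>
    </hive>
    <ok to="end"/>
    <error to="end"/>
</action>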

Oozie job submission fails

I am trying to submit an example map-reduce Oozie job; all the properties are configured properly with regard to the path, name node, job-tracker port, etc. I validated the workflow.xml too. When I deploy the job I get a job ID, and when I check the status I see KILLED, and the details basically say that
/var/tmp/oozie/oozie-oozi7188507762062318929.dir/map-reduce-launcher.jar does not exist.
In order to resolve this error, just create the HDFS folders and give them appropriate permissions.
http://kadirsert.blogspot.com.tr/2014/03/oozie-says-jar-does-not-exist.html
The local file system (not HDFS) should have a '/var/tmp/oozie' directory.
If the directory doesn't exist, create it and restart the Oozie server. A lot of files, including the *-launcher.jar files, will then appear under /var/tmp/oozie.
'/var/tmp/oozie' is the value of the -Djava.io.tmpdir variable in the Oozie server's start-up command line. You can check the value with 'ps -ef | grep oozie' on the host where the Oozie server is running.
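If that is the situation, the fix is as simple as the following (run on the Oozie server host; the oozie user/group and the service name are assumptions about a typical install):
sudo mkdir -p /var/tmp/oozie
sudo chown oozie:oozie /var/tmp/oozie
sudo service oozie restart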
