I ran an Oozie coordinator that runs a workflow every hour. I don't have its ID, and when I run the command oozie jobs -oozie http://localhost:11000/oozie it only shows the workflow jobs; there is no coordinator. I would like to stop this coordinator from further processing. How can I do that?
First, a tip to avoid having to define the Oozie URL in each command:
export OOZIE_URL=http://localhost:11000/oozie
You can list the running coordinators:
oozie jobs -jobtype coordinator -filter status=RUNNING
This will return a list displaying the coordinator ID <coord_id> in the first column.
Note that you must have appropriate rights to run the following commands.
Then you can suspend the coordinator:
oozie job -suspend <coord_id>
And resume it:
oozie job -resume <coord_id>
But often you have to kill it:
oozie job -kill <coord_id>
and redeploy it:
oozie job -config job.properties -run
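Putting the steps above together, here is a minimal sketch that finds a running coordinator by its app name and kills it. The app name my-hourly-coord and the assumption that the matching row's first column is the coordinator ID are illustrative only; substitute your own values.

```shell
#!/bin/sh
# Sketch: find a running coordinator by app name and kill it.
# APP_NAME is an assumption -- substitute your coordinator's actual name.
export OOZIE_URL=http://localhost:11000/oozie
APP_NAME="my-hourly-coord"

# The listing prints the coordinator ID in the first column; keep the
# first row whose app name matches.
COORD_ID=$(oozie jobs -jobtype coordinator -filter status=RUNNING \
  | awk -v app="$APP_NAME" '$0 ~ app { print $1; exit }')

[ -n "$COORD_ID" ] && oozie job -kill "$COORD_ID"
```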
For coordinator jobs, try this:
oozie jobs -jobtype coordinator -oozie http://localhost:11000/oozie
su - {username} -c 'oozie job -oozie http://localhost:11000/oozie -kill {workflow or coordinator external ID}'
To execute this command you need to log in to your Oozie cluster, or you can run it from a local machine; in that case, replace localhost with the address of the box where Oozie is running.
Thanks
Can someone please help me get the Oozie error logs into a Hive table when jobs fail? Please suggest an approach, as I am new to this.
Write a shell script to pick up oozie job logs using:
oozie job -oozie http://localhost:11000/oozie -info <wfid>
oozie job -oozie http://localhost:11000/oozie -log <wfid>
Redirect its output into a file, which you can load into a Hive table. Then use an Oozie shell action to trigger this step on failure.
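A minimal sketch of such a script, assuming the workflow ID is passed as the first argument and that a single-column Hive table named oozie_error_logs already exists (both the table name and the /tmp path are assumptions):

```shell
#!/bin/sh
# Sketch: capture a failed workflow's info and log, then load into Hive.
# The table name oozie_error_logs and the /tmp path are assumptions.
WF_ID="$1"                           # workflow id handed to the shell action
OUT="/tmp/oozie_error_${WF_ID}.log"

oozie job -oozie http://localhost:11000/oozie -info "$WF_ID"  > "$OUT"
oozie job -oozie http://localhost:11000/oozie -log  "$WF_ID" >> "$OUT"

# Load the captured text into the pre-created single-STRING-column table.
hive -e "LOAD DATA LOCAL INPATH '$OUT' INTO TABLE oozie_error_logs"
```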
I tried running my first oozie job by following a blog post.
I used oozie-examples.tar.gz, after extracting, placed examples in hdfs.
I tried running map-reduce job in it but unfortunately got an error.
I ran the command below:
oozie job -oozie http://localhost:11000/oozie -config /examples/apps/map-reduce/job.properties -run
Got the error:
java.io.IOException: configuration is not specified
    at org.apache.oozie.cli.OozieCLI.getConfiguration(OozieCLI.java:787)
    at org.apache.oozie.cli.OozieCLI.jobCommand(OozieCLI.java:1026)
    at org.apache.oozie.cli.OozieCLI.processCommand(OozieCLI.java:662)
    at org.apache.oozie.cli.OozieCLI.run(OozieCLI.java:615)
    at org.apache.oozie.cli.OozieCLI.main(OozieCLI.java:218)
configuration is not specified
I don't know which configuration it is asking for, as I am using the Cloudera VM and it has all the configurations set by default.
oozie job -oozie http://localhost:11000/oozie -config /examples/apps/map-reduce/job.properties -run
The -config parameter takes a local path, not an HDFS path. The workflow.xml needs to be present in HDFS, and its path is defined in the job.properties file with the property:
oozie.wf.application.path=<path to the workflow.xml>
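For reference, a minimal job.properties sketch for the map-reduce example; the NameNode and ResourceManager host names and ports below are assumptions for a default single-node setup and must match your cluster:

```properties
# Sketch only -- hostnames and ports are assumptions; adjust to your cluster.
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
```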
I have checked the Oozie service at port 11000; it is connecting.
But when submitting the job, the console gets stuck.
The command used for submitting is:
oozie/bin/oozie job -submit -config /tmp/config.properties -oozie http://127.0.0.1:11000/oozie
I have also checked the logs for errors. There aren't any.
You are using the -submit command, which only creates the job in PREP state. Use -run to submit and start the job. You should get a workflow ID when you submit. This is the command you should run:
oozie/bin/oozie job -run -config /tmp/config.properties -oozie http://127.0.0.1:11000/oozie
You can check the status of the job by running:
oozie job -oozie http://localhost:11000/oozie -info <wfid>
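For what it's worth, -submit on its own is not wrong; it just leaves the job in PREP, and the job can be started afterwards with -start. A sketch of that two-step alternative (the sed pattern assumes the CLI's usual "job: <id>" output line):

```shell
#!/bin/sh
# -submit creates the job in PREP state and prints its id ("job: <id>");
# -start then begins execution. Together they are equivalent to -run.
WF_ID=$(oozie job -submit -config /tmp/config.properties \
        -oozie http://127.0.0.1:11000/oozie | sed 's/^job: //')
oozie job -oozie http://127.0.0.1:11000/oozie -start "$WF_ID"
```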
I had specified the wrong YARN port and IP address in the config file. That's why it was not connecting to YARN and the job was not submitting. I updated it and it's working fine.
I am trying to start an Oozie shell action job via the CLI as:
oozie job -config jobprops/jos.prioperties -run
The job starts, gives me a unique ID, and I can see the job in the Oozie UI.
However, the YARN console shows no submitted jobs, and on checking the log in Oozie I get the following message:
Error starting action [folder-structure].
ErrorType [TRANSIENT], ErrorCode [JA009]
Message [JA009: Permission denied: user=vikas.r, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257).
The job finally goes to SUSPENDED state.
Why is the job trying to access "/"? How can it be resolved?
I am running as the Unix user vikas.r, with all folders in HDFS under /user/vikas.r.
The error message is quite straightforward: your Oozie job is trying to write something to / as the vikas.r user, who lacks permission to do so.
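As a sketch of the fix, verify the permissions and then keep every output and staging path inside the user's home directory rather than at the root; the output path below is an assumption for illustration:

```shell
#!/bin/sh
# "/" is owned by hdfs:hadoop with mode drwxr-xr-x (per the error above),
# so vikas.r cannot write there. Check the permissions, then point the
# action's output at a writable location under the user's home.
hdfs dfs -ls -d / /user/vikas.r      # show owners and permissions

SAFE_OUT="/user/vikas.r/output"      # assumption: any path under the home works
echo "Use $SAFE_OUT as the output dir in workflow.xml / job.properties"
```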
My coordinator failed with the error: E0301 invalid resource [filename]
When I do hadoop fs -ls [filename], the file is listed.
How can I debug what is wrong? How can I check the log files?
oozie job -log requires a job ID; in my case I don't have the job ID. How can I see the logs then? I appreciate any responses.
Thank you.
If you are looking for a command line way to do this, you can run the following:
oozie job -oozie http://localhost:11000/oozie -info <wfid>
oozie job -oozie http://localhost:11000/oozie -log <wfid>
If you have $OOZIE_URL set, you do not need the -oozie param in the above statements. The first command shows the status of the job and each action. The second command digs into the Oozie log and displays the part that pertains to the workflow ID passed in.
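When the job ID itself is unknown (as in the question above), it can be recovered from the listing first. A sketch, assuming the ID appears in the first column and that two header lines precede the data rows:

```shell
#!/bin/sh
export OOZIE_URL=http://localhost:11000/oozie

# List recent workflow jobs; the id is in the first column.
oozie jobs -len 10

# Grab the newest id (assumption: two header lines precede the data rows)
# and pull its log.
WF_ID=$(oozie jobs -len 1 | awk 'NR > 2 { print $1; exit }')
oozie job -log "$WF_ID"
```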
cd /var/log/oozie/
ls
You should see the log file there.
I highly recommend using the Oozie web console when you're new to Oozie. If you are using Cloudera, it's under "Enabling the Oozie Web Console" here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_17_6.html for CDH4. The CDH3 link is similar.
Also, the job ID is printed when you submit the job.