Regarding job's failure running in oozie - hadoop

A job running in Oozie is failing with the following error:
hduser@ubuntu:~/oozie/distro/target/oozie-3.3.2-distro/oozie-3.3.2$ bin/oozie job -oozie -config examples/apps/map-reduce/job.properties -run
Error: E0902 : E0902: Exception occured: [Call to 127.0.0.1:8020 failed on local exception: java.io.IOException: Broken pipe]
How can I solve this? Thanks.

Looks like Oozie is having trouble connecting to your NameNode.
It is trying to connect to 127.0.0.1:8020. Is that where the NameNode is running?
If yes, double-check that it is actually up.
If no, make sure job.properties has a line like nameNode=hdfs://namenode_host:8020
that points to the correct NameNode location,
and reference ${nameNode} in the <name-node> element of your action.
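
For reference, a minimal sketch of how the two pieces fit together (the host names and paths below are placeholders, not values taken from the question):

# job.properties (point these at your actual NameNode and JobTracker/ResourceManager)
nameNode=hdfs://namenode_host:8020
jobTracker=jobtracker_host:8021
queueName=default
oozie.wf.application.path=${nameNode}/user/${user.name}/examples/apps/map-reduce

<!-- workflow.xml: the map-reduce action reads the same properties via EL
     (mapper/reducer configuration omitted for brevity) -->
<action name="mr-node">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>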

Related

oozie: Could not perform authorization operation, Failed on local exception

I was trying to run the Oozie example from https://oozie.apache.org/docs/3.1.3-incubating/DG_Examples.html
I copied job.properties from examples/apps/aggregator to my home directory on the local filesystem and edited it as below:
nameNode=hdfs://localhost:9000
jobTracker=localhost:8032
queueName=default
examplesRoot=examples
oozie.coord.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/aggregator
start=2010-01-01T01:00Z
end=2010-01-01T03:00Z
I am sure the port is right, because hadoop fs -ls hdfs://localhost:9000 returns the files in HDFS.
Then I typed this in the terminal:
oozie job -oozie http://127.0.0.1:11000/oozie -config ~/job.properties -run
but it returned:
Error: E0501 : E0501: Could not perform authorization operation, Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "Master/x.x.x.x"; destination host is: "localhost":9000
Any help would be appreciated.
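
As a side note on the port check above: a quick way to confirm which NameNode address the Hadoop client configuration actually points at (a sketch using only standard Hadoop CLI commands):

# Print the fs.defaultFS the Hadoop client is configured with
hdfs getconf -confKey fs.defaultFS

# List HDFS through that exact address (replace with the value printed above)
hadoop fs -ls hdfs://localhost:9000/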

How to resolve Oozie error: JA009: Cannot initialize Cluster. Check configuration for mapreduce.framework.name

I have been using Oozie to schedule Spark jobs.
I am trying to deploy a Spark job on a Hadoop 2.x cluster using the Spark action available in Oozie.
In my job.properties, I have the following:
nameNode=hdfs://hostname:8020
jobTracker=hostname:8050
master=yarn-cluster
queueName=default
oozie.use.system.libpath=true
When I submit the Oozie job, I receive this error:
Error:
ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:457)
What am I doing wrong here? Is there anything to be changed in the properties file?
Thanks
I was also getting the same JA009 error. In my case the NameNode was running in safe mode, so after leaving safe mode it was able to initialize the cluster and I was able to submit the Oozie job.
You can also refer to the following Oozie error codes list:
https://oozie.apache.org/docs/4.1.0/oozie-default.xml
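
If safe mode turns out to be the cause, checking and leaving it takes two commands (a sketch using the standard HDFS admin CLI; run as a user with HDFS admin rights):

# Check whether the NameNode is in safe mode
hdfs dfsadmin -safemode get

# Leave safe mode, but only once you are sure the NameNode is healthy
hdfs dfsadmin -safemode leave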

Error while running an example Oozie job

I tried running my first Oozie job by following a blog post.
I used oozie-examples.tar.gz; after extracting it, I placed the examples in HDFS.
I tried running the map-reduce job in it but unfortunately got an error.
I ran the command below:
oozie job -oozie http://localhost:11000/oozie -config /examples/apps/map-reduce/job.properties -run
Got the error:
java.io.IOException: configuration is not specified
    at org.apache.oozie.cli.OozieCLI.getConfiguration(OozieCLI.java:787)
    at org.apache.oozie.cli.OozieCLI.jobCommand(OozieCLI.java:1026)
    at org.apache.oozie.cli.OozieCLI.processCommand(OozieCLI.java:662)
    at org.apache.oozie.cli.OozieCLI.run(OozieCLI.java:615)
    at org.apache.oozie.cli.OozieCLI.main(OozieCLI.java:218)
configuration is not specified
I don't know which configuration it is asking for, as I am using the Cloudera VM, which has all the configurations set by default.
oozie job -oozie http://localhost:11000/oozie -config /examples/apps/map-reduce/job.properties -run
The -config parameter takes a local path, not an HDFS path. The workflow.xml needs to be present in HDFS, and its path is defined in the job.properties file with the property:
oozie.wf.application.path=<path to the workflow.xml>
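
Putting that together, a minimal sketch (the local extraction path and the HDFS locations below are assumptions; adjust them to your setup):

# -config takes the job.properties from the local disk
oozie job -oozie http://localhost:11000/oozie \
    -config /home/cloudera/examples/apps/map-reduce/job.properties -run

# inside that job.properties, the application path points at the workflow in HDFS
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
queueName=default
oozie.wf.application.path=${nameNode}/user/${user.name}/examples/apps/map-reduce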

OOZIE status check throws java.lang.NullPointerException

I am new to Oozie and am trying to write an Oozie workflow on CDH 4.1.1. I started the Oozie service and then checked its status using this command:
sudo service oozie status
I got the message:
running
Then I tried this command for checking the status:
oozie admin --oozie http://localhost:11000/oozie status
And I got the below exception:
java.lang.NullPointerException
at java.io.Writer.write(Writer.java:140)
at org.apache.oozie.client.AuthOozieClient.writeAuthToken(AuthOozieClient.java:182)
at org.apache.oozie.client.AuthOozieClient.createConnection(AuthOozieClient.java:137)
at org.apache.oozie.client.OozieClient.validateWSVersion(OozieClient.java:243)
at org.apache.oozie.client.OozieClient.createURL(OozieClient.java:344)
at org.apache.oozie.client.OozieClient.access$000(OozieClient.java:76)
at org.apache.oozie.client.OozieClient$ClientCallable.call(OozieClient.java:410)
at org.apache.oozie.client.OozieClient.getSystemMode(OozieClient.java:1299)
at org.apache.oozie.cli.OozieCLI.adminCommand(OozieCLI.java:1323)
at org.apache.oozie.cli.OozieCLI.processCommand(OozieCLI.java:499)
at org.apache.oozie.cli.OozieCLI.run(OozieCLI.java:466)
at org.apache.oozie.cli.OozieCLI.main(OozieCLI.java:176)
null
Reading the stack trace, I am unable to figure out the reason for this exception. Please let me know why I got it and how to resolve it.
Try disabling the USE_AUTH_TOKEN_CACHE_SYS_PROP property in your cluster, as suggested by your stack trace and the AuthOozieClient code.
Clusters are usually set up with Kerberos-based authentication, which is configured by following the documented setup steps. Not sure if you want to do that, but I wanted to mention it as an FYI.
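
A hedged sketch of what disabling that cache can look like in practice (the property name oozie.auth.token.cache and the OOZIE_CLIENT_OPTS pass-through are assumptions based on the AuthOozieClient and bin/oozie sources; verify them against your Oozie version):

# The client caches its auth token in the user's home directory; a stale or empty
# cache file is a common trigger for this NPE, so clearing it is worth trying first
rm -f ~/.oozie-auth-token

# To disable the cache outright, set the system property the constant refers to
export OOZIE_CLIENT_OPTS="-Doozie.auth.token.cache=false"
oozie admin -oozie http://localhost:11000/oozie -status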

Oozie Job getting Suspended and not reaching YARN

I am trying to start an Oozie shell action job via the CLI as:
oozie job -config jobprops/job.properties -run
The job starts, gives me a unique ID, and I can see the job in the Oozie UI.
However, the YARN console shows no submitted jobs, and on checking the Oozie log I get the following message:
Error starting action [folder-structure].
ErrorType [TRANSIENT], ErrorCode [JA009]
Message [JA009: Permission denied: user=vikas.r, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257).
The job finally goes into the SUSPENDED state.
Why is the job trying to access "/"? How can this be resolved?
I am running as the Unix user vikas.r, with all folders in HDFS under /user/vikas.r.
The error message is quite straightforward: your Oozie job is trying to write something to / as the vikas.r user, who does not have permission to do so.
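
A short sketch of how to narrow this down (the paths under /user/vikas.r are assumptions used for illustration):

# The error already shows the ACL on "/": owner hdfs:hadoop, mode drwxr-xr-x,
# so vikas.r cannot write there. Confirm the user's own directory is usable:
hadoop fs -ls /user/vikas.r

# A common cause (an assumption here, not something visible in the log excerpt) is an
# output, <file>, or temp path in the shell action that resolves to "/" because the
# property it is built from is empty. Point such paths back under the user's directory,
# e.g. ${nameNode}/user/vikas.r/apps/folder-structure/output, rather than relaxing
# permissions on "/" itself.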