I have set up Oozie 4.3.1 with Hadoop 2.7.3.
Oozie has been set up and is running successfully, and I am able to see the web console at http://localhost:11000/oozie/
and have also confirmed it using the oozie admin -status command.
Issue 1:
While running the Oozie examples after changing job.properties with the relevant details, I get the following error.
nameNode=hdfs://localhost:9000
jobTracker=localhost:8032
bin/oozie job -oozie http://localhost:11000/oozie -config $OOZIE_HOME/examples/apps/map-reduce/job.properties -run
Error: E0902 : E0902: Exception occured: [No FileSystem for scheme: hdfs]
Issue 2: oozie admin -sharelibupdate
[ShareLib update status]
host = http://f091403isdpbato05:11000/oozie
status = java.io.IOException: No FileSystem for scheme: hdfs
The HDFS path and the other Oozie-related .xml files have also been updated with the proper configuration.
Please let me know of any solution so I can move ahead.
This error usually means the hdfs FileSystem implementation is not being picked up from the classpath. You can try adding the following to your core-site.xml:
<property>
<name>fs.file.impl</name>
<value>org.apache.hadoop.fs.LocalFileSystem</value>
<description>The FileSystem for file: uris.</description>
</property>
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
<description>The FileSystem for hdfs: uris.</description>
</property>
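To confirm the fix (a quick sketch, reusing the localhost ports from the question), check that the hdfs: scheme now resolves and then retry the sharelib update:
hadoop fs -ls hdfs://localhost:9000/
bin/oozie admin -oozie http://localhost:11000/oozie -sharelibupdate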
I installed Hadoop on a Windows machine in pseudo-distributed mode and tried to run a MapReduce job on it. The Namenode and Datanode ran without any problems; however, the MapReduce job kept failing with the error:
Exception in thread "main" java.io.IOException: Mkdirs failed to create C:\Users\acer\AppData\Local\Temp\hadoop-unjar7787707269774970262\META-INF\license
at org.apache.hadoop.util.RunJar.ensureDirectory(RunJar.java:128)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:104)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)
at org.apache.hadoop.util.RunJar.run(RunJar.java:209)
I've checked that I already have full permission to that folder, and I also tried using maven-shade-plugin with no success.
Not sure what the issue is, but there are some things to check (see the verification sketch after the configuration below):
Verify the folder permission with the proper user for Temp\hadoop-unjar7787707269774970262\META-INF (you can use chmod -R 777 in a Unix-like shell).
Check that the Namenode is running while executing the MR job.
Check that the NodeManager service is running.
Check the configuration:
For Hadoop 1.x:
<property>
<name>mapred.job.tracker</name>
<value>localhost:9101</value>
</property>
For Hadoop 2.x:
<property>
<name>mapreduce.jobtracker.address</name>
<value>localhost:9101</value>
</property>
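To verify that the daemons mentioned above are running, jps (shipped with the JDK) lists the running Java processes; on a healthy pseudo-distributed Hadoop 2.x node you would expect output roughly like this (process IDs will differ):
jps
4201 NameNode
4389 DataNode
4598 ResourceManager
4712 NodeManager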
I am new to Oozie and was following this for my first Oozie Hive job.
As per the tutorial, I made the following files in a directory:
hive-default.xml
hive_job1.hql
job.properties
workflow.xml
But when i run this command:
oozie job -oozie http://localhost:11000/ -config /home/ec2-user/ankit/oozie_job1/job.properties -submit
I get following error:
Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of retries = 1. Exception = Could not authenticate, Authentication failed, status: 404, message: Not Found
I tried finding a solution for this on the internet, but none solved the problem (I might have missed something).
Please let me know where I am going wrong and what additional information you need from me to understand the problem.
The error is because of an incorrect value for the -oozie parameter. You forgot to add oozie at the end. It should be -oozie http://localhost:11000/oozie
oozie job -oozie http://localhost:11000/oozie -config /home/ec2-user/ankit/oozie_job1/job.properties -submit
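As a side note, the Oozie CLI also honors the OOZIE_URL environment variable, so you can set it once and drop the -oozie flag:
export OOZIE_URL=http://localhost:11000/oozie
oozie job -config /home/ec2-user/ankit/oozie_job1/job.properties -submit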
Please try setting the following properties in core-site.xml:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
where * allows all hosts and all groups, respectively.
Restart the Hadoop cluster after making the above changes.
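Depending on your setup you may not need a full restart: stock Hadoop 2.x can reload proxyuser settings with the following refresh commands (run them as the HDFS/YARN admin user):
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration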
I'm trying to use Oozie from Java to start a job on a Hadoop cluster. I have very limited experience with Oozie on Hadoop 1, and now I'm struggling to do the same thing on YARN.
I'm given a machine that doesn't belong to the cluster, so when I try to start my job I get the following exception:
E0501 : E0501: Could not perform authorization operation, User: oozie is not allowed to impersonate hadoop
Why is that, and what can I do about it?
I read a bit about core-site properties that need to be set:
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>users</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>master</value>
</property>
Does it seem that this is the problem? Should I contact the people responsible for the cluster to fix it?
Could there be problems because I'm using the same code for YARN as I did for Hadoop 1? Should something be changed? For example, I'm setting nameNode and jobTracker in workflow.xml; should jobTracker exist now that there is a ResourceManager? I have set the address of the ResourceManager but left the property name as jobTracker; could that be the error?
Maybe I should also mention that Ambari is used...
Please update core-site.xml:
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
As for the jobTracker: on YARN, the jobTracker address is simply the ResourceManager address, so that will not be the problem. Once you update the core-site.xml file, it will work.
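For reference, a minimal job.properties sketch for a YARN cluster (the hostname master is hypothetical; 8020 and 8032 are the stock default NameNode and ResourceManager ports, so check yours):
nameNode=hdfs://master:8020
jobTracker=master:8032
queueName=default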
Reason:
The cause of this type of error is that you run the Oozie server as the hadoop user, but define oozie as the proxy user in the core-site.xml file.
Solution:
Change the ownership of the Oozie installation directory to the oozie user and run the Oozie server as the oozie user, and the problem will be solved.
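A sketch of what that could look like (the install path /usr/lib/oozie is hypothetical; oozied.sh is the standard Oozie server control script):
ps -ef | grep -i oozie    # which user is the server currently running as?
sudo chown -R oozie:oozie /usr/lib/oozie    # hypothetical install dir
sudo -u oozie /usr/lib/oozie/bin/oozied.sh stop
sudo -u oozie /usr/lib/oozie/bin/oozied.sh start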
I am using the Cloudera quickstart VM in VMware to run a sample Oozie job.
I am trying to run some of the Oozie examples that come with Cloudera.
I am following this link: http://archive.cloudera.com/cdh/3/oozie/DG_Examples.html
I untarred 'oozie-examples.tar.gz' and got an examples directory.
When running Oozie, I get an error message:
[cloudera#localhost oozie-3.3.2+92]$ oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run
Output:
Error: E0901 : E0901: Namenode [localhost:8020] not allowed, not in Oozies whitelist
oozie-site.xml looks like:
vi /etc/oozie/conf.dist/oozie-site.xml:
<property>
<name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
<value>
localhost:8021
</value>
<description>
Whitelisted job tracker for Oozie service.
</description>
</property>
<property>
<name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
<value>
hdfs://localhost:8020
</value>
<description>
Whitelisted NameNode for Oozie service.
</description>
</property>
job.properties looks like:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce
What am I doing wrong?
Thank you!
In the job.properties file, I replaced localhost with localhost.localdomain, and it fixed the problem.
You should have something like the following in your /etc/hosts:
127.0.0.1 localhost.localdomain localhost
Visit this link for details.
https://issues.apache.org/jira/browse/OOZIE-1516
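Putting it together, the fixed job.properties would look roughly like this (a sketch based on the values in the question, assuming both the nameNode and jobTracker entries are updated):
nameNode=hdfs://localhost.localdomain:8020
jobTracker=localhost.localdomain:8021
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce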
I'm getting an error on sqoop2 job submission.
sqoop:000> start job --jid 1
Submission details
Job id: 1
Status: FAILURE_ON_SUBMIT
Creation date: 2013-11-06 11:21:30 IST
Last update date: 2013-11-06 11:21:30 IST
Exception: java.io.FileNotFoundException: File does not exist: hdfs://master:9000/usr/local/sqoop/server/webapps/sqoop/WEB-INF/lib/sqoop-common-1.99.3.jar
Do we need to put all sqoop jar files on HDFS?
I'm running Sqoop jobs on the same master node as Hadoop 2.2.0.
I finally copied all the required jar libs into HDFS and it worked:
hadoop fs -mkdir -p /usr/local/sqoop/server/webapps/sqoop/WEB-INF/lib/
hadoop fs -copyFromLocal /usr/local/sqoop/server/webapps/sqoop/WEB-INF/lib/*.jar /usr/local/sqoop/server/webapps/sqoop/WEB-INF/lib/
Modify the mapred-site.xml file:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
This worked for me.
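After changing mapred-site.xml, restart the Sqoop 2 server so it picks up the new setting (a sketch; in Sqoop 1.99.x the server is typically controlled via bin/sqoop.sh, and /usr/local/sqoop matches the install path shown in the question):
/usr/local/sqoop/bin/sqoop.sh server stop
/usr/local/sqoop/bin/sqoop.sh server start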