Hi, I am new to Oozie and I am getting the error E0902: Exception occured: [User: pramod is not allowed to impersonate pramod] when I run the following command:
./oozie job -oozie http://localhost:11000/oozie/ -config ~/Desktop/map-reduce/job.properties -run
My Hadoop version is 1.0.3, my Oozie version is 3.3.2, and I am running in pseudo-distributed mode.
The following is the content of my core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/pramod/hadoop-${user.name}</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
<property>
<name>hadoop.proxyuser.${user.name}.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.${user.name}.groups</name>
<value>*</value>
</property>
</configuration>
Can somebody help?
Hadoop 1.0.x does not support wildcard values in the proxy-user properties. See http://mail-archives.apache.org/mod_mbox/oozie-user/201212.mbox/%3CCAOcnVr1TZZ5X0Mrb7fFA8JdW6rO6PgoJ9u0=2UYbfXf_o8r=DA#mail.gmail.com%3E
So try:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>localhost</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>oozie,pramod</value>
</property>
One thing missed in the discussion above:
In core-site.xml you need to use the user with which Oozie is started, i.e. the user that invoked the command "bin/oozied.sh start". For example, if you have "hadoop.proxyuser.bob.hosts" along with "hadoop.proxyuser.bob.groups", then the user 'bob' would be required to start Oozie using "bin/oozied.sh start".
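A minimal sketch for the original question, assuming the Oozie server is started by the same user (pramod) who submits the job, and remembering that Hadoop 1.0.x needs explicit values rather than wildcards:
<property>
<name>hadoop.proxyuser.pramod.hosts</name>
<value>localhost</value>
</property>
<property>
<name>hadoop.proxyuser.pramod.groups</name>
<value>pramod</value>
</property>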
I don't think you can use variables in the key name, so you'll need to hardcode the user name rather than use ${user.name}.
I assume you have an oozie user (which the Oozie server runs as), so basically you want to configure the following to allow the oozie user to impersonate anyone from any host:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
Make sure you restart your HDFS / MapReduce services for this to take effect.
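On a Hadoop 1.x pseudo-distributed setup like the one in the question, assuming the stock control scripts, a simple way to do that is:
# run from the Hadoop installation directory
bin/stop-all.sh
bin/start-all.sh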
I'm using Oozie 4.3.0 with Hadoop 2.7.3.
When I want to create and run a workflow, I'm getting the following error:
Error: E0501 : E0501: Could not perform authorization operation, User: st_jsgane.gane is not allowed to impersonate st_jsgane.gane
Here is my Hadoop core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://0.0.0.0:19000</value>
</property>
<!-- OOZIE -->
<property>
<name>hadoop.proxyuser.'st_jsgane.gane'.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.'st_jsgane.gane'.groups</name>
<value>*</value>
</property>
</configuration>
*Thanks in advance for your help.*
I have installed HBase in my local Ubuntu VM, which already has Hadoop running in pseudo-distributed mode. The Hadoop version is 3.1.2 and the HBase version is 2.1.2.
I'm able to get the HBase shell, but when I try to create a table I get the following error:
hbase(main):001:0> create 'test', 'data'
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2969)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1972)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:630)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
On further investigation I found this error in the region server logs:
2019-02-12 06:21:21,484 ERROR [RS_OPEN_META-regionserver/ubuntu:16020-0] handler.OpenRegionHandler: Failed open of region=hbase:meta,,1.1588230740
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper
at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51)
at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:167)
at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:756)
at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:486)
at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:73)
at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:48)
at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:152)
at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:60)
at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:284)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2104)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Are these two related? Can somebody provide a solution for this, please?
This is my hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/bithu/hbase-temp</value>
</property>
<property>
<name>hbase.master.port</name>
<value>7000</value>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>7010</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/home/bithu/hbase-temp/tmp</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
I faced the same issue running HBase 2.1.5 on top of Hadoop 3.1.3.
Adding the property below to hbase-site.xml should solve this issue:
<property>
<name>hbase.wal.provider</name>
<value>filesystem</value>
</property>
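This works because the default asyncfs WAL provider depends on HDFS-internal classes that do not always match the Hadoop version on the classpath (hence the NoClassDefFoundError above), while the filesystem provider avoids that code path. Restart HBase afterwards so the change is picked up, assuming the standard control scripts:
bin/stop-hbase.sh
bin/start-hbase.sh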
I'm trying to connect to HiveServer2 through a JDBC connector, but I'm getting the error:
'user x can't impersonate y'
I added these properties to my core-site.xml file:
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>*</value>
</property>
Also, in hive-site.xml I have:
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
<description>
Setting this property to true will have HiveServer2 execute
Hive operations as the user making the calls to it.
</description>
</property>
I have my authentication set to none and I am connecting as anonymous. I have restarted my cluster since changing the config files as well as running:
hadoop fs -chmod g+w /user/hive/warehouse
hadoop fs -chmod g+w /tmp
Can anyone suggest why I'm still getting the error?
If you are trying to connect as a user named anonymous, the properties should be:
<property>
<name>hadoop.proxyuser.anonymous.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.anonymous.groups</name>
<value>*</value>
</property>
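After restarting HDFS so the proxy-user change takes effect, you can verify the connection with Beeline; the host and port here are assumptions (10000 is the HiveServer2 default):
beeline -u "jdbc:hive2://localhost:10000/default" -n anonymous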
I'm not able to run an Oozie job on a local single-node Hadoop cluster despite setting the user "kapil.sharma02" as a proxy user. Is this due to the wildcard character (the dot) in my user name? Can you please suggest a remedy?
kapil.sharma02$ ./oozie job -oozie http://localhost:11000/oozie -config ../examples/apps/map-reduce/job.properties -run
Error: E0501 : E0501: Could not perform authorization operation, User: kapil.sharma02 is not allowed to impersonate kapil.sharma02
Here is my core-site.xml (Hadoop 2.6.4).
I have tried adding this config both with and without the escape character, but no luck.
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.proxyuser.kapil\.sharma02.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.kapil\.sharma02.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
</configuration>
Can you please check your job log for more detailed failure information? I am not sure whether this may help you, but a similar issue is resolved here; please have a look.
I am trying to configure a MapReduce job in Oozie. This job has two different input formats and two input data folders. I used this post: How to configure oozie workflow for multi-input path with multiple mappers
and added these properties to my workflow.xml:
<property>
<name>mapred.input.dir.formats</name>
<value>folder/data/*;org.apache.hadoop.mapred.SequenceFileInputFormat\,data/*;org.apache.hadoop.mapred.TextInputFormat</value>
</property>
<property>
<name>mapred.input.dir.mappers</name>
<value>folder/data/*;....PublicMapper\,data/*;....PublicMapper</value>
</property>
But when the job is launched, I get the following error: "No input paths specified in job".
Is there anyone who can help me?
Thanks
You need to set some additional properties:
<property>
<name>mapreduce.inputformat.class</name>
<value>org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat</value>
</property>
<property>
<name>mapreduce.map.class</name>
<value>org.apache.hadoop.mapreduce.lib.input.DelegatingMapper</value>
</property>
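For background: these are the classes the MultipleInputs machinery uses under the hood. DelegatingInputFormat looks up the input format registered for each input path, and DelegatingMapper does the same for the mapper, which is what allows one job to mix formats and mappers per directory.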
I faced the same issue today, so I used the following properties:
<property>
<name>mapreduce.inputformat.class</name>
<value>org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat</value>
</property>
<property>
<name>mapreduce.map.class</name>
<value>org.apache.hadoop.mapreduce.lib.input.DelegatingMapper</value>
</property>
<property>
<name>mapreduce.input.multipleinputs.dir.formats</name>
<value>/first/input/path;org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat,/second/input/path;org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat</value>
</property>
<property>
<name>mapreduce.input.multipleinputs.dir.mappers</name>
<value>/first/input/path;com.first.Mapper,/second/input/path;com.second.Mapper</value>
</property>
The difference is that instead of mapred.input.dir.formats and mapred.input.dir.mappers, which are part of the old MapReduce API, I used mapreduce.input.multipleinputs.dir.formats and mapreduce.input.multipleinputs.dir.mappers respectively. The code worked just fine after that. I ran it on Hadoop 1.2.1 and Oozie 3.3.2.
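For context, here is a minimal sketch of where these properties sit inside an Oozie map-reduce action; the workflow name, the transitions, and the ${jobTracker}/${nameNode} parameters are placeholders, not values from the question:
<workflow-app name="multi-input-wf" xmlns="uri:oozie:workflow:0.2">
<start to="mr-node"/>
<action name="mr-node">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<!-- the four mapreduce.* properties shown above go here -->
</configuration>
</map-reduce>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Map-reduce action failed</message>
</kill>
<end name="end"/>
</workflow-app>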