Hue File Browser not working - hadoop

I have installed Hue, and its File Browser is not working; it throws a "Server Error (500)".
Data from error.log:
webhdfs ERROR Failed to determine superuser of WebHdfs at http://namenode:50070/webhdfs/v1: SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hue (error 401)
Traceback (most recent call last):
File "/home/hduser/huef/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py", line 108, in superuser
sb = self.stats('/')
File "/home/hduser/huef/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py", line 188, in stats
res = self._stats(path)
File "/home/hduser/huef/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py", line 182, in _stats
raise ex
Note: I have added the following to core-site.xml and I have enabled WebHDFS:
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
Error when I try to access an HDFS file location through Oozie in Hue:
An error occurred: SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hduser (error 401)

core-site.xml
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
hdfs-site.xml
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

You need to specify hduser as the proxy user:
<property>
<name>hadoop.proxyuser.hduser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hduser.groups</name>
<value>*</value>
</property>
BTW: why are you not running Hue as 'hue'?
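If you would rather not restart the whole cluster after editing core-site.xml, the proxy-user settings can usually be reloaded in place. A minimal sketch, assuming a Hadoop 2.x install with the hdfs and yarn commands on an admin user's PATH:
# Reload the hadoop.proxyuser.* settings on the NameNode and ResourceManager
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration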

What user are you logged in as?
I had the same issue; my solution was to create a Hue user called "hdfs" and add the "hue" Linux user to the "hadoop" and "hdfs" Linux groups.
So now I am logged in as the "hdfs" user in the Hue web UI.

You may notice it says "Failed to obtain user group information".
According to the Hadoop docs, the group info is gathered by invoking the shell command groups $USERNAME (on *nix systems). Therefore, the matching user MUST EXIST as a Linux user on the HDFS NameNode, where the authentication happens.
So the solution is as simple as
useradd hue -g root
on the NameNode.
I'm deploying HDFS in a Docker container, so I use the root group. The group is the same as the user running the NameNode process (who is definitely the superuser).
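As a quick sanity check (a sketch, assuming you have a shell on the NameNode host and Hadoop uses the default shell-based group mapping):
# Verify on the NameNode that the user exists and resolves to the expected groups
id hue
groups hue    # this is the same lookup described above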

Related

Redirecting to log server for container when viewing logs of a completed Spark job run on YARN

I'm running Spark on YARN.
My Spark version is 2.1.1, and my Hadoop version is Apache Hadoop 2.7.3.
While a Spark job is running on YARN in cluster mode, I can view the executor's logs via the stdout/stderr links such as
http://hadoop-slave1:8042/node/containerlogs/container_1500432603585_0148_01_000001/hadoop/stderr?start=-4096
but once the job has completed, following the same stdout/stderr links returns an error page like
Redirecting to log server for container_1500432603585_0148_01_000001
java.lang.Exception: Unknown container. Container either has not started or has already completed or doesn't belong to this node at all.
And then it will auto redirect to
http://hadoop-slave1:8042/node/hadoop-master:19888/jobhistory/logs/hadoop-slave1:36207/container_1500432603585_0148_01_000001/container_1500432603585_0148_01_000001/hadoop
and I get another error page like
Sorry, got error 404
Please consult RFC 2616 for meanings of the error code.
Error Details
org.apache.hadoop.yarn.webapp.WebAppException: /hadoop-master:19888/jobhistory/logs/hadoop-slave1:50284/container_1500432603585_0145_01_000002/container_1500432603585_0145_01_000002/oryx: controller for hadoop-master:19888 not found
at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:232)
at org.apache.hadoop.yarn.webapp.Router.resolve(Router.java:140)
at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:134)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
Actually, I can view the executor's logs using this URL once the Spark job has completed:
http://hadoop-master:19888/jobhistory/logs/hadoop-slave1:36207/container_1500432603585_0148_01_000001/container_1500432603585_0148_01_000001/hadoop
It's a little different from the redirect URL above; the leading "hadoop-slave1:8042/node/" part is removed.
Does anyone know a better way to view the Spark logs after the job has completed?
I have configured yarn-site.xml:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop-master</value>
<description>The hostname of the RM.</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>${yarn.resourcemanager.hostname}:19888/jobhistory/logs</value>
</property>
and mapred-site.xml
<property>
<name>mapreduce.jobhistory.address</name>
<value>${yarn.resourcemanager.hostname}:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.admin.address </name>
<value>${yarn.resourcemanager.hostname}:10033</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:19888</value>
</property>
I have encountered this situation: viewing the logs of a completed Spark Streaming job through the YARN UI History tab gave the errors below:
Failed while trying to construct the redirect url to the log server. Log Server url may not be configured
java.lang.Exception: Unknown container. Container either has not started or has already completed or doesn't belong to this node at all.
The solution is to configure yarn-site.xml and add the key yarn.log.server.url:
<property>
<name>yarn.log.server.url</name>
<value>http://<LOG_SERVER_HOSTNAME>:19888/jobhistory/logs</value>
</property>
Then restart the YARN cluster to reload yarn-site.xml (this step is important!).
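On a plain Apache Hadoop 2.x install (not a vendor distribution), a minimal restart sequence might look like the following; the sbin paths are an assumption based on the default tarball layout:
# On the ResourceManager / NodeManager hosts
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
# On the JobHistory server host (the log server listening on port 19888)
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh stop historyserver
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver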

httpfs error Operation category READ is not supported in state standby

I am working with Apache Hadoop 2.7.1 and I have a cluster that consists of three nodes:
nn1
nn2
dn1
nn1 is the dfs.default.name, so it is the master NameNode.
I have installed HttpFS and started it, of course after restarting all the services. When nn1 is active and nn2 is standby, I can send this request
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
from my browser and an open/save dialog for the file appears. But when I kill the NameNode running on nn1 and start it again, high availability kicks in: nn1 becomes standby and nn2 becomes active.
HttpFS should keep working even when nn1 is standby, but sending the same request now
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
gives me the error
{"RemoteException":{"message":"Operation category READ is not supported in state standby","exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}
Shouldn't HttpFS handle nn1 being in standby and still serve the file? Is this caused by a wrong configuration, or is there some other reason?
My core-site.xml is
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
It looks like your HttpFS is not High-Availability aware yet. This could be due to missing configuration that clients need in order to connect to the currently active NameNode.
Ensure the fs.defaultFS property in core-site.xml is configured with the correct nameservice ID.
If you have the below in hdfs-site.xml
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
then in core-site.xml, it should be
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
Also configure the Java class that the DFS client uses to determine which NameNode is currently active and serving client requests.
Add this property to hdfs-site.xml
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
Restart the NameNodes and HttpFS after adding the properties on all nodes.
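To verify the behaviour after a failover, one option (a sketch, assuming HttpFS runs on nn1 on port 14000 as in the question and that nn1/nn2 are also the configured NameNode IDs) is:
# Check which NameNode is active, force a failover, then query HttpFS again
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
hdfs haadmin -failover nn1 nn2
curl "http://nn1:14000/webhdfs/v1/?op=LISTSTATUS&user.name=root"
With the failover proxy provider in place, the LISTSTATUS call should succeed no matter which NameNode is currently active.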

Oozie on YARN - oozie is not allowed to impersonate hadoop

I'm trying to use Oozie from Java to start a job on a Hadoop cluster. I have very limited experience with Oozie on Hadoop 1 and now I'm struggling trying out the same thing on YARN.
I'm given a machine that doesn't belong to the cluster, so when I try to start my job I get the following exception:
E0501 : E0501: Could not perform authorization operation, User: oozie is not allowed to impersonate hadoop
Why is that, and what can I do about it?
I read a bit about core-site.xml properties that need to be set:
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>users</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>master</value>
</property>
Does this look like the problem? Should I contact the people responsible for the cluster to fix it?
Could there be problems because I'm using the same code for YARN as I did for Hadoop 1? Should something be changed? For example, I'm setting nameNode and jobTracker in workflow.xml; should jobTracker still exist now that there is a ResourceManager? I have set the address of the ResourceManager but left the property name as jobTracker; could that be the error?
Maybe I should also mention that Ambari is used...
Please update core-site.xml:
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
As for jobTracker: its address is simply the ResourceManager address, so that is not the problem here. Once you update core-site.xml, it will work.
Reason:
The cause of this type of error is that you run the Oozie server as the hadoop user, but you define oozie as the proxy user in core-site.xml.
Solution:
Change the ownership of the Oozie installation directory to the oozie user and run the Oozie server as the oozie user; the problem will be solved.
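A rough way to check and fix this (a sketch; the install path /opt/oozie is only a placeholder, adjust it to wherever Oozie actually lives):
# See which user the Oozie server is running as
ps -ef | grep -i '[o]ozie'
# Hand the install directory to the oozie user, then restart Oozie as that user
sudo chown -R oozie:oozie /opt/oozie
sudo -u oozie /opt/oozie/bin/oozied.sh stop
sudo -u oozie /opt/oozie/bin/oozied.sh start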

Error: E0902: Exception occured: [User: Root is not allowed to impersonate root

I am trying to follow the steps given at http://www.rohitmenon.com/index.php/apache-oozie-installation/
Note: I am not using the Cloudera distribution of Hadoop.
The above link is similar to http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html, but it seems more descriptive to me.
However, while running the command below as the root user, I am getting an exception:
./bin/oozie-setup.sh sharelib create -fs
Note: I have two live nodes shown at dfshealth.jsp, and I have updated core-site.xml on all three nodes (including the NameNode) with the properties below:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
I understand this is the point where I am making a mistake. Could someone please guide me?
Stack trace:
org.apache.oozie.service.HadoopAccessorException: E0902: Exception occured: [User: root is not allowed to impersonate root]
at org.apache.oozie.service.HadoopAccessorService.createFileSystem(HadoopAccessorService.java:430)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:144)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:52)
Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not allowed to impersonate root
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:135)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:276)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:241)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.oozie.service.HadoopAccessorService$2.run(HadoopAccessorService.java:422)
at org.apache.oozie.service.HadoopAccessorService$2.run(HadoopAccessorService.java:420)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.oozie.service.HadoopAccessorService.createFileSystem(HadoopAccessorService.java:420)
... 2 more
--------------------------------------
Note: For "E0902: Exception occured: [User: oozie is not allowed to impersonate oozie]" I have followed this link as well, but it did not solve my problem.
If I change core-site.xml as below only on the NameNode,
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>[NAMENODE IP]</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>hadoop</value>
</property>
I get this exception:
Unauthorized connection for super-user: hadoop
After adding the properties to core-site.xml, restart Hadoop and try again. If it still does not work, format the NameNode and start Hadoop; then it will work.
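On a vanilla Apache Hadoop install, restarting HDFS might look like this (a sketch; the sbin scripts are an assumption based on the default layout, and note that formatting the NameNode wipes HDFS metadata, so treat that as a last resort):
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh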
You need to add these properties to core-site.xml to allow impersonation and resolve the whitelist error:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
Hope this fixes your issue.
Follow the advice in the mailing-list thread below. Hadoop before 1.1.0 doesn't support wildcards, so you have to explicitly specify the hosts and the groups:
http://mail-archives.apache.org/mod_mbox/oozie-user/201212.mbox/%3CCAOcnVr1TZZ5X0Mrb7fFA8JdW6rO6PgoJ9u0=2UYbfXf_o8r=DA#mail.gmail.com%3E
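For example, instead of the wildcard you would list concrete values; the host name oozie-host and the group hadoop below are placeholders, not values taken from the question:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>oozie-host</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>hadoop</value>
</property>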
I solved the problem by adding these properties to core-site.xml:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
It works perfectly; all my databases and tables are shown.
Try running this command using sudo:
./oozie-setup.sh sharelib create -fs hdfs://localhost:9000
Check whether the path /user/user_name/share/lib already exists in HDFS; if it does, remove it using
hadoop fs -rmr /user/user_name
After that, run sudo ./oozied.sh; Oozie will be started. Then check your localhost:11000.

How to add a hard disk to Hadoop

I installed Hadoop 2.4 on Ubuntu 14.04 and now I am trying to add an internal SATA HD to the existing cluster.
I have mounted the new HD at /mnt/hadoop and assigned its ownership to the hadoop user.
Then I tried to add it to the configuration file as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode, file:///mnt/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode, file:///mnt/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
Afterwards, I started HDFS:
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop-Datastore.out
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop-Datastore.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop-Datastore.out
It seems that it does not bring up the second HD.
This is my core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
In addition, I tried to refresh the NameNode and I get a connection problem:
Refreshing namenode [localhost:9000]
refreshNodes: Call From hadoop-Datastore/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Error: refresh of namenodes failed, see error messages above.
In addition, I can't connect to the Hadoop web interface.
It seems that I have two related problems:
1) A connection problem
2) I cannot get HDFS to use the newly installed HD
Are these problems related?
How can I fix these issues?
Thanks
EDIT
I can ping localhost and I can access localhost:50090/status.jsp.
However, I cannot access ports 50030 and 50070.
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode, file:///mnt/hadoop/hadoopdata/hdfs/namenode</value>
</property>
This is documented as:
Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
Are you sure you need this? Do you want your fsimage to be copied to both locations, for redundancy? And if so, did you actually copy the fsimage to the new HDD before starting the NameNode? See Adding a new namenode data directory to an existing cluster.
The new data directory (dfs.data.dir) is fine; the DataNode should pick it up and start using it for placing blocks.
Also, as general troubleshooting advice, look into the NameNode and DataNode logs for more clues.
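As a quick check that the DataNode actually picked up the new directory (a sketch; the .log path is a guess derived from the .out paths shown above):
hdfs dfsadmin -report      # total configured capacity should grow by the size of the new disk
df -h /mnt/hadoop          # confirm the mount is present and has free space
tail -n 100 /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop-Datastore.log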
Regarding your comment: "sudo chown -R hadoop.hadoop /usr/local/hadoop_store."
The owner has to be the hdfs user. Try:
sudo chown -R hdfs.hadoop /usr/local/hadoop_store.
