How to enable chown commands via Hadoop NFS Gateway

I have a use-case where I have enabled the NFS gateway for my Hadoop system, following this nice guide. I have mounted it on another machine via:
sudo mount -v -t nfs -o vers=3,proto=tcp,nolock,noacl $ip:/dataDir /mountDir
Now I have a use-case where I need to run a chown command on a file in the dataDir folder, so I run the following:
chown user2 /mountDir/sample.txt
But this gives an error:
chown: changing ownership of `/mountDir/sample.txt': Permission denied
and I get the following in the NFS gateway logs:
18/04/05 23:54:25 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.security.AccessControlException: Non-super user cannot change owner
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:83)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
I also tried adding the following to the /etc/nfs.map file, as mentioned in the docs (an error I faced while doing this is detailed here):
uid 0 594903 // where 0 is the uid of root on the other machine, and 594903 is the uid of hdfs, which is the superuser on the datanode machine where the NFS gateway is running.
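For reference, the static mapping file described in the NFS gateway docs (by default /etc/nfs.map) takes one uid or gid entry per line, with the remote id first and the local id second, and the gateway has to be restarted before changes take effect. A minimal sketch, using the ids from my setup (the gid line and the install path are assumptions):
# /etc/nfs.map -- remote id on the left, local id on the right
uid 0 594903   # map remote root to the local hdfs superuser
gid 0 594903   # assumption: hdfs's primary group happens to use the same numeric id
# restart the gateway so the mapping file is re-read (path is a typical tarball install, adjust as needed)
$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs stop nfs3
$HADOOP_HOME/sbin/hadoop-daemon.sh --script $HADOOP_HOME/bin/hdfs start nfs3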
But I still get this error:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied. user=root is not the owner of inode=sample3.txt
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:250)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1724)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:80)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1669)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:703)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
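For comparison, this is the ownership change I am trying to get working through the mount; issued directly against HDFS as the hdfs superuser it would look like the sketch below (assuming the hdfs account exists on the gateway machine):
sudo -u hdfs hdfs dfs -chown user2 /dataDir/sample.txt   # allowed because hdfs is the HDFS superuser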
Any idea how to get this done?

Related

Unable to write to HDFS as non sudo user

I've changed the permissions of an HDFS directory via
hdfs dfs -chmod 777 /path/to/dir
but when writing to that directory as a non-sudo user, I get a permission error:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=crtemois, access=WRITE, inode="/aggregation/system/data/clean":owners:hdfs:drwxr-xr-x
The reason is that Apache Ranger was layered on top. Even though the permissions were changed via chmod 777, writing is still not possible unless the user is also granted write access in an Apache Ranger policy.
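A quick way to see this is to compare what HDFS itself reports with what the client is actually allowed to do; the POSIX bits can look fine while a Ranger policy still denies the write. A sketch using the path from the error above (_probe is just a throwaway test file):
hdfs dfs -ls -d /aggregation/system/data/clean            # shows the plain HDFS permission bits
hdfs dfs -touchz /aggregation/system/data/clean/_probe    # still denied until a Ranger policy grants WRITE to the user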

Hadoop returns permission denied

I am trying to install Hadoop (2.7) on a cluster of two machines, hmaster and hslave1. I installed Hadoop in the folder /opt/hadoop/.
I followed this tutorial, but when I run the command start-dfs.sh, I get the following error:
hmaster: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-hmaster.out
hmaster: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hmaster.out
hslave1: mkdir: impossible to create the folder « /opt/hadoop\r »: Permission denied
hslave1: chown: impossible to reach « /opt/hadoop\r/logs »: no file or folder of this type
/logs/hadoop-hadoop-datanode-localhost.localdomain.out
I used the command chmod 777 on the hadoop folder on hslave1, but I still get this error.
Instead of using /opt/, use /usr/local/. If you get that permission issue again, grant the permissions as root using chmod; I have already configured Hadoop 2.7 on 5 machines this way. Or else use "sudo chown user:user /your log files directory".
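For example, on hslave1 this would look something like the following (a sketch, assuming the daemons run as the 'hadoop' user from the tutorial):
sudo mkdir -p /opt/hadoop/logs             # make sure the log directory exists
sudo chown -R hadoop:hadoop /opt/hadoop    # hand the whole install over to the hadoop user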
It seems you have already given the master passwordless access to log in to the slave.
Make sure you are logged in with a username that exists on both servers
(hadoop in your case, as the tutorial you are following uses the 'hadoop' user).
You can edit the '/etc/sudoers' file with 'sudo', or directly type 'visudo' in the terminal, and add the following entry for the newly created 'hadoop' user:
hadoop ALL = NOPASSWD: ALL
This might resolve your issue.

Can't create directory on hadoop file system

I installed Hadoop 2.7.1 as root in /usr/local.
Now I want to give access to multiple users.
When I executed the following command as the hadoop user
hdfs dfs -mkdir /user
I got the error
mkdir: Permission denied: user=hadoop, access=WRITE, inode="/user":root:supergroup:drwxr-xr-x
How can I resolve this problem? Please help me with this.
Thanks,
suchetan
The hdfs user is the admin user for HDFS. Switch to the hdfs user and grant the necessary permissions to the user you want (hadoop),
or
you can set dfs.permissions.enabled to false in hdfs-site.xml and restart. After that you can create the folder.
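A sketch of the first option, assuming hdfs really is the superuser account on your cluster (it is whichever user started the NameNode) and hadoop is the user from the question:
sudo -u hdfs hdfs dfs -mkdir -p /user/hadoop                 # create the user's home directory as the superuser
sudo -u hdfs hdfs dfs -chown hadoop:hadoop /user/hadoop      # hand it over to the hadoop user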

AccessControlException in Hadoop for access=EXECUTE

I have a small application which reads a file from my local machine and writes the data into HDFS.
Now I want to list the files present in an HDFS folder, say HadoopTest. When I try to do that, I get the exception below:
org.apache.hadoop.security.AccessControlException: Permission denied: user=rpoornima, access=EXECUTE, inode="/hbase/HadoopTest/Hadoop_File_1.txt":rpoornima:hbase:-rw-r--r--
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:4523)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:3312)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:3289)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:652)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:431)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44098)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
I'm not sure how to resolve this issue. Kindly give your inputs.
Your exception is clear enough to show the problem.
As the exception says:
Permission denied: user=rpoornima, access=EXECUTE,
inode="/hbase/HadoopTest/Hadoop_File_1.txt":rpoornima:hbase:-rw-r--r--
This means your account rpoornima has only -rw-r--r-- permission (no execute) on the file /hbase/HadoopTest/Hadoop_File_1.txt. So you have to use another, fully privileged account to do the execution.
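For instance, the same listing run as the HDFS superuser would look like this (a sketch; 'hdfs' is assumed to be the superuser account here):
sudo -u hdfs hdfs dfs -ls /hbase/HadoopTest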
UPDATE
If you want to give access to a specific user, use the chown command:
chown
Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ]
Change the owner of files. The user must be a super-user. Additional information is in the Permissions Guide.
Options
The -R option will make the change recursively through the directory structure.
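A concrete invocation of the command quoted above might look like the following; newowner and newgroup are placeholders, and per the quote it has to be run as a super-user:
hadoop fs -chown -R newowner:newgroup /hbase/HadoopTest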

Hadoop NFS gateway - mount failed: No such file or directory

I'm trying to mount my HDFS using the NFS gateway as it is documented here:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
Unfortunately, following the documentation step by step does not work for me (Hadoop 2.7.1 on CentOS 6.6). When executing the mount command I receive the following error message:
[root@server1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync server1:/ /hdfsmount/
mount.nfs: mounting server1:/ failed, reason given by server: No such file or directory
I created the folder /hdfsmount myself, so it definitely exists. My questions are now:
Did anyone face the same issue as I do?
Do I have to configure the NFS server before I start following the steps in the documentation (e.g. I read about editing /etc/exports)?
Any help is highly appreciated!
I found the problem deep in the logs. When executing the command (see below) to start the nfs3 component of HDFS, the executing user needs permission to delete /tmp/.hdfs-nfs, which is configured as nfs.dump.dir in core-site.xml.
If the permissions are not set, you'll receive a log message like:
15/08/12 01:19:56 WARN fs.FileUtil: Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
Exception in thread "main" java.io.IOException: Cannot remove current dump directory: /tmp/.hdfs-nfs
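The straightforward fix is to make sure the user who starts nfs3 can remove and recreate that directory; a sketch, assuming the gateway is meant to run as the hdfs user:
sudo rm -rf /tmp/.hdfs-nfs    # clear the stale dump directory left over from a previous run
sudo -u hdfs /usr/local/hadoop/sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3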
Another option is to simply start the nfs component as root.
[root]> /usr/local/hadoop/sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3
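Once the gateway is up, the NFS services can be verified before retrying the mount (these checks come from the NFS gateway guide; replace server1 with your gateway host):
rpcinfo -p server1     # should list portmapper, mountd and nfs
showmount -e server1   # should show / (the HDFS root) in the export list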
