Unable to change read/write permissions on HDFS directory - hadoop

I am trying to copy a text file into an HDFS location.
I'm facing an access issue, so I tried changing permissions.
But I can't change them either; I get the errors below:
chaithu@localhost:~$ hadoop fs -put test.txt /user
put: Permission denied: user=chaithu, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
chaithu@localhost:~$ hadoop fs -chmod 777 /user
chmod: changing permissions of '/user': Permission denied. user=chaithu is not the owner of inode=user
chaithu@localhost:~$ hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2017-12-20 00:23 /tmp
drwxr-xr-x - hdfs supergroup 0 2017-12-20 10:24 /user
How can I grant full read and write access to all users on this HDFS folder?

First off, you shouldn't be writing into the /user folder directly, nor setting 777 on it.
You're going to need a user directory for your current user to even run a MapReduce job, so first run sudo su - hdfs to become the HDFS superuser.
Then run these to create HDFS directories for your user account
hdfs dfs -mkdir -p /user/chaithu
hdfs dfs -chown -R chaithu /user/chaithu
hdfs dfs -chmod -R 770 /user/chaithu
Then exit from the hdfs user, and chaithu can now write to his own HDFS directory:
hadoop fs -put test.txt
That alone will put the file in the current user's folder.
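For intuition, the check that produced the Permission denied above can be sketched as a plain POSIX-style bit test. This is a simplified model only: it ignores the superuser, the sticky bit, and ACLs, and the function name is mine, not HDFS code.

```python
def hdfs_would_allow(user, groups, owner, group, mode, want):
    """Simplified model of the POSIX-style check HDFS applies to an inode."""
    if user == owner:
        bits = (mode >> 6) & 7   # owner triad
    elif group in groups:
        bits = (mode >> 3) & 7   # group triad
    else:
        bits = mode & 7          # "other" triad
    need = {"READ": 4, "WRITE": 2, "EXECUTE": 1}[want]
    return bool(bits & need)

# /user is hdfs:supergroup drwxr-xr-x (755): chaithu falls in "other", so no WRITE
print(hdfs_would_allow("chaithu", [], "hdfs", "supergroup", 0o755, "WRITE"))     # False
# after the chown/chmod above, /user/chaithu is owned by chaithu with mode 770
print(hdfs_would_allow("chaithu", [], "chaithu", "supergroup", 0o770, "WRITE"))  # True
```

Note that 770 gives the owner and group full access while "others" get nothing, which is why the answer prefers it over 777.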
Or, if that's too much work for you, write to /tmp instead.
A lazy option is to override your client's user name with the superuser's:
export HADOOP_USER_NAME=hdfs
hadoop fs -put test.txt /user
And this is why Hadoop is not secure and does not enforce user-account access by default (i.e. never do this in production).
And finally, you can always just turn permissions off entirely in hdfs-site.xml (again, only useful in development; the property is named dfs.permissions.enabled in Hadoop 2 and later):
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

If you look at your hdfs dfs -ls output, you'll see that only the HDFS superuser has write permission on that path.
You have two solutions here.
One is to change ownership through the superuser, making chaitu the owner or part of the owning group, e.g. hdfs dfs -chown -R hdfs:chaitu /path; then you will be able to access that path as a member of the owning group. The other, dirty way is to run hdfs dfs -chmod -R 777 /path as the superuser; from a security standpoint, 777 is not good.
The second is using ACLs, which grant additional users access without handing out blanket permissions.
Please go through this link for more understanding:
More on ACLs
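To make the ACL route concrete, the commands look like hdfs dfs -setfacl -m user:chaitu:rwx /path and hdfs dfs -getfacl /path. Below is a rough sketch of how such entries are evaluated; the data layout is my own invention, not HDFS code, and the real algorithm is in the HDFS Permissions Guide.

```python
# Hypothetical in-memory form of an ACL like the output of: hdfs dfs -getfacl /path
ACL = {
    "owner": ("hdfs", 0o7),               # user::rwx
    "named_users": {"chaitu": 0o7},       # user:chaitu:rwx (added via -setfacl)
    "owning_group": ("supergroup", 0o5),  # group::r-x
    "named_groups": {},
    "mask": 0o7,                          # mask::rwx
    "other": 0o5,                         # other::r-x
}

def acl_allows(user, groups, acl, need):
    """need: 4=read, 2=write, 1=execute. The mask caps every entry
    except the owner and "other" entries."""
    owner, operms = acl["owner"]
    if user == owner:
        return bool(operms & need)
    if user in acl["named_users"]:
        return bool(acl["named_users"][user] & acl["mask"] & need)
    gname, gperms = acl["owning_group"]
    candidates = [gperms] if gname in groups else []
    candidates += [p for g, p in acl["named_groups"].items() if g in groups]
    if candidates:
        return any(p & acl["mask"] & need for p in candidates)
    return bool(acl["other"] & need)

# chaitu gets write via the named-user entry, without any chmod 777:
print(acl_allows("chaitu", [], ACL, 2))  # True
# an unrelated user still cannot write:
print(acl_allows("bob", [], ACL, 2))     # False
```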
This is basic and important for you to learn. Try the suggestions above, and let me know if they don't work; I can help more based on the error you get.

Related

Why is hadoop fs -chmod useless?

This is the result of using it:
$ hadoop fs -chmod -R 777 /user/hive/
$ hdfs dfs -ls /user/hive/
Found 1 items
drwxrwx--x+ - hive hive 0 2021-08-05 14:21 /user/hive/warehouse
As can be seen, it didn't do a thing; the mode of /user/hive/warehouse is unchanged. Why could this happen, and how can I fix it?
Is Sentry enabled on this cluster? By the permissions on the warehouse folder (note the trailing +, which means extended ACLs are set), I suspect Sentry is there, and if it is, Sentry manages the permissions.
You may see a warning in the NameNode logs indicating the command was ignored.

Permission is denied when moving file from repository to another

Assume that I want to move a csv file from /home/user to /hdfs/data/adhoc/PR/02/RDO0/OUTPUT/
So :
hadoop fs -mkdir -m 777 /hdfs/data/adhoc/PR/02/RDO0/OUTPUT/
hadoop fs -moveFromLocal RDO07J420.csv $OUTPUT_FILE_OCRE/MGM7J420-${OPC_DISO8601}.csv
But, I get this problem :
moveFromLocal: Permission denied: user=fs191, access=WRITE,
inode="/hdfs/data/adhoc/PR/02/RDO0/OUTPUT/MGM7J420-.csv.COPYING":RDO0-mdoPR:bfRDO0:drwxr-x---
Your local user does not have write permission in HDFS.
Try
sudo -u hdfs hadoop fs -moveFromLocal RDO07J420.csv $OUTPUT_FILE_OCRE/MGM7J420-${OPC_DISO8601}.csv
hdfs is the superuser and has write permission, but I suggest managing users and permissions properly:
http://www.informit.com/articles/article.aspx?p=2755708&seqNum=3

hdfs dfs -put : Exception in createBlockOutputStream and java.io.EOFException: Premature EOF: no length prefix available

And I checked the web UI, which shows the datanodes in an unhealthy status. I do not know why this happens.
This is caused either by your configuration or by an abnormal termination of a datanode (while some action was running on that node).
There is no problem with hdfs dfs -put itself; just verify what is inside your directory, for example with:
hdfs dfs -ls /
Please specify your problem; an error alone isn't a problem statement when we don't know what you are trying to do.
File permission issue.
Check file permissions of dfs directory:
find /path/to/dfs -group root
In general, the owning user and group should be hdfs.
Since I had started the HDFS service as root, some DFS block files were created with root ownership.
I solved the problem by restoring the correct ownership:
sudo chown -R hdfs:hdfs /path/to/dfs
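Before running a recursive chown like the one above, it can help to preview which paths are actually mis-owned. A dry-run sketch (it checks the owner uid rather than the group, modifies nothing, and is demonstrated here on a scratch directory rather than a real dfs path):

```python
import os
import tempfile

def files_not_owned_by(root, uid):
    """Dry-run counterpart of `find /path/to/dfs -group root`:
    list entries under `root` whose owner uid differs from `uid`."""
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = os.path.join(dirpath, name)
            if os.lstat(p).st_uid != uid:
                bad.append(p)
    return bad

# Example against a scratch directory created by the current user:
scratch = tempfile.mkdtemp()
open(os.path.join(scratch, "blk_001"), "w").close()
print(files_not_owned_by(scratch, os.getuid()))  # [] -- everything is ours
```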

No such file or directory error when using Hadoop fs -copyFromLocal command

I have a local VM that has Hortonworks Hadoop and hdfs installed on it. I ssh'ed into the VM from my machine and now I am trying to copy a file from my local filesystem into hdfs through following set of commands:
[root@sandbox ~]# sudo -u hdfs hadoop fs -mkdir /folder1/
[root@sandbox ~]# sudo -u hdfs hadoop fs -copyFromLocal /root/folder1/file1.txt /hdfs_folder1/
When I execute it, I get the following error: copyFromLocal: '/root/folder1/file1.txt': No such file or directory
I can see the file right there in /root/folder1/, but with the hdfs command it throws the above error. I also tried cd'ing to /root/folder1/ and then executing the command, but the same error occurs. Why is the file not found when it is right there?
By running sudo -u hdfs hadoop fs ..., the command tries to read the local file /root/folder1/file1.txt as user hdfs, which has no access to /root.
You can do this:
Run chmod -R 755 /root. It will change permissions on the directory and files recursively, but opening up permissions on root's home directory is not recommended.
Then you can run copyFromLocal as sudo -u hdfs to copy the file from the local file system to HDFS.
Better practice is to create user space for root and copy files directly as root.
sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root
hadoop fs -copyFromLocal
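The underlying cause is reproducible locally: /root is typically mode 700, so the group/other bits shut every other account (including hdfs) out. A quick illustration against a scratch directory; nothing here touches /root itself:

```python
import os
import stat
import tempfile

# /root is typically mode 700; simulate that with a scratch directory
d = tempfile.mkdtemp()
os.chmod(d, 0o700)
mode = os.stat(d).st_mode

print(stat.filemode(mode))        # drwx------
# group and other have no read/execute bits, so another account
# cannot even list the directory, let alone read files inside it:
print(bool(mode & stat.S_IRGRP))  # False
print(bool(mode & stat.S_IXOTH))  # False
```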
I had the same problem running a Hortonworks 4 node cluster. As mentioned, user "hdfs" doesn't have permission to the root directory. The solution is to copy the information from the root folder to something the "hdfs" user can access. In the standard Hortonworks installation this is /home/hdfs
as root run the following...
mkdir /home/hdfs/folder1
cp /root/folder1/file1.txt /home/hdfs/folder1
now switch to the hdfs user and run from a directory the hdfs user can access
su hdfs
cd /home/hdfs/folder1
now you can access files as the hdfs user
hdfs dfs -put file1.txt /hdfs_folder1

AccessControlException Hadoop

I want to execute the command as root:
bin/hadoop fs -mkdir data_wm
But I obtain:
mkdir: org.apache.hadoop.security.AccessControlException: Permission
denied: user=root, access=WRITE,
inode="":georgiana:supergroup:rwxr-xr-x
I configured hadoop on pseudo distributed mode like this: http://hadoop.apache.org/docs/stable/single_node_setup.html#PseudoDistributed
I also tried putting this in hdfs-site.xml, but it doesn't work:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
Does anyone have any idea how to solve this?
This is a permission issue on the local DFS directories.
For this issue, try this command:
hadoop datanode -start
If it suggests a rollback, execute the -rollback command; it will then give you a permission error.
Go to your DFS location
and change the permissions of the data folder:
chmod 755 data
drwxr-xr-x 6 hduser hadoop 4096 Sep 13 18:49 data
drwxrwxr-x 5 hduser hadoop 4096 Sep 13 18:49 name
You are creating a directory inside an HDFS home directory: bin/hadoop fs -mkdir data_wm uses a relative path that resolves under user georgiana's home, i.e. /user/georgiana/data_wm, while you are logged in as root. Per the permission string in the message, other users have not been given write permission:
rwxr-xr-x
First 3 characters, rwx: the owner of the file/directory has full permission.
Next 3 characters, r-x: group-level permission, i.e. every other user in this group.
Last 3 characters, r-x: others, outside the group.
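The triads described above can be decoded mechanically; a small sketch (the function name is mine, and setuid/sticky flavors are ignored):

```python
def symbolic_to_octal(s):
    """Convert a 9-character symbolic mode like 'rwxr-xr-x' to an octal string."""
    digits = []
    for triad in (s[0:3], s[3:6], s[6:9]):
        # r=4, w=2, x=1; '-' contributes nothing
        digits.append(sum(v for c, v in zip(triad, (4, 2, 1)) if c != "-"))
    return "".join(str(d) for d in digits)

print(symbolic_to_octal("rwxr-xr-x"))  # 755: owner rwx, group r-x, other r-x
print(symbolic_to_octal("rwxrwxrwx"))  # 777: everyone gets everything
```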
Switch to user georgiana with su georgiana (and give the password). But if you intend to mkdir inside /user/georgiana as the root user, then give that directory the appropriate permissions:
hadoop fs -chmod 777 /user/georgiana/
This grants full permission to the owner, to users in the same group, and to users outside the group.
Cheers!
