I want to execute the command as root:
bin/hadoop fs -mkdir data_wm
But I obtain:
mkdir: org.apache.hadoop.security.AccessControlException: Permission
denied: user=root, access=WRITE,
inode="":georgiana:supergroup:rwxr-xr-x
I configured hadoop on pseudo distributed mode like this: http://hadoop.apache.org/docs/stable/single_node_setup.html#PseudoDistributed
I also tried putting this in hdfs-site.xml, but it doesn't work:
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
Does anyone have any idea how to solve this?
This is a permission issue: the directory gives full read, write and execute permission only to its owner, while group members and other users only get read and execute.
For this issue, try starting the datanode:
hadoop-daemon.sh start datanode
If it suggests a rollback, run it again with the -rollback option; it will then give you a permission error.
Go to your dfs location (the directory configured as dfs.data.dir) and change the permissions of the data folder:
chmod 755 data
drwxr-xr-x 6 hduser hadoop 4096 Sep 13 18:49 data
drwxrwxr-x 5 hduser hadoop 4096 Sep 13 18:49 name
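For example, a minimal sketch assuming the dfs data directory lives under /app/hadoop/tmp/dfs (a hypothetical path; substitute the value of dfs.data.dir from your own configuration):
chmod 755 /app/hadoop/tmp/dfs/data
ls -l /app/hadoop/tmp/dfs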
You are creating a directory inside HDFS with bin/hadoop fs -mkdir data_wm, which means inside the user georgiana's area, i.e. /user/georgiana/data_wm, while you are logged in as root. As the permission message shows, other users have not been given write permission:
rwxr-xr-x
The first 3 characters, rwx: the owner of the file/directory has full permission.
The next 3 characters, r-x: group-level permission, i.e. every other user in this group can read and execute but not write.
The last 3 characters, r-x: others, outside the group, can also only read and execute.
Change user to georgiana with su georgiana and enter the password. But if you intend to mkdir inside /user/georgiana as the root user, give that directory the appropriate permissions:
hadoop fs -chmod 777 /user/georgiana/
which grants full permission to the owner, to users in the same group, and to users outside the group.
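Alternatively, if you want root to keep its own working area instead of opening up /user/georgiana, here is a minimal sketch (assuming georgiana started the NameNode and is therefore the HDFS superuser):
su georgiana
hadoop fs -mkdir /user/root
hadoop fs -chown root /user/root
exit
bin/hadoop fs -mkdir data_wm
Run as root, the last command now resolves the relative path to /user/root/data_wm.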
Cheers!
Assume that I want to move a csv file from /home/user to /hdfs/data/adhoc/PR/02/RDO0/OUTPUT/
So:
hadoop fs -mkdir -m 777 /hdfs/data/adhoc/PR/02/RDO0/OUTPUT/
hadoop fs -moveFromLocal RDO07J420.csv $OUTPUT_FILE_OCRE/MGM7J420-${OPC_DISO8601}.csv
But I get this problem:
moveFromLocal: Permission denied: user=fs191, access=WRITE,
inode="/hdfs/data/adhoc/PR/02/RDO0/OUTPUT/MGM7J420-.csv.COPYING":RDO0-mdoPR:bfRDO0:drwxr-x---
Your local user does not have write rights in HDFS.
Try
sudo -u hdfs hadoop fs -moveFromLocal RDO07J420.csv $OUTPUT_FILE_OCRE/MGM7J420-${OPC_DISO8601}.csv
hdfs is the HDFS superuser and has write rights, but I suggest managing users and permissions properly:
http://www.informit.com/articles/article.aspx?p=2755708&seqNum=3
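Alternatively, instead of running the copy as hdfs, you could grant your own user write access to the target directory, for example (a sketch, assuming fs191 should own the OUTPUT directory):
sudo -u hdfs hdfs dfs -chown fs191 /hdfs/data/adhoc/PR/02/RDO0/OUTPUT/
hdfs dfs -moveFromLocal RDO07J420.csv $OUTPUT_FILE_OCRE/MGM7J420-${OPC_DISO8601}.csv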
I am trying to copy a text file into an HDFS location.
I'm facing an access issue, so I tried changing permissions.
But I'm unable to change them; I'm getting the error below:
chaithu@localhost:~$ hadoop fs -put test.txt /user
put: Permission denied: user=chaithu, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
chaithu@localhost:~$ hadoop fs -chmod 777 /user
chmod: changing permissions of '/user': Permission denied. user=chaithu is not the owner of inode=user
chaithu@localhost:~$ hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2017-12-20 00:23 /tmp
drwxr-xr-x - hdfs supergroup 0 2017-12-20 10:24 /user
Kindly help me: how can I change the rights to full read and write so that all users can access the HDFS folder?
First off, you shouldn't be writing into the /user folder directly, nor setting 777 on it.
You're going to need a user directory for your current user to even run a MapReduce job, so you need to sudo su - hdfs first to become the HDFS superuser.
Then run these to create HDFS directories for your user account
hdfs dfs -mkdir -p /user/chaithu
hdfs dfs -chown -R chaithu /user/chaithu
hdfs dfs -chmod -R 770 /user/chaithu
Then exit from the hdfs user; chaithu can now write to its own HDFS directory.
hadoop fs -put test.txt
That alone will put the file in the current user's folder.
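To double-check where the file landed, listing your HDFS home directory should show it (just a verification step, not part of the original commands):
hadoop fs -ls /user/chaithu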
Or, if that's too much work for you, write to /tmp instead.
A lazy option is to override your user account with the superuser:
export HADOOP_USER_NAME=hdfs
hadoop fs -put test.txt /user
And this is why Hadoop is not secure and does not enforce user account access by default (i.e. never do this in production).
And finally, you can always just turn permissions completely off in hdfs-site.xml (again, only useful in development phases)
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
If you look at your hdfs dfs -ls result, you can see that only the HDFS superuser has write permission on that path.
You have two solutions here.
One is to change ownership using the superuser account and make chaithu the owner (or a member of the owning group), something like hdfs dfs -chown -R hdfs:chaithu /path; then you will be able to access that path as an owner/group member. The other, dirty way is to run hdfs dfs -chmod -R 777 /path as the superuser, but from a security standpoint 777 is not good.
The second one is using ACLs, which give you temporary access (see the sketch below).
Please go through this link for more understanding.
More on ACLs
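For example, a minimal ACL sketch (assuming dfs.namenode.acls.enabled is set to true in hdfs-site.xml and you can run commands as the HDFS superuser):
sudo -u hdfs hdfs dfs -setfacl -m user:chaithu:rwx /path
sudo -u hdfs hdfs dfs -getfacl /path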
This is basic and important for you to learn. Try the suggestions above and let me know if they don't work; I can help more based on the error you get.
And I checked the web UI, which shows the datanodes in an unhealthy status. I do not know why this happens.
This is caused either by your configuration or by an abnormal termination of the datanode (while doing some action on that node).
There is no internal problem with hdfs dfs -put; just verify what's inside your directory, or use the command
hdfs dfs -ls /
Please specify your problem; an error by itself can't be a problem statement when we don't know what you are trying to do.
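If you want to check the datanode state from the command line as well as from the web UI, one option (not from the original answer) is:
hdfs dfsadmin -report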
File permission issue.
Check file permissions of dfs directory:
find /path/to/dfs -group root
In general, the owning user and group should be hdfs.
Since I started the HDFS service as the root user, some dfs block files were generated with root ownership.
I solved the problem after changing to the right ownership:
sudo chown -R hdfs:hdfs /path/to/dfs
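As a sanity check after the chown, the earlier find should now come back empty:
find /path/to/dfs -group root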
I have a problem setting Hadoop file permissions in Hortonworks and Cloudera.
My requirement is:
1. create a new user with new group
2. create user directory in hdfs (e.g. /user/myuser)
3. Now this folder (in this case /user/myuser) must be accessible only to the user and its group, but not to other users and other groups.
The following commands are used by me (on CentOS 6):
1. create group >>> groupadd mygroup
2. create new user who belongs to the new group >>> useradd -g mygroup myuser
3. create user directory in hdfs >>> hadoop fs -mkdir /user/myuser
4. changing ownership of the folder >>> hadoop fs -chown -R myuser:mygroup /user/myuser
5. giving permissions to user folder >>> hadoop fs -chmod -R 700 /user/myuser
6. I also changed the /tmp directory permissions to set the sticky bit >>> hadoop fs -chmod -R 1777 /tmp
Here the problem comes: even after setting these permissions, users in other groups are still accessing my data. Please tell me the solution for this. I turned on HDFS file permissions by setting dfs.permission.enabled=true.
I believe you set the wrong property to enable permissions. You need to set the following property in hdfs-site.xml:
dfs.permissions.enabled = true
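In hdfs-site.xml that looks like the following (same format as the snippet shown earlier in this thread):
<property>
<name>dfs.permissions.enabled</name>
<value>true</value>
</property>
Restart the NameNode after editing hdfs-site.xml so the change takes effect.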
This is a good resource for HDFS permissions
You should repeat your steps on the master node (the active NameNode).
After that, run
hdfs dfsadmin -refreshUserToGroupsMappings
I want to set access control on my Hadoop distributed file system, so I have set dfs.permissions=true. But I'm confused about why a user in the same group can't write to a directory with permissions 775.
[abc@hadoop03 root]$ hadoop fs -ls /zqq
Found 1 items
-rw-r--r-- 3 hbase zq 1021 2015-11-19 11:29 /zqq/group
[abc@hadoop03 root]$ hadoop fs -put /etc/passwd /zqq
put: Permission denied: user=abc, access=WRITE, inode="/zqq":zq:zq:drwxrwxr-x
[abc@hadoop03 root]$ id
uid=1012(abc) gid=1009(zq) groups=1009(zq)