I want to set up access control on my Hadoop distributed file system, so I have set dfs.permissions=true. But I'm confused about why a user in the same group can't write to a directory with permissions 775.
[abc@hadoop03 root]$ hadoop fs -ls /zqq
Found 1 items
-rw-r--r-- 3 hbase zq 1021 2015-11-19 11:29 /zqq/group
[abc@hadoop03 root]$ hadoop fs -put /etc/passwd /zqq
put: Permission denied: user=abc, access=WRITE, inode="/zqq":zq:zq:drwxrwxr-x
[abc@hadoop03 root]$ id
uid=1012(abc) gid=1009(zq) groups=1009(zq)
Related
I am trying to create a folder in HDFS from the command line as a user other than hdfs. The directory has permissions 775 for hdfs:hdfs:
$ hadoop fs -ls /
... directories ...
drwxrwxr-x - hdfs hdfs 0 2018-02-21 11:37 /data
... more directories
My user is in the group hdfs:
$ cat /etc/group
hdfs:x:nnnn:myusername
However, when I do hadoop fs -mkdir /data/foo I get:
mkdir: Permission denied: user=myusername, access=WRITE, inode="/data/foo":hdfs:hdfs:drwxrwxr-x
Does hdfs have to be my primary group for this?
I am trying to copy a text file into an HDFS location.
I'm facing an access issue, so I tried changing permissions.
But I'm unable to change them either; I get the error below:
chaithu@localhost:~$ hadoop fs -put test.txt /user
put: Permission denied: user=chaithu, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
chaithu@localhost:~$ hadoop fs -chmod 777 /user
chmod: changing permissions of '/user': Permission denied. user=chaithu is not the owner of inode=user
chaithu@localhost:~$ hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2017-12-20 00:23 /tmp
drwxr-xr-x - hdfs supergroup 0 2017-12-20 10:24 /user
Kindly help me: how can I grant full read and write access to all users for this HDFS folder?
First off, you shouldn't be writing into the /user folder directly, nor set 777 on it.
You're going to need a user directory for your current user to even run a MapReduce job, so you need to sudo su - hdfs first to become the HDFS superuser.
Then run these commands to create an HDFS directory for your user account:
hdfs dfs -mkdir -p /user/chaithu
hdfs dfs -chown -R chaithu /user/chaithu
hdfs dfs -chmod -R 770 /user/chaithu
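As a quick sanity check (not part of the original steps), you can list /user afterwards; the new directory should show chaithu as the owner with mode drwxrwx---:
hdfs dfs -ls /user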
Then exit from the hdfs user, and chaithu can now write to its own HDFS directory.
hadoop fs -put test.txt
That alone will put the file in the current user's folder.
Or, if that's too much work for you, write to /tmp instead.
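For example, since /tmp is world-writable with the sticky bit set (as the listing above shows), a put straight into /tmp should succeed:
hadoop fs -put test.txt /tmp/test.txt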
A lazy option is to override your user account to be the superuser:
export HADOOP_USER_NAME=hdfs
hadoop fs -put test.txt /user
And this is why Hadoop is not secure and does not enforce user account access by default (i.e. never do this in production).
And finally, you can always just turn permissions completely off in hdfs-site.xml (again, only useful in development phases; the property is named dfs.permissions.enabled in newer Hadoop versions):
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
If you look at your hdfs dfs -ls result, you can see that only the HDFS superuser has write permission on that path.
You have two solutions here.
One is to change the ownership to chaitu through the superuser (making chaitu the user or owner), something like hdfs dfs -chown -R hdfs:chaitu /path; then you will be able to access it as an owner. The other, dirty way is to run hdfs dfs -chmod -R 777 /path as the superuser, but from a security standpoint 777 is not good.
The second one is using ACLs, which give you temporary access.
Please go through this link for more understanding:
More on ACLs
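As a rough sketch of the ACL route (chaitu and /path are placeholders from this thread, and ACLs must be enabled with dfs.namenode.acls.enabled=true in hdfs-site.xml for setfacl to work):
hdfs dfs -setfacl -m user:chaitu:rwx /path
hdfs dfs -getfacl /path
The getfacl call just confirms that the new entry was applied.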
This is basic and important for you to learn. Try the suggestions above and let me know if they don't work; I can help more based on the error you get.
I have been working on this for a week now; here is my issue. I have set up my CDH5 cluster with security enabled using MIT Kerberos. I am now trying the extended ACLs and have made all the necessary changes to set them up, but it doesn't work. Here is a summary in commands.
[root@dn01 ~]# kinit hdfs
Password for hdfs@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
Found 2 items
-rwxr----- 3 hdfs systems 3949 2016-03-16 16:13 /vt2/testdata/dcus/XXXX.JSONSEQ
drwxr-----+ - hdfs systems 0 2016-03-18 15:57 /vt2/testdata/dcus/nn
[root@dn01 ~]# hdfs dfs -getfacl /vt2/testdata/dcus
# file: /vt2/testdata/dcus
# owner: hdfs
# group: systems
user::rwx
group::r--
group:developers:r--
mask::r--
other::---
[root@dn01 ~]# kdestroy
[root@dn01 ~]# kinit 419650
Password for 419650@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
ls: Permission denied: user=419650, access=READ_EXECUTE, inode="/vt2/testdata/dcus":hdfs:systems:drwxr-----:group::r--,group:developers:r--
[root@dn01 ~]# id 419650
uid=1502(419650) gid=1504(419650) groups=1504(419650),1503(systems)
[root@dn01 ~]# kinit 815677
Password for 815677@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
ls: Permission denied: user=815677, access=READ_EXECUTE, inode="/vt2/testdata/dcus":hdfs:systems:drwxr-----:group::r--,group:developers:r--
[root@dn01 ~]# id 815677
uid=1500(815677) gid=1500(815677) groups=1500(815677),1502(developers)
I can only access the directory if I authenticate as hdfs, which is the owner of that dir; otherwise I cannot, even though the user is a member of a group which has access to the directory in question, as seen in the getfacl output.
I am trying to create a folder in the HDFS Hadoop file system, but it will not allow me to create a folder as the user cloudera nor as root. What should I configure to allow me to do this? Here was my attempt:
[cloudera@quickstart ~]$ sudo hadoop fs -mkdir /solr/test_core
mkdir: Permission denied: user=root, access=WRITE, inode="/solr":solr:supergroup:drwxr-xr-x
[cloudera@quickstart ~]$ su
Password:
[root@quickstart cloudera]# hadoop fs -mkdir /solr/test_core
mkdir: Permission denied: user=root, access=WRITE, inode="/solr":solr:supergroup:drwxr-xr-x
[root@quickstart cloudera]#
Neither the cloudera nor the root user has permission to run any command on /solr.
To run the command you need to switch to the hdfs user and then issue the commands like below:
su - hdfs
hadoop fs -mkdir /solr/test_core/
exit
Found the answer:
You should use this slightly odd-looking command.
sudo -u hdfs hdfs dfs -mkdir /solr/test_core/
To switch user to hdfs:
sudo su - hdfs
Then you can make the directory under /solr.
To switch back to the cloudera user:
su - cloudera
and enter the password for cloudera.
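If the cloudera user needs to keep working under /solr/test_core without switching accounts each time, one option (my own suggestion, not from the answers above) is to hand ownership over once as the superuser:
sudo -u hdfs hdfs dfs -chown -R cloudera /solr/test_core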
First, I have read this post: Is there an equivalent to `pwd` in hdfs? It says there is no such 'pwd' in HDFS.
However, as I progressed with the instructions of Hadoop: Setting up a Single Node Cluster, I failed on this command:
$ bin/hdfs dfs -put etc/hadoop input
put: 'input': No such file or directory
It's weird that I succeeded with this command the first time I went through the instructions, but failed the second time. It's also weird that I succeeded with this command on my friend's computer, which has the same system (Ubuntu 14.04) and Hadoop version (2.7.1) as mine.
Can anyone explain what happened here? Is there some 'pwd' in HDFS after all?
Firstly, you are trying to run the command $ bin/hdfs dfs -put etc/hadoop input with a user that doesn't exist in the VM's HDFS (i.e. it has no home directory under /user).
Let me explain this clearly with the following example in the HDP VM.
[root@sandbox hadoop-hdfs-client]# bin/hdfs dfs -put /etc/hadoop input
put: `input': No such file or directory
Here I executed the command as the root user, and that user doesn't exist in the HDP VM's HDFS. Use the following command to list the users:
[root@sandbox hadoop-hdfs-client]# hadoop fs -ls /user
Found 8 items
drwxrwx--- - ambari-qa hdfs 0 2015-08-20 08:33 /user/ambari-qa
drwxr-xr-x - guest guest 0 2015-08-20 08:47 /user/guest
drwxr-xr-x - hcat hdfs 0 2015-08-20 08:36 /user/hcat
drwx------ - hive hdfs 0 2015-09-04 09:52 /user/hive
drwxr-xr-x - hue hue 0 2015-08-20 09:05 /user/hue
drwxrwxr-x - oozie hdfs 0 2015-08-20 08:37 /user/oozie
drwxr-xr-x - solr hdfs 0 2015-08-20 08:41 /user/solr
drwxrwxr-x - spark hdfs 0 2015-08-20 08:34 /user/spark
In HDFS, if you copy a file without giving an absolute path for the destination argument, it will assume the home directory of the logged-in user and place your file there. Here, no home directory was found for the root user.
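If you really wanted root to use relative paths here, one possible fix (a sketch, assuming hdfs is the superuser account on this VM) is to create a home directory for it first:
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root /user/root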
Now let's switch to the hive user and test:
[root@sandbox hadoop-hdfs-client]# su hive
[hive@sandbox hadoop-hdfs-client]$ bin/hdfs dfs -put /etc/hadoop input
[hive@sandbox hadoop-hdfs-client]$ hadoop fs -ls /user/hive
Found 1 items
drwxr-xr-x - hive hdfs 0 2015-09-04 10:07 /user/hive/input
Yay, successfully copied!
Hope it helps!
This means that we need to move the input files to an HDFS location.
Suppose you have an input file named input.txt that needs to be moved to HDFS; then follow the commands below.
Command: hdfs dfs -put /input_location /hdfs_location
In case there is no specific directory in HDFS:
hdfs dfs -put /home/Desktop/input.txt /
In case there is a specific directory in HDFS (note: we need to create the directory before proceeding):
hdfs dfs -put /home/Desktop/input.txt /MR_input
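Note that the target directory has to exist before the put; it can be created and checked with, for example (MR_input is just an illustrative name):
hdfs dfs -mkdir /MR_input
hdfs dfs -ls /MR_input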
After that you can run the examples:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
Here /input and /output are paths that should be in HDFS.
Hope this helps.