I have been working on this for a week now; here is my issue. I have set up my CDH5 cluster with security enabled using MIT Kerberos. I am now trying out extended ACLs and have made all the changes needed to set them up, but it doesn't work. Here is a summary in commands.
[root@dn01 ~]# kinit hdfs
Password for hdfs@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
Found 2 items
-rwxr-----   3 hdfs systems 3949 2016-03-16 16:13 /vt2/testdata/dcus/XXXX.JSONSEQ
drwxr-----+  - hdfs systems    0 2016-03-18 15:57 /vt2/testdata/dcus/nn
[root@dn01 ~]# hdfs dfs -getfacl /vt2/testdata/dcus
# file: /vt2/testdata/dcus
# owner: hdfs
# group: systems
user::rwx
group::r--
group:developers:r--
mask::r--
other::---
[root@dn01 ~]# kdestroy
[root@dn01 ~]# kinit 419650
Password for 419650@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
ls: Permission denied: user=419650, access=READ_EXECUTE, inode="/vt2/testdata/dcus":hdfs:systems:drwxr-----:group::r--,group:developers:r--
[root@dn01 ~]# id 419650
uid=1502(419650) gid=1504(419650) groups=1504(419650),1503(systems)
[root@dn01 ~]# kinit 815677
Password for 815677@VT2.HADOOP.BA.SSA.GOV:
[root@dn01 ~]# hdfs dfs -ls /vt2/testdata/dcus
ls: Permission denied: user=815677, access=READ_EXECUTE, inode="/vt2/testdata/dcus":hdfs:systems:drwxr-----:group::r--,group:developers:r--
[root@dn01 ~]# id 815677
uid=1500(815677) gid=1500(815677) groups=1500(815677),1502(developers)
I can only access the directory if I authenticate as hdfs, the owner of that directory. Otherwise access is denied, even when the user is a member of a group that has access to the directory, as shown by the getfacl output above.
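For reference, a hedged sketch of the ACL commands involved (an illustration, not a confirmed fix for this cluster): listing a directory requires the execute bit, so a group would need r-x both in its own ACL entry and in the mask before listing can succeed.
# Grant read and execute to the developers group on the directory (assumption: r-x is the intended access)
hdfs dfs -setfacl -m group:developers:r-x /vt2/testdata/dcus
# Raise the mask so the effective permission is not capped at r--
hdfs dfs -setfacl -m mask::r-x /vt2/testdata/dcus
# Verify the resulting entries and effective permissions
hdfs dfs -getfacl /vt2/testdata/dcus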
Related
I am trying to create a folder in HDFS from the command line with a user different from hdfs. The directory has permissions 775 for hdfs:hdfs:
$ hadoop fs -ls /
... directories ...
drwxrwxr-x - hdfs hdfs 0 2018-02-21 11:37 /data
... more directories
My user is in the group hdfs:
$ cat /etc/group
hdfs:x:nnnn:myusername
However, when I do hadoop fs -mkdir /data/foo I get:
mkdir: Permission denied: user=myusername, access=WRITE, inode="/data/foo":hdfs:hdfs:drwxrwxr-x
Does hdfs have to be my primary group for this?
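As a hedged diagnostic sketch (not part of the original question; it assumes a default shell-based group mapping): HDFS resolves group membership on the NameNode host rather than on the client, so it can help to compare the groups the NameNode reports with the local /etc/group entry.
# Groups as resolved by HDFS (queries the NameNode)
hdfs groups myusername
# Groups as seen by the local OS on the client
id myusername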
After implementing Hadoop federation, when I run the command below it works fine.
> hdfs dfs -ls /
-r-xr-xr-x - hdfs hadoop 0 2016-11-02 00:13 /home
-r-xr-xr-x - hdfs hadoop 0 2016-11-02 00:13 /projects
-r-xr-xr-x - hdfs hadoop 0 2016-11-02 00:13 /user
But when I run the command below:
> hdfs dfs -ls /home
ls: `/home': No such file or directory
What is the reason? Any help would be appreciated.
The particular user doesn't have access to /home.
Try with sudo, or change the permissions on the /home path.
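A minimal sketch of both suggestions, assuming the HDFS superuser is hdfs and that relaxing permissions on /home is acceptable (both are assumptions, not part of the original answer):
# Option 1: list /home as the HDFS superuser
sudo -u hdfs hdfs dfs -ls /home
# Option 2: grant read and execute on /home to other users
sudo -u hdfs hdfs dfs -chmod o+rx /home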
I am trying to create a folder in the HDFS Hadoop file system, but it is not allowing me to create a folder as the user cloudera nor as root. What should I configure to allow this? Here was my attempt:
[cloudera@quickstart ~]$ sudo hadoop fs -mkdir /solr/test_core
mkdir: Permission denied: user=root, access=WRITE, inode="/solr":solr:supergroup:drwxr-xr-x
[cloudera@quickstart ~]$ su
Password:
[root@quickstart cloudera]# hadoop fs -mkdir /solr/test_core
mkdir: Permission denied: user=root, access=WRITE, inode="/solr":solr:supergroup:drwxr-xr-x
[root@quickstart cloudera]#
Neither the cloudera user nor the root user has permission to run any command on /solr.
To run the commands you need to switch to the hdfs user and then issue them, like below:
su - hdfs
hadoop fs -mkdir /solr/test_core/
exit
Found the answer:
You should use this (admittedly odd-looking) command:
sudo -u hdfs hdfs dfs -mkdir /solr/test_core/
Or, to switch user to hdfs:
sudo su - hdfs
Then you can make the directory under /solr.
To switch back to the cloudera user:
su - cloudera
and enter the password for cloudera.
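As a quick hedged follow-up (the path comes from the question above), the result can be verified without staying logged in as hdfs:
# Confirm that test_core was created under /solr
sudo -u hdfs hdfs dfs -ls /solr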
I want to set Access Control on my Hadoop distributed file system. So I have set dfs.permissions=true. But I'm confused why a user in the same group can't write to the directory with permissions: 775.
[abc@hadoop03 root]$ hadoop fs -ls /zqq
Found 1 items
-rw-r--r-- 3 hbase zq 1021 2015-11-19 11:29 /zqq/group
[abc@hadoop03 root]$ hadoop fs -put /etc/passwd /zqq
put: Permission denied: user=abc, access=WRITE, inode="/zqq":zq:zq:drwxrwxr-x
[abc@hadoop03 root]$ id
uid=1012(abc) gid=1009(zq) groups=1009(zq)
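One hedged diagnostic sketch (an illustration, not part of the original question): hadoop fs -ls /zqq lists the directory's contents, while the write check is made against the /zqq inode itself; -ls -d shows that inode directly.
# Show the permission bits of /zqq itself rather than of its children
hadoop fs -ls -d /zqq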
First, I have read this post: Is there an equivalent to `pwd` in hdfs?. It says there is no such 'pwd' in HDFS.
However, as I progressed with the instructions of Hadoop: Setting up a Single Node Cluster, I failed on this command:
$ bin/hdfs dfs -put etc/hadoop input
put: 'input': No such file or directory
It's weird that I succeeded with this command the first time I went through the instructions but failed the second time. It's also weird that the command succeeds on my friend's computer, which has the same system (Ubuntu 14.04) and Hadoop version (2.7.1) as mine.
Can anyone explain what happened here? Is there some 'pwd' in HDFS after all?
Firstly, you are trying to run the command $ bin/hdfs dfs -put etc/hadoop input as a user that doesn't exist in the VM/HDFS.
Let me explain clearly with the following example on an HDP VM.
[root@sandbox hadoop-hdfs-client]# bin/hdfs dfs -put /etc/hadoop input
put: `input': No such file or directory
Here I executed the command as the root user, which does not exist in the HDP VM's HDFS (it has no home directory under /user). Run the following command to list the users that do:
[root@sandbox hadoop-hdfs-client]# hadoop fs -ls /user
Found 8 items
drwxrwx--- - ambari-qa hdfs 0 2015-08-20 08:33 /user/ambari-qa
drwxr-xr-x - guest guest 0 2015-08-20 08:47 /user/guest
drwxr-xr-x - hcat hdfs 0 2015-08-20 08:36 /user/hcat
drwx------ - hive hdfs 0 2015-09-04 09:52 /user/hive
drwxr-xr-x - hue hue 0 2015-08-20 09:05 /user/hue
drwxrwxr-x - oozie hdfs 0 2015-08-20 08:37 /user/oozie
drwxr-xr-x - solr hdfs 0 2015-08-20 08:41 /user/solr
drwxrwxr-x - spark hdfs 0 2015-08-20 08:34 /user/spark
In HDFS, if you copy a file without giving an absolute path for the destination argument, the path is resolved relative to the home directory of the logged-in user (/user/<username>) and the file is placed there. Here, no such home directory exists for the root user.
Now let's switch to the hive user and test:
[root@sandbox hadoop-hdfs-client]# su hive
[hive@sandbox hadoop-hdfs-client]$ bin/hdfs dfs -put /etc/hadoop input
[hive@sandbox hadoop-hdfs-client]$ hadoop fs -ls /user/hive
Found 1 items
drwxr-xr-x - hive hdfs 0 2015-09-04 10:07 /user/hive/input
Yay.. Successfully copied..
Hope it helps..!!!
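As a hedged addendum (not part of the original answer; it assumes the HDFS superuser is hdfs): if you want relative destination paths to work for root as well, the usual workaround is to give root a home directory in HDFS.
# Create a home directory for root so relative paths like 'input' resolve to /user/root/input
sudo -u hdfs hdfs dfs -mkdir -p /user/root
# Make root the owner of its home directory
sudo -u hdfs hdfs dfs -chown root:root /user/root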
It means that you need to move the input files to an HDFS location.
Suppose you have an input file named input.txt that needs to be moved to HDFS; then follow the commands below.
Command: hdfs dfs -put /input_location /hdfs_location
If there is no specific destination directory in HDFS:
hdfs dfs -put /home/Desktop/input.txt /
If there is a specific destination directory in HDFS (note: create the directory before proceeding):
hdfs dfs -put /home/Desktop/input.txt /MR_input
After that you can run the examples
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
Here the input and output arguments are paths that must be in HDFS.
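As a hedged follow-up (the part file name below is just the common MapReduce default, not something stated in the original answer), the word-count result can be inspected once the job finishes:
# List the job output directory and print the first reducer's part file
hdfs dfs -ls /output
hdfs dfs -cat /output/part-r-00000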
Hope this helps.