I am using standalone HBase and so need to remove some properties from hbase-site.xml, as per the suggestion provided in "ERROR: Can't get master address from ZooKeeper; znode data == null" when using HBase shell.
But when I try to edit the hbase-site.xml file, it says I only have read permission. How can I resolve this?
By default, hbase-site.xml allows write permission to the hbase / root users only; all other users just have read permission. Below is a sample listing of the file with its permissions.
-rw-r--r-- 1 hbase hadoop 4832 Apr 20 2016 hbase-site.xml
To modify the file, switch to the root or hbase user and then edit it. You can switch users using:
hbase user: sudo su <user> (sudo su hbase)
root: sudo su
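Putting it together, a minimal sketch (the config path is an assumption; adjust it to your installation):
sudo su - hbase
vi /etc/hbase/conf/hbase-site.xml    # or wherever your hbase-site.xml lives
exit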
What is your username? Your current user does not have permission to write to the file.
You can check your current user with:
whoami
To show the file's ownership and permissions, use:
ls -l
The result looks like:
-rw-r--r-- 1 user usergroup size date filename
Permissions are divided into three classes: user (owner), group, and other. Each class can have read (r, worth 4), write (w, worth 2), and execute (x, worth 1). For example, 644 means rw- for the owner and r-- for group and other.
You can edit the file with sudo, or change its ownership and permissions:
chown username:usergroup filename
chmod 666 filename
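For example, to check the current mode and then loosen it just long enough to edit (the file name is a placeholder; adjust the path to your install):
ls -l hbase-site.xml
sudo chmod 666 hbase-site.xml    # now any user can edit the file
# ... make your edits ...
sudo chmod 644 hbase-site.xml    # restore the default mode afterwards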
I'm using the ListFile processor in NiFi. ListFile + FetchFile will look in the specified directory for files. However, the files placed in the directory have a different username and group, so in order for NiFi to read my files I have to give them 777 permissions; only then does NiFi list and fetch the files for further processing. Is it possible to specify a username in NiFi so that it uses that specific user? I want to list files as user_abc instead of the NiFi user.
For example
-rw-r--r-- 1 user_abc group1 12549 Mar 26 16:04 filename.csv
-rw-r--r-- 1 user_abc group1 12366 Mar 26 16:05 files.csv
Also, would it be possible to change the permissions from 644 to 755 using any processor in NiFi?
The NiFi instance runs as the user that started the NiFi service.
If that user doesn't have access to the local files, then NiFi cannot access those files.
The file owner must grant the user NiFi runs as permission to access those files.
Regarding whether it is possible to change permissions from 644 to 755 using any processor in NiFi: it is not possible to change the permissions of an existing file. But you can set the permissions of a directory that you create using the PutFile processor.
You could also use the ListFTP and FetchFTP processors with user_abc's credentials.
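Alternatively, if you'd rather not open the files up to 777, a minimal sketch that grants group access instead (this assumes NiFi runs as a local user named nifi; names and paths are placeholders):
sudo usermod -aG group1 nifi           # add the NiFi service user to the files' group
chmod g+rx /path/to/directory          # the group needs r+x on the directory to list it
chmod g+r /path/to/directory/*.csv     # the group needs r on the files to fetch them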
I am trying to create an HDFS admin superuser. I referred to the post below for creating another superuser.
Creating HDFS Admin user
I followed the exact steps, but after running
hdfs dfsadmin -report
report: Access denied for user abc. Superuser privilege is required.
Any pointers here? How should I debug this?
Instead, use this command; it works:
sudo -u hdfs hdfs dfsadmin -report
It worked for me
Create a local user, then add the user to the hdfs group or set up privileges for the local user using the Apache Ranger Web UI.
Assuming you aren't using Kerberos, you need to create a local Linux user on each Hadoop node. If you are using Kerberos/AD/LDAP, then create the user there and set up Kerberos, which takes a lot more effort.
Run this on each node as root/sudo to add the user (a quick verification sketch follows the commands):
useradd abc
passwd abc
usermod -aG hdfs abc
Instead of hdfs above, the group might be superuser (whatever group is configured as the HDFS superuser group on your cluster).
su - hdfs
hadoop fs -mkdir /user/abc
hadoop fs -chown abc:abc /user/abc
exit
su - abc
hadoop fs -ls /
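A quick verification sketch (this assumes the hdfs group really is the HDFS superuser group on your cluster):
su - abc
hdfs dfs -ls /user/abc       # the new home directory should be listed
hdfs dfsadmin -report        # should no longer fail with "Superuser privilege is required"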
I am new to hadoop and I have a problem.
The problem is that I want to give someone the ability to use hdfs commands, but I cannot give them the root password, so everything that needs "sudo" or "su hdfs" will not work. I have reasons why I cannot give others root permission.
I have found some solutions, like:
Create a group, change permissions so that the group has HDFS access, and add a user to it, so that the user has HDFS permissions. I tried it but failed.
So, I want a user to be able to use hdfs commands without "sudo -su hdfs" or any command that needs sudo permission. Could you tell me how to set the related settings or files in more detail, or point me to any useful reference website? Thank you all!
I believe that just by setting the permissions of /usr/bin/hdfs to '-rwxr-xr-x 1 root root', other accounts should be able to execute the hdfs command successfully. Have you tried it?
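A minimal sketch of that check (the binary path can differ between distributions):
ls -l /usr/bin/hdfs             # expect something like -rwxr-xr-x 1 root root
sudo chmod 755 /usr/bin/hdfs    # make it world-executable if it is not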
Rajest is right: the hdfs command does not need sudo. You are probably using 'sudo -su hdfs' because the command is hitting a path where only the user 'hdfs' has permissions; you must organize the data for your users.
A workaround (if you are not using Kerberos) for using hdfs as any user is to execute this line before working:
export HADOOP_USER_NAME=<HDFS_USER>
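For example, to act as the hdfs superuser for the current shell session (only effective on clusters without Kerberos):
export HADOOP_USER_NAME=hdfs
hdfs dfs -ls /user           # commands now run as 'hdfs'
unset HADOOP_USER_NAME       # drop the override when you are done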
I got a permission denied failure from HDFS while running the command below:
hive -e "insert overwrite directory '/user/hadoop/a/b/c/d/e/f' select * from table_name limit 10;"
The error message is:
Permission denied: user=hadoop, access=WRITE, inode="/user/hadoop/a/b":hdfs:hive:drwxrwxr-x
But when I run hadoop fs -ls /user/hadoop/a, I get:
drwxrwxrwx - hadoop supergroup 0 2014-04-08 00:56 /user/hadoop/a/b
It seems I have opened full permissions on folder b, so why do I still get permission denied?
PS: I have set hive.insert.into.multilevel.dirs=true in the Hive config file.
I had the same problem and solved it simply by using the fully qualified HDFS path, like this:
hive -e "insert overwrite directory 'hdfs://<cluster>/user/hadoop/a/b/c/d/e/f' select * from table_name limit 10;"
See here for a mention of this issue.
I do not know the root cause, but it's not related to permissions.
Open a new terminal then try this:
1.) Change user to root:
su
2.) Change user to hdfs:
su hdfs
3.) Then run this command:
hadoop fs -chown -R hadoop /user/hadoop/a
Now you can try the command you were running.
Hope it helps...!!!
The issue is not actually with the directory permissions. Hive should be granted access to the path; what I mean is that this is not at the file level.
Below are the steps for granting a user/group access to the HDFS path and to the database. Comments on each command start with #.
#Login as hive superuser to perform the below steps
create role <role_name_x>;
#For granting to database
grant all on database <database_name> to role <role_name_x>;
#For granting to HDFS path
grant all on URI '/hdfs/path' to role <role_name_x>;
#Granting the role to the user you will use to run the hive job
grant role <role_name_x> to group <your_user_name>;
#After you perform the above steps, you can validate with the commands below
#show grant role should show the URI or database access when you run it on the role name, as below
show grant role <role_name_x>;
#Now to validate if the user has access to the role
show role grant group <your_user_name>;
Here is one of my answers to a similar question, through Impala. More on Hive permissions.
Another suggestion based on other answers and comments here: if you want to see the permissions on some HDFS path or file, hdfs dfs -ls is not your friend; it's the old-school approach and doesn't tell you much about the permissions. You can use hdfs dfs -getfacl /hdfs/path, which will give you the complete details. The result looks something like below.
hdfs dfs -getfacl /tmp/
# file: /tmp
# owner: hdfs
# group: supergroup
# flags: --t
user::rwx
group::rwx
other::rwx
I have installed and set up a single-node instance of Hadoop using my username. I want to make the same Hadoop setup available to a different user. How can I do this?
In hadoop we run different tasks and store data in HDFS.
If several users are doing tasks using the same user account, it will be difficult to trace the jobs and track the tasks/defects done by each user.
Also the other issue is with the security.
If everyone is given the same user account, all users have the same privileges: anyone can access, modify, execute, or even delete everyone else's data.
This is a very serious issue.
For this we need to create multiple user accounts.
Benefits of Creating multiple users
1) The directories/files of other users cannot be modified by a user.
2) Other users cannot add new files to a user’s directory.
3) Other users cannot perform any tasks (mapreduce etc) on a user’s files.
In short data is safe and is accessible only to the assigned user and the superuser.
Steps for setting up multiple User accounts
To add a new user capable of performing Hadoop operations, do the following steps.
Step 1
Creating a New User
For Ubuntu
sudo adduser --ingroup <groupname> <username>
For RedHat variants
useradd -g <groupname> <username>
passwd <username>
Then enter the user details and password.
Step 2
We need to change the permissions of a directory in HDFS where Hadoop stores its temporary data.
Open the core-site.xml file
Find the value of hadoop.tmp.dir.
In my core-site.xml, it is /app/hadoop/tmp. In the following steps, I will be using /app/hadoop/tmp as the directory for storing Hadoop data (i.e., the value of hadoop.tmp.dir).
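If you are not sure of the value, a quick way to check it from the shell (the config path is an assumption; adjust it for your distribution):
grep -A1 'hadoop.tmp.dir' /etc/hadoop/conf/core-site.xml    # prints the <name> and <value> lines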
Then from the superuser account do the following step.
hadoop fs -chmod -R 1777 /app/hadoop/tmp/mapred/staging
Step 3
The next step is to give write permission to our user group on hadoop.tmp.dir (here /app/hadoop/tmp; open core-site.xml to get the path of hadoop.tmp.dir). This should be done only on the machine (node) where the new user is added.
chmod 777 /app/hadoop/tmp
Step 4
The next step is to create a directory structure in HDFS for the new user.
For that from the superuser, create a directory structure.
Eg: hadoop fs -mkdir /user/username/
Step 5
With this alone we will not be able to run MapReduce programs, because the ownership of the newly created directory structure belongs to the superuser. So change the ownership of the newly created directory in HDFS to the new user.
hadoop fs -chown -R username:groupname <directory to access in HDFS>
Eg: hadoop fs -chown -R username:groupname /user/username/
Step 6
Log in as the new user and perform Hadoop jobs.
su - username
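As a quick smoke test for the new account, something like the following sketch (the examples jar path is the usual one but may differ in your distribution):
hadoop fs -ls /user/username        # the new user should see their home directory
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10    # run a tiny MapReduce job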
I had a similar file permission issue and it did not get fixed by executing hadoop fs -chmod -R 1777 /app/hadoop/tmp/mapred/staging.
Instead, it got fixed by executing the following Unix command: $ sudo chmod -R 1777 /app/hadoop/tmp/mapred