This is the result of using it:
$ hadoop fs -chmod -R 777 /user/hive/
$ hdfs dfs -ls /user/hive/
Found 1 items
drwxrwx--x+ - hive hive 0 2021-08-05 14:21 /user/hive/warehouse
As can be seen, it doesn't do a thing; the mode of /user/hive is still 775. Why could this happen, and how can I fix it?
Is Sentry enabled on this cluster? Judging by the permissions on the warehouse folder (note the + ACL marker), I suspect Sentry is there, and if it is, Sentry manages the permissions.
You may see a warning in the NameNode logs indicating the command was ignored.
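To see what is actually granting access, you can inspect the extended ACL behind the + marker; a quick check, assuming you can run the command as a user with read access to the path:
$ hdfs dfs -getfacl /user/hive/warehouse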
Related
I've installed Hadoop and Hive. I am trying to configure Hive as follows:
hadoop fs -mkdir /data/hive/warehouse
I keep getting this error:
mkdir: '/data/hive/warehouse': No such file or directory
Do I need to create the directories with os commands before issuing the hadoop fs command? Any ideas?
You're missing the -p option, similar to mkdir on UNIX/Linux.
$ hadoop fs -mkdir -p /data/hive/warehouse
In addition, you should chmod 1777 this directory if you're setting it up for multiple users, and add /user/hive if you're running Hive as user hive.
$ hadoop fs -chmod -R 1777 /data/hive/warehouse
$ hadoop fs -mkdir -p /user/hive
$ hadoop fs -chown hive:hive /user/hive
See Apache Hive File System Permissions in CDH and Where does Hive store files in HDFS?.
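As a quick sanity check (a sketch, assuming the directories above were created), you can confirm the sticky bit and ownership in one listing:
$ hadoop fs -ls -d /data/hive/warehouse /user/hive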
I am trying to copy a text file into an HDFS location.
I'm facing an access issue, so I tried changing permissions.
But I'm unable to change them, and I'm getting the error below:
chaithu#localhost:~$ hadoop fs -put test.txt /user
put: Permission denied: user=chaithu, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
chaithu#localhost:~$ hadoop fs -chmod 777 /user
chmod: changing permissions of '/user': Permission denied. user=chaithu is not the owner of inode=user
chaithu#localhost:~$ hadoop fs -ls /
Found 2 items
drwxrwxrwt - hdfs supergroup 0 2017-12-20 00:23 /tmp
drwxr-xr-x - hdfs supergroup 0 2017-12-20 10:24 /user
Please help me understand how I can grant full read and write access to the HDFS folder for all users.
First off, you shouldn't be writing into the /user folder directly, nor should you set 777 on it.
You're going to need a user directory for your current user to even run a MapReduce job, so first sudo su - hdfs to become the HDFS superuser.
Then run these commands to create HDFS directories for your user account:
hdfs dfs -mkdir -p /user/chaithu
hdfs dfs -chown -R chaithu /user/chaithu
hdfs dfs -chmod -R 770 /user/chaithu
Then exit from the hdfs user, and chaithu can now write to its own HDFS directory.
hadoop fs -put test.txt
That alone will put the file in the current user's folder.
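To double-check where it landed, remember that relative paths resolve against the user's home directory; a quick look, assuming the /user/chaithu directory created above:
hadoop fs -ls /user/chaithu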
Or, if that's too much work for you, write to /tmp instead.
A lazy option is to override your user account to the superuser:
export HADOOP_USER_NAME=hdfs
hadoop fs -put test.txt /user
And this is why Hadoop is not secure and does not enforce user account access by default (i.e. never do this in production).
And finally, you can always just turn permissions completely off in hdfs-site.xml (again, only useful in development phases):
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
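If you do toggle this, note that on Hadoop 2.x and later the property is named dfs.permissions.enabled; you can check the value the cluster actually uses with:
hdfs getconf -confKey dfs.permissions.enabled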
If you look at your hdfs dfs -ls result, you can see that only the HDFS superuser has write permission on that path.
You have two solutions here:
One is to change the ownership for chaithu through the root/superuser account, making chaithu the user or owner, something like hdfs dfs -chown -R hdfs:chaithu /path; then you will be able to access it as an owner. The other, dirty way is to run hdfs dfs -chmod -R 777 /path as the superuser; from a security standpoint, 777 is not good.
The second is to use ACLs, which give you temporary access, as sketched below.
Please go through this link for more understanding:
More on ACLs
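For example, a minimal ACL sketch (assuming ACLs are enabled on the cluster via dfs.namenode.acls.enabled, reusing /path from above, and running as the superuser):
hdfs dfs -setfacl -m user:chaithu:rwx /path
hdfs dfs -getfacl /path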
This is basic and important for you to learn; try the suggestions above and let me know if they don't work. I can help more based on the error you get.
I am getting an error while copying files from the local file system to HDFS.
Will you please help me with this?
I am using this command:
hadoopd fs -put text.txt file
The put and copyFromLocal commands help you copy data from your local system to HDFS, provided you have the permission to do so.
hadoop fs -put /path/to/textfile /path/to/hdfs
OR
hadoop dfs -put /path/to/textfile /path/to/hdfs
Coming to your error:
You typed the above command as hadoopd fs.
Use:
hadoop dfs -put /text.txt /file
hadoop dfs -put /path/to/local/file /path/to/hdfs/file
You can use the following command:
hadoop fs -copyFromLocal text.txt <path_to_hdfs_directory_where_you_want_to_keep_text.txt>
Without knowing the specific error you are getting, it's difficult to answer. The other responders posted the proper syntax. However, it is not uncommon to see permission issues when attempting to copy files to HDFS.
By default the user and group are typically "hdfs" and "supergroup". Your user account likely doesn't belong to "supergroup" and will get permission denied errors. Try running the command as:
sudo -u hdfs hadoop fs -put /path/to/local/file /path/to/hdfs/file
or
sudo -u hdfs hadoop dfs -put /path/to/local/file /path/to/hdfs/file
You can get around having to do this by changing the ownership and permission of the destination directory on HDFS to be more permissive.
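A rough sketch of that (reusing the placeholder destination path from above, and assuming your HDFS username matches your local login):
sudo -u hdfs hadoop fs -mkdir -p /path/to/hdfs
sudo -u hdfs hadoop fs -chown $(whoami) /path/to/hdfs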
"DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/myfile could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock". From this I thinrk your data node is not running/properly. Check that in cluster UI.Then try
hadoop dfs -put /path/file /hdfs/file (hadoop YARN)
hadoop fs -copyFromLocal /path/file /hdfs/file (hadoop1.x)
I have my hadoop cluster set up with one master and two slaves.
when I type
hadoop fs -ls
ls: Cannot access .: No such file or directory.
But when I type the following:
hadoop fs -ls /
Found 1 items
drwxr-xr-x - Mike supergroup 0 2014-06-24 00:24 /usr
I get the same output on both the master and the slaves. Why does hadoop fs -ls not work?
Thanks!
hadoop fs -ls
This tries to list the current user's home directory on HDFS, i.e. /user/{username}. Since that directory doesn't exist in your case, you get the error.
hadoop fs -ls /
Here you are specifically telling it to list the root directory, which it does successfully because it exists.
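If that is the case, creating the home directory fixes it; a minimal sketch, assuming you can act as the HDFS superuser and your HDFS username matches your local login ($(whoami) is just the local shell substitution):
sudo -u hdfs hadoop fs -mkdir -p /user/$(whoami)
sudo -u hdfs hadoop fs -chown $(whoami) /user/$(whoami)
hadoop fs -ls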
I have constructed a single-node Hadoop environment on CentOS using the Cloudera CDH repository. When I wanted to copy a local file to HDFS, I used the command:
sudo -u hdfs hadoop fs -put /root/MyHadoop/file1.txt /
But the result disappointed me:
put: '/root/MyHadoop/file1.txt': No such file or directory
I'm sure this file does exist.
Please help me, thanks!
As user hdfs, do you have access rights to /root/ (on your local disk)? Usually you don't.
You must copy file1.txt to a place where the local hdfs user has read rights before trying to copy it to HDFS.
Try:
cp /root/MyHadoop/file1.txt /tmp
chown hdfs:hdfs /tmp/file1.txt
# older versions of Hadoop
sudo -u hdfs hadoop fs -put /tmp/file1.txt /
# newer versions of Hadoop
sudo -u hdfs hdfs dfs -put /tmp/file1.txt /
--- edit:
Take a look at roman-nikitchenko's cleaner answer below.
I had the same situation and here is my solution:
HADOOP_USER_NAME=hdfs hdfs dfs -put /root/MyHadoop/file1.txt /
Advantages:
You don't need sudo.
You don't actually need an appropriate local user 'hdfs' at all.
You don't need to copy anything or change permissions because of previous points.
Try to create a directory in HDFS using: $ hadoop fs -mkdir your_dir
and then put the file into it: $ hadoop fs -put /root/MyHadoop/file1.txt your_dir
Here is a command for writing a DataFrame (df) directly to the HDFS file system from a Python (Spark) script:
df.write.save('path', format='parquet', mode='append')
mode can be append or overwrite.
If you want to put it in HDFS using the shell, use this command:
hdfs dfs -put /local_file_path_location /hadoop_file_path_location
You can then check the NameNode UI at localhost:50070 for verification.
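If you prefer to verify from the shell rather than the web UI, a quick check (reusing the placeholder path above):
hdfs dfs -ls /hadoop_file_path_location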