Hadoop WebHDFS DELETE operation over Amazon EMR fails - hadoop

I'm trying to check whether the DELETE operation works over WebHDFS:
http://ec2-ab-cd-ef-hi.compute-1.amazonaws.com:14000/webhdfs/v1/user/barak/barakFile.csv?op=DELETE&user.name=hadoop
but I get an error:
{"RemoteException":{"message":"Invalid HTTP GET operation [DELETE]",
"exception":"IOException","javaClassName":"java.io.IOException"}}
The file has full permissions (777):
[hadoop@ip-172-99-9-99 ~]$ hadoop fs -ls hdfs:///user/someUser
Found 2 items
-rwxrwxrwx 1 hadoop hadoop 344 2015-12-10 08:33 hdfs:///user/someUser/someUser.csv
What else should I check in order to allow the DELETE operation over WebHDFS on Amazon EMR?

The error means the request was sent as an HTTP GET; WebHDFS requires the HTTP DELETE method for op=DELETE. Use curl -i -X DELETE like this:
curl -i -X DELETE "http://ec2-**-**-**-***.compute-1.amazonaws.com:14000/webhdfs/v1/user/hadoop/hdfs-site.xml?op=DELETE&user.name=hadoop"
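If the target is a non-empty directory, you also need the recursive flag (false by default). A minimal sketch reusing the host and user.name from the question, with a hypothetical directory name:
# recursive=true removes the directory and everything under it; "someDir" is just a placeholder
curl -i -X DELETE "http://ec2-ab-cd-ef-hi.compute-1.amazonaws.com:14000/webhdfs/v1/user/barak/someDir?op=DELETE&recursive=true&user.name=hadoop"
# a successful delete returns HTTP 200 with a body like {"boolean":true}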

I had the needed permissions on the file but not on its parent directory. Changing the permissions for the entire path solved it, as in the sketch below.
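Concretely, something like the following did it for me; the path is the parent directory from the question, and opening it up to 777 is only sensible for testing:
# open up the whole path, then confirm the permissions
hadoop fs -chmod -R 777 /user/barak
hadoop fs -ls /user/barak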

Related

Permission denied in copying the input file from local to HDFS

I am unable to put a file into HDFS. Whenever I try to execute the put command, I receive a permission denied error. I have tried giving all read/write/execute permissions to the input file, but the problem still stands.
This is the command I executed. I am currently logged in as hduser, the user under which Hadoop is installed:
hadoop dfs -put /home/hduser/input /
The error I receive is the following:
WARNING: Use of this script to execute dfs is deprecated.
WARNING: Attempting to execute replacement "hdfs dfs" instead.
put: /input._COPYING_ (Permission denied)
According to the documentation of the put command, you should use it like this:
hadoop fs -put /path/to/localfile /home/hduser/input
where:
/path/to/localfile is the path on the local filesystem of the file you want to put on HDFS
/home/hduser/input is the HDFS destination folder path
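The Permission denied in the original command comes from writing into the HDFS root directory /, which hduser normally cannot do; local file permissions are not the problem. A minimal sketch of the usual fix, assuming an HDFS superuser account named hdfs:
# as the HDFS superuser, create a home directory for hduser and hand it over
sudo -u hdfs hdfs dfs -mkdir -p /user/hduser
sudo -u hdfs hdfs dfs -chown hduser:hduser /user/hduser
# then, as hduser, put the local file there instead of into /
hdfs dfs -put /home/hduser/input /user/hduser/
hdfs dfs -ls /user/hduser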

Hadoop directory file to user folder

I have created a folder in root's home directory and I'm trying to copy it to HDFS, but I'm getting an error message. These are the steps I have followed:
[root@dh] ls
XXdirectoryXX
[root@dh] sudo -u hdfs hadoop fs -mkdir /user/uname
[root@hd] uname
[root@hd] sudo -u hdfs hadoop fs -chown uname /user/uname
[root@hd] su - uname
[uname@hd] hadoop fs -copyFromLocal XXdirectoryXX/ /user/uname
copyFromLocal: 'XXdirectoryXX/': No such file or directory
Is there a problem in the command or in what I've done, or should I use another command to copy the files over?
I'm using CentOS 6.8 on the machine.
Any ideas?
Thanks
Thanks to the comments I've managed to resolve the issue. Here is the code in case it helps someone:
[root@dh] sudo -u hdfs hadoop fs -chown -R root /user/uname
[root@dh] hadoop fs -copyFromLocal XXdirectoryXX/ /user/uname
Regards

No such file or directory error when using Hadoop fs -copyFromLocal command

I have a local VM that has Hortonworks Hadoop and hdfs installed on it. I ssh'ed into the VM from my machine and now I am trying to copy a file from my local filesystem into hdfs through following set of commands:
[root#sandbox ~]# sudo -u hdfs hadoop fs -mkdir /folder1/
[root#sandbox ~]# sudo -u hdfs hadoop fs -copyFromLocal /root/folder1/file1.txt /hdfs_folder1/
When I execute it I get the following error: copyFromLocal: '/root/folder1/file1.txt': No such file or directory
I can see the file right there in the /root/folder1/ directory, but the hdfs command throws the above error. I also tried to cd to /root/folder1/ and then run the command, but I get the same error. Why is the file not found when it is right there?
By running sudo -u hdfs hadoop fs ..., the command tries to read the local file /root/folder1/file1.txt as the hdfs user, which has no access to /root.
You can do this:
Run chmod -R 755 /root. It will change permissions on the directory and files recursively, but it is not recommended to open up permissions on root's home directory.
Then you can run copyFromLocal as sudo -u hdfs to copy the file from the local filesystem to HDFS.
A better practice is to create a user space for root in HDFS and copy the files directly as root:
sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root
hadoop fs -copyFromLocal /root/folder1/file1.txt /user/root
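To confirm the copy landed, list the new HDFS home directory (paths taken from the commands above):
hadoop fs -ls /user/root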
I had the same problem running a Hortonworks 4-node cluster. As mentioned, the "hdfs" user doesn't have permission to root's home directory. The solution is to copy the information from the root folder to somewhere the "hdfs" user can access. In the standard Hortonworks installation this is /home/hdfs.
As root, run the following:
mkdir /home/hdfs/folder1
cp /root/folder1/file1.txt /home/hdfs/folder1
Now switch to the hdfs user and work from a directory the hdfs user can access:
su hdfs
cd /home/hdfs/folder1
Now you can access the files as the hdfs user:
hdfs dfs -put file1.txt /hdfs_folder1
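If /hdfs_folder1 does not exist yet, create it first; a short sketch, still running as the hdfs user:
hdfs dfs -mkdir -p /hdfs_folder1
hdfs dfs -put file1.txt /hdfs_folder1
hdfs dfs -ls /hdfs_folder1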

How to view FsImage/Edit Logs file in hadoop

I'm a beginner in Hadoop. I want to view the fsimage and edit logs in Hadoop. I have searched many blogs, but nothing is clear. Can anyone please tell me a step-by-step procedure to view the edit log / fsimage file in Hadoop?
My version: Apache Hadoop 1.2.1
My install directory is /home/students/hadoop-1.2.1
These are the steps I have tried, based on some blogs:
Ex.1. $ hdfs dfsadmin -fetchImage /tmp
Ex.2. hdfs oiv -i /tmp/fsimage_0000000000000001386 -o /tmp/fsimage.txt
Nothing works for me.
The shell says that hdfs is not a directory or a file.
For the edit log, navigate to
/var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current
then run
ls -l
to see the complete name of the edits file you want to extract; after that, run
hdfs oev -i editFileName -o /home/youraccount/Desktop/edits_your.xml -p XML
For the fsimage:
hdfs oiv -i fsimage -o /home/youraccount/Desktop/fsimage_your.xml
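Putting the pieces together for Hadoop 2.x, a minimal sketch; the metadata directory and file names below are examples, so use whatever hdfs getconf and ls report on your cluster:
# find where the NameNode keeps its fsimage and edits files
hdfs getconf -confKey dfs.namenode.name.dir
ls -l /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current
# dump a specific fsimage and edits file to XML (names are examples)
hdfs oiv -i fsimage_0000000000000001386 -o /tmp/fsimage.xml -p XML
hdfs oev -i edits_0000000000000000001-0000000000000000007 -o /tmp/edits.xml -p XML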
Go to the Hadoop bin directory (/home/students/hadoop-1.2.1/bin) and try to execute the same commands there. Note that Hadoop 1.x does not ship a separate hdfs script, which is why the shell reports that hdfs is not a directory or a file; the subcommands, where available, are run through the hadoop script instead.

hadoop fs -put command

I have set up a single-node Hadoop environment on CentOS using the Cloudera CDH repository. When I want to copy a local file to HDFS, I use this command:
sudo -u hdfs hadoop fs -put /root/MyHadoop/file1.txt /
But the result disappointed me:
put: '/root/MyHadoop/file1.txt': No such file or directory
I'm sure this file exists.
Please help me, thanks!
As the hdfs user, do you have access rights to /root/ (on your local disk)? Usually you don't.
You must copy file1.txt to a place where the local hdfs user has read rights before trying to copy it to HDFS.
Try:
cp /root/MyHadoop/file1.txt /tmp
chown hdfs:hdfs /tmp/file1.txt
# older versions of Hadoop
sudo -u hdfs hadoop fs -put /tmp/file1.txt /
# newer versions of Hadoop
sudo -u hdfs hdfs dfs -put /tmp/file1.txt /
Edit: take a look at roman-nikitchenko's cleaner answer below.
I had the same situation and here is my solution:
HADOOP_USER_NAME=hdfs hdfs dfs -put /root/MyHadoop/file1.txt /
Advantages:
You don't need sudo.
You don't actually need a local 'hdfs' user at all.
You don't need to copy anything or change permissions, because of the previous points.
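The same thing with the variable exported first; note that overriding HADOOP_USER_NAME like this only works on clusters without Kerberos security enabled:
export HADOOP_USER_NAME=hdfs
hdfs dfs -put /root/MyHadoop/file1.txt /
hdfs dfs -ls /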
Try to create a directory in HDFS using: $ hadoop fs -mkdir your_dir
and then put the file into it: $ hadoop fs -put /root/MyHadoop/file1.txt your_dir
Here is a command for writing a DataFrame (df) directly to the HDFS filesystem from a PySpark script:
df.write.save('path', format='parquet', mode='append')
mode can be 'append' or 'overwrite'
If you want to put it into HDFS using the shell, use this command:
hdfs dfs -put /local_file_path_location /hadoop_file_path_location
You can then check the NameNode UI at localhost:50070 for verification.
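You can also verify from the command line; the path mirrors the placeholder above:
hdfs dfs -ls /hadoop_file_path_location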
