I have my Hadoop cluster set up with one master and two slaves.
When I type
hadoop fs -ls
ls: Cannot access .: No such file or directory.
But when I type the following:
hadoop fs -ls /
Found 1 items
drwxr-xr-x - Mike supergroup 0 2014-06-24 00:24 /usr
I get the same output on both the master and the slaves. Why does hadoop fs -ls not work?
Thanks!
hadoop fs -ls
This tries to list the current user's home directory on HDFS. Since the /user/{username} directory most likely doesn't exist in your case, you get the error.
hadoop fs -ls /
Here you are explicitly telling it to list the root directory, which it does successfully because that directory exists.
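If the missing home directory is indeed the cause, a minimal fix, sketched here under the assumption that your account is allowed to create its own directory under /user, is to create the home directory and retry:
hadoop fs -mkdir -p /user/$(whoami)
hadoop fs -ls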
Related
When running hadoop fs -ls
drwxr-xr-x - chiki supergroup 0 2019-01-14 17:03 Party_output
drwxr-xr-x - chiki supergroup 0 2018-01-22 18:25 party_uploads
but when I try to access the directory with
hadoop fs -ls /Party_output
the output is
`/Party_output': No such file or directory
That's because hadoop fs -ls shows the contents of your HDFS home directory, /user/chiki.
You need to run hadoop fs -ls Party_output to see inside that directory (because it lives at /user/chiki/Party_output and not /Party_output).
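For example, assuming the HDFS home directory is /user/chiki, these two commands refer to the same directory:
hadoop fs -ls Party_output
hadoop fs -ls /user/chiki/Party_output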
I'm new to Hadoop, and am trying to check what data is available in HDFS. However, the dfs command returns a response that indicates the class is deprecated, and that hdfs should be used:
-bash-4.2$ hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
ls: `.': No such file or directory
When I try the hdfs command, though, I get what appears to be a Java class lookup error:
-bash-4.2$ hadoop hdfs -ls
Error: Could not find or load main class hdfs
Is there something wrong with my Hadoop setup, or have others encountered this catch-22?
It is hadoop fs or hdfs dfs, followed by -ls; hdfs is its own command, not a subcommand of hadoop.
You can run hdfs dfs -ls / to check the root of HDFS, but a plain -ls will still give you `.': No such file or directory because your home directory, hdfs:///user/$(whoami), does not exist yet; you need to create it with hadoop fs -mkdir -p hdfs:///user/$(whoami).
That command must be repeated for every user account that needs access to its HDFS home directory.
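A rough sketch of how that might be scripted for several accounts at once (the user names are placeholders, and this assumes it is run as the HDFS superuser so that ownership can be assigned):
for u in alice bob; do
    hdfs dfs -mkdir -p "/user/$u"
    hdfs dfs -chown "$u":"$u" "/user/$u"
done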
First, I have read this post: Is there an equivalent to `pwd` in hdfs?. It says there is no such 'pwd' in HDFS.
However, as I progressed with the instructions of Hadoop: Setting up a Single Node Cluster, I failed on this command:
$ bin/hdfs dfs -put etc/hadoop input
put: 'input': No such file or directory
It's weird that I succeeded with this command the first time I went through the instructions but failed the second time. It's also weird that the command succeeds on my friend's computer, which has the same system (Ubuntu 14.04) and Hadoop version (2.7.1) as mine.
Can anyone explain what happened here? Is there some 'pwd' in HDFS after all?
Firstly, you are trying to run the command $ bin/hdfs dfs -put etc/hadoop input as a user whose home directory doesn't exist in HDFS on the VM.
Let me explain clearly with the following example on the HDP VM.
[root@sandbox hadoop-hdfs-client]# bin/hdfs dfs -put /etc/hadoop input
put: `input': No such file or directory
Here I executed the command as the root user, whose home directory does not exist in HDFS on the HDP VM. Run the following command to list the users:
[root@sandbox hadoop-hdfs-client]# hadoop fs -ls /user
Found 8 items
drwxrwx--- - ambari-qa hdfs 0 2015-08-20 08:33 /user/ambari-qa
drwxr-xr-x - guest guest 0 2015-08-20 08:47 /user/guest
drwxr-xr-x - hcat hdfs 0 2015-08-20 08:36 /user/hcat
drwx------ - hive hdfs 0 2015-09-04 09:52 /user/hive
drwxr-xr-x - hue hue 0 2015-08-20 09:05 /user/hue
drwxrwxr-x - oozie hdfs 0 2015-08-20 08:37 /user/oozie
drwxr-xr-x - solr hdfs 0 2015-08-20 08:41 /user/solr
drwxrwxr-x - spark hdfs 0 2015-08-20 08:34 /user/spark
In HDFS, if you copy a file without giving an absolute path for the destination argument, the home directory of the logged-in user is assumed and the file is placed there. Here no home directory was found for the root user.
Now let's switch to the hive user and test:
[root@sandbox hadoop-hdfs-client]# su hive
[hive@sandbox hadoop-hdfs-client]$ bin/hdfs dfs -put /etc/hadoop input
[hive@sandbox hadoop-hdfs-client]$ hadoop fs -ls /user/hive
Found 1 items
drwxr-xr-x - hive hdfs 0 2015-09-04 10:07 /user/hive/input
Yay, successfully copied.
Hope it helps!
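As an aside, a hedged alternative (assuming the HDFS superuser on the HDP VM is the hdfs account) would be to create a home directory for root instead of switching users, after which the original command should also work as root:
su - hdfs -c "hdfs dfs -mkdir -p /user/root && hdfs dfs -chown root:root /user/root"
bin/hdfs dfs -put /etc/hadoop input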
It means that you need to move the input files to an HDFS location.
Suppose you have an input file named input.txt that needs to be moved to HDFS; then follow the command below.
Command: hdfs dfs -put /input_location /hdfs_location
If there is no specific target directory in HDFS:
hdfs dfs -put /home/Desktop/input.txt /
If there is a specific target directory in HDFS (note: the directory must be created before proceeding; see the sketch below):
hdfs dfs -put /home/Desktop/input.txt /MR_input
After that you can run the examples:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
Here /input and /output are paths that must be in HDFS.
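A minimal end-to-end sketch tying these steps together (the local file path and the MR_input/MR_output directory names are illustrative assumptions):
hdfs dfs -mkdir -p /MR_input
hdfs dfs -put /home/Desktop/input.txt /MR_input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /MR_input /MR_output
hdfs dfs -ls /MR_output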
Hope this helps.
For the following command,
hadoop fs -put foo.txt bar.txt
After the operation succeeds, where will bar.txt be located on my local hard drive, given
a single-node setup?
a pseudo-distributed setup?
Will bar.txt still get replicated 3 times for backup?
bar.txt will be placed in the current Hadoop user's home directory, /user/<hadoop-user>, as per the following code:
@Override
public Path getHomeDirectory() {
    return makeQualified(new Path("/user/" + dfs.ugi.getShortUserName()));
}
Source here
If the cluster is a single node, a block is stored only once even if you set dfs.replication to 3, because Hadoop will not store the same block on the same node more than once.
Pseudo-distributed mode has all the Hadoop daemons running on the same machine; it is nothing but a single-node cluster.
If you set dfs.replication to 3 there, Hadoop just gives you a warning.
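If in doubt, you can check the actual replication of the stored file; one way (the path is a placeholder for your HDFS home directory) is:
hdfs fsck /user/<hadoop-user>/bar.txt -files -blocks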
Hope it helps!
The above fs command tries to put the file foo.txt into HDFS as bar.txt. The destination path in HDFS is determined by the current user performing the operation, because you are not providing an absolute path for the destination.
If /user is configured as the parent of home directories in HDFS, the file will be placed under /user/<username>.
Also, if there is no folder in HDFS that corresponds to the current user, the command will fail, stating that the file doesn't exist.
E.g. if the current user is "testusr1", the above command places the file under "/user/testusr1".
You can verify this by executing the command hadoop fs -ls /user/
AFAIK this should be the same for a pseudo-distributed or single-node setup.
[root@sandbox ~]# hadoop fs -ls /user
Found 11 items
drwx------ - root hdfs 0 2015-04-13 03:59 /user/root
...
drwxr-xr-x - root hdfs 0 2015-04-13 04:18 /user/testusr1
[root@sandbox ~]#
[root@sandbox ~]# su - testusr1
[testusr1@sandbox ~]$ whoami
testusr1
[testusr1@sandbox ~]$ pwd
/home/testusr1
[testusr1@sandbox ~]$ ll
total 7
-rw-rw-r-- 1 testusr1 testusr1 49 2015-04-13 04:17 foo-testusr2.txt
[testusr1@sandbox ~]$ hadoop fs -put foo-testusr2.txt bar-testusr2.txt
And for the replication factor, you can check with the help of the basic hadoop fs -ls command.
[testusr1@sandbox ~]$ exit
logout
[root@sandbox ~]# hdfs dfs -ls /user/testusr1
Found 1 items
-rw-r--r-- 1 testusr1 hdfs 49 2015-04-13 04:18 /user/testusr1/bar-testusr2.txt
[root@sandbox ~]#
In the above sample output, you can see the number 1 right after the file permissions; that is the replication factor, and it shows 1 as per my HDFS configuration.
I am trying to copy a file from the local file system to HDFS.
I'm using a single-node cluster.
hduser@jothinathan-VirtualBox:~$ hdfs dfs -mkdir -p /usr/hduser
hduser@jothinathan-VirtualBox:~$ hadoop fs -ls
Found 1 items
drwxr-xr-x - hduser supergroup 0 2015-03-10 18:33 sample
hduser@jothinathan-VirtualBox:~$ cd Documents
hduser@jothinathan-VirtualBox:~/Documents$ ls
file hadoopFIle.txt URICat URICat.java
hduser@jothinathan-VirtualBox:~/Documents$ cd
hduser@jothinathan-VirtualBox:~$ hadoop fs -copyFromLocal /Documents/file /usr/local/hadoop
copyFromLocal: `/usr/local/hadoop': No such file or directory
I am getting this error message; please help me with this problem.
First try this command:
hadoop fs -ls /
If it lists local file system files (not HDFS), then try
hadoop fs -ls hdfs://IP-ADDRESS-of-your-machine/
Now copy your file to HDFS with
hadoop fs -copyFromLocal /Documents/file hdfs://Ip-addressofyourmachine/above result path
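For what it's worth, a minimal sketch of the likely intended flow, assuming the local file is ~/Documents/file and reusing the /usr/hduser directory created with -mkdir earlier in the question:
hadoop fs -copyFromLocal ~/Documents/file /usr/hduser/
hadoop fs -ls /usr/hduser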