I'm new to Hadoop and am trying to check what data is available in HDFS. However, the dfs command returns a response indicating that the command is deprecated and that hdfs should be used instead:
-bash-4.2$ hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
ls: `.': No such file or directory
When I try the hdfs command, though, I get what appears to be a Java class lookup error:
-bash-4.2$ hadoop hdfs -ls
Error: Could not find or load main class hdfs
Is there something wrong with my Hadoop setup, or have others encountered this catch-22?
It is hadoop fs or hdfs dfs, followed by -ls. There is no hdfs subcommand under hadoop, which is why hadoop hdfs -ls fails with the class-lookup error.
You can run hdfs dfs -ls / to check the root of HDFS, but a plain hdfs dfs -ls will still report `.': No such file or directory because your HDFS home directory (hdfs:///user/$(whoami)) does not exist yet; create it with hadoop fs -mkdir -p hdfs:///user/$(whoami).
That command must be repeated for every user account that needs to access its HDFS home directory.
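If several accounts need home directories, a small loop can create them all at once. This is only a sketch; the user names alice and bob are placeholders for your actual accounts:
# Sketch: create an HDFS home directory for each listed user (placeholder names).
for u in alice bob; do
  hadoop fs -mkdir -p "/user/$u"
  hadoop fs -chown "$u" "/user/$u"   # chown may need to be run as the HDFS superuser
done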
Related
The problem I am facing is that when I run "hadoop fs -ls", it throws this message: "ls: `.': No such file or directory".
For reference, the output of my jps command is:
18276 SecondaryNameNode
19684 Jps
17942 NameNode
18566 NodeManager
18441 ResourceManager
First, you should have a DataNode running, which is what stores the data; otherwise you will not be able to work with hadoop fs (the file system).
Try to start all services:
$ start-all.sh
$ jps
Ensure that the DataNode is running and that nothing is blocking it.
Then try
$ hadoop fs -ls /
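For comparison, on a healthy single-node setup the jps output would also include a DataNode entry, roughly like this (a sketch; your process IDs will differ):
17942 NameNode
18276 SecondaryNameNode
18350 DataNode
18441 ResourceManager
18566 NodeManager
19684 Jps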
When you don't pass any argument to the hadoop fs -ls command, the default HDFS directory it tries to list is /user/{your_user_name}.
The problem in your case could be that this HDFS directory does not exist.
Try running hadoop fs -ls /user/ to see which directories are created for which users.
You can also just create your user's hdfs default directory. Running the below command will fix your error:
hadoop fs -mkdir -p /user/$(whoami)
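If your account is not allowed to create directories under /user, the directory can usually be created by the HDFS superuser and then handed over. A sketch, assuming the superuser account is named hdfs:
sudo -u hdfs hadoop fs -mkdir -p /user/$(whoami)        # create the home directory as the superuser
sudo -u hdfs hadoop fs -chown $(whoami) /user/$(whoami) # hand ownership to your account
Note that $(whoami) is expanded by your own shell before sudo runs, so it still refers to your account.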
I want to store some .tbl files in hadoop.
I am using this command: hadoop fs -put customer.tbl
But I'm getting:
Usage: java FsShell [-put <localsrc> ... <dst>]
If I do hadoop fs -cat customer.tbl, it appears that the file does not exist.
It seems like you need to provide both a local source and an HDFS destination. Can you try adding a destination?
e.g. hadoop fs -put customer.tbl .
please also try execute "ls" on the HDFS:
hadoop fs -ls
please also try execute "ls" on the HDFS using hdfs command, 'hdfs' should be found under hadoop-version-number/bin/:
hdfs dfs -ls
I was trying to unzip a zip file stored in the Hadoop file system and store it back in the Hadoop file system. I tried the following commands, but none of them worked.
hadoop fs -cat /tmp/test.zip|gzip -d|hadoop fs -put - /tmp/
hadoop fs -cat /tmp/test.zip|gzip -d|hadoop fs -put - /tmp
hadoop fs -cat /tmp/test.zip|gzip -d|hadoop put - /tmp/
hadoop fs -cat /tmp/test.zip|gzip -d|hadoop put - /tmp
When I run those commands I get errors like gzip: stdin has more than one entry--rest ignored, cat: Unable to write to output stream., and Error: Could not find or load main class put. Any help?
Edit 1: I don't have access to a UI, so only the command line is available. Unzip/gzip utilities are installed on my Hadoop machine. I'm using Hadoop version 2.4.0.
To unzip a gzipped (or bzipped) file, I use the following
hdfs dfs -cat /data/<data.gz> | gzip -d | hdfs dfs -put - /data/
If the file sits on your local drive, then
zcat <infile> | hdfs dfs -put - /data/
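Note that the original question involves a .zip archive, which is not a gzip stream; that is why gzip -d complains about more than one entry. If the zip contains a single file and the Info-ZIP funzip utility is installed, a sketch along these lines may work (funzip only extracts the first entry of a zip read from stdin; the destination name is a placeholder):
hdfs dfs -cat /tmp/test.zip | funzip | hdfs dfs -put - /tmp/test_unzipped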
Most of the time I use HDFS fuse mounts for this, so you could just do:
$ cd /hdfs_mount/somewhere/
$ unzip file_in_hdfs.zip
http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-7-1/CDH4-Installation-Guide/cdh4ig_topic_28.html
Edit 1/30/16: In case you use HDFS ACLs: in some cases fuse mounts don't adhere to HDFS ACLs, so you'll be able to do file operations that are permitted by the basic Unix access privileges. See https://issues.apache.org/jira/browse/HDFS-6255, in particular the comments at the bottom, where I recently asked to reopen it.
To stream the data through a pipe to hadoop, you need to use the hdfs command.
cat mydatafile | hdfs dfs -put - /MY/HADOOP/FILE/PATH/FILENAME.EXTENSION
gzip reads data from stdin when no file name is given, and -c makes it write the result to stdout.
hadoop fs -put doesn't support reading the data from stdin.
I tried a lot of things and nothing would help. I can't find zip input support in Hadoop, so it left me no choice but to download the file to the local fs, unzip it, and upload it to HDFS again.
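For completeness, the download-unzip-reupload workaround described above might look like this (a sketch; paths and file names are placeholders):
hdfs dfs -get /tmp/test.zip /tmp/            # copy the archive from HDFS to the local fs
unzip /tmp/test.zip -d /tmp/test_unzipped    # unzip it locally
hdfs dfs -put /tmp/test_unzipped /tmp/       # upload the extracted directory back to HDFS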
I am getting an error while copying files from the local file system to HDFS. Will you please help me with this?
I am using this command:
hadoopd fs -put text.txt file
The put and copyFromLocal commands help you copy data from your local system to HDFS, provided you have the permission to do so.
hadoop fs -put /path/to/textfile /path/to/hdfs
OR
hadoop dfs -put /path/to/textfile /path/to/hdfs
Coming to your error:
You typed the above command as
hadoopd fs
use
hadoop dfs -put /text.txt /file
hadoop dfs -put /path/to/local/file /path/to/hdfs/file
You can use the following command:
hadoop fs -copyFromLocal text.txt <path_to_hdfs_directory_where_you_want_to_keep_text.txt>
Without knowing the specific error you are getting, it's difficult to answer. The other responders posted the proper syntax. However, it is not uncommon to see permission issues when attempting to copy files to HDFS.
By default the user and group are typically "hdfs" and "supergroup". Your user account likely doesn't belong to "supergroup" and will get permission denied errors. Try running the command as:
sudo -u hdfs hadoop fs -put /path/to/local/file /path/to/hdfs/file
or
sudo -u hdfs hadoop dfs -put /path/to/local/file /path/to/hdfs/file
You can get around having to do this by changing the ownership and permission of the destination directory on HDFS to be more permissive.
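For example, a sketch that hands the destination directory to your own account and loosens its permissions, assuming the superuser is named hdfs and that /path/to/hdfs, myuser, and mygroup are placeholders:
sudo -u hdfs hadoop fs -chown myuser:mygroup /path/to/hdfs   # placeholder user and group
sudo -u hdfs hadoop fs -chmod 775 /path/to/hdfs              # allow the owner and group to write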
"DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/myfile could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock". From this I thinrk your data node is not running/properly. Check that in cluster UI.Then try
hadoop dfs -put /path/file /hdfs/file (hadoop YARN)
hadoop fs -copyFromLocal /path/file /hdfs/file (hadoop1.x)
I have my hadoop cluster set up with one master and two slaves.
When I type
hadoop fs -ls
ls: Cannot access .: No such file or directory.
But when I type the following:
hadoop fs -ls /
Found 1 items
drwxr-xr-x - Mike supergroup 0 2014-06-24 00:24 /usr
I get the same output on both the master and the slaves. Why does hadoop fs -ls not work?
Thanks!
hadoop fs -ls
This tries to list the current user's home directory on HDFS. Since the /user/{username} directory doesn't exist in your case, you get the error.
hadoop fs -ls /
You are specifically telling it to list the root directory, which it does successfully because that directory exists.
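Creating the missing home directory makes the argument-less form work as well; a minimal sketch, assuming your HDFS user name matches your local one:
hadoop fs -mkdir -p /user/$(whoami)   # create your HDFS home directory
hadoop fs -ls                         # now lists /user/<you> without the error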