I am using
hdfs dfs -put myfile mypath
and for some files I get
put: 'myfile': File Exists
Does that mean there is a file with the same name, or does it mean the exact same file (size, content) is already there?
How can I specify an overwrite option here?
Thanks!
put: 'myfile': File Exists
This means a file named "myfile" already exists in HDFS. You cannot have multiple files with the same name in the same HDFS directory.
You can overwrite it using hadoop fs -put -f /path_to_local /path_to_hdfs
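As a minimal sketch using the placeholder names from this question (myfile and mypath stand in for your own paths):
# check whether a copy already exists at the destination; -test -e returns 0 if it does
hdfs dfs -test -e mypath/myfile && echo "myfile already exists in mypath"
# overwrite the existing copy with the local file
hdfs dfs -put -f myfile mypath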
You can overwrite your file in HDFS using the -f flag. For example:
hadoop fs -put -f <localfile> <hdfsDir>
OR
hadoop fs -copyFromLocal -f <localfile> <hdfsDir>
It worked fine for me. However, the -f flag won't work with the get or copyToLocal commands; check this question.
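If you do hit that limitation with get or copyToLocal, one workaround is simply to remove the stale local copy first (a rough sketch; /tmp/localcopy.txt is a hypothetical file name):
# get fails if the local file already exists, so delete it before fetching
rm -f /tmp/localcopy.txt
hadoop fs -get /hdfsDir/localcopy.txt /tmp/localcopy.txt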
A file with the same name exists at the location you're trying to write to.
You can overwrite by specifying the -f flag.
Just an update to this answer: in Hadoop 3.x the command is a bit different.
hdfs dfs -put -f /local/to/path hdfs://localhost:9870/users/XXX/folder/folder2
I am learning Hadoop and I have never worked on Unix before, so I am facing a problem here. What I am doing is:
$ hadoop fs -mkdir -p /user/user_name/abcd
Now I am going to put a ready-made file named file.txt into HDFS:
$ hadoop fs -put file.txt /user/user_name/abcd
The file gets stored in HDFS, since it shows up when I run the -ls command.
Now I want to remove this file from HDFS. How should I do this? What command should I use?
If you run the command hadoop fs -usage you'll get a look at what commands the filesystem supports and with hadoop fs -help you'll get a more in-depth description of them.
For removing files the command is simply -rm, with -r added for recursively removing folders. Read the command descriptions and try them out.
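For instance, reusing the paths from the question above (a sketch, not the only way to do it):
# remove just the one file that was put earlier
hadoop fs -rm /user/user_name/abcd/file.txt
# or remove the whole directory and everything under it
hadoop fs -rm -r /user/user_name/abcd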
I'm a beginner in Hadoop. When I use
Hadoop fs -ls /
And
Hadoop fs - mkdir /pathname
everything is OK. But I want to use my CSV file in Hadoop; my file is on the C: drive. I used the -put, wget, and copyFromLocal commands like these:
Hadoop fs -put c:/ path / myhadoopdir
Hadoop fs copyFromLoacl c:/...
Wget ftp://c:/...
But the first two fail with the error "no such file or directory /myfilepathinc:"
And for the third:
Unable to resolve host address "c"
Thanks for your help
Looking at your commands, it seems that there could be a couple of reasons for this issue.
Hadoop fs -put c:/ path / myhadoopdir
Hadoop fs copyFromLoacl c:/...
Use hadoop fs -copyFromLocal correctly (note the spelling; your command has copyFromLoacl).
Check your local file permissions. You have to give full access to that file.
Give the absolute path both for the local file and for the HDFS destination.
Hope it will work for you.
salmanbw's answer is correct. To be more explicit:
Suppose your file is "c:\testfile.txt"; use the command below.
Also make sure you have write permission to your target directory in HDFS.
hadoop fs -copyFromLocal c:\testfile.txt /HDFSdir/testfile.txt
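To confirm the copy succeeded, you can list the target directory afterwards (a small sketch; /HDFSdir is the same directory as above):
# the file should show up with its size in the listing
hadoop fs -ls /HDFSdir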
I am trying to put a file into an HDFS directory whose name contains a space.
The following issue occurs:
Suppose hdfs directory “sub dir1” already exists.
Now I tried to put a file, sub.txt, into this directory using the following command:
hadoop fs -put sub.txt /user/jdutt/TempTesting/output//sub\ dir1/
It doesn’t put the file in the “sub dir1” directory; instead it creates another directory named “sub%20dir1” and puts the file there.
How to solve this issue?
Please replace spaces with %20; it may solve your problem.
Please try to run the command as:
hadoop fs -put sub.txt /user/jdutt/TempTesting/output/'sub dir1'/
I have tested it on Hadoop version 1.0.4 and it works.
hadoop fs -copyFromLocal /home/cloudera/Documents/Hadoop/Hive%20Data/empdata hive_data
It works for me.
Quoting the whole destination path also works:
hadoop fs -put sub.txt "/user/jdutt/TempTesting/output/sub dir1"
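To see which directory the file actually ended up in, a quick check is to list the parent directory recursively (using the paths from the question):
# shows every subdirectory under output/ and the files inside each
hadoop fs -ls -R /user/jdutt/TempTesting/output/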
I ran the following command to create a directory on the HDFS side
$HADOOP_HOME/bin/hadoop fs -mkdir 20news-bydate/
but I got a message that the directory already exists. How can I overwrite the directory?
Thank you
You can remove the directory first:
$HADOOP_HOME/bin/hadoop fs -rm -R 20news-bydate/
$HADOOP_HOME/bin/hadoop fs -mkdir 20news-bydate/
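Alternatively, if you only need the directory to exist and don't care about clearing its contents, newer Hadoop versions accept -p, which does not fail when the directory is already there (a sketch using the same path):
$HADOOP_HOME/bin/hadoop fs -mkdir -p 20news-bydate/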
You can try removing it first with hadoop fs -rmr and then creating it again, as there is no parameter to overwrite it.
How do I copy a file from HDFS to the local file system? There is no physical location of a file under the file, not even a directory. How can I move it to my local machine for further validation? I tried through WinSCP.
bin/hadoop fs -get /hdfs/source/path /localfs/destination/path
bin/hadoop fs -copyToLocal /hdfs/source/path /localfs/destination/path
Point your web browser to the HDFS web UI (namenode_machine:50070), browse to the file you intend to copy, scroll down the page, and click to download the file.
In Hadoop 2.0,
hdfs dfs -copyToLocal <hdfs_input_file_path> <output_path>
where:
hdfs_input_file_path may be obtained from http://<<name_node_ip>>:50070/explorer.html
output_path is the local path where the file is to be copied to.
You may also use get in place of copyToLocal.
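If you prefer to find hdfs_input_file_path from the command line rather than the explorer page, a rough sketch (myfile is a hypothetical file name):
# recursively list your HDFS user directory and filter for the file
hdfs dfs -ls -R /user | grep myfile
# then copy it out once you know the full path
hdfs dfs -copyToLocal /user/<your_user>/myfile <output_path>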
To copy files from HDFS to the local file system, the following command can be run:
hadoop dfs -copyToLocal <input> <output>
<input>: the HDFS directory path (e.g. /mydata) that you want to copy
<output>: the destination directory path (e.g. ~/Documents)
Update: hadoop dfs is deprecated in Hadoop 3;
use hdfs dfs -copyToLocal <input> <output> instead.
You can accomplish this in either of these ways:
1. hadoop fs -get <HDFS file path> <Local system directory path>
2. hadoop fs -copyToLocal <HDFS file path> <Local system directory path>
Example:
My file is located at /sourcedata/mydata.txt,
and I want to copy it to the local file system path /user/ravi/mydata:
hadoop fs -get /sourcedata/mydata.txt /user/ravi/mydata/
If your source "file" is split up among multiple files (maybe as the result of map-reduce) that live in the same directory tree, you can copy that to a local file with:
hadoop fs -getmerge /hdfs/source/dir_root/ local/destination
This worked for me on my VM instance of Ubuntu.
hdfs dfs -copyToLocal [hadoop directory] [local directory]
Remember the name you gave the file and, instead of using hdfs dfs -put, use get. See below:
$ hdfs dfs -get /output-fileFolderName-In-hdfs
If you are using Docker, you have to do the following steps:
1. Copy the file from HDFS to the namenode container: hadoop fs -get output/part-r-00000 /out_text. "/out_text" will be stored in the namenode container.
2. Copy the file from the namenode container to the local disk: docker cp namenode:/out_text output.txt. output.txt will then be in your current working directory.
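One way to run both steps from the host, assuming the container is reachable under the name namenode used above and has the hadoop client on its PATH (a sketch, not the only layout):
# step 1, executed inside the container: pull the file out of HDFS
docker exec namenode hadoop fs -get output/part-r-00000 /out_text
# step 2, on the host: copy the file out of the container
docker cp namenode:/out_text output.txt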
To copy in the other direction, from the local file system back into HDFS:
bin/hadoop fs -put /localfs/destination/path /hdfs/source/path