I accidentally created a directory in HDFS named 'again.' and I am trying to delete it. I have tried everything I can think of but have been unsuccessful. I tried 'hdfs dfs -rm -r /user/[username]/*'. I tried 'hdfs dfs -rm -r /user/[username]/again.'. None of these have worked! The first one even deleted every directory except the one I wanted to delete.
Hadoop 2.7.3
Any thoughts?
You could try a ? wildcard, which matches any single character:
hdfs dfs -rm -r /user/[username]/again?
That could theoretically match other files too, but if you have only one matching file it should work tolerably well.
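If you want to check what the pattern will match before deleting anything, you can list it first with the same glob:
hdfs dfs -ls /user/[username]/again?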
Try using
hdfs dfs -rm -r "/user/[username]/again\."
or with single quotes:
hdfs dfs -rm -r '/user/[username]/again\.'
Note: if you have Hue, do this through Hue's file browser instead; it will make life much easier.
None of the responses worked, but thank you all for responding. I ended up dropping the entire directory structure and refreshing the environment from an existing instance.
I was asked the question below in an interview.
Interviewer: How do you recover a deleted file in HDFS?
Me: We can copy/move it from the trash directory back to the original directory.
Interviewer: Is there any other way besides trash recovery?
Me: I said no.
So my question is: is there really another way to recover deleted files, or was the interviewer just testing my confidence?
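For reference, what I meant by trash recovery is something like this (a sketch; some-file.txt is a placeholder name, and the trash path assumes the default layout where deleted files keep their original absolute path under .Trash/Current):
hdfs dfs -ls /user/vijay/.Trash/Current
hdfs dfs -mv /user/vijay/.Trash/Current/application/data/vijay/some-file.txt /application/data/vijay/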
I have found the way below to recover, which is different from hdfs dfs -cp/-mv, but it also gets the file from the trash.
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -D dfs.checksum.type=CRC32C -m 10 -pb -update /users/vijay/.Trash/ /application/data/vijay;
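As I understand it, -m 10 caps the number of copy map tasks, -pb preserves block size, and -update copies only files that are missing or differ at the destination; but the source is still the trash directory, so it is not a fundamentally different recovery path.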
Hadoop has provided an HDFS snapshot feature since version 2.1.0. You can try using it.
First, allow snapshots on the directory and create one:
hdfs dfsadmin -allowSnapshot /user/hdfs/important
hdfs dfs -createSnapshot /user/hdfs/important important-snapshot
Next, try to delete one file:
hdfs dfs -rm -r /user/hdfs/important/important-file.txt
Finally, restore it:
hdfs dfs -ls /user/hdfs/important/.snapshot/
hdfs dfs -cp /user/hdfs/important/.snapshot/important-snapshot/important-file.txt /user/hdfs/important/
hdfs dfs -cat /user/hdfs/important/important-file.txt
P.S.: You have to use the cp command (not the mv command) to recover a deleted file this way, because files under a snapshot are read-only.
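Optionally, once the file is restored, you can delete the snapshot and disallow snapshots again (using the same names as the example above):
hdfs dfs -deleteSnapshot /user/hdfs/important important-snapshot
hdfs dfsadmin -disallowSnapshot /user/hdfs/important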
Hope this answer helps you.
I am able to create a directory, but not a subdirectory under an already created directory, using the commands below. May I know what the reason could be? I have set up HDFS on my Mac in pseudo-distributed mode and am trying to create these directories. Any help would be appreciated.
hadoop fs -mkdir /test/subdir
The above command doesn't create any subdirectory; however, the command below does create a directory.
hadoop fs -mkdir test
To create nested subdirectories inside a parent directory in one step, you have to provide the -p option; otherwise you can only create one directory level at a time.
hdfs dfs -mkdir -p /test/subdir
will work in your case.
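You can then verify the resulting tree with a recursive listing:
hdfs dfs -ls -R /test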
Try giving it the parent creation flag.
hadoop fs -mkdir -p /test/subdir
I copied a folder from HDFS to my local machine using the following command:
hdfs dfs -copyToLocal hdfs:///user/myname/output-64-32/ ~/Documents/fromHDFS
But I cannot see any files in the fromHDFS folder, and when I try to run the command again, it says "File exists".
Any help is really appreciated.
Thanks.
Try these:
rm -r ~/Documents/fromHDFS/*
hdfs dfs -get /user/myname/output-64-32/ ~/Documents/fromHDFS/
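Note that -get and -copyToLocal do the same thing for a local destination. If the first copy actually succeeded, the files may simply be one level down: when the destination directory already exists, the source folder is copied into it, so look under ~/Documents/fromHDFS/output-64-32/. The "File exists" error on a rerun just means the target is already there, which is why the rm -r above clears it first.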
I want to remove all the files contained in a Hadoop directory without removing the directory itself. I've tried using rm -r, but it removed the whole directory.
Include the wildcard character * after the folder whose contents you want to delete, so the parent folder itself is not deleted. See the example below:
hdfs dfs -rm -r '/home/user/folder/*'
Referring to the previous answer, you need to quote the asterisk so your local shell doesn't expand it:
hdfs dfs -rm -r "/home/user/folder/*"
Use the hdfs command to delete all files inside it. For example, if your Hadoop path is /user/your_user_name, use an asterisk to delete all files inside that folder:
hdfs dfs -rm -r '/user/your_user_name/*'
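If you also want the files removed immediately instead of going to the trash, -rm supports a -skipTrash option (use with care, since the files are then unrecoverable):
hdfs dfs -rm -r -skipTrash '/user/your_user_name/*'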
I am learning Hadoop and I have never worked on Unix before, so I am facing a problem here. What I am doing is:
$ hadoop fs -mkdir -p /user/user_name/abcd
Now I am going to put a ready-made file named file.txt into HDFS:
$ hadoop fs -put file.txt /user/user_name/abcd
The file gets stored in HDFS, since it shows up when running the -ls command.
Now I want to remove this file from HDFS. How should I do this? What command should I use?
If you run the command hadoop fs -usage you'll get a list of the commands the filesystem supports, and with hadoop fs -help you'll get a more in-depth description of them.
For removing files, the command is simply -rm; add -r to remove directories recursively (and -f to suppress the error when the file doesn't exist). Read the command descriptions and try them out.
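For the file you put in the question, a minimal example (paths taken from your own commands above) would be:
hadoop fs -rm /user/user_name/abcd/file.txt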