I recently messed up my external hard drive with my photos on it (most are on DVD anyway, but still) through some partition mischief.
Fortunately I was able to put things back together with PhotoRec, another Unix partition utility, and PDisk.
PhotoRec returned over one thousand folders chock-full of everything from .txt files to important .NEFs.
So I tried to make the sorting easier using Unix, since the OS X Finder would simply crumble under a request like selecting and deleting a billion .txt files.
But I ran into trouble when I tried to find and delete the .txt files, or to find and move all the JPEGs recursively into a new folder called jpegs. I am a Unix noob, so I need some assistance, please.
Here is what I did in bash. (I am in the directory where ls lists all the folders and files I need to act upon.)
find . -name *.txt | rm
or
sudo find . -name *.txt | rm -f
It gives me an error saying I need to unlink the files.
I need to find all .txt files recursively and delete them, preferably verbosely.
You can't pipe filenames to rm; it takes its operands on the command line, not from standard input. Use xargs instead. Also, remember to quote the pattern "*.txt", or the shell will expand it before find ever sees it.
find . -name "*.txt" | xargs rm
find . -name "*.txt" -exec rm {} \;
$ find . -name "*.txt" -type f -delete
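Since you asked for verbose output: placing -print before -delete makes find echo each file as it removes it, and rm -v does the same in the xargs variant (both work with the BSD tools that ship with OS X):
find . -name "*.txt" -type f -print -delete
find . -name "*.txt" -type f -print0 | xargs -0 rm -v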
This should be simple but I am getting stuck somewhere.
I want to recurse through a directory and rename all PDFs to the same filename. The renamed files should remain in their current subdirectories.
The current PDF filenames are arbitrary.
Assume that I am running this script from the top directory. Inside this top directory are several subdirs, each with a PDF with an arbitrary filename.
This works to rename the files in place:
find . -iname "*.pdf" -exec rename 's/test.pdf/commonname.pdf/' '{}' \;
But since the current filenames are arbitrary, I need to swap in a regex for "any characters" in place of test.pdf.
My understanding is that the correct regular expression is .*
So I tried:
find . -iname "*.pdf" -exec rename 's/.*/commonname.pdf/' '{}' \;
When I run this, the first PDF gets renamed to commonname.pdf, but it is moved into the top directory. My use case requires that the PDFs be renamed in place.
I am missing something obvious here, clearly - can you spot my mistake?
The problem is that in s/.*/commonname.pdf/, .* matches the complete path, not just the filename. You could make sure that the regular expression applies to nothing but the filename by matching on non-slashes:
find . -iname '*.pdf' -exec rename 's|[^/]*$|commonname.pdf|' '{}' \;
or you could use GNU find's -execdir, which sets the working directory to the directory containing the matching file:
find . -iname '*.pdf' -execdir rename 's/.*/commonname.pdf/' '{}' \;
or not use rename at all:
find . -iname '*.pdf' -execdir mv {} commonname.pdf \;
or not use find, but a single invocation of rename:
rename 's|[^/]*$|commonname.pdf|' **/*.pdf
This requires the globstar shell option to enable the ** glob.
Use the -n option to rename for a dry run without actually changing filenames.
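For example, in bash (assuming the Perl-based rename, whose -n flag prints the renames without performing them):
shopt -s globstar   # enable ** in bash; zsh supports it out of the box
rename -n 's|[^/]*$|commonname.pdf|' **/*.pdf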
I want to create a program that zips whatever files are created in the directory my find parameters specify, and to run it as a background process. I've commented the script heavily to give a better idea of what I'm trying to achieve. I'm running this from my MacBook Pro terminal, OS X version 10.9.
#!/bin/sh
#find file in directory listed below
#type f to omit directories or special files
#mtime/ctime is modified/created -0 days or less
#name is with the name given in double quotes
#asterisk meaning any file name with any file extension
#use xargs to convert find sequence to a command for the line after pipe
find /Users/name/thisdirectory type f -ctime -0 -name "'*'.'*'" | xargs zip -
Maybe you're looking for this:
find /path/to/dir -type f -ctime -0 -name "*.*" | zip -@ file.zip
If you read zip -h, it explains that -@ tells zip to read the filenames from standard input.
You don't need xargs here: the ability to read the file list from standard input is built into zip itself, as it is in most archivers, tar included.
By the way, I think you want to change -ctime -0: it means "status changed less than zero days ago", so I don't think it can match anything this way...
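For instance, -ctime -1 matches files whose status changed within the last 24 hours; a sketch of the fixed pipeline:
find /path/to/dir -type f -ctime -1 -name "*.*" | zip -@ file.zip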
138.096.000.015.00111-138.096.201.072.38717
138.096.000.015.01008-138.096.201.072.00790
138.096.201.072.00790-138.096.000.015.01008
138.096.201.072.33853-173.194.020.147.00080
138.096.201.072.34293-173.194.034.009.00080
138.096.201.072.38717-138.096.000.015.00111
138.096.201.072.41741-173.194.034.025.00080
138.096.201.072.50612-173.194.034.007.00080
173.194.020.147.00080-138.096.201.072.33853
173.194.034.007.00080-138.096.201.072.50612
173.194.034.009.00080-138.096.201.072.34293
173.194.034.025.00080-138.096.201.072.41741
I have many folders, each containing many files with names like the above.
I want to remove the files whose names contain the substring "138.096.000",
and sometimes I want to get the list of files whose names contain the substring "00080".
To delete files with name containing "138.096.000":
find /root/of/files -type f -name '*138.096.000*' -exec rm {} \;
To list files with names containing "00080":
find /root/of/files -type f -name '*00080*'
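If your find supports the -delete action (GNU and BSD both do), it avoids spawning an rm process per file:
find /root/of/files -type f -name '*138.096.000*' -delete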
rm $(find . -name \*138.096.000\*)
This uses the find command to locate the appropriate files. It runs in a command substitution, and its output (the list of files) becomes the arguments to rm. Note the escaping of the * pattern, since the shell would otherwise try to expand it itself.
This assumes you don't have filenames with spaces etc. You may prefer to do something like:
for i in $(find . -name \*138.096.000\*); do
rm $i
done
in this scenario, or even
find . -name \*138.096.000\* | xargs rm
Note that the loop above executes rm once per file, while the xargs variant executes rm far fewer times (depending on the number of files you have, it may run only once).
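If filenames might contain spaces or other special characters, the null-delimited variant is the safe one:
find . -name '*138.096.000*' -print0 | xargs -0 rm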
However, if you're using zsh then you can simply do:
rm **/*138.096.000*
(I'm assuming your directories aren't named like your files. Note the -f flag as used in Kamil's answer if this is the case)
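zsh can also restrict the match to plain files with the (.) glob qualifier, which sidesteps the directory concern entirely:
rm -- **/*138.096.000*(.)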
I have a folder /var/backup where a cron job saves a backup of a database/filesystem. It contains a latest.gz.zip and lots of older dumps named timestamp.gz.zip.
The folder is getting bigger and bigger, and I would like to create a bash script that does the following:
Keep latest.gz.zip
Keep the youngest 10 files
Delete all other files
Unfortunately, I'm not a good bash scripter so I have no idea where to start. Thanks for your help.
In zsh you can do most of it with glob qualifiers and array slicing:
files=(*(.Om))
rm -- $files[1,-12]
*(.Om) expands to the plain files sorted oldest first, so the slice [1,-12] covers everything except the last 11 entries, keeping the ten youngest dumps plus latest.gz.zip (assuming latest.gz.zip is the most recently written file). Be careful with this command; you can preview what would be removed with:
print -rl -- $files[1,-12]
You should learn to use the find command, possibly with xargs; that is, something like:
find /var/backup -type f -name 'foo' -mtime +20 -delete
or, if your find doesn't have -delete:
find /var/backup -type f -name 'foo' -mtime +20 -print0 | xargs -0 rm -f
(-mtime +20 selects files older than 20 days, which is what you want when pruning old backups.) Of course you'll need to refine this a lot; it's just to give you ideas.
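For completeness, here is a minimal bash sketch of the exact policy asked for, assuming the dump names contain no newlines (timestamps like the ones described are safe):
#!/bin/bash
cd /var/backup || exit 1
# list dumps newest first, drop latest.gz.zip from consideration,
# skip the 10 youngest, and delete the rest verbosely
ls -t -- *.gz.zip | grep -Fvx latest.gz.zip | tail -n +11 |
while IFS= read -r f; do
    rm -v -- "$f"
done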
If I run the command mv folder2/*.* folder, I get an "argument list too long" error.
I found some examples of dealing with this error using ls and rm, built around find folder2 -name "*.*", but I have trouble applying them to mv.
find folder2 -name '*.*' -exec mv {} folder \;
-exec runs any command, {} inserts the filename found, \; marks the end of the exec command.
The other find answers work, but are horribly slow for a large number of files, since they execute one command per file. A much more efficient approach is to terminate -exec with + instead of \;, or to use xargs:
# Using find ... -exec +
find folder2 -name '*.*' -exec mv --target-directory=folder '{}' +
# Using xargs
find folder2 -name '*.*' | xargs mv --target-directory=folder
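--target-directory (short form -t) is GNU-specific; on systems without it, such as OS X, the same batching can be had by handing the file list to a tiny sh wrapper (a sketch):
find folder2 -name '*.*' -exec sh -c 'mv "$@" folder' _ {} +
The _ fills in $0 for the inline script, and the found files arrive as "$@".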
find folder2 -name '*.*' -exec mv \{\} /dest/directory/ \;
First, thanks to Karl's answer. I have only a minor correction to it.
My scenario:
Millions of folders inside /source/directory, each containing subfolders and files. The goal is to move everything while keeping the same directory structure.
To do that I use such command:
find /source/directory -mindepth 1 -maxdepth 1 -name '*' -exec mv {} /target/directory \;
Here:
-mindepth 1 : makes sure you don't move the root folder itself
-maxdepth 1 : makes sure find looks only at first-level children. Their contents are moved along with them, so there is no need to descend any deeper.
The commands suggested in the answers above flattened the resulting directory structure, which was not what I was looking for, so I decided to share my approach.
This one-liner should work for you.
Yes, it is quite slow, but it works even with millions of files.
for i in /folder1/*; do mv "$i" /folder2; done
It will move all the files from folder /folder1 to /folder2.
find can hit the same "Argument list too long" error if the pattern is left unquoted and the shell expands it before find runs. A combination of ls, grep and xargs worked for me:
$ ls | grep RadF | xargs mv -t ../fd/
It did the trick, moving about 50,000 files where plain mv and my find attempts had failed.
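Note that parsing ls output breaks on filenames containing whitespace. A null-safe equivalent of the same pipeline, assuming GNU mv for -t (RadF and ../fd/ are from the example above):
find . -maxdepth 1 -name '*RadF*' -print0 | xargs -0 mv -t ../fd/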