I have a script that finds log files older than x days within a specified directory and removes them.
find $LOG_ARCHIVE/* -mtime +$DAYS_TO_KEEP_LOGS -exec rm -f {} \;
This works as expected, but I would like the option to print the processing to the screen and to a log file so I know which files (if any) have been deleted. I've tried appending tee at the end but have had no success.
find $LOG_ARCHIVE/* -mtime +$DAYS_TO_KEEP_LOGS -exec rm -fv {} \; | tee -a $LOG
There are multiple ways the task can be done.
One possibility is to simply run find twice:
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -print > "$LOG"
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec rm -f {} +
Another possibility is to use tee along with the GNU extensions -print0 (for find) and -0 (for xargs):
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -print0 |
tee "$LOG" |
xargs -0 rm -f
With this version, the log file will have null bytes at the end of each file name. You can arrange to replace those with newlines if you don't mind the possible ambiguity:
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -print0 |
tee >(tr '\0' '\n' >"$LOG") |
xargs -0 rm -f
This uses Bash (and Korn shell) process substitution to pass the log file through tr to map null bytes '\0' to newlines '\n'.
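If process substitution is new to you, here is a minimal standalone sketch of the same idea (demo.log is just an illustrative name):
# >(cmd) expands to a /dev/fd path connected to cmd's stdin, so tee
# writes one copy down the pipeline and one copy through tr into demo.log
printf 'a\0b\0' | tee >(tr '\0' '\n' > demo.log) | xargs -0 printf '<%s>\n'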
Another way of doing it is to write a tiny custom script (call it remove-log.sh):
# remove-log.sh: log each name, then delete
printf '%s\n' "$@" >> "$LOG"
rm -f "$@"
and then use:
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec bash remove-log.sh {} +
Note that the script needs to see the value of $LOG, so that must be exported as an environment variable. You could avoid that by passing the log name explicitly:
logfile="$1"
shift
printf '%s\n' "$@" >> "$logfile"
rm -f "$@"
plus:
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec bash remove-log.sh "$LOG" {} +
Note that both of these use >> to append because the script might be invoked more than once (though it probably won't be). The onus is on you to ensure that the log file is empty before you run the find command.
Note that I dropped the /* from the path argument for find; it wasn't really needed. You might want to add -type f to ensure that only files are removed. The + is a feature from the POSIX 2008 specification of find which makes find act rather like xargs without needing to explicitly use xargs.
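For illustration, with three old files a, b, and c, the difference is in how many times rm is invoked:
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec rm -f {} \;   # rm a; rm b; rm c
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec rm -f {} +    # rm a b c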
find "$LOG_ARCHIVE" -mtime +"$DAYS_TO_KEEP_LOGS" -exec sh -c 'echo "$1" | tee -a "$LOG"; rm -f "$1"' sh {} \;
Try and see if it works.
Related
I want to run a loop over all files of a particular extension in a directory:
for i in *.bam
do
...
done
However, if the command that I run inside the loop creates a temporary file of the same extension, the loop tries to process this new tmp file as well. This is unwanted. So, I thought the following would solve the problem: first list all the *.bam files in the directory, save that list to a variable, and then loop over this saved list:
list_bam=$(for i in *.bam; do echo $i; done)
for i in $list_bam
do
...
done
To my surprise, this runs into the same problem! Could someone please explain the logic behind this and how to fix it so that the loop only processes the pre-existing .bam files?
Instead of a loop you can use find and xargs
find . -maxdepth 1 -type f -name "*.bam" -print0 | \
xargs -0 -I{} bash -c 'echo "{}" > "{}.new.bam"'
or
find . -maxdepth 1 -type f -name "*.bam" -print0 | \
xargs -0 -I{} bash -c 'echo "$1" > "$1.new.bam"' -- {}
example:
$ touch a.bam b.bam
$ ls
a.bam b.bam
$ find . -maxdepth 1 -type f -name "*.bam" -print0 | \
xargs -0 -I{} bash -c 'echo "{}" > "{}.new.bam"'
$ ls
a.bam a.bam.new.bam b.bam b.bam.new.bam
You could perhaps make sure that your globbing expression *.bam is not re-interpreted later, by capturing the list up front with something like:
list_bam=$(ls *.bam)
...
...but, as noted by @glenn in the comments, this is a bad idea.
Something similar can be done more safely using a find ... -print0 | xargs -0 ... command template, as sketched below.
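For instance, a minimal sketch of that template (some_command stands in for whatever your loop body runs):
# snapshot the NUL-delimited list of pre-existing files once, then process it
find . -maxdepth 1 -type f -name '*.bam' -print0 |
    xargs -0 -I{} bash -c 'some_command "$1" > "$1.new.bam"' -- {}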
I would like to recursively go through all subdirectories and remove the oldest two PDFs in each subfolder named "bak":
Works:
find . -type d -name "bak" \
-exec bash -c "cd '{}' && pwd" \;
Does not work, as the double quotes are already in use:
find . -type d -name "bak" \
-exec bash -c "cd '{}' && rm "$(ls -t *.pdf | tail -2)"" \;
Any solution to the double quote conundrum?
In a double quoted string you can use backslashes to escape other double quotes, e.g.
find ... "rm \"\$(...)\""
If that is too convoluted use variables:
cmd='$(...)'
find ... "rm $cmd"
However, I think your find -exec has more problems than that.
Using {} inside the command string "cd '{}' ..." is risky. If there is a ' inside the file name, things will break and might execute unexpected commands.
$() will be expanded by bash before find even runs. So ls -t *.pdf | tail -2 will only be executed once in the top directory . instead of once for each found directory. rm will (try to) delete the same file for each found directory.
rm "$(ls -t *.pdf | tail -2)" will not work if ls lists more than one file. Because of the quotes both files would be listed in one argument. Therefore, rm would try to delete one file with the name first.pdf\nsecond.pdf.
I'd suggest
cmd='cd "$1" && ls -t *.pdf | tail -n2 | sed "s/./\\\\&/g" | xargs rm'
find . -type d -name bak -exec bash -c "$cmd" -- {} \;
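The sed stage is what makes this safe for names with spaces or quotes: it puts a backslash before every character, and xargs's default input parsing then strips those backslashes while taking the escaped characters literally. For example:
$ echo 'a b.pdf' | sed 's/./\\&/g'
\a\ \b\.\p\d\f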
You have a more fundamental problem; because you are using the weaker double quotes around the entire script, the $(...) command substitution will be interpreted by the shell which parses the find command, not by the bash shell you are starting, which will only receive a static string containing the result from the command substitution.
If you switch to single quotes around the script, you get most of it right; but that would still fail if the file name you find contains a double quote (just like your attempt would fail for file names with single quotes). The proper fix is to pass the matching files as command-line arguments to the bash subprocess.
But a better fix still is to use -execdir so that you don't have to pass the directory name to the subshell at all:
find . -type d -name "bak" \
-execdir bash -c 'ls -t *.pdf | tail -2 | xargs -r rm' \;
This could still fail in funny ways, because you are parsing the output of ls, which is inherently fragile.
You are explicitly asking for find -exec. Usually I would just combine find -exec with find -delete, but in your case only two files should be deleted; therefore a subshell is the only method. Socowi already gave a nice solution. However, if your file names contain neither tabs nor newlines, another workaround is a find | while read loop.
This will sort the files by mtime:
find . -type d -iname 'bak' | \
while read -r dir;
do
find "$dir" -maxdepth 1 -type f -iname '*.pdf' -printf "%T+\t%p\n" | \
sort | head -n2 | \
cut -f2- | \
while read -r file;
do
rm "$file";
done;
done;
The above find | while read loop as a one-liner:
find . -type d -iname 'bak' | while read -r dir; do find "$dir" -maxdepth 1 -type f -iname '*.pdf' -printf "%T+\t%p\n" | sort | head -n2 | cut -f2- | while read -r file; do rm "$file"; done; done;
A find | while read loop can also handle NUL-terminated file names. However, head cannot handle those, so I improved on the other answers to make this work with nontrivial file names (GNU + bash only).
Replace realpath with rm once you have checked that the right files are listed:
#!/bin/bash
rm_old () {
find "$1" -maxdepth 1 -type f -iname \*.$2 -printf "%T+\t%p\0" | sort -z | sed -zn 's,\S*\t\(.*\),\1,p' | grep -zim$3 \.$2$ | xargs -0r realpath
}
export -f rm_old
find -type d -iname bak -execdir bash -c 'rm_old "{}" pdf 2' \;
However, bash -c might still be exploitable; to make it more secure, let stat %N do the quoting:
#!/bin/bash
rm_old () {
local dir="$1"
# we don't like eval
# eval "dir=$dir"
# this works like eval
dir="${dir#?}"
dir="${dir%?}"
dir="${dir//"'$'\t''"/$'\011'}"
dir="${dir//"'$'\n''"/$'\012'}"
dir="${dir//$'\047'\\$'\047'$'\047'/$'\047'}"
find "$dir" -maxdepth 1 -type f -iname \*.$2 -printf '%T+\t%p\0' | sort -z | sed -zn 's,\S*\t\(.*\),\1,p' | grep -zim$3 \.$2$ | xargs -0r realpath
}
find -type d -iname bak -exec stat -c'%N' {} + | while read -r dir; do rm_old "$dir" pdf 2; done
I would like to rename several files picked by find in some directory, then use xargs and mv to rename the files, with parameter expansion. However, it did not work...
example:
mkdir test
touch abc.txt
touch def.txt
find . -type f -print0 | \
xargs -I {} -n 1 -0 mv {} "${{}/.txt/.tx}"
Result:
bad substitution
[1] 134 broken pipe find . -type f -print0
Working Solution:
for i in ./*.txt ; do mv "$i" "${i/.txt/.tx}" ; done
Although I finally got a way to fix the problem, I still want to know why the first find + xargs way doesn't work, since I don't think the second way is very general for similar tasks.
Thanks!
Remember that shell variable substitution happens before your command runs. So when you run:
find . -type f -print0 | \
xargs -I {} -n 1 -0 mv {} "${{}/.txt/.tx}"
The shell tries to expand that ${...} construct before xargs even runs... and since the contents of that expression aren't a valid shell variable reference, you get an error. A better solution would be to use the rename command:
find . -type f -print0 | \
xargs -I {} -0 rename .txt .tx {}
And since rename can operate on multiple files, you can simplify that to:
find . -type f -print0 | \
xargs -0 rename .txt .tx
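One caveat: there are two common rename implementations, and the syntax above matches the util-linux one. If your system ships the Perl-based rename instead, the equivalent would be something like:
find . -type f -print0 | \
    xargs -0 rename 's/\.txt$/.tx/'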
This script has taken me too long (!!) to compile, but I finally have a reasonably nice script which does what I want:
find "$#" -type d -print0 | while IFS= read -r -d $'\0' dir; do
find "$dir" -iname '*.flac' -maxdepth 1 ! -exec bash -c '
metaflac --list --block-type=VORBIS_COMMENT "$0" 2>/dev/null | grep -i "REPLAYGAIN_ALBUM_PEAK" &>/dev/null
exit $?
' {} ';' -exec bash -c '
echo Adding ReplayGain tags to "$0"/\*.flac...
metaflac --add-replay-gain "${@:1}"
' "$dir" {} '+'
done
The purpose is to search the file tree for directories containing FLAC files, test whether any are missing the REPLAYGAIN_ALBUM_PEAK tag, and scan all the files in that directory for ReplayGain if they are missing.
The big stumbling block is that all the FLAC files for a given album must be passed to metaflac as one command, otherwise metaflac doesn't know they're all one album. As you can see, I've achieved this using find ... -exec ... +.
What I'm wondering is if there's a more elegant way to do this. In particular, how can I skip the while loop? Surely this should be unnecessary, because find is already iterating over the directories?
You can probably use xargs to achieve it.
For example, if you are looking for the text foo in all your files, you'll have something like
find . -type f | xargs grep foo
xargs passes each result from the left-hand expression (find) to the command invoked on the right.
Then, if no command exists to achieve what you want to do, you can always create a function and pass it to xargs, as sketched below.
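A minimal sketch of that function approach (my_filter is a made-up name; exporting functions requires bash):
my_filter() { grep -l foo "$@"; }   # list files containing foo
export -f my_filter                 # make it visible to the bash that xargs starts
find . -type f -print0 | xargs -0 bash -c 'my_filter "$@"' bash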
I can't comment on the flac commands themselves, but as for the rest:
find . -name '*.flac' \
! -exec bash -c 'metaflac --list --block-type=VORBIS_COMMENT "$1" | grep -qi "REPLAYGAIN_ALBUM_PEAK"' -- {} \; \
-execdir bash -c 'metaflac --add-replay-gain *.flac' \;
You just find the relevant files, and then operate on the directory each one is in.
find . -name "filename including space" -print0 | xargs -0 ls -aldF > log.txt
find . -name "filename including space" -print0 | xargs -0 rm -rdf
Is it possible to combine these two commands into one so that only 1 find will be done instead of 2?
I know there may be ways to do it with xargs -I, but those may lead to errors when processing filenames that include spaces. Any guidance is much appreciated.
find . -name "filename including space" -print0 |
xargs -0 -I '{}' sh -c 'ls -aldF {} >> log.txt; rm -rdf {}'
Ran across this just now, and we can invoke the shell less often:
find . -name "filename including space" -print0 |
xargs -0 sh -c '
for file; do
ls -aldF "$file" >> log.txt
rm -rdf "$file"
done
' sh
The trailing "sh" becomes $0 in the shell. xargs provides the files (returrned from find) as command line parameters to the shell: we iterate over them with the for loop.
If you're just wanting to avoid doing the find multiple times, you could do a tee right after the find, saving the find output to a file, then executing the lines as:
find . -name "filename including space" -print0 | tee my_teed_file | xargs -0 ls -aldF > log.txt
cat my_teed_file | xargs -0 rm -rdf
Another way to accomplish this same thing (if indeed it's what you're wanting to accomplish), is to store the output of the find in a variable (supposing it's not TB of data):
founddata=`find . -name "filename including space" -print0`
echo "$founddata" | xargs -0 ls -aldF > log.txt
echo "$founddata" | xargs -0 rm -rdf
I believe the answers so far have given the right ways to solve this problem. I tried Jonathan's two solutions and Glenn's approach, all of which worked great on my Mac OS X. mouviciel's method did not work on my OS, perhaps for configuration reasons; I think it is similar to Jonathan's second method (I may be wrong).
As mentioned in the comments on Glenn's method, a little tweak is needed. So here is the command I tried, which worked perfectly, FYI:
find . -name "filename including space" -print0 |
xargs -0 -I '{}' sh -c 'ls -aldF {} | tee -a log.txt ; rm -rdf {}'
Or better as suggested by Glenn:
find . -name "filename including space" -print0 |
xargs -0 -I '{}' sh -c 'ls -aldF {} >> log.txt ; rm -rdf {}'
As long as you do not have newlines in your filenames, you do not need -print0 for GNU Parallel:
find . -name "My brother's 12\" records" | parallel ls {}\; rm -rdf {} >log.txt
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
Just a variation of the xargs approach without that horrible -print0 and xargs -0, this is how I would do it:
ls -1 *.txt | xargs --delimiter "\n" --max-args 1 --replace={} sh -c 'cat {}; echo "\n"'
Footnotes:
Yes, I know newlines can appear in filenames, but who in their right mind would do that?
There are short options for xargs but for the reader's understanding I've used the long ones.
I would use ls -1 when I want non-recursive behavior rather than find -maxdepth 1 -iname "*.txt" which is a bit more verbose.
You can execute multiple commands after find using for instead of xargs:
IFS=$'\n'
for F in `find . -name "filename including space"`
do
ls -aldF "$F" >> log.txt
rm -rdf "$F"
done
The IFS defines the Internal Field Separator, which defaults to <space><tab><newline>. If your filenames may contain spaces, it is better to redefine it as above.
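A quick demonstration of what the IFS change does, assuming a name with a space in it:
$ f='two words.txt'
$ for x in $f; do echo "[$x]"; done
[two]
[words.txt]
$ IFS=$'\n'
$ for x in $f; do echo "[$x]"; done
[two words.txt]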
I'm late to the party, but there is one more solution that wasn't covered here: user-defined functions. Putting multiple instructions on one line is unwieldy, and can be hard to read/maintain. The for loop above avoids that, but there is the possibility of exceeding the command line length.
Here's another way (untested).
function processFiles {
ls -aldF "$#"
rm -rdf "$#"
}
export -f processFiles
find . -name "filename including space" -print0 \
| xargs -0 bash -c 'processFiles "$@"' dummyArg > log.txt
This is pretty straightforward except for the "dummyArg" which gave me plenty of grief. When running bash in this way, the arguments are read into
"$0" "$1" "$2" ....
instead of the expected
"$1" "$2" "$3" ....
Since processFiles() expects its first file name to arrive in "$1", we have to insert a dummy value into "$0".
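You can verify the numbering (and what happens with no arguments at all) directly:
$ bash -c 'echo "zero=$0 one=$1"' A B
zero=A one=B
$ bash -c 'echo "zero=$0"'
zero=bash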
Footnotes:
I am using some elements of bash syntax (e.g. "export -f"), but I believe this will adapt to other shells.
The first time I tried this, I didn't add a dummy argument. Instead I added "$0" to the argument lists inside my function (e.g. ls -aldF "$0" "$@"). Bad idea.
Aside from stylistic issues, it breaks when the "find" command returns nothing: in that case, $0 is set to "bash". Using the dummy argument instead avoids all of this.
Another solution:
find . -name "filename including space" -print0 \
| xargs -0 -I FOUND echo "$(ls -aldF FOUND > log.txt ; rm -rdf FOUND)"