Hello stackoverflow community,
I'm facing a problem with removing files that contain spaces in the filename. I have this part of code, which is responsible for deleting files that we get from a directory:
for f in $(find $REP -type f -name "$Filtre" -mtime +${DelAvtPurge})
do
rm -f $f
done
I know that single or double quotes work for deleting files with spaces; they work for me when I try them on the command line, but when I put them around $f in the script it doesn't work at all.
Could anybody help me to find a solution for this ?
GNU find has -delete for that:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -delete
With any other find implementation, you can use bulk-exec:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -exec rm -f {} +
For a dry run, drop -delete from the first command and look at the list of files that would be deleted; for the second, insert echo before rm.
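For example, the dry-run variants could look like this (a sketch, using the same variables as above):
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge"                         # dry run: just list the files
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -exec echo rm -f {} +   # dry run: print the rm commands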
The other answer has shown how to do this properly. But fundamentally the issue in your command is the lack of quoting, due to the way the shell expands variables:
rm -f $f
needs to become
rm -f "$f"
In fact, always quoting your variables is safe and generally a good idea.
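A quick illustration with a hypothetical file name:
f='my file.txt'
rm -f $f      # unquoted: word splitting makes rm see two arguments, "my" and "file.txt"
rm -f "$f"    # quoted: rm sees the single argument "my file.txt"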
However, this will not fix your code. Now filenames with spaces will work, but filenames with other valid characters (to wit, newlines) won’t. Try it:
touch foo$'\n'bar
for f in $(find . -maxdepth 1 -name foo\*); do echo "rm -f $f"; done
Output:
rm -f ./foo
rm -f bar
Clearly that won't do. In fact, you mustn't parse the output of find, for this reason. The only way of making this safe, apart from the solution via find -exec, is to use the -print0 option:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -print0 \
| while IFS= read -r -d '' f; do
rm -f "$f"
done
Using -print0 instead of (implicit) -print causes find to delimit hits by the null character instead of newline. Correspondingly, IFS= read -r -d '' reads a null-character delimited input string, which we do in a loop using while (the -r option prevents read from interpreting backslashes as escape sequences).
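As a quick check, here is the earlier foo$'\n'bar experiment re-run with the null-delimited loop (a sketch; it only prints what would be removed):
find . -maxdepth 1 -name 'foo*' -print0 \
| while IFS= read -r -d '' f; do printf 'rm -f %q\n' "$f"; done
# prints a single, correctly quoted entry instead of the two broken lines seen earlier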
Related
Is there any way to get the basename in the command find?
What I don't need:
find /dir1 -type f -printf "%f\n"
find /dir1 -type f -exec basename {} \;
Why, you may ask? Because I need to continue using the found file. I basically want something like this:
find . -type f -exec find /home -type l -name "*{}*" \;
And it uses ./file1, not file1, as the argument for -name.
Thanks for your help :)
If you've got Bash version 4.3 or later, try this Shellcheck-clean pure Bash code:
#! /bin/bash -p
shopt -s dotglob globstar nullglob
for path in ./**; do
[[ -L $path ]] && continue
[[ -f $path ]] || continue
base=${path##*/}
for path2 in /home/**/*"$base"*; do
[[ -L $path2 ]] && printf '%s\n' "$path2"
done
done
shopt -s ... enables some Bash settings that are required by the code:
dotglob enables globs to match files and directories whose names begin with a dot (.). find shows such files by default.
globstar enables the use of ** to match paths recursively through directory trees. globstar was introduced in Bash 4.0, but it was dangerous to use before Bash 4.3 (2014) because it followed symlinks when looking for matches.
nullglob makes globs expand to nothing when nothing matches (otherwise they expand to the glob pattern itself, which is almost never useful in programs).
See Removing part of a string (BashFAQ/100 (How do I do string manipulation in bash?)) for an explanation of ${path##*/}. That always works, even in some rare cases where $(basename "$path") doesn't.
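A small illustration with a hypothetical path:
path='/home/user/some dir/archive.tar.gz'
printf '%s\n' "${path##*/}"    # archive.tar.gz -- everything up to the last "/" is removed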
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I used printf instead of echo to output the found paths.
This solution works correctly if you've got files that contain pattern characters (?, *, [, ], \) in their names.
Spawn a shell and make the second call to find from there
find /dir1 -type f -exec sh -c '
for p; do
find /dir2 -type l -name "*${p##*/}*"
done' sh {} +
If your files may contain special characters in their names (like [, ?, etc.), you may want to escape them like this to avoid false positives
find /dir1 -type f -exec sh -c '
for p; do
esc=$(printf "%sx\n" "${p##*/}" | sed "s/[?*[\]/\\\&/g")
esc=${esc%x}
find /dir2 -type l -name "*$esc*"
done' sh {} +
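To see what the escaping step produces on its own, here is a small check with a hypothetical name:
p='weird[name]*.txt'
esc=$(printf "%sx\n" "${p##*/}" | sed "s/[?*[\]/\\\&/g")
esc=${esc%x}
printf "%s\n" "$esc"    # weird\[name]\*.txt -- now safe to embed in the -name pattern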
You'll have to forward it to another evaluator. There is no way to do that in find.
find . -type f -printf '%f\0' |
xargs -r0I{} find /home -type l -name '*{}*'
This answers your question about merging the functionality of %f and -exec find, and it is based on your example. However, your example injects raw filenames as -name patterns, so avoid that and look at the other solutions instead.
Simply spawn a bash shell:
find /dir1 -type f -exec bash -c '
base=$(basename "$1")
echo "$base"
do_something_else "$base"
' bash {} \;
$1 in the bash part is each file filtered by find.
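If you would rather not spawn one bash per file, a batched variant along the same lines is possible (a sketch; it only prints the basenames):
find /dir1 -type f -exec bash -c '
for f; do
base=$(basename "$f")
printf "%s\n" "$base"
done
' bash {} +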
What is the shell command for renaming all files in a directory and sub-directory (recursively)?
I would like to add an underscore to all the files ending with .scss, renaming filename.scss to _filename.scss, in all the directories and sub-directories.
I have found answers relating to this, but most if not all require you to know the filename itself. I do not want this, because the filenames differ, there are too many to know by heart or type manually, and some of them are deeply nested in directories.
Edit: I was under the impression that the bash -c bit was somehow necessary for multiple expansion of the found element; anubhava's answer proved me wrong. I am leaving that bit in the answer for now as it worked for the OP.
find . -type f -name '*scss' -exec bash -c 'mv "$1" "_$1"' -- {} \;
find . -- find in current directory (recursively)
-type f -- files
-name '*scss' -- matching the pattern *scss (quoted so the shell doesn't expand it)
-exec -- execute for each element found
bash -c '...' -- execute command in a subshell
-- -- placeholder that becomes $0 of the bash -c command (and ends option parsing)
{} -- expands to the name of the element found (which becomes $1 of the bash -c command)
\; -- end the -exec command
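To preview the result before actually renaming anything, a dry run (sketch) prepends echo to the mv:
find . -type f -name '*scss' -exec bash -c 'echo mv "$1" "_$1"' -- {} \;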
You can use -execdir option here:
find ./src/components -iname "*.scss" -execdir mv {} _{} \;
You are close to a solution:
find ./src/components -iname "*.scss" -print0 | xargs -0 -n 1 -I{} mv {} _{}
In this approach, the "loop" is executed by xargs. I prefer this solution over using -exec in find. The syntax is clearer to me.
Also, if you want to repeat the command and avoid double-adding the underscore to the already processed files, use a regexp to get only the files not yet processed:
find ./src/components -iregex ".*/[^_][^/]*\.scss" -print0 | xargs -0 -n 1 -I{} mv {} _{}
By adding the -print0/-0 options, you also avoid problems with whitespaces.
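A quick way to check that the pattern really skips already-renamed files (hypothetical sample files, assuming the same directory layout):
touch ./src/components/_already.scss ./src/components/fresh.scss   # sample files for the check
find ./src/components -iregex ".*/[^_][^/]*\.scss"                 # fresh.scss is listed, _already.scss is not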
#!/bin/sh
EXTENSION='.scss'
cd YOURDIR
find . -type f | while IFS= read -r LINE; do
FILE="$( basename "$LINE" )"
case "$LINE" in
*"$EXTENSION")
DIRNAME="$( dirname "$LINE" )"
mv -v "$DIRNAME/$FILE" "$DIRNAME/_$FILE"
;;
esac
done
I am trying to delete old files. I first need to store the files in a variable and delete them one by one. My code works for normal files, but if a filename has white space in it, it can't delete that file and throws an error.
Code:
OLD_FILES=`find . -name "*.txt" -type f -newermt 2000-01-01 ! -newermt 2017-12-12`
for i in $OLD_FILES
do
rm $i
done
I can't use
OLD_FILES=`find . -name "*.$FILE_TYPE" -type f -newermt $START_DATE ! -newermt $DATE -delete `
because find and delete need to be separate functions, and to avoid code repetition.
Filenames on UNIX may contain more or less any character, especially characters which are used by the shell to split input into words, like whitespace and newlines. If you use a for ... in loop, word splitting happens, and that's what you are seeing.
I recommend using the -print0 option of find, which separates files by null bytes, and then using while read with the null byte as the delimiter to read them in the shell one by one:
find ... -print0 | while IFS= read -r -d $'\0' file ; do
do_something "${file}"
rm "${file}"
done
Have a look at the following link
https://unix.stackexchange.com/questions/208140/deleting-files-with-spaces-in-their-names
There are many solutions you might be able to apply to your problem:
like deleting the files via their inum
using a regex with space to fetch the file: find . -regex '.* .*' -delete
using xargs with find . -type f -print0 (see the sketch after this list)
making a function to escape the spaces in your filenames with a backslash (\) before running the rm command
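For instance, the xargs route mentioned above might look like this (a sketch, assuming GNU find and xargs, reusing the date filters from the question):
find . -name "*.txt" -type f -newermt 2000-01-01 ! -newermt 2017-12-12 -print0 \
| xargs -0 -r rm --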
I see that this question is getting popular.
I answered my own question below.
What Inian says is correct, and it helped me to analyze my source code better.
My problem was in the FIND and not in the RM. My answer gives a block of code, which I am currently using, to avoid problems when FIND finds nothing but still would pass arguments to RM, causing the error mentioned above.
OLD QUESTION BELOW
I'm writing many different versions of the same command.
All of them are executed, but with an error/info message:
rm: missing operand
Try 'rm --help' for more information.
These are the commands I'm using:
#!/bin/bash
BDIR=/home/user/backup
find ${BDIR} -type d -mtime +180 -print -exec rm -rf {} \;
find ${BDIR} -type d -mtime +180 -print -exec rm -rf {} +
find "$BDIR" -type d -mtime +180 -print -exec rm -rf {} \;
find "$BDIR" -depth -type d -mtime +180 -print -exec rm -rf {} \;
find ${BDIR} -depth -type d -mtime +180 -print -exec rm -rf {} +
find $BDIR -type d -mtime +180 -print0 | xargs -0 rm -rf
DEL=$(find $BDIR -type d -mtime +180 -print)
rm -rf $DEL
I'm sure all of them are correct (because they all do their job), and if I run them manually I do not get that message back, but while in a .sh script I do.
EDIT: since I have many of these rm's, the problem could be somewhere else. I'm checking all of them. All of the above commands work, but the best answer is the one marked ;)
The problem is that when using find/grep along with xargs, you need to make sure the piped command is run only if the previous command produces output. In the above case, if the find command does not produce any search results, the rm command is invoked with an empty argument list.
The man page of xargs (BSD) says:
-r      Compatibility with GNU xargs. The GNU version of xargs runs the
        utility argument at least once, even if xargs input is empty, and
        it supports a -r option to inhibit this behavior. The FreeBSD
        version of xargs does not run the utility argument on empty
        input, but it supports the -r option for command-line
        compatibility with GNU xargs, but the -r option does nothing in
        the FreeBSD version of xargs.
Moreover, you don't need to try all the commands you pasted; the simple one below will suit your need.
Add the -r argument to xargs like
find "$BDIR" -type d -mtime +180 -print0 | xargs -0 -r rm -rf
The -f option of rm suppresses the rm: missing operand error:
-f, --force
ignore nonexistent files and arguments, never prompt
After some research, the command I'm comfortable using is:
HOME=/home/user
FDEL=$HOME/foldersToDelete
BDIR=/backup/my_old_folders
FLOG=/var/log/delete_old_backup.log
find ${BDIR} -mindepth 1 -daystart -type d -mtime +180 -printf "%f\n" > ${FDEL}
if [[ $? -eq 0 && $(wc -l < ${FDEL}) -gt 0 ]]; then
cd ${BDIR}
xargs -d '\n' -a ${FDEL} rm -rf
LOG=" - Folders older than 180 were deleted"
else
LOG=" - There aren't folders older than 180 days to delete"
fi
echo ${LOG} >> ${FLOG}
Why? I search for all the old folders I want to delete and print them all into a file, regardless of whether their names contain spaces. If the file is bigger than 0 bytes, it means there are folders I no longer want.
If your find fails with 'rm: missing operand', the cause probably isn't to be found in the rm but rather in the find itself.
A good way of removing the files using find is the one I felt like sharing with you.
I would like to replace :2f with a - in all file/directory names, and for some reason the one-liner below is not working. Is there any simpler way to achieve this?
Directory name example:
AN :2f EXAMPLE
Command:
for i in $(find /tmp/ \( -iname ".*" -prune -o -iname "*:*" -print \)); do { mv $i $(echo $i | sed 's/\:2f/\-/pg'); }; done
You don't have to parse the output of find:
find . -depth -name '*:2f*' -execdir bash -c 'echo mv "$0" "${0//:2f/-}"' {} \;
We're using -execdir so that the command is executed from within the directory containing the found file. We're also using -depth so that the content of a directory is considered before the directory itself. All this to avoid problems if the :2f string appears in a directory name.
As is, this command is harmless and won't perform any renaming; it'll only show on the terminal what's going to be performed. Remove echo if you're happy with what you see.
This assumes you want to perform the renaming for all files and folders (recursively) in the current directory.
-execdir might not be available for your version of find, though.
If your find doesn't support -execdir, you can get along without as so:
find . -depth -name '*:2f*' -exec bash -c 'dn=${0%/*} bn=${0##*/}; echo mv "$dn/$bn" "$dn/${bn//:2f/-}"' {} \;
Here, the trick is to separate the directory part from the filename part—that's what we store in dn (dirname) and bn (basename)—and then only change the :2f in the filename.
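For a hypothetical path, the split and the replacement behave like this:
f='./some dir/AN :2f EXAMPLE'
dn=${f%/*} bn=${f##*/}
printf '%s\n' "$dn" "$bn" "${bn//:2f/-}"
# ./some dir
# AN :2f EXAMPLE
# AN - EXAMPLE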
Since you have filenames containing spaces, for will split these up into separate arguments when iterating. Pipe to a while loop instead:
find /tmp/ \( -iname ".*" -prune -o -iname "*:*" -print \) | while IFS= read -r i; do
mv "$i" "$(echo "$i" | sed 's/:2f/-/g')"
done
Also quote all the variables and command substitutions.
This will work as long as you don't have any filenames containing newline.