I am trying to delete old files. I first need to store the files in a variable and then delete them one by one. My code works for normal files, but if a filename has whitespace in it, the script can't delete that file and throws an error.
Code:
OLD_FILES=`find . -name "*.txt" -type f -newermt 2000-01-01 ! -newermt 2017-12-12`
for i in $OLD_FILES
do
rm $i
done
I can't use
OLD_FILES=`find . -name "*.$FILE_TYPE" -type f -newermt $START_DATE ! -newermt $DATE -delete `
because find and delete need to be separate functions, and I want to avoid code repetition.
Filenames on UNIX may contain more or less any character, including characters the shell uses to split input into words, such as whitespace and newlines. If you use a for ... in loop, word splitting happens, and that is what you are seeing.
I recommend using the -print0 option of find, which separates the filenames with null bytes, and then reading them in the shell one by one with while read, using the null byte as the delimiter:
find ... -print0 | while IFS= read -r -d '' file; do
do_something "${file}"
rm "${file}"
done
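Applied to the command from the question, a sketch that keeps find and rm as separate steps, as required:
find . -name "*.txt" -type f -newermt 2000-01-01 ! -newermt 2017-12-12 -print0 |
while IFS= read -r -d '' file; do
    rm "${file}"
done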
Have a look at the following link
https://unix.stackexchange.com/questions/208140/deleting-files-with-spaces-in-their-names
There are many solutions you might be able to apply to your problem:
deleting the files via their inode numbers (find's -inum test)
using a regex with a space to match the files: find . -regex '.* .*' -delete
using xargs with find . -type f -print0 (see the sketch below)
writing a function that escapes all spaces in your filenames with \ before running the rm command
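A minimal sketch of that xargs route, assuming the same *.txt filter as in the question (the -- guards against filenames that start with a dash):
# Null-delimit the output so whitespace and newlines survive the pipe
find . -name "*.txt" -type f -print0 | xargs -0 rm --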
Related
I want to have a single-line command to find files recursively and be able to exclude several file types by their file extension.
This is what I came up with, and it works:
find "$path" -type f \( ! -iname "*.txt" ! -iname ".*" ! -iname "*.swf" \)
but my problem is:
If I want to exclude more files the command gets really long.
I would prefer to put all excluded file endings in one variable, to have something like this: exclude="(*.txt|*.swf|.*)"; find "$path" -type f \( ! -iname "$exclude" \) (I know this code doesn't work, but it illustrates what I am trying to get.)
As said above, it drives me crazy that I can't find a short single-line command that is POSIX compliant, finds files recursively, and can exclude files by their file extension.
The nearest solution I found is https://stackoverflow.com/a/22558474 but in that solution a file is used to store the extensions to be excluded, so it's not a single-line command.
Thanks in advance!
Assuming you don't have any newlines in filenames, you can pipe the output of find through grep to filter out unwanted ones using a regular expression (the first alternative drops hidden files, i.e. basenames starting with a dot; the second drops the unwanted extensions):
find "$path" -type f | grep -Ev '/\.[^/]*$|\.(txt|swf)$'
I'm building a script that sends the find command output to a temp file, and from there I use read to iterate through all the paths and print two fields into a CSV file: one field for the name of the file and the other for the complete path.
find -type f \( -iname \*.mp4 -o -iname \*.mkv \) >> $tempfile
while read -r file; do
printf '%s\n' ${file##*/} ${file} | paste -sd ' ' >> $csvfile
done < $tempfile
rm $tempfile
The problem is in the field for the names, ${file##*/}. Some files have spaces in their names, and this causes them not to be printed correctly in the CSV file. I know I could use ${file//[[:blank:]]/} to remove the spaces, but I also need to preserve ${file##*/}, since that parameter expansion cuts everything but the name itself (which I print in the first field of the CSV file).
I was searching for a way to somehow join the two parameter expansions ${file##*/} and ${file//[[:blank:]]/}, but I didn't find anything related. Is it possible to solve this using only parameter expansion? If not, what other solutions could fix this? Maybe regex?
Edit: I will also need to add a 3rd field whose value will depend on a variable.
If you're using GNU find (and possibly other implementations), it can be simplified a lot:
find dir/ -type f \( -iname "*.mp4" -o -iname "*.mkv" \) \
-printf '"%f","'"${newvar//%/%%}"'","%p"\n' > "$csvfile"
I put quotes around the fields of the CSV output to handle filenames that contain commas, and any % characters in $newvar are doubled so that find's -printf doesn't interpret them. It'll still have an issue with filenames that contain double quotes, though.
If using some other version of find... well, there's no need for a temporary file. Just pipe the output directly to your while loop:
find test1/ -type f \( -iname "*.mp4" -o -iname "*.mkv" \) -print0 |
while IFS= read -d '' -r file; do
name=$(basename "$file")
printf '"%s","%s","%s"\n' "${name//\"/\"\"}" "$newvar" "${file//\"/\"\"}"
done > "$csvfile"
This one escapes double quotes appearing in the filename (by doubling them, as CSV expects), so if that's the case with your files, prefer it.
Hello stackoverflow community,
I'm facing a problem with removing files that contain spaces in the filename. I have this part of the code, which is responsible for deleting files that we get from a directory:
for f in $(find $REP -type f -name "$Filtre" -mtime +${DelAvtPurge})
do
rm -f $f
done
I know that single or double quotes work for deleting files with spaces; they work for me when I try them on the command line, but when I put them around $f in the script it doesn't work at all.
Could anybody help me find a solution for this?
GNU find has -delete for that:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -delete
With any other find implementation, you can use bulk-exec:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -exec rm -f {} +
For a dry run, drop -delete from the first command and see the list of files that would be deleted; for the second, insert echo before rm.
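For example, the dry-run variants of the two commands above:
# List what would be deleted (find prints matches by default)
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge"
# Print the rm invocations instead of running them
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -exec echo rm -f {} +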
The other answer has shown how to do this properly. But fundamentally the issue in your command is the lack of quoting, due to the way the shell expands variables:
rm -f $f
needs to become
rm -f "$f"
In fact, always quoting your variables is safe and generally a good idea.
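A quick illustration of the difference, using a hypothetical filename:
touch 'a b'    # one file, named 'a b'
f='a b'
rm -f $f       # word splitting: runs rm -f a b (two arguments)
rm -f "$f"     # quoted: runs rm -f 'a b' (one argument)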
However, this will not fix your code. Now filenames with spaces will work, but filenames with other valid characters (to wit, newlines) won’t. Try it:
touch foo$'\n'bar
for f in $(find . -maxdepth 1 -name foo\*); do echo "rm -f $f"; done
Output:
rm -f ./foo
rm -f bar
Clearly that won’t do. In fact, you mustn’t parse the output of find, for this reason. The only way of making this safe, apart from the solution via find -exec is to use the -print0 option:
find "$REP" -type f -name "$Filtre" -mtime +"$DelAvtPurge" -print0 \
| IFS= while read -r -d '' f; do
rm -f "$f"
done
Using -print0 instead of the (implicit) -print causes find to delimit matches with the null character instead of newlines. Correspondingly, IFS= read -r -d '' reads null-delimited input strings, which we do in a loop using while (the -r option prevents read from interpreting backslashes as escape sequences, and IFS= prevents trimming of leading and trailing whitespace).
I'm trying to list all files, except hidden ones, in only the subdirectories of a folder in bash by doing:
$ find ./public -mindepth 3 -type f -not -path '*/\.*'
That returns:
./public/mobile/images/image1.jpg
./public/mobile/images/image2.png
./public/mobile/images/image3.jpg
./public/mobile/javascripts/java1.js
./public/mobile/javascripts/java2.js
./public/mobile/javascripts/java3.js
./public/mobile/stylesheets/main.css
./public/mobile/views/doc1.html
./public/mobile/views/doc2.html
./public/mobile/views/doc3.html
How can I ignore the file path and show only the file name with the extension?
Thank you :)
Use the -printf action of find instead of the default -print.
find ./public -mindepth 3 -type f -not -path '*/\.*' -printf %f\\n
Note the usage of \\n: you need \n to add a newline after each file name, but you have to add another backslash as an escape (or use quotes) to prevent the shell from interpreting \n.
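For example, the quoted form of the same command:
find ./public -mindepth 3 -type f -not -path '*/\.*' -printf '%f\n'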
If you are using bash 4 or later, you can skip find and use a file pattern (glob) instead.
shopt -s globstar # For **
printf "%s\n" public/*/*/**/*.*
If you expect some files to have no extension, you'll need to use a loop and filter out non-file matches manually.
for f in */*/*/**/*; do
[[ -f $f ]] || continue
printf "%s\n" "${f##*/}"
done
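Combining the two ideas, a sketch that prints only the basenames of files that have an extension (assumes bash 4+ with globstar enabled as above):
for f in public/*/*/**/*.*; do
    [[ -f $f ]] && printf '%s\n' "${f##*/}"
done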
So basically, I have a folder with a bunch of subfolders, all with over 100 files in them. I want to take all of the mp3 files (the extension is just an example, since I'll have to do this with jpg, etc.) and move them to a new folder in the original directory. So the file structure looks like this:
/.../dir/recup1/file1.mp3
/.../dir/recup2/file2.mp3
... etc.
and I want it to look like this:
/.../dir/music/file1.mp3
/.../dir/music/file2.mp3
... etc.
I figured I would use a bash script that looked along these lines:
#!/bin/bash
STR=`find ./ -type f -name \*.mp3`
FILES=(echo $STR | tr ".mp3 " "\n")
for x in $FILES
do
echo "> [$x]"
done
I just have it echo for now, but eventually I want to use mv to move each file to the correct folder. Obviously this doesn't work, though, because tr treats each character of its first argument as a separate delimiter, so if you guys have a better idea I'd appreciate it.
(FYI, I'm running netbook Ubuntu, so if there's a GUI way akin to Windows' search, I would not be against using it)
If the music folder exists, then the following should work:
find /path/to/search -type f -iname "*.mp3" -exec mv {} /path/to/music \;
An -exec command must be terminated with a ; (so you usually need to type \; or ';' to avoid interpretation by the shell) or a +. The difference is that with ;, the command is called once per file; with +, it is called as few times as possible (usually once, but there is a maximum length for a command line, so it might be split up), with all the filenames appended.
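For example, to use the + form with mv, you need a way to name the target directory before the filenames; GNU mv's -t option does that (it's a GNU coreutils extension, so on other systems stick with the \; form):
# Move all matches using as few mv invocations as possible
find /some/dir -type f -iname '*.mp3' -exec mv -t /where/to/move/ {} +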
You can do it like this:
find /some/dir -type f -iname '*.mp3' -exec mv \{\} /where/to/move/ \;
The \{\} part will be replaced by the found file name/path. The \; part marks the end of the -exec arguments; it can't be left out.
If you want to print what was found, just add the -print action before -exec:
find /some/dir -type f -iname '*.mp3' -print -exec mv \{\} /where/to/move/ \;
HTH