How to delete all files in a dir except ones with a certain pattern in their name? - bash

I have a lot of kernel .deb files from my custom kernels. I would like to write a bash script that deletes all the old files except the ones associated with the currently installed kernel version. My script:
#!/bin/bash
version='uname -r'
$version
dir=~/Installed-kernels
ls | grep -v '$version*' | xargs rm
Unfortunately, this deletes all files in the dir.
How can I get the currently installed kernel version and use it as a parameter? Each .deb I want to keep contains the kernel version (5.18.8) but has other strings in its name (linux-headers-5.18.8_5.18.8_amd64.deb).
Edit: I am only deleting .deb files inside the noted directory. The current list of file names in the tree are
linux-headers-5.18.8-lz-xan1_5.18.8-lz-1_amd64.deb
linux-libc-dev_5.18.8-lz-1_amd64.deb
linux-image-5.18.8-lz-xan1_5.18.8-lz-1_amd64.deb

This can be done as a one-liner, though I've preserved your variables:
#!/bin/bash
version="$(uname -r)"
dir="$HOME/Installed-kernels"
find "$dir" -maxdepth 1 -type f -not -name "*$version*" -print0 |xargs -0 rm
To set a variable to the output of a command, you need either $(…) or `…`, ideally wrapped in double-quotes to preserve spacing. A tilde isn't always interpreted correctly when passed through variables, so I expanded that out to $HOME.
The find command is much safer to parse than the output of ls, plus it lets you better filter things. In this case, -maxdepth 1 will look at just that directory (no recursion), -type f seeks only files, and -not -name "*$version*" removes paths or filenames that match the kernel version (which is a glob, not a regex—you'd otherwise have to escape the dots). Also note those quotes; we want find to see the asterisks, and without the quotes, the shell will expand the glob prematurely. The -print0 and corresponding -0 ensure that you preserve spacing by delimiting entries with null characters.
You can remove the prompts regarding read-only files with rm -f.
If you also want to delete directories, remove the -type f part and add -r to the end of that final line.
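If you want to preview what would be removed before committing, here's a minimal dry-run sketch: identical to the command above, but letting find print the matches instead of piping them to rm:
#!/bin/bash
# Dry run: prints what WOULD be deleted, removes nothing
version="$(uname -r)"
dir="$HOME/Installed-kernels"
find "$dir" -maxdepth 1 -type f -not -name "*$version*" -print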

Related

Is this shell command to delete all but last X directories safe?

I've seen a lot of warnings against the dangers of filenames with funny characters wreaking havoc in shell scripts.
I've scoured SO and seen dozens of variants of xargs and -exec rm -rf {} \;, and of "don't use ls for scripting", and I've come up with what I think is "safe" to run.
find /path/to/dir -mindepth 1 -maxdepth 1 -type d -print0 | sort -z | head -z -n -10 | xargs -r0 rm -rf
I've got a directory full of sub-directores in this format:
# find /srv/mywebsite/releases -mindepth 1 -maxdepth 1 -type d | sort
/srv/mywebsite/releases/2017-01-01T01:43:23Z
/srv/mywebsite/releases/2017-01-01T02:09:44Z
/srv/mywebsite/releases/2017-01-01T02:20:06Z
...
/srv/mywebsite/releases/2017-04-22T01:34:45Z
/srv/mywebsite/releases/2017-04-30T03:24:19Z
/srv/mywebsite/releases/2017-05-02T01:48:39Z
I want to delete all but the last 10 of them, sorted by the date in the directory name, not the directory mod/create-time. This is just a precaution in case one of the dirs gets touched and mtime/ctime doesn't match.
I think my shell command above should do exactly that, but I just want to double check that it won't blow up my server if one of the dirs ever contains a * or . or something.
This is safe, in that:
No shell evaluation whatsoever is run on the names. This specifically includes glob expansion, so a name containing a * will not result in additional rm arguments.
Since all names are prefixed with /path/to/dir, we don't need to worry about leading dashes being interpreted as options. (In a scenario where you did have this concern, xargs -r0 rm -rf -- would be appropriate; per POSIX utility syntax guideline #10, passing the string -- ensures that all subsequent arguments are parsed as positional).
Since all names are separated with NULs, and NULs can't exist in names, we can't have a single name result in multiple arguments to rm. (Poorly-written scripts often make a similar assumption about newlines, but that assumption is unfounded).
Inasmuch as you're depending on the names representing UTC timestamps in a specific format (and on new names continuing to match that format so they can be appropriately compared against old names), you might want to add an appropriate filter, making the full command something like:
find /path/to/dir -mindepth 1 -maxdepth 1 -type d \
-regextype posix-extended \
-regex '.*/[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}T[[:digit:]]{2}:[[:digit:]]{2}:[[:digit:]]{2}Z$' \
-print0 | sort -z | head -z -n -10 | xargs -r0 rm -rf --
None of this is particularly portable -- both the original code and the above suggestion require non-POSIX extensions to find, sort, head and xargs; and the naming convention itself wouldn't be allowed on Windows filesystems (where : is reserved) -- but if you're running a modern GNU toolchain on a UNIXy platform, this looks good to me.
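Before running it for real, you can swap the final rm -rf for a printer to confirm exactly which directories fall outside the keep-window; a minimal dry-run sketch:
find /path/to/dir -mindepth 1 -maxdepth 1 -type d -print0 |
    sort -z | head -z -n -10 |
    xargs -r0 printf 'would remove: %s\n'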

Go into every subdirectory and mass rename files by stripping leading characters

From the current directory I have multiple sub directories:
subdir1/
001myfile001A.txt
002myfile002A.txt
subdir2/
001myfile001B.txt
002myfile002B.txt
where I want to strip every character from the filenames before myfile so I end up with
subdir1/
myfile001A.txt
myfile002A.txt
subdir2/
myfile001B.txt
myfile002B.txt
I have some code to do this...
#!/bin/bash
for d in `find . -type d -maxdepth 1`; do
cd "$d"
for f in `find . "*.txt"`; do
mv "$f" "$(echo "$f" | sed -r 's/^.*myfile/myfile/')"
done
done
however the newly renamed files end up in the parent directory
i.e.
myfile001A.txt
myfile002A.txt
myfile001B.txt
myfile002B.txt
subdir1/
subdir2/
In which the sub-directories are now empty.
How do I alter my script to rename the files and keep them in their respective sub-directories? As you can see the first loop changes directory to the sub directory so not sure why the files end up getting sent up a directory...
Your script has multiple problems. In the first place, your outer find command doesn't do quite what you expect: it outputs not only each of the subdirectories, but also the search root, ., which is itself a directory. You could have discovered this by running the command manually, among other ways. You don't really need to use find for this, but supposing that you do use it, this would be better:
for d in $(find * -maxdepth 0 -type d); do
Moreover, . is the first result of your original find command, and your problems continue there. Your initial cd is without meaningful effect, because you're just changing to the same directory you're already in. The find command in the inner loop is rooted there, and descends into both subdirectories. The path information for each file you choose to rename is therefore stripped by sed, which is why the results end up in the initial working directory (./subdir1/001myfile001A.txt --> myfile001A.txt). By the time you process the subdirectories, there are no files left in them to rename.
But that's not all: the find command in your inner loop is incorrect. Because you do not specify an option before it, find interprets "*.txt" as designating a second search root, in addition to .. You presumably wanted to use -name "*.txt" to filter the find results; without it, find outputs the name of every file in the tree. Presumably you're suppressing or ignoring the error messages that result.
But supposing that your subdirectories have no subdirectories of their own, as shown, and that you aren't concerned with dotfiles, even this corrected version ...
for f in `find . -name "*.txt"`;
... is an awfully heavyweight way of saying this ...
for f in *.txt;
... or even this ...
for f in *?myfile*.txt;
... the latter of which will avoid attempts to rename any files whose names do not, in fact, change.
Furthermore, launching a sed process for each file name is pretty wasteful and expensive when you could just use bash's built-in substitution feature:
mv "$f" "${f/#*myfile/myfile}"
You will also find that your working directory gets messed up. The working directory is a characteristic of the overall shell environment, so it does not automatically reset on each loop iteration. You'll need to handle that manually in some way. pushd / popd would do that, as would running the outer loop's body in a subshell.
Overall, this will do the trick:
#!/bin/bash
for d in $(find * -maxdepth 0 -type d); do
    pushd "$d"
    for f in *.txt; do
        mv "$f" "${f/#*myfile/myfile}"
    done
    popd
done
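For completeness, here is a minimal sketch of the subshell alternative mentioned above; the parentheses run each iteration's body in a child process, so the parent shell's working directory is never changed. (This version globs directories with */ instead of calling find, an equivalent assumption for flat subdirectories.)
#!/bin/bash
for d in */; do
    (
        cd "$d" || exit
        for f in *.txt; do
            mv "$f" "${f/#*myfile/myfile}"
        done
    )
done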
You can do it without find and sed:
$ for f in */*.txt; do echo mv "$f" "${f/\/*myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
If you remove the echo, it'll actually rename the files.
This uses shell parameter expansion to replace a slash and anything up to myfile with just a slash and myfile.
Notice that this breaks if there is more than one level of subdirectories. In that case, you could use extended pattern matching (enabled with shopt -s extglob) and the globstar shell option (shopt -s globstar):
$ for f in **/*.txt; do echo mv "$f" "${f/\/*([!\/])myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir1/subdir3/001myfile001A.txt subdir1/subdir3/myfile001A.txt
mv subdir1/subdir3/002myfile002A.txt subdir1/subdir3/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
This uses the *([!\/]) pattern ("zero or more characters that are not a forward slash"). The slash has to be escaped in the bracket expression because we're still inside of the pattern part of the ${parameter/pattern/string} expansion.
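For reference, a minimal setup sketch combining the two shopt settings with the loop above (assumes bash 4+ for globstar; the guard skips names the pattern doesn't change, such as .txt files sitting directly in the current directory):
shopt -s extglob globstar    # extended patterns + recursive **
for f in **/*.txt; do
    new="${f/\/*([!\/])myfile/\/myfile}"
    [ "$new" != "$f" ] && mv -- "$f" "$new"    # skip unchanged names
done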
Maybe you want to use the following command instead:
rename 's#(.*/).*(myfile.*)#$1$2#' subdir*/*
You can use rename -n ... to check the outcome without actually renaming anything.
Regarding your actual question:
The find command from the outer loop returns 3 (!) directories:
.
./subdir1
./subdir2
The unwanted . is the reason why all files end up in the parent directory (that is .). You can exclude . by using the option -mindepth 1.
Unfortunately, this was only the reason the files landed in the wrong place, not the only problem. Since you already accepted one of the answers, there is no need to list them all.
A slight modification should fix your problem:
#!/bin/bash
for f in `find . -maxdepth 2 -name "*.txt"`; do
    mv "$f" "$(echo "$f" | sed -r 's,[^/]+(myfile),\1,')"
done
Note: this sed uses , instead of / as the delimiter.
However, there are much faster ways.
Here is a version with the rename utility, available or easily installed wherever bash and perl are present:
find . -maxdepth 2 -name "*.txt" | rename 's,[^/]+(myfile),/$1,'
Here are tests on 1000 files:
for `find`; do mv    9.176s
rename               0.099s
That's 100x as fast.
John Bollinger's accepted answer is twice as fast as the OP's, but 50x as slow as this rename solution:
for | for | mv "$f" "${f/#*myfile/myfile}"    4.316s
Also, it won't work if a directory has too many items for a shell glob; the same goes for any answer that uses for f in *.txt, for f in */*.txt, find *, or rename ... subdir*/*. Answers that begin with find ., on the other hand, will also work on directories with any number of items.

How to search for *~ as in anything ending with ~ in a bash script

I'm writing a Bash script and I need to find and move/delete all files with names ending in ~ or beginning and ending with #, that is file~ or #file#, emacs junk files.
I'm trying to use [ -f *~ ] && ( ... move or delete those files ... ) to determine whether any files of this kind exist before I try to do anything to them, so as not to get error messages from rm or mv if they don't find the files. However, this results in "binary operator expected". I think it has something to do with the fact that ~ is a unary operator. Is there a way to make it work as intended?
There's nothing wrong with what you were doing originally for the current directory (it's not any slower than find), though it's not as one-liney.
#!/bin/bash
for file in *"~"; do
if [ -f "$file" ]; then
#do something with $file
fi
done
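A sketch extending the same loop to also cover the #file# junk files from the question (the /tmp/ destination is just an example):
for file in *"~" "#"*"#"; do
    if [ -f "$file" ]; then
        mv -- "$file" /tmp/    # or: rm -- "$file"
    fi
done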
Also, "binary operator expected" is just coming from bash expecting a single argument for the "-f" operator, whereas *~ can expand to multiple arguments, e.g.
$ mkdir test && cd test
$ touch "1~"
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
Confirmed file ending in ~
$ touch {2..10}"~" && echo *"~"
1~ 10~ 2~ 3~ 4~ 5~ 6~ 7~ 8~ 9~
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
bash: [: too many arguments
$ if [ -f "arg1" "arg2"; then echo "Confirmed file ending in ~"; fi
bash: [: arg1: binary operator expected
Not positive why errors are different for the two cases, but pretty sure either error can result depending on expansion.
Your problem stems from the fact that file-testing operators such as -f are not designed to be used with globbing patterns - only with a single, literal path.
You can simply let bash's path expansion (globbing) do the work:
Note: The approaches below are an alternative to using a loop (as demonstrated in @BroSlow's answer).
Simplest approach:
rm -f *'~' '#'*'#'
This removes all matching files, if any, and, if there are no matches, does nothing (and outputs nothing and reports exit code 0) - thanks to the -f option (tip of the hat to @chris).
Caveat: This also silently removes files marked as read-only, IF you have sufficient permissions to make them writable. In other words: if files match that you have intentionally marked as read-only, they will still get removed.
Also, if directories happen to match, they will NOT be removed; an error message will be displayed and the exit code will be 1. Matching files, however, are still removed.
At your own peril you may add -r to also quietly remove any matching directories (whether they're empty or not).
Using find, if explicitly ruling out directories is desired:
To avoid matching directories, you can use find, but to make it safe, the command gets lengthy:
# delete
find . -maxdepth 1 -type f \( -name '*~' -o -name '#*#' \) -delete
# move
find . -maxdepth 1 -type f \( -name '*~' -o -name '#*#' \) -exec mv {} /tmp/ \;
(The parentheses around the -name tests matter: find's implicit -and binds more tightly than -or, so without them -type f would apply only to the first -name alternative.)
(Two general notes on find:
The path itself (., in this case) is by default included in the set of items (not a concern in this particular case due to excluding directories from matching) - to avoid that, add -mindepth 1.
Terminating the command passed to the -exec primary with + rather than \; is generally preferable, as find then substitutes as many matches as will safely fit for {}, resulting in far fewer invocations (typically just 1) of the command (assuming, of course, that your command can take argument lists of variable length) - this is similar to xargs' behavior.
Here's the catch: -exec only accepts commands terminated with + if {} is the command's last argument (and will otherwise fail with the misleading error message find: missing argument to '-exec').
Thus, in the case at hand + cannot be used, because the mv command's last argument must be the target.
)
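As an aside, if your mv is from GNU coreutils, its -t (--target-directory) option names the destination up front, which frees {} to be the last argument and therefore permits +; a sketch of the move variant under that assumption:
# GNU mv only: -t puts the target first, so {} can be last and + is allowed
find . -maxdepth 1 -type f \( -name '*~' -o -name '#*#' \) -exec mv -t /tmp/ {} +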
The shell will expand your *~ to a list of all files ending in ~. So if you have more than one of them, they all will be in the parameter list of -f, but -f handles only one parameter.
Try
find . -name "*~" -print | xargs rm
and read about the parameters to find if you want to stop it from recursing your whole directory structure.
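A more robust variant of the same idea, assuming GNU tools: -maxdepth 1 stops the recursion, and the -print0/-0 pair keeps filenames containing whitespace intact:
find . -maxdepth 1 -name '*~' -print0 | xargs -0 rm --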
The find command is generally used for things of this nature. It even has a built-in -delete flag.
find -name '*~' -delete
or, with xargs (to move, for example)
# Moves files to /tmp using the replacement string specified with the -I flag
find -name '*~' -print0 | xargs -0 -I _ mv _ /tmp/
If you prefer to use xargs for deletion as well, you can do away with the use of -I
find -name '*~' -print0 | xargs -0 rm
Note the use of the -print0 and -0 flags to specify null-terminated paths. This allows paths with spaces to be handled properly. Without -0, a filename containing a space (anywhere in the path) would be treated as two separate (possibly invalid) paths.

Dynamic find in shell scripting

Here is the problem I'm working on:
I have a bunch of files in different directories, all under one root dir. I have to scan all these folders and pick up the files in them while avoiding some specific sub-dirs. The folders to avoid will come from a config file. For this purpose I prepared this find command (with some help):
find ${ROOT}/au* -type f -not \( -path '*auction123/incoming*' -o -path '*autobase/incoming*' \)
Here ROOT will also come from the config file. Can I also provide a placeholder for this:
-path '*auction123/incoming*' -o -path '*autobase/incoming*'
so that I can add any number of folders to be ignored to the find? I have read in some places that find with grep is a better option.
You can do:
find ${ROOT}/au* -type f | grep -v -f files_containing_list_of_ignore_directories
Where files_containing_list_of_ignore_directories lists each directory you wish to ignore on a separate line (without the wildcards *).
(This is what files_containing_list_of_ignore_directories should look like:)
auction123/incoming
autobase/incoming
...
Explanation:
find ${ROOT}/au* -type f: recursively find all files under any directories that match au* in ${ROOT}
| is the pipe operator in shell: it means "take whatever the previous command (in this case, find) writes to stdout and feed it to stdin of the next command (in this case, grep)".
grep: invoke grep, a tool used for searching
-v: Use the --invert-match option for grep. Basically it means to "filter out" anything that is matched.
-f files_containing_list_of_ignore_directories: Supply a file that holds the list of patterns you want grep to match (and filter out in this case).
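If you would rather keep everything inside find (the placeholder idea from the question), here is a minimal sketch that builds the -path exclusions dynamically from a config file; the file name ignore.conf is an illustrative assumption, and ROOT is assumed to be set elsewhere, as in the question:
#!/bin/bash
# ignore.conf holds one directory fragment per line, e.g.:
#   auction123/incoming
#   autobase/incoming
args=()
while IFS= read -r dir; do
    [ -n "$dir" ] || continue          # skip blank lines
    args+=(-o -path "*$dir*")
done < ignore.conf
if [ "${#args[@]}" -gt 0 ]; then
    # drop the leading -o and wrap the alternatives in \( ... \)
    find "${ROOT}"/au* -type f -not \( "${args[@]:1}" \)
else
    find "${ROOT}"/au* -type f
fi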

Dealing with filenames using shell regex

138.096.000.015.00111-138.096.201.072.38717
138.096.000.015.01008-138.096.201.072.00790
138.096.201.072.00790-138.096.000.015.01008
138.096.201.072.33853-173.194.020.147.00080
138.096.201.072.34293-173.194.034.009.00080
138.096.201.072.38717-138.096.000.015.00111
138.096.201.072.41741-173.194.034.025.00080
138.096.201.072.50612-173.194.034.007.00080
173.194.020.147.00080-138.096.201.072.33853
173.194.034.007.00080-138.096.201.072.50612
173.194.034.009.00080-138.096.201.072.34293
173.194.034.025.00080-138.096.201.072.41741
I have many folders, each containing many files whose names look like the above.
I want to remove the files whose names contain the substring "138.096.000",
and sometimes I want to get the list of files whose names contain the substring "00080".
To delete files with name containing "138.096.000":
find /root/of/files -type f -name '*138.096.000*' -exec rm {} \;
To list files with names containing "00080":
find /root/of/files -type f -name '*00080*'
rm $(find . -name \*138.096.000\*)
This uses the find command to find the appropriate files. This is executed within a subshell, and the output (the list of files) is used by rm. Note the escaping of the * pattern, since the shell will try and expand * itself.
This assumes you don't have filenames with spaces etc. You may prefer to do something like:
for i in $(find . -name \*138.096.000\*); do
    rm $i
done
in this scenario, or even
find . -name \*138.096.000\* | xargs rm
Note that in the loop above you'll execute rm once for each file, and the xargs variant will execute rm multiple times (depending on the number of files you have - it may only execute once).
However, if you're using zsh then you can simply do:
rm **/*138.096.000*
(I'm assuming your directories aren't named like your files. Note the -f flag as used in Kamil's answer if this is the case)
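If directories could match the pattern too, zsh's glob qualifiers can restrict the expansion to regular files; a minimal sketch (the (.) qualifier is zsh-specific):
# zsh only: the (.) qualifier limits matches to plain files
rm -- **/*138.096.000*(.)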
