I'm trying to write a command that deletes all files/folders starting with "test", except zip files that also start with "test".
So far I have:
rm -rf test*[!zip]
This almost works, but not quite.
If I have the following files:
test
test1
test.zip
Then after running the command, the remaining files are:
test
test.zip
So the problem is that test should also be deleted. I understand that my command only matches names that have at least one extra character after "test", but I'm not sure how to fix it.
Your glob pattern test*[!zip] requires at least one character after test, because the [!zip] part matches exactly one character that is none of z, i, or p (a bracket expression negates a set of characters, not the string zip).
You may use this extended glob pattern in bash:
shopt -s extglob nullglob
printf '%s\n' test!(*zip)
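If that prints the right names, you can hand the same pattern to rm (-r covers the folders the question mentions):
rm -rf test!(*zip)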
Alternatively, you can use find:
find . -maxdepth 1 -mindepth 1 -type f -name 'test*' -not -name 'test*zip'
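By default that find only prints the matches, and -type f restricts it to regular files. Since the question covers folders too, a sketch that deletes both (dropping -type f and adding an action):
find . -maxdepth 1 -mindepth 1 -name 'test*' -not -name 'test*zip' -exec rm -rf {} +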
As part of my bash routine I am trying to add an IF condition that should remove all CSV files containing the pattern "filt" in their names:
# this is a folder containing all the subfolders
results=./results
# loop over all directories located in $results
for d in "${results}"/*/; do
  if [ -f "${d}"/*filt*.csv ]; then # check if a csv file is present within dir $d
    rm "${d}"/*filt*.csv # remove this csv file
  fi
done
A version without the condition finds and removes the CSV files properly, though:
rm "${d}"/*filt*.csv
Executing my example with the IF, however, gives the following error:
line 27: [: too many arguments
where line 27 corresponds to the IF condition. How could it be fixed?
Alternatively, can I use something like the following, without any IF statement?
# find all CSV files matching the pattern within "${d}"
find "${d}" -type f -iname '*filt*.csv*' -delete
You could use shopt -s nullglob and then skip the test and use rm -f "$d"/*filt*.csv directly. The nullglob option makes sure that a glob that matches nothing expands to nothing at all, and -f silences rm when it gets no operands.
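A minimal sketch of the loop rewritten that way (variable names taken from the question):
shopt -s nullglob # a glob with no matches now disappears instead of staying literal
results=./results
for d in "${results}"/*/; do
  rm -f "${d}"/*filt*.csv # -f keeps rm quiet if the glob matched nothing
done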
You could also skip the outer loop and simplify everything to
shopt -s nullglob
rm -f results/*/*filt*.csv
This could fail if the glob matches so many files that the maximum command-line length is exceeded. In that case, you're better off with find:
find results -name '*filt*.csv' -exec rm {} +
or, with GNU find:
find results -name '*filt*.csv' -delete
If there are subdirectories you want to skip, use -maxdepth 1. If there are directories matching the pattern, use -type f.
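Putting those pieces together for the question's layout (the CSV files sit one level below results, i.e. at depth 2), a sketch with GNU find:
find results -mindepth 2 -maxdepth 2 -type f -name '*filt*.csv' -delete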
I would like to know how to delete all the contents of a folder (it contains other folders and some files) except for 2 folders and their contents.
The command below keeps the folder conf and removes all the other folders:
find . ! -name 'conf' -type d -exec rm -rf {} +
I have tried to pipe it like below
find . -maxdepth 1 -type d ! -name 'conf' |find . -maxdepth 1 -type d ! -name 'foldername2'
but it didn't work.
Is it possible to do this with a single command?
You haven't specified which shell you're using, but if you're using bash then extended globs can help:
printf '%s\n' !(conf|foldername2)
If you're happy with the list of files and directories produced by that, then pass the same glob to rm -rf:
rm -rf !(conf|foldername2)
Inside a script, you may need to enable extglob using shopt -s extglob. Later, you can change -s to -u to unset the option.
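For example, a minimal script sketch using the directory names from the question:
#!/bin/bash
shopt -s extglob # enable extended globs before the pattern is parsed
rm -rf !(conf|foldername2) # delete everything else in the current directory
shopt -u extglob # optionally turn them off again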
If you're using a different shell, then you can add some more options to your find command:
find . -mindepth 1 -maxdepth 1 ! -name 'conf' -a ! -name 'foldername2' -exec rm -rf {} +
Try it without the -exec part first to print the matches rather than deleting everything.
Maybe my little utility program can help you. I hope so.
First you should find the path of your .sh files,
then find the main folder that contains those .sh files,
and then remove anything except those folders.
I wrote drr for exactly this purpose, and it makes such a task easy.
drr stands for: remove or rename files based on regular expressions, in the D language. So you must compile it before using it.
Please be careful, since this is not an appropriate tool for beginners.
From the current directory I have multiple subdirectories:
subdir1/
  001myfile001A.txt
  002myfile002A.txt
subdir2/
  001myfile001B.txt
  002myfile002B.txt
where I want to strip every character from the filenames before myfile so I end up with
subdir1/
  myfile001A.txt
  myfile002A.txt
subdir2/
  myfile001B.txt
  myfile002B.txt
I have some code to do this...
#!/bin/bash
for d in `find . -type d -maxdepth 1`; do
  cd "$d"
  for f in `find . "*.txt"`; do
    mv "$f" "$(echo "$f" | sed -r 's/^.*myfile/myfile/')"
  done
done
However, the newly renamed files end up in the parent directory,
i.e.
myfile001A.txt
myfile002A.txt
myfile001B.txt
myfile002B.txt
subdir1/
subdir2/
The subdirectories are now left empty.
How do I alter my script to rename the files and keep them in their respective subdirectories? As you can see, the first loop changes into the subdirectory, so I'm not sure why the files end up getting sent up a directory...
Your script has multiple problems. In the first place, your outer find command doesn't do quite what you expect: it outputs not only each of the subdirectories, but also the search root, ., which is itself a directory. You could have discovered this by running the command manually, among other ways. You don't really need to use find for this, but supposing that you do use it, this would be better:
for d in $(find * -maxdepth 0 -type d); do
Moreover, . is the first result of your original find command, and your problems continue there. Your initial cd is without meaningful effect, because you're just changing to the same directory you're already in. The find command in the inner loop is rooted there, and descends into both subdirectories. The path information for each file you choose to rename is therefore stripped by sed, which is why the results end up in the initial working directory (./subdir1/001myfile001A.txt --> myfile001A.txt). By the time you process the subdirectories, there are no files left in them to rename.
But that's not all: the find command in your inner loop is incorrect. Because you do not specify an option before it, find interprets "*.txt" as designating a second search root, in addition to .. You presumably wanted to use -name "*.txt" to filter the find results; without it, find outputs the name of every file in the tree. Presumably you're suppressing or ignoring the error messages that result.
But supposing that your subdirectories have no subdirectories of their own, as shown, and that you aren't concerned with dotfiles, even this corrected version ...
for f in `find . -name "*.txt"`;
... is an awfully heavyweight way of saying this ...
for f in *.txt;
... or even this ...
for f in *?myfile*.txt;
... the latter of which will avoid attempts to rename any files whose names do not, in fact, change.
Furthermore, launching a sed process for each file name is pretty wasteful and expensive when you could just use bash's built-in substitution feature:
mv "$f" "${f/#*myfile/myfile}"
You will also find that your working directory gets messed up. The working directory is a characteristic of the overall shell environment, so it does not automatically reset on each loop iteration. You'll need to handle that manually in some way. pushd / popd would do that, as would running the outer loop's body in a subshell.
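For illustration, here is a subshell variant of the same idea; the parentheses confine the cd to the subshell, so the outer loop's working directory is untouched:
for d in $(find * -maxdepth 0 -type d); do
  (
    cd "$d" || exit # failure to cd aborts only this subshell
    for f in *.txt; do
      mv "$f" "${f/#*myfile/myfile}"
    done
  )
done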
Overall, this will do the trick:
#!/bin/bash
for d in $(find * -maxdepth 0 -type d); do
  pushd "$d"
  for f in *.txt; do
    mv "$f" "${f/#*myfile/myfile}"
  done
  popd
done
You can do it without find and sed:
$ for f in */*.txt; do echo mv "$f" "${f/\/*myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
If you remove the echo, it'll actually rename the files.
This uses shell parameter expansion to replace a slash and anything up to myfile with just a slash and myfile.
Notice that this breaks if there is more than one level of subdirectories. In that case, you could use extended pattern matching (enabled with shopt -s extglob) and the globstar shell option (shopt -s globstar):
$ for f in **/*.txt; do echo mv "$f" "${f/\/*([!\/])myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir1/subdir3/001myfile001A.txt subdir1/subdir3/myfile001A.txt
mv subdir1/subdir3/002myfile002A.txt subdir1/subdir3/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
This uses the *([!\/]) pattern ("zero or more characters that are not a forward slash"). The slash has to be escaped in the bracket expression because we're still inside of the pattern part of the ${parameter/pattern/string} expansion.
Maybe you want to use the following command instead:
rename 's#(.*/).*(myfile.*)#$1$2#' subdir*/*
You can use rename -n ... to check the outcome without actually renaming anything.
Regarding your actual question:
The find command from the outer loop returns 3 (!) directories:
.
./subdir1
./subdir2
The unwanted . is the reason why all files end up in the parent directory (that is, .). You can exclude . by using the option -mindepth 1.
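With that option added, the outer find returns only the subdirectories:
$ find . -mindepth 1 -maxdepth 1 -type d
./subdir1
./subdir2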
Unfortunately, this was only the reason for the files landing in the wrong place, not the only problem. Since you already accepted one of the answers, there is no need to list them all.
A slight modification should fix your problem:
#!/bin/bash
for f in `find . -maxdepth 2 -name "*.txt"`; do
  mv "$f" "$(echo "$f" | sed -r 's,[^/]+(myfile),\1,')"
done
Note: this sed uses , instead of / as the delimiter.
However, there are much faster ways.
Here it is with the rename utility, available or easily installed wherever there are bash and perl:
find . -maxdepth 2 -name "*.txt" | rename 's,[^/]+(myfile),$1,'
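As with the rename answer above, you can preview the result with -n (supported by the perl-based rename) before committing:
find . -maxdepth 2 -name "*.txt" | rename -n 's,[^/]+(myfile),$1,'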
Here are tests on 1000 files:

for `find`; do mv    9.176s
rename               0.099s

That's about 100x as fast. John Bollinger's accepted answer is twice as fast as the OP's, but 50x as slow as this rename solution:

for|for|mv "$f" "${f/#*myfile/myfile}"    4.316s
Also, the accepted answer won't work if a directory holds more items than fit on a command line; the same goes for any answer that uses for f in *.txt, for f in */*.txt, find *, or rename ... subdir*/*. Answers that begin with find ., on the other hand, work on directories with any number of items.
I'm writing a Bash script and I need to find and move/delete all files with names ending in ~ or beginning and ending with #, that is file~ or #file#, emacs junk files.
I'm trying to use [ -f *~ ] && ( ... move or delete those files ... ) to determine if any files of this kind exist before I try to do anything to them, so as not to get error messages from rm or mv if they don't find the files. However, this results in "binary operator expected". I think it has something to do with the fact that ~ is a unary operator. Is there a way to make it work as intended?
There's nothing wrong with what you were doing originally for the current directory (it's not any slower than find), though it's not as one-liney.
#!/bin/bash
for file in *"~"; do
  if [ -f "$file" ]; then
    echo "$file" # do something with $file
  fi
done
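The question also mentions #file# junk; a sketch of the same loop covering both patterns:
for file in *"~" "#"*"#"; do
  if [ -f "$file" ]; then # also skips the literal pattern left behind when nothing matches
    mv "$file" /tmp/ # or: rm -- "$file"
  fi
done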
Also, "binary operator expected" is just coming from bash expecting a single argument for the "-f" operator, whereas *~ can expand to multiple arguments, e.g.
$ mkdir test && cd test
$ touch "1~"
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
Confirmed file ending in ~
$ touch {2..10}"~" && echo *"~"
1~ 10~ 2~ 3~ 4~ 5~ 6~ 7~ 8~ 9~
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
bash: [: too many arguments
$ if [ -f "arg1" "arg2"; then echo "Confirmed file ending in ~"; fi
bash: [: arg1: binary operator expected
The errors differ because of how many arguments the glob produces: with exactly two matches, [ sees three arguments and expects the middle one to be a binary operator, while with three or more matches it simply reports "too many arguments". Either error can result, depending on the expansion.
Your problem stems from the fact that file-testing operators such as -f are not designed to be used with globbing patterns - only with a single, literal path.
You can simply let bash's path expansion (globbing) do the work:
Note: The approaches below are an alternative to using a loop (as demonstrated in @BroSlow's answer).
Simplest approach:
rm -f *'~' '#'*'#'
This removes all matching files, if any, and, if there are no matches, does nothing (and outputs nothing and reports exit code 0) - thanks to the -f option (tip of the hat to @chris).
Caveat: This also silently removes files marked as read-only, IF you have sufficient permissions to make them writable. In other words: if files match that you have intentionally marked as read-only, they will still get removed.
Also, if directories happen to match, they will NOT be removed; an error message will be displayed and the exit code will be 1 - matching files, however, are still removed.
At your own peril you may add -r to also quietly remove any matching directories (whether they're empty or not).
Using find, if explicitly ruling out directories is desired:
To avoid matching directories, you can use find, but to make it safe, the command gets lengthy:
# delete
find . -maxdepth 1 -type f \( -name '*~' -or -name '#*#' \) -delete
# move
find . -maxdepth 1 -type f \( -name '*~' -or -name '#*#' \) -exec mv {} /tmp/ \;
(Two general notes on find:
The path itself (., in this case) is by default included in the set of items (not a concern in this particular case due to excluding directories from matching) - to avoid that, add -mindepth 1.
Terminating the command passed to the -exec primary with + rather than \; is generally preferable, as find then substitutes as many matches as will safely fit for {}, resulting in far fewer invocations (typically just 1) of the command (assuming, of course, that your command can take argument lists of variable length) - this is similar to xargs' behavior.
Here's the catch: -exec only accepts commands terminated with + if {} is the command's last argument (and will otherwise fail with the misleading error message find: missing argument to '-exec').
Thus, in the case at hand + cannot be used, because the mv command's last argument must be the target.
)
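One exception worth noting: GNU mv has a -t option that takes the target directory up front, which frees the final slot for {} and so makes + usable (a GNU-specific sketch):
find . -maxdepth 1 -type f \( -name '*~' -or -name '#*#' \) -exec mv -t /tmp/ {} +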
The shell will expand your *~ to a list of all files ending in ~. So if you have more than one of them, they all will be in the parameter list of -f, but -f handles only one parameter.
Try
find . -name "*~" -print | xargs rm
and read about the parameters to find if you want to stop it from recursing your whole directory structure.
The find command is generally used for things of this nature. It even has a built-in -delete flag.
find -name '*~' -delete
or, with xargs (to move, for example)
# Moves files to /tmp using the replacement string specified with the -I flag
find -name '*~' -print0 | xargs -0 -I _ mv _ /tmp/
If you prefer to use xargs for deletion as well, you can do away with the use of -I
find -name '*~' -print0 | xargs -0 rm
Note the use of the -print0 and -0 flags to specify null-terminated paths. This allows paths with spaces to be handled properly. Without -0, a filename containing spaces (anywhere in its path) will be treated as multiple separate (possibly invalid) paths.
I'm reviewing for an exam and one of the questions is asking me to write a single command that will delete the files in a given directory that are at least 6 characters long.
Example:
person#ubuntumachine:~$ ls
abc.txt, abcdef.txt, 123456.txt, helloworld.txt, rawr.txt
The command would delete the files "abcdef.txt", "123456.txt" and "helloworld.txt".
I'm aware that the * would be used at some point, but I'm not sure what to use to indicate "6 characters long"...
Thank you <3
Since the question can have 2 interpretations, both answers are given:
1. To delete files with 6 or more characters in the FILE NAME:
rm ??????*
Explanation:
??????: The ?'s are to be entered literally. Each ? matches any single character. So here it means "match any 6 characters"
*: The * wildcard matches zero or more characters
Therefore it removes any file with 6 or more characters.
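If you want to preview the matches before removing anything, you can expand the same glob with echo first:
echo ??????*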
Alternatively:
find -type f -name "??????*" -delete
Explanation:
find: invoke the find command
-type f: find only files.
-name "??????*": match any file with at least 6 characters, same idea as above.
-delete: delete any such files found.
2. To delete files with 6 or more characters in its CONTENTS:
find -type f -size +5c -delete
Explanation:
find: invoke the find command
-type f: find only files (not directories etc)
-size +5c: find only files larger than 5 bytes, i.e. at least 6 characters long. Note that the trailing newline most editors append counts toward the size here; if you'd like to exclude that newline from the count, change the 5 to 6.
-delete: delete any such files found
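As a quick sanity check of the byte counting (assuming an otherwise empty directory and a hypothetical file f):
$ printf 'hello\n' > f # 5 characters plus the trailing newline = 6 bytes
$ find . -maxdepth 1 -type f -size +5c
./f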
Something like this should work:
$ ls | while read -r filename; do test ${#filename} -ge 6 && echo rm "$filename"; done
The trick is to use the ${#foo} construct to get the length of the filename.
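As an aside, to illustrate the length expansion with one of the question's filenames:
$ filename="helloworld.txt"
$ echo ${#filename}
14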
Once you're satisfied with the output, immediately run the following after the previous command:
$ !! | sh
This repeats the last command (which prints the rm commands for the files to delete) and pipes its output to sh to really execute it.
This will perform the requested logic on the current directory and all subdirectories.
find . -type f -regextype posix-egrep -regex ".*/[^/]{5}[^/]+$" -exec rm -vf {} \;
find .: searches the current directory (change the . to search elsewhere)
-type f: considers files only
-regextype posix-egrep: use egrep regex syntax (this is what I know)
-regex ".*/[^/]{5}[^/]+$": find will match every path matching this regex
The regex deconstructs as follows:
.*/ effectively ignores the path up to the filename
[^/]{5} matches 5 characters that are not slashes
[^/]+$ requires at least one more non-slash character (thus: 6 or more) to appear before the end of the line ($)
-exec rm -vf {} \;: find replaces {} with each file its search query matches (in this case, files whose paths match our regex), which achieves the deletion; -vf is added to print the results so you know what happened.
-exec is picky about syntax - the \; is necessary to avoid find: missing argument to '-exec' encountered if a simple ; is used in its place.
You can test this by using -print instead of -exec rm -vf {} \;, or by simply removing the -exec rm -vf {} \; part (-print is find's default behavior).