How to delete files of a certain name from the terminal, recursively

I would like to delete all the emacs backup (~) files from subfolders.
I am aware that I can cd into every single folder and delete them using rm *~ (e.g. for the backup file test.cpp~).
How can I delete these files with one command, without cd'ing into every folder?
(I tried rm -r *~ and rm -rf *~, but they don't seem to work.)

You can do this with find and exec. Here's an example that does what you want to do:
find -name '*~' -exec rm {} \;
Let's break down how this works. The find command recurses through the directory it's executed from, and by default it prints out everything it finds. Using -name '*~' tells it to select only entries whose name matches the shell pattern *~. We have to quote the pattern because otherwise the shell might expand it before find sees it. Using -exec rm {} runs rm for each match, with {} as a placeholder for the filename. (The final ; is required to tell find where the command ends; if you leave it out, find will complain and do nothing. The \ escapes it, because ; is a special shell character.)
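If you want to double-check what will be removed before running the rm (a small precaution added here, not part of the original answer), run the same expression without -exec first; find prints the matches by default:
find -name '*~'
Once the list looks right, add the -exec rm {} \; back.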

find /path/to/directory/ -type f -name '*filtercondition*' -delete
The above command will recursively find files matching the pattern under the given folder and delete regular files only (directories are left alone).
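One refinement worth knowing (my addition, assuming GNU find): you can put -print before -delete so find reports each file as it removes it. Also note that argument order matters, since find evaluates the expression left to right; keep -delete last, after the tests.
find /path/to/directory/ -type f -name '*filtercondition*' -print -delete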

You would use find:
find ./ -name '*~' -exec rm {} \;
This command will recursively find all files that match the pattern given for the name. It will then execute the command provided for each of them, substituting the curly braces with the filename.
The only tricky part is the semicolon: it terminates the command given to -exec, but it must be protected from bash, hence the backslash.
See https://linux.die.net/man/1/find for more options.
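A related variant (not spelled out above): if you terminate the -exec clause with + instead of \;, find collects as many filenames as fit into each rm invocation instead of running rm once per file, which is noticeably faster on large trees. The + does not need to be escaped, because + is not special to the shell.
find ./ -name '*~' -exec rm {} +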

First, the best way to delete these files is to not create them (or rather, create them in a central place): https://stackoverflow.com/a/151946/245173
Here's a way to use find from Emacs to gather a list of files and selectively do operations on them. M-x find-name-dired RET /path/to/files RET *~ RET should get all of the backup files under /path/to/files/ and put them in a dired buffer. Then you can do normal dired things like mark files with m, invert the selection with t, and delete the selection with D.

Related

Removing directories and subdirectories which are not present in the list

This is my directory structure
find -type f -print0 | xargs -0 ls -t
./lisst.txt ./SAMN03272855/SRR1734376/SRR1734376_1.fastq.gz
./SAMN03272854/SRR1734375/SRR1734375_2.fastq.gz ./SAMN07605670/SRR6006890/SRR6006890_2.fastq.gz
./SAMN03272854/SRR1734375/SRR1734375_1.fastq.gz ./SAMN07605670/SRR6006890/SRR6006890_1.fastq.gz
./SAMN03272855/SRR1734376/SRR1734376_2.fastq.gz
So this is a small subset of my folders/files; in total I have around 70.
I have made a list of the files I want to keep; the others I would like to delete.
My lisst.txt contains SAMN03272854 and SAMN03272855, but I want to remove SAMN07605670.
I ran this
find . ! -name 'lisst.txt' -type d -exec rm -vrf {} +
It removed everything
QUESTION UPDATE
My list contains the folders I want to keep; the ones not in it are to be removed.
The folders to be removed also contain subdirectories and files. I want to remove everything inside them as well.
Your command selects each directory in the tree, except directories with the funny name lisst.txt. Once it finds a directory, you do a recursive remove of that directory. No surprise that your files are gone.
You can't use rm -r when you want to spare certain files from deletion. This means that you also can't remove a directory which, somewhere below in its subtree, has a file you want to keep.
I would run two find commands: the first removes all the files, ignoring directories, and the second removes all directories which are empty (bottom-up). Assuming that SAMN03272854 is indeed a file (as you told us in your question), this would be:
find . -type f \( ! \( -name SAMN03272854 -o -name SAMN03272855 \) \) -exec rm {} \;
find . -depth -type d -exec rmdir {} \; 2>/dev/null
The error redirection in the latter command suppresses messages from rmdir about directories which still contain files you want to keep. Of course, other messages are suppressed as well; during debugging I would run the command without the error redirection, to see whether it is basically correct.
Things would get more complicated if you have both files and directories to keep, because keeping a directory likely implies keeping all the files below it. In this case, you can use the -prune option of find, which excludes directories, including their subtrees, from being processed. See the find man page, which gives examples for this.
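A rough sketch of what that could look like here (my own illustration, assuming the directories to keep are the SAMN03272854 and SAMN03272855 folders from the question): prune the kept directories so nothing below them is touched, delete every other regular file, and then run the rmdir pass from above to clean up the now-empty directories.
find . \( -name SAMN03272854 -o -name SAMN03272855 \) -prune -o -type f -exec rm {} \;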

Bash: Loop through each file in each subfolder and rename

I'm in a directory with 3 subdirectories: sub1, sub2, and sub3. Each subdirectory has files in it. I would like to rename each file by prepending sample_ to it.
Here's what I have:
for d in */; do
for f in "$d"; do
mv "$f" "sample_$f"
done
done
This prepends to the folder name, which isn't what I want. What am I doing incorrectly?
Thanks!
You can easily accomplish this with find and brace expansion (part of shell expansion):
find . -type f -execdir mv {,sample_}{} \;
This should recursively find only files (-type f) within each subdirectory, then move them (renaming them) using the -execdir option (see below), prepending sample_ to each filename. The remaining mv {,sample_}{} is the Cartesian-product way of writing mv {} sample_{}.
-execdir command {} +
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. This is a much more secure method for invoking commands, as it avoids race conditions during resolution of the paths to the matched files. As with the -exec option, the '+' form of -execdir will build a command line to process more than one matched file, but any given invocation of command will only list files that exist in the same subdirectory. If you use this option, you must ensure that your $PATH environment variable does not reference the current directory; otherwise, an attacker can run any commands they like by leaving an appropriately-named file in a directory in which you will run -execdir.
Reference: GNU Bash manual, Brace Expansion / Shell Expansions
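To see what the shell actually hands to find after brace expansion (a quick demonstration added for clarity), prefix the expansion with echo:
$ echo mv {,sample_}{}
mv {} sample_{}
The expansion happens before find runs, so find simply receives mv {} sample_{} as the -execdir command.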
You can use basename (and, if needed, dirname) to split the directory part from the file name:
for d in */; do
  for f in "$d"*; do
    mv "$f" "${d}sample_$(basename "$f")"
  done
done
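The basename call can also be replaced with bash parameter expansion (a small variant added for illustration; ${f##*/} strips everything up to and including the last slash):
for d in */; do
  for f in "$d"*; do
    mv "$f" "${d}sample_${f##*/}"
  done
done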

Remove all files that start with the same prefix, but different filetypes

How do I remove all files in a folder that start with the same prefix? For example:
I have files:
SVM1.txt
SVM2.csv
SVM3.mat
helloworld.txt
README.txt
I want to delete all the files that start with 'SVM'. Note that they start with the same prefix, but have different file types!
With wildcards, of course.
rm SVM*
In addition to the straightforward
rm SVM*
which might fail (command line too long) if there are many, many matching files, you can use
find . ! -name . -prune -name 'SVM*' -exec rm {} +
which will repeatedly run rm on as many files at a time as possible until all matching files are deleted. The ! -name . -prune part keeps find from descending into any subdirectories to look for matching files (without the ! -name . test, find would prune the starting directory itself and match nothing); with GNU find you could use -maxdepth 1 instead.
In the directory where the files are,
ls | grep '^SVM.*' | xargs rm
Stop after the grep '^SVM.*' stage to double-check that you have the right files to delete, then add the | xargs rm.

Dealing with filenames using shell regex

138.096.000.015.00111-138.096.201.072.38717
138.096.000.015.01008-138.096.201.072.00790
138.096.201.072.00790-138.096.000.015.01008
138.096.201.072.33853-173.194.020.147.00080
138.096.201.072.34293-173.194.034.009.00080
138.096.201.072.38717-138.096.000.015.00111
138.096.201.072.41741-173.194.034.025.00080
138.096.201.072.50612-173.194.034.007.00080
173.194.020.147.00080-138.096.201.072.33853
173.194.034.007.00080-138.096.201.072.50612
173.194.034.009.00080-138.096.201.072.34293
173.194.034.025.00080-138.096.201.072.41741
I have many folders, each containing many files; the file names look like the above.
I want to remove the files whose names contain the substring "138.096.000",
and sometimes I want to get the list of files whose names contain the substring "00080".
To delete files with name containing "138.096.000":
find /root/of/files -type f -name '*138.096.000*' -exec rm {} \;
To list files with names containing "00080":
find /root/of/files -type f -name '*00080*'
rm $(find . -name \*138.096.000\*)
This uses the find command to locate the appropriate files. It is executed within a command substitution, and the output (the list of files) is passed to rm as arguments. Note the escaping of the * pattern, since the shell would otherwise try to expand * itself.
This assumes you don't have filenames with spaces etc. You may prefer to do something like:
for i in $(find . -name \*138.096.000\*); do
rm $i
done
in this scenario, or even
find . -name \*138.096.000\* | xargs rm
Note that the loop above executes rm once for each file, whereas the xargs variant will execute rm as few times as possible (depending on the number of files you have, it may only execute once).
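If the filenames might contain spaces or newlines (an addition on my part; it assumes GNU find and xargs, which support null-delimited input and output), the robust form is:
find . -name '*138.096.000*' -print0 | xargs -0 rm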
However, if you're using zsh then you can simply do:
rm **/*138.096.000*
(I'm assuming your directories aren't named like your files. If they are, note the -type f filter used in the find answer above.)

How can I use terminal to copy and rename files from multiple folders?

I have a folder called "week1", and in that folder there are about ten other folders that all contain multiple files, including one called "submit.pdf". I would like to copy all of the "submit.pdf" files into one folder, ideally using Terminal to expedite the process. I've tried cp week1/*/submit.pdf week1/ as well as cp week1/*/*.pdf week1/, but I only ended up with one copied file. I just realized that each copy has been overwriting the previous one, which is why I'm stuck with one... Is there any way I can prevent that from happening?
You don't indicate your OS, but if you're using GNU cp, you can use cp week1/*/submit.pdf --backup=t week1/ to have it (arbitrarily) number files that already exist; but that won't give you any real way to identify which is which.
You could, perhaps, do something like this:
for file in week1/*/submit.pdf; do cp "$file" "${file//\//-}"; done
… which will produce files named something like "week1-subdir-submit.pdf"
For what it's worth, the "${var/s/r}" notation means: take var but, before inserting its value, search for s (here \/, an escaped /, since / is also the separator in that expression) and replace it with r (here -), which is what makes the filenames unique.
Edit: there's actually one more / in there, which makes the substitution match every occurrence rather than just the first. So "${file//\//-}" means: take file and replace every instance of / with -.
find to the rescue! Rule of thumb: if you can list the files you want with find, you can copy them. So first try this:
$ cd your_folder
$ find . -type f -iname 'submit.pdf'
Some notes:
find . means "start finding from the current directory"
-type f means "only find regular files" (i.e., not directories)
-iname 'submit.pdf' means "... with the case-insensitive name 'submit.pdf'". You don't need the quotation marks here, but if you want to search using wildcards, you do. E.g.:
~ foo$ find /usr/lib -iname '*.So*'
/usr/lib/pam/pam_deny.so.2
/usr/lib/pam/pam_env.so.2
/usr/lib/pam/pam_group.so.2
...
If you want to search case-sensitive, just use -name instead of -iname.
When this works, you can copy each file by using -exec. -exec works by letting you specify a command to run on each hit: find runs the command once for each file it finds, substituting the name of the file for {}. You end the command by specifying \;.
So to echo all the files, do this:
$ find . -type f -iname submit.pdf -exec echo Found file {} \;
To copy them one by one:
$ find . -type f -iname submit.pdf -exec cp {} /destination/folder \;
Hope this helps!
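Since the original problem was that each copy overwrote the previous one, here is one way (a sketch, assuming a destination folder named collected that already exists; the name is made up) to give every copy a unique name based on its parent directory:
for f in week1/*/submit.pdf; do
  d=$(basename "$(dirname "$f")")
  cp "$f" "collected/${d}_submit.pdf"
done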
