Unable to remove everything else in a folder except FileA
I guess it should be something similar to this:
delete [^Music]
However, it does not work.
Put the following command in your ~/.bashrc:
shopt -s extglob
You can then delete everything in the folder except the Music folder with
rm -r !(Music)
Please be careful with this command; it is powerful but dangerous.
I recommend always testing it first with
echo rm -r !(Music)
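If you need to keep more than one entry, extglob patterns accept |-separated alternatives; a minimal sketch (Pictures is just a hypothetical second folder to keep):
shopt -s extglob
echo rm -r !(Music|Pictures)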
The command
rm (ls | grep -v '^Music$')
should work. If some of your "files" are actually subdirectories and you want to delete them recursively too, use:
rm -r (ls | grep -v '^Music$')
Warning: rm -r can be dangerous and you could accidentally delete a lot of files. If you would like to confirm what you will be deleting, try looking at the output of
ls | grep -v '^Music$'
Explanation:
The ls command lists directory contents; without an argument, it defaults to the current directory.
The pipe symbol | redirects output to another command; when the output of ls is redirected in this way, it prints filenames one-per-line, rather than in a column format as you would see if you type ls at an interactive terminal.
The grep command matches lines for patterns; the -v switch means to print lines that don't match the pattern.
The pattern ^Music$ means to match a line starting and ending with Music -- that is, only the string Music; the effect of the ^ (beginning of line) and $ (end of line) characters can also be achieved with the -x switch, as in grep -vx Music.
The syntax command (subcommand) is fish's way of taking the output of one command and passing it over as command-line arguments to another.
The rm command removes files. By default, it does not remove directories, but the -r ("recursive") option changes that.
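If you are in bash rather than fish, the closest equivalent is command substitution; a rough sketch (it breaks on filenames containing whitespace or glob characters):
rm -r $(ls | grep -v '^Music$')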
You can learn about these commands and more by typing man command, where command is what you want to learn about.
So I was looking all over for a way to remove all files in a directory except for some directories and files I wanted to keep around. After much searching I devised a way to do it using find.
find -E . -regex './(dir1|dir2|dir3)' -and -type d -prune -o -print -exec rm -rf {} \;
Essentially it uses a regex to select the directories to exclude from the results and then removes everything that remains. Just wanted to put it out here in case someone else needs it.
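If your find lacks -E (it is a BSD extension), a simpler sketch that stays at the top level and spares the entries you name (dir1, dir2, dir3 are placeholders) would be:
find . -mindepth 1 -maxdepth 1 ! -name dir1 ! -name dir2 ! -name dir3 -exec rm -rf {} +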
Related
I have a directory which has 70000 xml files in it. Each file has a tag which looks something like this, for the sake of simplicity:
<ns2:apple>, <ns2:orange>, <ns2:grapes>, <ns2:melon>. Each file has only one fruit tag, i.e. there cannot be both apple and orange in the same file.
I would like rename every file (add "1_" before the beginning of each filename) which has one of: <ns2:apple>, <ns2:orange>, <ns2:melon> inside of it.
I can find such files with egrep:
egrep -r '<ns2:apple>|<ns2:orange>|<ns2:melon>'
So how would it look as a bash script, which I can then use as a cron job?
P.S. Sorry, I don't have a bash script draft; I have very little experience with it and time is of the essence right now.
This may be done with this script:
#!/bin/sh
find /path/to/directory/with/xml -type f | while IFS= read -r f; do
    # prefix the file's name (not the whole path) with 1_ when it contains one of the tags
    grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f" && mv "$f" "$(dirname "$f")/1_$(basename "$f")"
done
But it will rescan the directory each time it runs, which means a lot of excess I/O, and files containing one of the tags will gain another 1_ prefix on every run, resulting in names like 1_1_1_1_file.xml.
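If you keep the rename-in-place approach, a quick guard (a sketch) is to have find skip files that already carry the prefix:
find /path/to/directory/with/xml -type f ! -name '1_*' | while IFS= read -r f; do
    grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f" && mv "$f" "$(dirname "$f")/1_$(basename "$f")"
done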
Still, you should probably think more about the design, e.g. move processed files into two directories based on whether a file contains one of the tags or not:
#!/bin/sh
# create output dirs
mkdir -p /path/to/directory/with/xml/with_tags/ /path/to/directory/with/xml/without_tags/
find /path/to/directory/with/xml -maxdepth 1 -mindepth 1 -type f | while IFS= read -r f; do
    if grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f"; then
        mv "$f" /path/to/directory/with/xml/with_tags/
    else
        mv "$f" /path/to/directory/with/xml/without_tags/
    fi
done
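To run it as a cron job, save the script somewhere and reference it from the crontab; a sketch (the path /usr/local/bin/sort_xml.sh and the every-ten-minutes schedule are just assumptions):
*/10 * * * * /usr/local/bin/sort_xml.sh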
Run this command as a dry run, then remove --dry-run to actually rename the files:
grep -Pl '(<ns2:apple>|<ns2:orange>|<ns2:melon>)' *.xml | xargs rename --dry-run 's/^/1_/'
The command-line utility rename comes in many flavors. Most of them should work for this task. I used rename version 1.601 by Aristotle Pagaltzis. To install rename, simply download its Perl script and place it somewhere in your $PATH. Or install rename using conda, like so:
conda install rename
Here, grep uses the following options:
-P : Use Perl regexes.
-l : Suppress normal output; instead print the name of each input file from which output would normally have been printed.
SEE ALSO:
grep manual
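If the Perl rename is not available at all, a plain shell loop is a possible fallback; a sketch that only touches the current directory and skips files that already carry the prefix:
for f in *.xml; do
    case $f in 1_*) continue ;; esac
    grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f" && mv "$f" "1_$f"
done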
I am flattening a directory of nested folders/picture files down to a single folder. I want to move all of the nested files up to the root level.
There are 3,381 files (no directories included in the count). I calculate this number using these two commands and subtracting the directory count (the second command):
find ./ | wc -l
find ./ -type d | wc -l
To flatten, I use this command:
find ./ -mindepth 2 -exec mv -i -v '{}' . \;
Problem is that when I get a count after running the flatten command, my count is off by 46. After going through the list of files before and after (I have a backup), I found that the mv command is overwriting files sometimes even though I'm using -i.
Here's details from the log for one of these files being overwritten...
.//Vacation/CIMG1075.JPG -> ./CIMG1075.JPG
..more log
..more log
..more log
.//dog pics/CIMG1075.JPG -> ./CIMG1075.JPG
So I can see that it is overwriting. I thought -i was supposed to stop this. I also tried a -n and got the same number. Note, I do have about 150 duplicate filenames. Was going to manually rename after I flattened everything I could.
Is it a timing issue?
Is there a way to resolve?
NOTE: it is prompting me that some of the files are overwrites. On those prompts I just press Enter so as not to overwrite. In the case above, there is no prompt. It just overwrites.
The manual entry actually states it quite clearly:
The -n and -v options are non-standard and their use in scripts is not recommended.
In other words, you should mimic the -n option yourself. To do that, just check if the file exists and act accordingly. In a shell script where the file is supplied as the first argument, this could be done as follows:
[ -f "${1##*/}" ]
The file, passed as the first argument, may contain directory components, which ${1##*/} strips off, leaving just the filename. Now simply chain the mv with ||, since we only want to execute it when the file does not already exist:
[ -f "${1##*/}" ] || mv "$1" .
Using this, you can edit your find command as follows:
find ./ -mindepth 2 -exec bash -c '[ -f "${0##*/}" ] || mv "$0" .' '{}' \;
Note that we now use $0 because of the bash -c usage. Its first argument becomes $0; it can't be the script name because there is no script. This means the argument order is shifted with respect to a usual shell script.
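If you would rather keep colliding files than skip them, GNU mv can append numbered backup suffixes to the file already in the target instead of overwriting it; a sketch (assuming GNU coreutils, and restricted to regular files):
find ./ -mindepth 2 -type f -exec mv --backup=numbered -t . {} +
The displaced copies end up as CIMG1075.JPG.~1~, CIMG1075.JPG.~2~ and so on, which you can rename by hand afterwards.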
Why not check whether the file exists prior to the move? Then you can leave the file where it is, rename it, or do something else...
A test with -f or [ ] should do the trick.
I am on a tablet and cannot easily include the source.
I want to understand the real power of pipes and redirection. As I understand it, | takes the output of one command and feeds it to another command as input, and > redirects output to a file. If that is so,
find . -name "*.swp" | rm
find . -name "*.swp" > rm
why are these commands not working as expected? To me, the above commands mean:
1. Find all files with the .swp extension, recursively, in the current directory.
2. Take the output of 1. and remove all of the resulting files.
FYI, yes, I know how to accomplish this task; it can be done by passing the -exec flag:
find . -name "*.swp" -exec rm -rf {} \;
But as I already mentioned, I want to accomplish it with > or |.
If I am wrong and going in the wrong direction, please correct me and explain redirection and pipes. Where do we use these commands? Please don't mention simple book examples; I have read all those. Try to explain something more complicated.
I'll break this down by the three methods you have shown:
> will redirect all output from find into a file named rm (will not work, because you're just writing the list of names to a file).
| will pipe output from find into the rm command (will not work, because rm does not read on stdin)
-exec rm -rf {} \; will run rm -rf on each item ({}) that find finds (will work, because it passes the files as argument to rm).
You will want to use -exec flag, or pipe into the xargs command (man xargs), not | or > in order to achieve the desired behavior.
EDIT: as #dmckee said, you can also use the $() operator for command substitution, i.e. rm -rf $(find . -name "*.swp") (this will fail if you have a large number of files, due to argument length limits).
> simply redirects to a file named rm.
Piping via | to rm doesn't work because rm doesn't expect filenames via STDIN.
So you have to use xargs, which passes values from STDIN as arguments:
find . -name "*.swp"|xargs rm
This is dangerous because the filename may contain characters your shell considers a field separator ($IFS).
So, you use:
find . -name "*.swp" -print0|xargs -0 rm
This causes find to print the filenames \0-separated to STDOUT, and xargs to read the \0-separated filenames and pass them as arguments to rm.
Of course, the easiest way to achieve this would have been:
rm **/*.swp
assuming you use zsh, or bash with the globstar option enabled (without globstar, bash treats ** like a single *).
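In bash, a minimal sketch would be:
shopt -s globstar    # bash 4+; without it, ** behaves like a single *
rm **/*.swp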
You should take some time and read about the basics of shell redirection again :) I think this is a good document: http://wiki.bash-hackers.org/howto/redirection_tutorial
I'll try to explain what went wrong for you:
find . -name "*.swp" | rm
This command redirects the find results, i.e. the stdout of find, to the stdin of the program rm. However, rm does not read on stdin (this is something you can read in the documentation of rm). rm is controlled via command line arguments, not via stdin. I think there is no way to make rm read from stdin at all. That's why nothing is deleted.
find . -name "*.swp" > rm
This command redirects newline-delimited find results (stdout of find) to a file called 'rm'. Again, nothing is deleted :)
Basically, the <, >, >>, &>, and &>> operators redirect from or to a file in the file system (creating it if necessary). The pipe | redirects the standard output of one command to the standard input of another command; simply put, no files are involved here. However, this approach only makes sense if the program on the left of the pipe actually writes something to stdout, the program on the right reads from stdin, and the two understand each other, i.e. the reading program (the consumer) understands the output of the feeding program.
Redirection creates a file. So your >rm example just creates a file named ./rm into which the output of your command is saved.
Pipes are essentially a shorthand. one | two is like one >tmp; two <tmp except without the (explicit) temporary file.
Of course, rm doesn't read file names from standard input, so cmd | rm is basically useless (apart from situations where the pipeline continues with yet another command which does something with the input which rm didn't read). If you want that, there's xargs.
find . -name "*.swp" | xargs rm
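For this particular cleanup there is also a third option: GNU find (and modern BSD find) can delete the matches itself, which sidesteps the quoting issues entirely; a sketch:
find . -name '*.swp' -type f -delete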
I'm able to use the 'rename' command to add the missing character to all filenames in the current directory like this:
echo "Renaming files..."
rename -v "s/^abcd124(.+)/abcd1234$1/" *.wav.gz;
echo "Done."
However, I'd like to do this for the current directory and all its subdirectories. I tried this:
echo "Renaming files..."
for dir in $(find ./ -type d); do
rename -v "s/^$dir\/abcd124(.+)/$dir\/abcd1234$1/" *.wav.gz;
done;
echo "Done."
However, if the $dir variable contains any of these special characters: {}[]()^$.|*+?\ then they are not escaped properly with \ and my script fails.
What would be the best way to solve this problem? Also, what do you guys think of using awk to solve this problem (advantages/disadvantages?)
You can also try:
find . -name "*.wav.gz" | xargs rename -v 's/abcd124/abcd1234/'
It works on newer Unix systems where the xargs command is available. Note that I simplified the regular expression slightly (and single-quoted it so the shell does not expand anything inside it).
Try:
find ./ -type d -execdir rename -v "s/^abcd124(.+)/abcd1234\1/" *.wav.gz ";"
find already provides an iterator over your files; you don't need a for loop around it or xargs behind it, which are often seen. Well, in rare cases they might be helpful, but normally not.
Here, -execdir is useful. GNU find has it; I don't know whether your find has it too.
But you need to make sure there is no *.wav.gz file in the directory where you start this command, because otherwise your shell will expand the pattern and hand the expanded names over to rename.
Note: I get a warning from rename that I should replace \1 with $1 in the regex, but if I do so, the pattern isn't matched. I have to use \1 to make it work.
Here is another approach: why search for directories at all, when we can search for the .wav.gz files directly?
find . -name "*.wav.gz" -exec rename -v "s/^abcd124(.+)/abcd1234\1/" {} ";"
In bash 4:
shopt -s globstar
rename -v 's/(^|\/)abcd124([^\/]*)$/${1}abcd1234$2/' **/*.wav.gz
(the regex anchors abcd124 at the start of the last path component, since ** hands rename full relative paths).
Just be aware that Gentoo Linux points its rename utility to
http://developer.berlios.de/project/showfiles.php?group_id=413
via its ebuild:
http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/sys-apps/rename/rename-1.3.ebuild?view=markup
For Debian (and probably Ubuntu), rename is /usr/bin/prename, which is a Perl script.
See rename --help before you move anything.
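With the Perl-based rename, the -n (no act) option prints what would be renamed without touching anything, which is a safe way to check the regex first; a sketch for the current directory:
rename -n 's/^abcd124/abcd1234/' *.wav.gz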
I ran these unsuccessfully on a Mac:
mv .* *
and
mv .* ./*
My files disappeared into thin air.
How can you convert dot-files to non-dotfiles safely?
for i in `ls -d .*`; do mv $i "`echo $i | sed 's/^.//'`"; done
or, much easier,
rename 's/^.//' `ls -d .*`
if your system have got it.
In zsh, you could just use .* safely, but in bash you'll have to use ls -d .*
You can't use mv to rename multiple files like that. What you want is mmv (get it here).
mmv .\* \#1
You have to escape the asterisk to prevent bash from expanding it. Use the -n flag to do a test run to make sure what will happen is what you want.
You could also do this in shell scripting, but I much prefer mmv because the -n flag shows what it would do. You'd have to alter your script to echo instead of mv, which seems more error-prone than dropping the -n flag (especially as things get more complicated).
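For example, a dry run with the -n flag mentioned above:
mmv -n .\* \#1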
The tricky part about this is selecting dotfiles without selecting "." and "..".
ls .??* is sometimes used for this, since it forces the filenames to be three or more characters long. There is a risk though, of overlooking a dotfile with a short name, such as ".x"
ls -d .* prevents directories from being expanded, but it doesn't filter out "." or ".."
The find command could be used, as in find . -maxdepth 1 -type f -name '.*'. The maxdepth limits it to the current directory and not subdirectories. The -type f limits it to files, eliminating directories such as "." and "..". Then again, maybe you want to rename the .ssh directory to ssh.
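Combining that find selection with a small loop might look like this (a sketch; like the -type f above, it only handles regular files, not directories such as .ssh):
find . -maxdepth 1 -type f -name '.*' -exec sh -c 'for f; do mv "$f" "${f#./.}"; done' sh {} +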
Here's an alternative that selects dotfiles while avoiding "." and "..".
ls -A | sed -n 's/^\.\(.*\)/mv ".\1" "\1"/p' | bash
The -A lists all files and dotfiles, yet eliminates "." and ".." for us. Then the sed command selects only those lines with "." as the first character, and prints out appropriate "mv" commands, complete with quotes in case you have a bizarre dotfilename with a space in it.
Run it without the "| bash" first, to see what mv commands are generated.
I don't know what type of system you're on, but it looks Unix-like, so I would do
ls -1d .[!.]* | cut -b2- | xargs -I{} mv .{} {}
This lists everything that starts with a single dot (excluding . and ..), strips the leading dot, then pipes that list to a move command.
In Linux, there is usually a rename utility available (a Perl script, if I am not mistaken):
rename 's/^.//' .[!.]*
(the .[!.]* glob skips . and ..). It is also available on a Mac; you can install it by following the tips here.
Even simpler:
for x in .[!.]*; do mv "$x" "${x/./}"; done