How to use the .* wildcard in bash but exclude the parent directory (..)?

There are often times that I want to execute a command on all files (including hidden files) in a directory. When I try using
chmod g+w * .*
it changes the permissions on all the files I want (in the directory) and all the files in the parent directory (that I want left alone).
Is there a wildcard that does the right thing or do I need to start using find?

You will need two glob patterns to cover all the potential “dot files”: .[!.]* and ..?* ([!.] is the portable bracket negation; bash also accepts [^.]).
The first matches all directory entries with two or more characters where the first character is a dot and the second character is not a dot. The second picks up entries with three or more characters that start with .. (this excludes .. because it only has two characters and starts with a ., but includes (unlikely) entries like ..foo).
chmod g+w .[!.]* ..?*
This should work in most shells and is suitable for scripts.
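A quick sanity check you can run in a scratch directory (the file names here are made up for the demo):
$ touch .a .ab ..c regular
$ echo .[!.]* ..?*
.a .ab ..c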
For regular interactive use, the patterns may be too difficult to remember. For those cases, your shell might have a more convenient way to skip . and ...
zsh always excludes . and .. from patterns like .*.
With bash, you have to use the GLOBIGNORE shell variable.
# bash
GLOBIGNORE=.:..
echo .*
You might consider setting GLOBIGNORE in one of your bash customization files (e.g. .bash_profile/.bash_login or .bashrc).
Beware, however, of becoming accustomed to this customization if you often use other environments.
If you run a command like chmod g+w .* in an environment that is missing your customization, then you will unexpectedly end up including . and .. in your command.
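If you only need the behavior occasionally, you can also scope the variable to a subshell instead of customizing your startup files (a sketch; chmod is just the example command from the question):
$ (GLOBIGNORE=.:..; chmod g+w .*)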
Additionally, you can configure the shells to include “dot files” in patterns that do not start with an explicit dot (e.g. *).
# zsh
setopt glob_dots
# bash
shopt -s dotglob
# show all files, even “dot files”
echo *

Usually I would just use * .[a-zA-Z0-9]* since my file names tend to follow certain rules, but that won't catch all possible cases.
You can use:
chmod g+w $(ls -1a | grep -v '^\.\.$')
which will basically list all the files and directories, strip out the parent directory, then process the rest. (Note that the dots must be escaped in the regex; an unescaped '^..$' would strip out every two-character name.) Beware of spaces in file names, though: they'll be treated as separate files.
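If spaces (or newlines) in file names are a real concern, a null-delimited find pipeline sidesteps word splitting entirely (a sketch; -mindepth, -print0, and xargs -0 are GNU/BSD extensions, not POSIX):
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 chmod g+w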
Of course, if you just want to do files, you can use:
find . -maxdepth 1 -type f -exec chmod g+w {} ';'
or, yet another solution, which should do all files and directories except the .. one:
for i in * .*; do if [[ "$i" != ".." ]]; then chmod g+w "$i"; fi; done
but now you're getting into territory where scripts or aliases may be necessary.

What I did was
tar --directory my_directory --file my_directory.tar --create `ls -A my_directory/`
Works just fine: the ls -A my_directory expands to everything in the directory except . and ... No weird globs, and all on a single line.
PS: Perhaps someone will tell me why this is not a good idea. :p
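(One reason: the unquoted command substitution is word-split, so file names containing spaces break it. A variant that avoids ls by letting tar read the directory itself, at the cost of ./ prefixes on the member names:)
tar --directory my_directory --file my_directory.tar --create .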

How about:
shopt -s dotglob
chmod g+w ./*

Since you may not want dotglob set for the rest of your bash session, you can set it for a single set of commands by running them in a subshell, like so:
$ (shopt -s dotglob; chmod g+w ./*)

If you are sure that two-character hidden file names will never be used, then the simplest option is just to do:
chmod g+w * .??*
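To see what that shortcut misses (made-up names), note that .??* requires at least three characters:
$ touch .a .ab
$ echo .??*
.ab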

Related

cp -r * except don't copy any .pdf files - copy a directory subtree while excluding files with a given extension

Editor's note: In the original form of the question the aspect of copying an entire subtree was not readily obvious.
How do I copy all the files from one directory subtree to another but omit all files of one type?
Does bash handle regex?
Something like: cp -r !*.pdf /var/www/ .?
EDIT 1
I have a find expression: find /var/www/ -not -iname "*.pdf"
This lists all the files that I want to copy. How do I pipe this to a copy command?
EDIT 2
This works so long as the argument list is not too long:
sudo cp `find /var/www/ -not -iname "*.pdf"` .
EDIT 3
One issue, though: I am losing the directory structure.
Bash can't help here, unfortunately.
Many people use either tar or rsync for this type of task because each of them is capable of recursively copying files, and each provides an --exclude argument for excluding certain filename patterns. tar is more likely to be installed on a given machine, so I'll show you that.
Assuming you are currently in the destination directory, the shell command:
tar -cC /var/www . | tar -x
will copy all files from /var/www into the current directory recursively.
To filter out the PDF files, use:
tar -cC /var/www --exclude '*.pdf' . | tar -x
Multiple --exclude arguments can be given, so:
tar -cC /var/www --exclude '*.pdf' --exclude '*.txt' . | tar -x
would exclude .txt files as well.
K. A. Buhr's helpful answer is a concise solution that reflects the intent well and is easily extensible if multiple extensions should be excluded.
Trying to do it with POSIX utilities and POSIX-compliant options alone requires a slightly different approach:
cp -pR /var/www/. . && find . -name '*.pdf' -exec rm {} +
In other words: copy the whole subtree first, then remove all *.pdf files from the destination subtree.
Note:
-p preserves the original files' attributes in terms of file timestamps, ownership, and permission bits (tar appears to do that by default); without -p, the copies will be owned by the current user and receive new timestamps (though the permission bits are preserved).
Using cp has one advantage over tar: you get more control over how symlinks among the source files are handled, via the -H, -L, and -P options - see the POSIX spec. for cp.
tar invariably seems to copy symlinks as-is.
-R supersedes the legacy -r option for cp, as the latter's behavior with non-regular files is ill-defined - see the RATIONALE section in the POSIX spec. for cp
Neither -iname for case-insensitive matching nor -delete are part of the POSIX spec. for find, but both GNU find and BSD/macOS find support them.
Note how source path /var/www/. ends in /. to ensure that its contents are copied to the destination path (as opposed to putting everything into a www subfolder).
With BSD cp, /var/www/ (trailing /) would work too, but GNU cp treats /var/www and /var/www/ the same.
As for your questions and solution attempts:
Does bash handle regex?
In the context of filename expansion (globbing), Bash only understands patterns, not regexes (Bash does have the =~ regex-matching operator for string matching inside [[ ... ]] conditionals, however).
As a nonstandard extension, Bash implements the extglob shell option, which adds additional constructs to the pattern-matching notation to allow for more sophisticated matching, such as !(...) for negating matchings, which is what you're looking for.
If you combine that with another nonstandard shell option, globstar (**, Bash v4+), you can construct a single pattern that matches all items except a given sub-pattern across an entire subtree:
/var/www/**/!(*.pdf)
does find all non-PDF filesystem items in the subtree of /var/www/.
However, combining that pattern with cp won't work as intended: with -R, any subdirs. are still copied in full; without -R, subdirs. are ignored altogether.
Caveats:
By default, patterns (globs) ignore hidden items unless explicitly matched (* will only match non-hidden items). To include them, set shell option dotglob first.
Matching is case-sensitive by default; turn on shell option nocaseglob to make it case-insensitive.
find /var/www/ -not -iname "*.pdf" in essence yields the same as the extended glob above, except with case-insensitive matching, hidden items invariably included, and the output paths (generally) not in the same order.
However, copying the output paths to their intended destination is the nontrivial part: you'd have to construct analogous subdirs. in the destination dir. on the fly, and you'd have to do so for each input path separately, which will also be quite slow.
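That said, GNU cp has a (non-POSIX) --parents option that recreates the source path components under the destination directory, which makes a find-driven copy workable; a sketch, with a hypothetical destination of /path/to/dest:
cd /var/www && find . -type f ! -iname '*.pdf' -exec cp --parents {} /path/to/dest \;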
Your own attempt, sudo cp `find /var/www/ -not -iname "*.pdf"` ., falls short in several respects:
As you've discovered yourself, this copies all matching items into a single destination directory.
The output of the command substitution, `...`, is subject to shell expansions, namely word-splitting and filename expansion, which may break the command, notably with filenames with embedded spaces.
Note: As written, all destination items will be owned by the root user.
Edit: As per @mklement0's comment below, these solutions are not suitable for directory-tree recursion; they will only work on one directory, as per the OP's original form of the question.
@rorschach: Yes, you can do this.
Using cp:
Set your Bash shell's extglob option and type:
shopt -s extglob #You can set this in your shell startup to enable it by default
cp /var/www/!(*.pdf) .
If you wish to turn off (unset) this (or any other) shell option, use:
shopt -u extglob #or whatever shell option you wish to unset
Using find
If you prefer using find, you can use xargs to execute the operation you would like Bash to perform:
find /var/www/ -maxdepth 1 ! -iname "*.pdf" | xargs -I{} cp {} .

Bash script: variable output to rm with single quotes

I'm trying to pass a parameter to rm in a bash script to clean my system automatically. For example, I want to remove everything except the *.txt files, so I wrote the following code.
#!/bin/bash
remove_Target="!*.txt"
rm $remove_Target
However, the output always says:
rm: cannot remove ‘!*.txt’: No such file or directory
It is obvious that the bash script adds single quotes for me when passing the variable to rm. How can I remove the single quotes?
Using Bash
Suppose that we have a directory with three files
$ ls
a.py b.py c.doc
To delete all except *.doc:
$ shopt -s extglob
$ rm !(*.doc)
$ ls
c.doc
!(*.doc) is an extended shell glob, or extglob, that matches all files except those ending in .doc.
The extglob feature requires a reasonably modern bash.
(Incidentally, nothing is adding quotes: the quotes in the error message are just rm quoting the literal argument it received. Without extglob, the ! in !*.txt is literal, and since no file matched that pattern, bash passed the string to rm unchanged.)
Using find
Alternatively:
find . -maxdepth 1 -type f ! -name '*.doc' -delete

How to delete a file named "~" in Mac?

I run the following command on my Macbook:
mkdir ~/tmp/~
Now, I want to delete this ~/tmp/~. How do I do it?
It is not actually a link; I'm worried that if I run rm -rf with that ~ in the path, all my home files will be dropped.
Interesting one. This task can be done this way.
# This form is safe and functional.
rm -rf ~/tmp/~
But if you try to do this, your home data is going to be lost:
# THIS FORM IS DANGEROUS; DO NOT USE IT
cd ~/tmp
rm -rf ~
I agree with Ivan X's comment: the ~ in this context does not cause any particular problems, and rmdir ~/tmp/~ removes the directory without problem (see #1 below).
However, what you describe is a classic Unix/BSD problem. In order to remove a file or directory whose name contains special characters, you have to make sure your shell does not interpolate them. There are two methods to achieve this.
1. You can use a full path, e.g. rmdir ~/tmp/~ (see the example just below), or...
2. In the case of a file or directory starting with -, you can use -- to tell the shell that there are no options following, a la rmdir -- -foo-
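For example (tilde expansion only applies at the start of a word, so the trailing ~ in the path stays literal):
$ mkdir -p ~/tmp/~
$ rmdir ~/tmp/~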
Regarding the dangerous form quoted above (cd ~/tmp followed by rm -rf ~):
Quite so, and quite simply because (see Tilde Expansion):
If a word begins with an unquoted tilde character (‘~’), all of the characters up to the first unquoted slash (or all characters, if there is no unquoted slash) are considered a tilde-prefix. If none of the characters in the tilde-prefix are quoted, the characters in the tilde-prefix following the tilde are treated as a possible login name. If this login name is the null string, the tilde is replaced with the value of the HOME shell variable.
Thus, rm -rf ~ is expanded as rm -rf $HOME.
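You can preview the expansion harmlessly with echo (the home path shown is hypothetical):
$ echo rm -rf ~
rm -rf /Users/you
$ echo rm -rf '~'
rm -rf ~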
Deletion of any file with a funny name is better done with a file browser; it is simply easier and safer.
If you don't want to use Finder, use any other text-based file browser, e.g. Emacs dired mode.
[...]
In case you insist on doing this using a shell...
ls -i # will show you the inode numbers of the files you have
Once you have the right inode, say 123456789, you can use find to delete it.
find . -maxdepth 1 -inum 123456789 -delete
(No -type f restriction here, since the item being removed in this question is a directory.)

Is this correct way to copy symlinked directory in bash?

I have directory a that is symlinked somewhere. I want to copy its contents to directory b. Doesn't the following simple solution break in some corner cases (e.g. hidden files, exotic characters in filenames, etc.)?
mkdir b
cp -rt b a/*
Simply adding a trailing '/' will follow the symlink and copy the contents rather than the link itself.
cp -a symlink/ dest
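A quick way to see the difference (names made up; behavior as described above):
$ mkdir real; touch real/file; ln -s real link
$ cp -a link copy1    # copy1 is itself a symlink to real
$ cp -a link/ copy2   # copy2 is a real directory containing file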
Bash globbing does not choke on special characters in filenames. This is the reason to use globbing, rather than parsing the output of a command such as ls. The following would also be fine.
shopt -s dotglob
mkdir -p dest
cp -a symlink/* dest/

How to remove files starting with double hyphen?

I have some files on my Unix machine that start with
--
e.g. --testings.html
If I try to remove it I get the following error:
cb0$ rm --testings.html
rm: illegal option -- -
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
I tried
rm "--testings.html" || rm '--testings.html'
but nothing works.
How can I remove such files on terminal?
rm -- --testings.html
The -- option tells rm to treat all further arguments as file names, not as options, even if they start with -.
This isn't particular to the rm command. The getopt function implements it, and many (all?) UNIX-style commands treat it the same way: -- terminates option processing, and anything after it is a regular argument.
http://www.gnu.org/software/hello/manual/libc/Using-Getopt.html#Using-Getopt
rm -- --somefile
While that works, it relies on rm using getopt to parse its options. Some applications do their own option parsing and will choke on that too (because they don't necessarily implement the "-- means end of options" convention).
Because of that, the solution you should drive through your skull is this one:
rm ./--somefile
It will always work, because this way your arguments never begin with a -.
Moreover, if you're trying to write really solid shell scripts, you should technically be putting ./ in front of all your filename parameter expansions, to keep funky filename input from breaking your scripts (or from being abused/exploited to make them do things they're not supposed to: for instance, rm will delete files but skip over directories, while rm -rf * will delete everything; a filename of -rf, planted by somebody running touch ~victim/-rf, could in this way change a script's behaviour with really bad consequences).
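A sketch of that habit in practice (the loop is illustrative, not from the thread):
for f in ./*; do
  rm -- "$f"   # the ./ prefix and -- each guard against option-like names
done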
Either rm -- --testings.html or rm ./--testings.html.
rm -- --testings.html
Yet another way to do it is to use find ... -name "--*" -delete
touch -- --file
find -x . -mindepth 1 -maxdepth 1 -name "--*" -delete
For a more generalised approach for deleting files with impossible characters in the filename, one option is to use the inode of the file.
It can be obtained via ls -i.
e.g.
$ ls -lai | grep -i test
452998712 -rw-r--r-- 1 dim dim 6 2009-05-22 21:50 --testings.html
And to erase it, with the help of find:
$ find ./ -inum 452998712 -exec rm \{\} \;
This process can be beneficial when dealing with lots of files with filename peculiarities, as it can be easily scripted.
rm ./--testings.html
or
rm -- --testings.html
