I tried
rm -r #*
and
rm #*
But it just outputs this message:
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
What's the problem?
# is a shell comment. You'll need to quote it, like so:
rm '#'*
Note that the hash is in quotes and the glob is outside the quotes.
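For example (hypothetical test files, just to show the effect of the quoting):
$ touch '#foo' '#bar'    # two hypothetical test files
$ rm #*                  # fails: everything from the # on is a comment, so rm gets no arguments and prints the usage message
$ rm '#'*                # removes #foo and #bar; the quoted # is literal and the * still globs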
rm \#*
should do the trick for you. Remember that # has a special meaning in the shell: it starts a comment.
To quote:
Lines beginning with a # (with the exception of #!) are comments and will not be executed. Comments may also occur following the end of a command.
and
escape [backslash]. A quoting mechanism for single characters. \X escapes the character X. This has the effect of "quoting" X, equivalent to 'X'. The \ may be used to quote " and ', so they are expressed literally.
If you had files named 'file1, 'file2 and 'file3, you would delete them with:
rm \'file*   # this deletes all the files starting with 'file
Reference: TLDP note on special characters
This command will list all the files starting with # and feed them to rm:
ls . | grep "^#.*" | xargs rm -rf
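If you would rather not parse the output of ls, a find-based sketch (an alternative, not from the answer above) does the same job:
# -maxdepth 1 restricts the match to the current directory, like the ls above
find . -maxdepth 1 -name '#*' -exec rm -rf {} +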
Related
There is a file #A.py# that appears to be a copy of the original A.py in the same directory - when I try rm, I get the following:
rm: missing operand
What does the ## notation mean? How did this file appear?
Add quotes around it:
rm "#A.py#"
Without quotes, everything from the # on is interpreted as the beginning of a comment.
You could also escape the #:
$ touch \#rmme
$ ls|grep \#
#rmme
$ rm \#rmme
As mentioned in other answers, using quotes should work:
rm "#A.py#"
Also this:
rm \#A.py\#
To remove all:
rm \#*
And, just in case, check the -- option:
The rm command supports the -- (two consecutive dashes) parameter as a delimiter that indicates the end of the options. This is useful when the name of a file or directory begins with a dash or hyphen. For example, the following removes a file named -filename:
rm -- -filename
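A quick illustration with a hypothetical file name:
$ touch -- -filename
$ rm -filename       # fails: rm treats the leading dash as the start of options
$ rm -- -filename    # works: -- marks the end of the options, so -filename is taken as a file name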
In one shell script these lines are present. I know we can do it using sed, but please let me know how; I can use any suitable command.
BEFORE:
export HOME=${INSTALLROOT}/Subsystem
cd ${INSTALLROOT}
I want to add a few lines after the line that matches export HOME.
AFTER:
export HOME=${INSTALLROOT}/Subsystem
cd ${HOME}/tmp # added
rm -rf packed* # added
cd ${INSTALLROOT}
You can use this sed command:
sed '/export HOME=/a cd ${HOME}/tmp # added\nrm -rf packed* # added' yourfile
man sed:
a \
text Append text, which has each embedded newline preceded by a backslash.
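To change the file on disk rather than print to stdout, you would typically add -i; a sketch assuming GNU sed, with yourfile as the placeholder name from the answer above:
# -i.bak edits yourfile in place and keeps a yourfile.bak backup (GNU sed syntax)
sed -i.bak '/export HOME=/a cd ${HOME}/tmp # added\nrm -rf packed* # added' yourfile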
I want to do something like this in a bash script. I'm using bash 4.1.10.
# rm -rf /some/path/{folder1,folder2,folder3}
Works nicely (and as expected) from the shell itself. It deletes the 3 desired folders leaving all others untouched.
When I put it into a script, something unwanted happens. For example, my script:
#!/bin/bash
set -x
VAR="folder1,folder2,folder3"
rm -rf /some/path/{$VAR}
When I execute this script, the folders are not deleted.
I think this is due to the fact that some unwanted quoting is occurring. Output from the script using #!/bin/bash -x:
rm -rf '/some/path/{folder1,folder2,folder3}'
which of course cannot succeed due to the ' marks.
How can I get this working within my script?
According to the man page:
The order of expansions is: brace expansion, tilde expansion, parameter, variable and arithmetic expansion and command substitution (done in a left-to-right fashion), word splitting, and pathname expansion.
So to get around this, add another level of expansion:
eval "rm -rf /some/path/{$VAR}"
Since you're writing a script, there's no reason to write hard-to-maintain code using eval tricks:
VAR="f1,f2,f3"
IFS=,
set -- $VAR
for f; do
rm -r "/path/to/$f"
done
or
VAR=( f1 f2 f3 )
for f in "${VAR[@]}"; do
rm -r "/path/to/$f"
done
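A closely related sketch (assuming bash): read the comma-separated value into an array with read -a, so the IFS change applies only to that one command:
VAR="f1,f2,f3"
IFS=, read -r -a dirs <<< "$VAR"
for d in "${dirs[@]}"; do
    rm -r "/path/to/$d"
done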
No, it's due to the fact that brace expansion happens before parameter expansion. Find another way of doing this, such as with xargs.
printf '%s' "$VAR" | xargs -d , -I {} rm -rf /some/path/{}
If your code can be rearranged, you can use echo and command substitution in bash.
Something like this:
#!/bin/bash
set -x
VAR=`echo /some/path/{folder1,folder2,folder3}`
rm -rf $VAR
Make sure the braceexpand option is enabled (set -B; it is on by default) and loop over the expansion directly:
#!/bin/bash
set -B
for i in /some/path/{folder1,folder2,folder3}
do
rm -rf "$i"
done
The problem is not that some unwanted quoting happens in script mode, but that you put the folder names into a variable, and the variable's content is only expanded after the brace expansion is done.
If you really want to do it like this you have to use eval:
eval "rm -rf /some/path/{$VAR}"
#!/bin/bash
set -x
VAR="folder1,folder2,folder3"
eval "rm -rf /some/path/{$VAR}"
Edit: The remainder is just for info. I try to be informative, but not wordy :-)
Recent bashes have the globstar option. This might come in handy in the future:
shopt -s globstar
rm -rfvi some/**/folder?
Another trick you can use (instead of the dangerous eval) is just plain echo inside a subshell. This works, for instance:
paths=`echo /some/path/{folder1,folder2,folder3}`
echo rm -rf $paths
outputs:
rm -rf /some/path/folder1 /some/path/folder2 /some/path/folder3
as desired. (Remove the "echo" in the second line, to make it actually do the rm.)
The crucial point is that bash does brace expansion before parameter expansion, so you never want to put a comma-separated list (surrounded by braces or not) into a variable -- if you do, then you'll have to resort to eval. You can however put a list of space-separated strings into a variable, by having the brace expansion happen in a subshell before assignment.
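A tiny demonstration of that ordering (hypothetical values):
VAR="a,b,c"
echo {$VAR}          # prints {a,b,c}: brace expansion already ran before $VAR was substituted
eval "echo {$VAR}"   # prints a b c: eval re-parses the line, so the braces expand this time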
replace {$VAR} by ${VAR} :-)
mv $1 $(echo $1 | sed s:\ :_:g)
It's a simple script that renames the file passed as an argument, exchanging spaces for underscores. However, when I try to rename the file "a e i" to "a_e_i", for example, it returns the following error:
./spc2und a\ e\ i
mv: target `a_e_i' is not a directory
You need double-quotes around the variables and command substitution to prevent spaces in the filename from being mistaken for argument separators. Also, you don't need sed, since bash can do character replacement by itself:
mv "$1" "${1// /_}"
Edit: a few more things occurred to me. First, you really should use mv -i in case there's already a file with underscores ("a_e_i" or whatever). Second, this only works on simple filenames -- if you give it a file path with spaces in an enclosing directory, (e.g. "foo bar/baz quux/a e i"), it tries to rename it into a directory with the spaces converted, which doesn't exist, leading to comedy. So here's a proposed better version:
mv -i "$1" "$(dirname "$1")/$(basename "${1// /_}")"
BTW, the other answers leave off the double-quotes on the filename after replacing spaces with underscores -- this isn't entirely safe, as there are other funny characters that might still cause trouble. Rule 1: when in doubt, wrap it in double-quotes for safety. Rule 2: be in doubt.
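A quick usage sketch, assuming the script is saved as spc2und as in the question:
$ touch "a e i"
$ ./spc2und "a e i"
$ ls a_e_i
a_e_i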
try this - pure bash:
mv "$1" ${1// /_}
Your $1 expands to a e i, which is then used as the first three arguments to mv, so your call becomes
mv a e i a_e_i
This is the reason for the error message you get.
To fix this, all you have to do is quote the $1:
mv "$1" $(echo "$1" | sed s:\ :_:g)
Just writing a simple shell script and a little confused.
Here is my script:
% for f in $FILES; do echo "Processing $f file.."; done
The command:
ls -a | grep bash
produces:
% ls -a | grep bash
.bash_from_cshrc
.bash_history
.bash_profile
.bashrc
When
FILES=".bash*"
I get the same results (different formatting) as ls -a. However when
FILES="*bash*"
I get this output:
Processing *bash* file..
This is not the expected output. Am I not allowed to have a wildcard at the beginning of the file name? Is the . at the beginning of the file name "special" somehow?
Setting
FILES="bash*"
Also does not work.
The default globbing in bash does not include filenames starting with a . (aka hidden files).
You can change that with
shopt -s dotglob
$ ls -a
. .. .a .b .c d e f
$ ls *
d e f
$ shopt -s dotglob
$ ls *
.a .b .c d e f
$
To disable it again, run shopt -u dotglob.
If you want both hidden and non-hidden files, set dotglob (bash):
#!/bin/bash
shopt -s dotglob
for file in *
do
echo "$file"
done
FILES=".bash*" works because the hidden files name begin with a .
FILES="bash*" doesn't work because the hidden files name begin with a . not a b
FILES="*bash*" doesn't work because the * wildcard at the beginning of a string omits hidden files.
Yes, the . at the front is special, and normally won't be matched by a * wildcard, as documented in the bash man page (and common to most Unix shells):
When a pattern is used for pathname expansion, the character “.”
at the start of a name or immediately following a slash must
be matched explicitly, unless the shell option dotglob is
set. When matching a pathname, the slash character must
always be matched explicitly. In other cases, the “.”
character is not treated specially.
If you want to include hidden files, you can specify two wildcards; one for the hidden files, and another for the others.
for f in .[!.]* *; do
echo "Processing $f file.."
done
The wildcard .* would expand to all the dot files, but that includes the current and parent directories (. and ..), which you normally want to exclude; so .[!.]* matches all files whose first character is a dot but whose second character isn't another dot.
If you have other files with two leading dots, you need to specify a third wildcard to cover those but exclude the parent directory! Try ..?* which requires there to be at least one character after the second dot.
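Putting the pieces together, a loop covering normal files, dot files, and double-dot files might look like this (a sketch; the -e test skips any pattern that matched nothing):
for f in * .[!.]* ..?*; do
    [ -e "$f" ] || continue   # unmatched globs stay literal unless nullglob is set
    echo "Processing $f file.."
done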
for file in directory/{.[!.]*,*}; do echo "$file"; done
This should echo both hidden and normal files. Thanks to tripleee for the .[!.]* tip.
The curly brackets permit an 'or' in the pattern matching: {pattern1,pattern2}.