Difference between alias rm and /bin/rm - shell

What is the difference between using /bin/rm abc.txt and the cases where rm has been aliased and the command is then run as rm abc.txt?

/bin/rm always refers to the rm binary at that path on your system. If you just write rm abc.txt, the shell resolves the name rm in this order:
rm has been aliased (with alias rm=<substituted-command>) to mean something different. Usually the aliased command is similar in function, but it does not have to be.
Your shell has a function called rm, or implements rm as a builtin (no external command is run).
If none of the above applies, the shell searches the directories in PATH for an external rm command and runs it (typically /bin/rm).
You can use alias to list all defined aliases. Also check out the command -V shell builtin, which tells you whether a given name is an alias, shell function, builtin, special builtin, or external command.
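For example, with the interactive rm alias shown below defined (the exact output wording varies between bash versions):
$ command -V rm
rm is aliased to `/bin/rm -i'
$ command -V cd
cd is a shell builtin
$ command -V ls
ls is /bin/ls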

A typical reason to create an alias for rm is to add the -i or -I option: in "interactive" mode rm asks for confirmation before deleting anything (-i prompts for every file, -I only once when removing more than three files or recursing into directories).
$ alias rm="/bin/rm -i"
$ rm myfile
rm: remove regular file ‘myfile’? _
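If you need the unaliased behaviour for a single invocation while such an alias is active, any of the following bypasses it:
$ \rm myfile          # quoting any part of the name suppresses alias expansion
$ command rm myfile   # command skips aliases and shell functions
$ /bin/rm myfile      # a full path never goes through alias lookup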

Related

Cygwin BASH script file - unwanted single quotes added automatically to constant string - how to prevent

I have this BASH script which I run in a Cygwin terminal instance via the command
bash -f myfile.sh
All I need it to do is delete all *.txt files in the Cygwin /home/user directory.
#!/bin/bash
set -x
rm -rf /home/user/*.txt
This does not work; running the file (I only added "set -x" to debug when it started failing) shows
+ rm -rf '/home/user/*.txt'
The problem is literally that I specify in my Cygwin BASH script
rm -rf /home/user/*.txt
without any quotes, but when run in the Cygwin terminal from the BASH script, it resolves to
rm -rf '/home/user/*.txt'
i.e. single quotes are added by Cygwin BASH.
I've scoured other posts whose answers indicate the quotes are only there because "set -x" formats its output to show the argument as a single string, but even without "set -x" in the script file the rm command still fails, i.e. the rm argument apparently IS still quoted (or some other mangling is applied?), and therefore the rm line in the script does not work.
I managed to confirm that by manually running in the Cygwin terminal
rm -rf '/home/user/*.txt'
which does nothing (it just returns, leaving the .txt files intact in /home/user/), and then running
rm -rf /home/user/*.txt
manually, which does work perfectly, deleting all .txt files in the /home/user/ directory under the Cygwin terminal.
How can I get the above command to remove all .txt files in /home/user/ from inside a Cygwin terminal BASH script file?
Thanks!
As intimated above, the answer is to not use -f when calling bash: bash's -f option is the same as set -f, i.e. it enables noglob and turns off pathname expansion, so the * is passed to rm literally. Just run
bash myfile.sh
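The effect is easy to reproduce interactively (hypothetical file names, for illustration only):
$ touch /tmp/a.txt /tmp/b.txt
$ echo /tmp/*.txt
/tmp/a.txt /tmp/b.txt
$ set -f                # what bash -f does: turn off globbing
$ echo /tmp/*.txt
/tmp/*.txt
$ set +f                # turn globbing back on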

Delete Everything in a Directory Except One File - Using SSH

In BASH, the following command removes everything in a directory except one file:
rm -rf !(filename.txt)
However, when run over SSH the same command changes nothing in the directory and returns the following error: -jailshell: !: event not found
So I escaped the ! with \ (the parentheses also require escaping), but it still doesn't work:
rm -rf \!\(filename.txt\)
It returns no error and nothing in the directory changed.
Is it even possible to run this command in SSH? I found a workaround but if this is possible it would expedite things considerably.
I connect to the ssh server using the alias below:
alias devssh="ssh -p 2222 -i ~/.ssh/private_key user@host"
!(filename.txt) is an extglob, a bash feature that might have to be enabled. Make sure that your ssh server runs bash and that extglob is enabled:
ssh user@host "bash -O extglob -c 'rm -rf !(filename.txt)'"
Or by using your alias:
devssh "bash -O extglob -c 'rm -rf !(filename.txt)'"
If you are sure that the remote system uses bash by default, you can also drop the bash -c part (though your error message indicates that the ssh server runs jailshell). One caveat: shopt -s extglob must take effect before the line containing !(...) is parsed, so it has to go on its own line rather than being joined with a semicolon:
ssh user@host $'shopt -s extglob\nrm -rf !(filename.txt)'
devssh $'shopt -s extglob\nrm -rf !(filename.txt)'
I wouldn't do it that way. I wouldn't rely on bash being on the remote, and I wouldn't rely on any bashisms. I would use:
$ ssh user@host 'rm $(ls | grep -v "^filename\.txt$")'
If I wanted to protect against the possibility that the directory might be empty, I'd assign the output of $(...) to a variable, and test it for emptiness. If I was concerned the command might get too long, I'd write the names to a file, and send the grep output to rm with xargs.
If it got too elaborate, I'd copy a script to the remote and execute it.
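A sketch of that xargs variant (assuming GNU xargs for the -r flag, and file names without spaces or newlines):
$ ssh user@host 'ls | grep -v "^filename\.txt$" | xargs -r rm --'
The -r flag makes xargs skip running rm altogether when grep produces no output, which also covers the empty-directory case.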

When creating symbolic links on Ubuntu I sometimes get an odd result

I'm trying to create a bunch of symbolic links for all the files in a directory. It seems like, when I type this command in the shell manually, it works just fine, but when I run it in a shell script, or even use the up arrow to re-run it, I get the following problem.
$ sudo ln -s /path/to/my/files/* /the/target/directory/
This should create a bunch of symlinks in /the/target/directory/, and if I type the command in manually, it indeed does. However, when I run the command from a shell script, or use the up arrow to re-run it, I get a single symbolic link in /the/target/directory/ whose name is literally '*', and I then have to run
$ sudo rm *
To delete it, which just seems insane to me.
When you run that command in the script, are there any files in /path/to/my/files? If not, then by default the wildcard has nothing to expand to and is left unexpanded, so you end up with a literal "*". You might want to check out shopt -s nullglob and run the ln command like this:
shopt -s nullglob
sudo ln -s -t /the/target/directory /path/to/my/files/*
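The difference is easy to see interactively (assuming /path/to/my/files is empty):
$ shopt -u nullglob
$ echo /path/to/my/files/*
/path/to/my/files/*
$ shopt -s nullglob
$ echo /path/to/my/files/*

(the second echo prints an empty line because the pattern expanded to nothing)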
Maybe the script uses sh and you're using bash when executing the command.
You may try something like this:
for file in /path/to/my/files/*; do
    ln -s "${file}" "/the/target/directory/"
done

command alias with appended commands

I'm trying to write a bash alias which would take a command:
$ ptex example.tex
and run:
$ pdflatex example.tex && rm !(*.tex|*.pdf|*.bib)
What I don't understand is how to get the argument into the right place and then append the remove command.
Or if there is an option for pdflatex which would not generate the additional files, that would be even better, but I've looked and never found one.
Thanks in advance!
You can't do that with an alias; aliases aren't that flexible. Functions, however, are perfectly suited:
ptex() {
    pdflatex "$1" && rm !(*.tex|*.pdf|*.bib)
}
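Note that !(*.tex|*.pdf|*.bib) is an extglob pattern, so extglob must be enabled before bash parses the function definition, e.g. in ~/.bashrc:
shopt -s extglob
ptex() {
    pdflatex "$1" && rm !(*.tex|*.pdf|*.bib)
}
After sourcing that, ptex example.tex runs pdflatex and then removes everything in the current directory except .tex, .pdf and .bib files.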

Including a chunk of code in a shell script

I have a number of shell scripts that all look like this:
#!/bin/bash
cd ~/Dropbox/cms_sites/examplesite/media
sass -C --style compressed --update css:css
cd ~/Dropbox/cms_sites/examplesite
rm -f ./cache/*.html
rm -fr ./media/.sass-cache/
rm -fr ./admin/media/.sass-cache/
rsync -auvzhL . username@host:/home/username/remote_folder
(I know the use of cd seems weird, but they have evolved!)
Now, all these scripts have a few differences, in that they have different usernames, hosts, local folder and remote folder names, and I want an inexperienced user to be able to run them without arguments (so he can drag and drop them into a terminal without issue).
What I'd like to do is something like:
#!/bin/bash
cd ~/Dropbox/cms_sites/examplesite/media
sass -C --style compressed --update css:css
cd ~/Dropbox/cms_sites/examplesite
include ~/scripts/common.sh
rsync -auvzhL . username@host:/home/username/remote_folder
then have a file in common.sh that looks like:
rm -f ./cache/*.html
rm -fr ./media/.sass-cache/
rm -fr ./admin/media/.sass-cache/
so that I can easily change sections of the code in lots of scripts at once.
Is this possible, or is there a better way to do this without using arguments and having one script?
Use the source command (or its portable spelling, a plain dot). It's bash's version of 'include': the sourced file's commands run in the current shell, so things like cd or variable assignments in it take effect in the calling script.
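Applied to the script above, the include line becomes:
source ~/scripts/common.sh   # or equivalently:  . ~/scripts/common.sh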
No need for "include" if the script is executable:
~/scripts/common.sh
If the script is not executable or does not have an appropriate shebang line then you'll need to specify the interpreter:
bash ~/scripts/common.sh
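The distinction matters only if common.sh changes shell state: an executed script runs in a child process, while a sourced one runs in the current shell.
bash ~/scripts/common.sh   # child process: its cd/exports do not affect the caller
. ~/scripts/common.sh      # current shell: its cd/exports persist
For the rm-only common.sh shown above, either approach works, because the child process inherits the caller's working directory.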
