"Exception handling" in shell scripts - shell

I know that you can use the short-circuiting boolean operators in shell scripts to get a sort of exception handling, like so:
my_first_command && my_second_command && my_third_command
But this quickly becomes unreadable and unmaintainable as the number of chained commands grows. If I'm writing a script (or a shell function), is there a good way to make it halt at the first nonzero return code, without writing everything on one big line?
(I use zsh, so if there are answers that only work in zsh that's fine by me.)

The -e option does this:
ERR_EXIT (-e, ksh: -e)
If a command has a non-zero exit status, execute the ZERR trap,
if set, and exit. This is disabled while running initialization
scripts.
You should be able to put this on the shebang line, like:
#!/usr/bin/zsh -e
Most shells have this option, and it's usually called -e.
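As a minimal sketch of the behavior (plain sh syntax, so the same idea applies in zsh and bash; the echo and false commands are stand-ins for real ones):

```shell
# Run three commands under set -e in a child shell: it halts at `false`,
# the first command with a nonzero status, without any && chaining.
out=$(sh -c 'set -e
echo "step one"
false            # nonzero exit status: the shell stops here
echo "step two"  # never reached
' || true)       # `|| true` only keeps this demo itself from failing

echo "$out"      # prints just: step one
```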


Abort issue when error/exception occurs?

I have a bash script which calls multiple scripts (bash & python) from some directories.
I would like it to abort when any of the scripts throws an error/exception.
#!/bin/bash
/usr/bin/test.sh /usr/1/sample.sh /usr/2/temp.py
exit 0
Any suggestion on how to achieve this?
FYI : I'm a beginner in bash scripting.
You can put set -e at the top of the script:
-e errexit   If not interactive, exit immediately if any untested
             command fails. The exit status of a command is
             considered to be explicitly tested if the command is
             used to control an if, elif, while, or until; or if
             the command is the left hand operand of an “&&” or
             “||” operator.
This will only work if your commands exit with a nonzero exit code on failure. Well-behaved programs exit with 0 only on success; if yours don't, you probably want to fix that.
I'm not entirely sure what you expect this to do:
/usr/bin/test.sh /usr/1/sample.sh /usr/2/temp.py
Since this will run one command (/usr/bin/test.sh) with two arguments, you probably want to put them on separate lines.
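As a hedged sketch of the corrected structure: here stand-in scripts are created in a temp directory (in the real setup they would be /usr/bin/test.sh, /usr/1/sample.sh and /usr/2/temp.py), then run one per line under set -e:

```shell
#!/bin/bash
# Stand-ins for the real scripts (the actual paths come from the question).
dir=$(mktemp -d)
printf '#!/bin/sh\necho first ok\n' > "$dir/first.sh"
printf '#!/bin/sh\nexit 3\n'        > "$dir/second.sh"   # always fails
chmod +x "$dir"/*.sh

# One command per line under set -e, so each exit status is checked in turn.
out=$(bash -c "set -e; '$dir/first.sh'; '$dir/second.sh'; echo unreachable" || true)

echo "$out"     # prints only: first ok
rm -rf "$dir"
```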

what will bash do with an unset variable

I am confused about how bash treats an unset variable used in a shell command, like below:
rm -rf /$TO_BE_REMOVED
What will happen if I have not defined a variable TO_BE_REMOVED?
If you do that, the command executed will effectively try to remove /, which is very, very bad. It will probably mostly fail (unless you're running as root), but still, it will be very bad.
You can avoid many of these sorts of bugs in Bash automatically with one simple command:
set -eu
If you put that at the top of your Bash script, the interpreter will stop and return an error code if your script ever invokes a command that fails without being checked (that's the -e part), or if it expands an undefined variable (the -u part). This makes Bash considerably safer.
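A sketch of -u catching exactly the bug above: the child bash aborts when it tries to expand the unset TO_BE_REMOVED, before anything like rm could run:

```shell
#!/bin/bash
# Under set -eu, expanding an unset variable is a fatal error.
out=$(bash -c 'set -eu
unset TO_BE_REMOVED
echo "about to build the path"
echo "/$TO_BE_REMOVED"   # aborts here: "TO_BE_REMOVED: unbound variable"
echo "never reached"
' 2>/dev/null || true)

echo "$out"   # prints only: about to build the path
```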

Bash: What is the effect of "#!/bin/sh" in a bash script with curl

I put together a complex, long one-line command to log in to a site successfully. If I execute it in the console it works, but if I copy and paste the same line into a bash script it does not work.
I tried a lot of things, but accidentally discovered that if I do NOT use the line
#!/bin/sh
it works! Why does this happen on my Mac OS X Lion? What does this line do in a bash script?
A bash script that is run via /bin/sh runs in sh compatibility mode, which means that many bash-specific features (herestrings, process substitution, etc.) will not work.
sh-4.2$ cat < <(echo 123)
sh: syntax error near unexpected token `<'
If you want to be able to use full bash syntax, use #!/bin/bash as your shebang line.
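The same construct succeeds once bash runs it; a quick check (echo 123 is just a placeholder command):

```shell
#!/bin/bash
# Process substitution is a bash feature; /bin/sh rejects the syntax.
out=$(cat < <(echo 123))
echo "$out"   # prints: 123
```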
"#!/bin/sh" is a common idiom to insure that the correct interpreter is used to run the script. Here, "sh" is the "Bourne Shell". A good, standard "least common denominator" for shell scripts.
In your case, however, "#!/bin/sh" seems to be the wrong interpreter.
Here's a bit more info:
http://www.unix.com/answers-frequently-asked-questions/7077-what-does-usr-bin-ksh-mean.html
Originally, we only had one shell on unix. When you asked to run a
command, the shell would attempt to invoke one of the exec() system
calls on it. If the command was an executable, the exec would succeed
and the command would run. If the exec() failed, the shell would not
give up; instead it would try to interpret the command file as if it
were a shell script.
Then unix got more shells and the situation became confused. Most
folks would write scripts in one shell and type commands in another.
And each shell had differing rules for feeding scripts to an
interpreter.
This is when the "#! /" trick was invented. The idea was to let the
kernel's exec() system calls succeed with shell scripts. When the
kernel tries to exec() a file, it looks at the first 4 bytes which
represent an integer called a magic number. This tells the kernel if
it should try to run the file or not. So "#! /" was added to magic
numbers that the kernel knows and it was extended to actually be able
to run shell scripts by itself. But some people could not type "#! /",
they kept leaving the space out. So the kernel was extended a bit again
to allow "#!/" to work as a special 3 byte magic number.
So "#! /usr/bin/ksh" and "#!/usr/bin/ksh" now mean the same thing. I always
use the former, since at least some kernels might still exist that don't
understand the latter.
And note that the first line is a signal to the kernel, and not to the
shell. What happens now is that when shells try to run scripts via
exec() they just succeed. And we never stumble on their various
fallback schemes.
The very first line of the script can be used to select which script interpreter to use.
With
#!/bin/bash
You are telling the shell to invoke /bin/bash interpreter to execute your script.
Make sure that there are no spaces or empty lines before #!/bin/bash, or it will not work.

Code that is a no-op in bash but stops with an error message in csh?

I am working with someone on a data analysis project and we frequently document the steps we perform by putting them into small shell scripts. The problem is that I use bash and the other person uses csh. The other person has a habit of using source to run these scripts instead of executing them directly (a habit that probably dates back to times when spawning an extra shell was an extravagant waste of resources, so it's probably too entrenched to change), and I want my scripts (which are, of course, bash scripts) to simply stop with a message reminding the user to run them with bash instead of csh when this person sources them from within csh. At the same time, I would like them to continue to function as bash scripts.
So is there some code I can put at the beginning of my scripts that is a no-op in bash but will signal an error and cancel the execution of the rest of the file (but not kill the shell itself) when sourced from cshell?
This is harder than I thought due to csh's ancient flavor of variable substitution. However, $?BASH_VERSION expands to 0 (meaning "not set") in csh, and in bash to 0BASH_VERSION (the last command's exit status followed by the literal text BASH_VERSION). So,
test "$?BASH_VERSION" = 0 && exit 1
should do the trick.
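The bash half of the trick can be checked directly: in bash, $?BASH_VERSION expands to the last exit status with the literal text BASH_VERSION appended, so the comparison with 0 fails and execution continues:

```shell
#!/bin/bash
:                            # make the last exit status 0
probe="$?BASH_VERSION"       # bash expands $? and appends the literal text
echo "$probe"                # prints: 0BASH_VERSION
if test "$probe" = 0; then   # false in bash, so the exit is not taken
  exit 1
fi
echo "still running under bash"
```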
This is not easy, as you cannot assign variables the same way or run if statements the same way.
You can use csh's meagre string parsing skills against itself. The following executes cleanly in all shells, including ksh, bash, zsh, csh and sh, on all platforms that I tested it on (Linux, AIX, HP-UX, Solaris):
test '\\' = "\\" && echo "CSH detected"
The idea used here is that backslashes are not special in double-quoted strings in csh, whereas all other shells treat them differently.
However, that is only half an answer as what do you want to do if you don't want to simply exit the script if the 'wrong' shell is detected? Well, you may want to have a Bournish sh part to your script and a csh part.
If you can keep the csh code limited to code that does not use single quotes, the following will work everywhere:
test '\\' = "\\" && goto csh
# Just skip the block containing the csh code. Again we use csh's meagre string parsing capabilities against it.
false || csh_code_block='
csh:
... csh code goes here ...
exit 0
'
... sh code goes here ...
If you are not worried about HP-UX's csh (which seems a little better than others in parsing) you could replace the multi-line single quoted command with a 'HERE' document (<<CSH_BLOCK ... CSH_BLOCK). You can't just reverse the order either, as the 'goto' statement doesn't like all syntax that it skips over.

Automatically run a program if another program returns an error

I have a program (grabface) that takes a picture of the face of a person using a webcam, and I also have a shell script wrapper that works like this:
On the command line the user gives the script the name of a program to run and its command line arguments. The script then executes the given command and checks the exit code. If there was an error the program grabface is run to capture the surprised face of the user.
This all works quite well. But the problem is that the wrapper script must always be used. Is there some way to automatically run this script whenever a command is entered in the shell? Or is there some other way to automatically run a given program after any program is run?
Preferably the solution should work in bash, but any other shell is also OK. I realize this could be accomplished by simply making some adjustments in the source code of the shell, but that's kind of a last measure.
Something that is probably even trickier would be to extend this to work with programs launched outside of the shell as well (e.g. from a desktop environment) but this may be too difficult.
Edit: Awesome! Since bash was so easy, what about other shells?
In Bash, you can use the trap command with an argument of ERR to execute a specified command whenever an executed command returns non-zero.
$ trap "echo 'there was an error'" ERR
$ touch ./can_touch
$ touch ./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa
touch: cannot touch `./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa': No such file or directory
there was an error
trap affects the whole session, so you'll need to make sure that trap is called at the beginning of the session by putting it in .bashrc or .profile.
Other special trap signals that Bash understands are: DEBUG, RETURN and EXIT as well as all the system signals (which can be listed using trap -l).
The Korn shell has a similar facility, while the Z shell has a more extensive trap capability.
By the way, in some cases for the command line, it can be useful in Bash to set the PROMPT_COMMAND variable to a script or command that will be run each time the prompt is issued.
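A sketch of that approach: the hook function reads the $? of the command the user just ran, and grabface (the asker's program) would replace the echo:

```shell
#!/bin/bash
# Run a hook before each interactive prompt; it sees the exit status
# of whatever command the user ran last.
on_prompt() {
  local status=$?
  if [ "$status" -ne 0 ]; then
    echo "last command failed with status $status"   # replace with: grabface
  fi
}
PROMPT_COMMAND=on_prompt    # bash runs this before printing each prompt

# Non-interactive demonstration of the hook itself:
false                       # a failing command...
on_prompt                   # ...so the hook reports status 1
```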
Just substitute your command where I have false.
false || echo "It failed"
If you want to do the opposite, i.e. act when the command succeeds, just put your command instead of true:
true && echo "It succeeded"
In the .profile of the user add:
trap grabface ERR
