Setting environment variables with background processes running in parallel - bash

I have a file that takes about a minute to source, so within that file I created functions and then ran them in parallel using &. The variables exported from the child processes are not available in the current environment. Is there a solution or trick to solve this issue? Thanks.
Sample:
#!/bin/bash
function getCNAME() {
curl ...... grep
export CNAME
}
function getBNAME() {
curl ...... grep
export BNAME
}
getCNAME &
getBNAME &
And then I have a main file that calls the source command on the code above and tries to use the variables BNAME and CNAME, but it is unable to do so. If I remove the &, it does have access to those variables, but sourcing the file then takes a long time.

You can't use export in your subshell and expect the parent shell to have access to the resulting variable. Consider using process substitutions instead:
#!/bin/bash
# note that if you're sourcing this, as you should be, the shebang will be ignored.
# ...hopefully it's just there for your editor's syntax highlighting.
rc=0
orig_pipefail_setting=$(shopt -po pipefail) # save the current pipefail setting so it can be restored later; pipefail is a set -o option, not a shopt
set -o pipefail # make sure if either curl _or_ grep fails the entire pipeline does too
# start both processes in the background, with their stdout on two different FDs
exec 4< <(curl ... | grep ... && printf '\0')
exec 5< <(curl ... | grep ... && printf '\0')
# read from those FDs into variables in the current shell
IFS= read -r -d '' BNAME <&4 || { (( rc |= $? )); echo "Error reading BNAME" >&2; }
IFS= read -r -d '' CNAME <&5 || { (( rc |= $? )); echo "Error reading CNAME" >&2; }
exec 4<&- 5<&- # close those file descriptors now that we're done with them
export BNAME CNAME # note that you probably don't actually need to export these
eval "$orig_pipefail_setting" # turn pipefail back off, if it wasn't on when we started
return "$rc" # ...return with an exit status reflecting whether we had any errors
That way file descriptors 4 and 5 will each be attached to a shell pipeline running curl and feeding its output to grep; both of them are started in the background before we try to read from either, so they're both running at the same time.
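For completeness, a main file that consumes this might look like the following sketch (the filename get_names.sh is a placeholder, not from the question):
#!/bin/bash
# hypothetical main file; the snippet above is assumed to be saved as get_names.sh
if source ./get_names.sh; then
    echo "BNAME=$BNAME CNAME=$CNAME"
else
    echo "fetching BNAME/CNAME failed" >&2
    exit 1
fi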

Are you sure the last two lines shouldn't be:
getCNAME
getBNAME
Edit - OP has fixed this, it used to read:
CNAME
BNAME
If you are sourcing a script (. /my/script), it is not a child process, and its variables will be available in the current shell. You don't even need export.
If you are executing a script normally, it is a child process, and you can't set variables in the parent shell.
The only method I'm aware of for transferring data to the parent shell is via a file.
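A minimal sketch of that file-based approach (the temp-file layout and the placeholder curl/grep commands are mine, not from the question):
#!/bin/bash
# sketch: run both fetches in the background, each writing its result to a temp file
tmpdir=$(mktemp -d) || exit 1

getCNAME() { curl -s https://example.com/c | grep -o 'pattern' > "$tmpdir/cname"; }
getBNAME() { curl -s https://example.com/b | grep -o 'pattern' > "$tmpdir/bname"; }

getCNAME &   # both run in parallel, as in the question
getBNAME &
wait         # block until both background jobs have finished

# read the results back in the *current* shell, where they persist
CNAME=$(<"$tmpdir/cname")
BNAME=$(<"$tmpdir/bname")
export CNAME BNAME
rm -rf "$tmpdir"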
The variables should be available.
Check for bugs in your script:
Make sure you haven't used local for the variables in the functions.
Do echo "$CNAME" at the bottom of the sourced script, to test the functions are actually working at all.
EDIT
I did a little more investigation. Here is the problem: & puts the command/function in a subshell. That's why the variable is not available. In a sourced script, without &, it would be.
From man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
These are referred to as asynchronous commands.

Related

`grep` cause bash script stop [duplicate]

I'm studying the content of a preinst file, which the package manager executes before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
if [ -d /usr/share/MyApplicationName ]; then
echo "MyApplicationName is just installed"
return 1
fi
rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: It checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an “error”, doesn’t it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set :
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
set -e stops the execution of a script if a command or pipeline has an error - which is the opposite of the default shell behaviour, which is to ignore errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline consisting of a single simple command, a list or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like below which is printing the simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is similar option -E/errtrace which would trap on ERR instead, e.g.:
trap onerr ERR
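To make the ERR trap concrete, a small self-contained sketch (the onerr name and message are made up):
#!/usr/bin/env bash
set -E   # let functions and subshells inherit the ERR trap
onerr() { echo "command failed near line $1" >&2; }
trap 'onerr $LINENO' ERR

false                 # non-zero status: the trap fires...
echo "still running"  # ...but without set -e, execution continues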
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
: nothing
else
other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
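As an illustration of that temporary-file technique, a sketch in POSIX sh (the find command and paths are invented; mktemp is near-universal, though not strictly POSIX):
#!/bin/sh
# capture the producer's output in a file so its own exit status stays visible
tmp=$(mktemp) || exit 1

if find /etc -name '*.conf' > "$tmp"; then
    # find itself succeeded; now check separately whether it produced any output
    grep -q . "$tmp" || echo "$0: no matching files" >&2
else
    echo "$0: find itself failed" >&2
fi
rm -f "$tmp"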
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and will return an error code, and so your bash prompt will close (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
set -e
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error, whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python, immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want on the command line: you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425; this may help you.
Script 1: without setting -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error on decho, but the program continues to the next line.
Script 2: With setting -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# the shell runs decho "hi", which fails; with set -e the script exits here and never reaches echo "hello"
It stops execution of a script if a command fails.
A notable exception is an if statement. eg:
set -e
false
echo never executed
set -e
if false; then
echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of echo "hi" being reported (0), and hi is printed.
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the error status from the b.txt subshell being reported instead, and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.

Definitively determine if currently running shell is bash or zsh

How can I definitively determine if the currently running shell is bash or zsh?
(being able to disambiguate between additional shells is a bonus, but only bash & zsh are 100% necessary)
I've seen a few ways to supposedly do this, but they all have problems (see below).
The best I can think of is to run some syntax that will work on one and not the other, and to then check the errors / outputs to see which shell is running. If this is the best solution, what command would be best for this test?
The simplest solution would be if every shell included a read-only parameter of the same name that identified the shell. If this exists, however, I haven't heard of it.
Non-definitive ways to determine the currently running shell:
# default shell, not current shell
basename "${SHELL}"
# current script rather than current shell
basename "${0}"
# BASH_VERSINFO could be defined in any shell, including zsh
if [ -z "${BASH_VERSINFO+x}" ]; then
echo 'zsh'
else
echo 'bash'
fi
# executable could have been renamed; ps isn't a builtin
shell_name="$(ps -o comm= -p $$)"
echo "${shell_name##*[[:cntrl:][:punct:][:space:]]}"
# scripts can be sourced / run by any shell regardless of shebang
# shebang parsing
On $ prompt, run:
echo $0
but you can't use $0 within a script, as $0 will become the script's name itself.
To find the current shell (say, bash) from within a script whose shebang / magic number is #!/bin/bash:
#!/bin/bash
echo "Script is: $0 running using $$ PID"
echo "Current shell used within the script is: `readlink /proc/$$/exe`"
script_shell="$(readlink /proc/$$/exe | sed "s/.*\///")"
echo -e "\nSHELL is = ${script_shell}\n"
if [[ "${script_shell}" == "bash" ]]
then
echo -e "\nI'm BASH\n"
fi
Outputs:
Script is: /tmp/2.sh running using 9808 PID
Current shell used within the script is: /usr/bin/bash
SHELL is = bash
I'm BASH
This will also work if the shebang is #!/bin/zsh.
Then, you'll get the output for SHELL:
SHELL is = zsh
While there is no 100% foolproof way to achieve it, it might help to do a
echo $BASH_VERSION
echo $ZSH_VERSION
Both are shell variables (not environment variables), which are set by the respective shell. In the respective other shell, they are empty.
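Putting the two checks together, a minimal sketch:
# run these lines in the shell being tested (e.g. source them); as shell
# variables, BASH_VERSION and ZSH_VERSION are invisible to child processes
if [ -n "$BASH_VERSION" ]; then
    echo "bash $BASH_VERSION"
elif [ -n "$ZSH_VERSION" ]; then
    echo "zsh $ZSH_VERSION"
else
    echo "neither bash nor zsh"
fi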
Of course, if someone on purpose creates a variable of this name, or exports such a variable and then creates a subshell of the different kind, i.e.
# We are in bash here
export BASH_VERSION
zsh # the subshell will see BASH_VERSION even though it is zsh
this approach will fail; but I think if someone is really doing such a thing, he wants to sabotage your code on purpose.
This should work for most Linux systems:
cat /proc/$$/comm
Quick and easy.
Working from comments by @ruakh & @oguzismail, I think I have a solution.
\shopt -u lastpipe 2> /dev/null
shell_name='bash'; : | shell_name='zsh'
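For context (not stated in the original answer): zsh runs the last component of a pipeline in the current shell, while bash runs it in a subshell unless the lastpipe option is set, which the first line defensively unsets. So the second assignment survives only under zsh:
\shopt -u lastpipe 2> /dev/null          # bash: force the pipeline tail into a subshell; harmless "command not found" in zsh
shell_name='bash'; : | shell_name='zsh'  # the assignment after the pipe persists only in zsh
echo "running under: $shell_name"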

bash shell: Avoid alias to interpret $!

I have an alias created:
alias my_rsync="rsync -av ${PATH_EXCLUDE_DEV} ${PATH_SYS_DEV}/ ${PATH_SYS_SANDBOX}/ && wait $! && cd - &>/dev/null"
When I load this alias and inspect it with the command 'type my_rsync', I see that $! is gone because it has already been expanded.
Normally I escape with a backslash, and that works well. For example:
alias my_rsync="mysql ${DB_DATA_SYS_SANDBOX} -e 'SHOW TABLES' | grep -v 'Tables_in_${DB_NAME_SANDBOX}' | while read a; do mysql ${DB_DATA_SYS_SANDBOX} -e \"DROP TABLE \$a\";done"
Can you guys give me a hint? Thanks.
Use a function, not an alias, and you avoid this altogether.
my_rsync() {
# BTW, this is horrible quoting; run your code through http://shellcheck.net/.
# Also, all-caps variable names are bad form except for specific reserved classes.
rsync -av ${PATH_EXCLUDE_DEV} ${PATH_SYS_DEV}/ ${PATH_SYS_SANDBOX}/ &>/dev/null
cd -
}
...in this formulation, no expansions will ever happen until the function is run.
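A small demonstration of the difference (the names are hypothetical; try it at an interactive prompt, since scripts don't expand aliases by default):
sleep 5 &                        # start a background job so $! has a value
alias show_pid_alias="echo $!"   # $! expands right now, at definition time
show_pid_func() { echo "$!"; }   # $! expands each time the function runs
type show_pid_alias              # shows the literal PID baked into the alias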
As for wait -- it only makes sense at all when you're running things in the background. The usage you have here doesn't start anything in the background, so the wait calls have no purpose.
On the other hand, the following shows some wait calls that do have purpose:
rsync_args=( --exclude='/dev/*' --exclude='/sys/*' )
hosts=( foo.example.com bar.example.com )
my_rsync() {
# declare local variables
declare -a pids=( ) # array to hold PIDs
declare host pid # scalar variables to hold items being iterated over
for host in "${hosts[#]}"; do
rsync -av "${rsync_args[#]}" /sandbox "$host":/path & pids+=( "$!" )
done
for pid in "${pids[#]}"; do
wait "$pid"
done
}
This runs multiple rsyncs (one for each host) at the same time in the background, stores their PIDs in an array, and then iterates through that array after they're all started, waiting for each one to complete.
Notably, it's the single & operator that causes the rsyncs to be run in the background. If they were separated from the following command with &&, ; or a newline instead, they would be run one at a time, and the value of $! would never be changed.
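A toy example of the distinction:
sleep 2 &           # '&' backgrounds the command; the shell moves on at once, and $! is set
echo "background PID: $!"
sleep 2 && echo ok  # '&&' runs sequentially; echo waits until sleep succeeds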
If you don't want a variable to be interpreted, escape it or use single quotes.
Example:
$ alias x="false; echo $?"
$ x
0
$ alias x='false; echo $?'
$ x
1

Determining whether shell script was executed "sourcing" it

Is it possible for a shell script to test whether it was executed through source? That is, for example,
$ source myscript.sh
$ ./myscript.sh
Can myscript.sh distinguish from these different shell environments?
I think what Sam wants to do may not be possible.
To what degree a half-baked workaround is possible depends on...
...the default shell of users, and
...which alternative shells they are allowed to use.
If I understand Sam's requirement correctly, he wants to have a 'script',
myscript, that is...
...not directly executable via invoking it by its name myscript
(i.e. that has chmod a-x);
...not indirectly executable for users by invoking sh myscript or
invoking bash myscript
...only running its contained functions and commands if invoked by
sourcing it: . myscript
The first things to consider are these
Invoking a script directly by its name (myscript) requires a first line in
the script like #!/bin/bash or similar. This will directly determine which
installed instance of the bash executable (or symlink) will be invoked to run
the script's content. This will be a new shell process. It requires the
scriptfile itself to have the executable flag set.
Running a script by invoking a shell binary with the script's (path+)name as
an argument (sh myscript), is the same as '1.' -- except that the
executable flag does not need to be set, and said first line with the
hashbang isn't required either. The only thing needed is that the invoking
user needs read access to the scriptfile.
Invoking a script by sourcing its filename (. myscript) is very much the
same as '1.' -- except that it isn't a new shell that is invoked. All the
script's commands are executed in the current shell, using its environment
(and also "polluting" its environment with any (new) variables it may set or
change. (Usually this is a very dangerous thing to do: but here it could be
used to execute exit $RETURNVALUE under certain conditions....)
For '1.':
Easy to achieve: chmod a-x myscript will prevent myscript from being
directly executable. But this will not fulfill requirements '2.' and '3.'.
For '2.' and '3.':
Much harder to achieve. Invocations by sh myscript require reading
privileges for the file. So an obvious way out would seem to chmod a-r
myscript. However, this will also disallow '3.': you will not be able to
source the script either.
So what about writing the script in a way that uses a Bashism? A Bashism is a
specific way to do something which other shells do not understand: using
specific variables, commands etc. This could be used inside the script to
discover this condition and "do something" about it (like "display warning.txt",
"mailto admin" etc.). But there is no way in hell that this will prevent sh or
bash or any other shell from reading and trying to execute all the following
commands/lines written into the script unless you kill the shell by invoking
exit.
Examples: in Bash, the environment seen by the script knows of $BASH,
$BASH_ARGV, $BASH_COMMAND, $BASH_SUBSHELL, $BASH_EXECUTION_STRING... . If
invoked by sh (also if sourced inside a sh), the executing shell will see
all these $BASH_* as empty environment variables. Again, this could be used
inside the script to discover this condition and "do something"... but not
prevent the following commands from being invoked!
I'm now assuming that...
...the script is using #!/bin/bash as its first line,
...users have set Bash as their shell and are invoking commands in the
following table from Bash and it is their login shell,
...sh is available and it is a symlink to bash or dash.
This will mean the following invocations are possible, with the listed values
for environment variables
vars+invok's | ./scriptname | sh scriptname | bash scriptname | . scriptname
---------------+--------------+---------------+-----------------+-------------
$0 | ./scriptname | ./scriptname | ./scriptname | -bash
$SHLVL | 2 | 1 | 2 | 1
$SHELLOPTS | braceexpand: | (empty) | braceexpand:.. | braceexpand:
$BASH | /bin/bash | (empty) | /bin/bash | /bin/bash
$BASH_ARGV | (empty) | (empty) | (empty) | scriptname
$BASH_SUBSHELL | 0 | (empty) | 0 | 0
$SHELL | /bin/bash | /bin/bash | /bin/bash | /bin/bash
$OPTARG | (empty) | (empty) | (empty) | (empty)
Now you could put a logic into your text script:
If $0 is not equal to -bash, then do an exit $SOMERETURNVALUE.
In case the script was called via sh myscript or bash myscript, then it will
exit the calling shell. In case it was run in the current shell, it will
continue to run. (Warning: in case the script has any other exit statements,
your current shell will be 'killed'...)
So putting something like the following near the beginning of your non-executable
myscript.txt may do something close to your goal:
echo BASH=$BASH
test x${BASH} = x/bin/bash && echo "$? : FINE.... You're using 'bash ...'"
test x${BASH} = x/bin/bash || echo "$? : RATS !!! -- You're not using BASH and I will kick you out!"
test x${BASH} = x/bin/bash || exit 42
test x"${0}" = x"-bash" && echo "$? : FINE.... You've sourced me, and I'm your login shell."
test x"${0}" = x"-bash" || echo "$? : RATS !!! -- You've not sourced me (or I'm not your bash login shell) and I will kick you out!"
test x"${0}" = x"-bash" || exit 33
This may or may not be what the asker wanted but, in a similar situation, I wanted a script to indicate that it is meant to be sourced and not directly run.
To achieve this effect my script reads:
#!/bin/echo Should be run as: source
export SOMEPATH="/some/path/on/my/system"
echo "Your environment has been set up"
So when I run it either as a command or sourced I get:
$ ./myscript.sh
Should be run as: source ./myscript.sh
$ source ./myscript.sh
Your environment has been set up
You can of course fool the script by running it as sh ./myscript.sh, but at least it gives the correct expected behaviour in 2 out of 3 cases.
This is what I was looking for:
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"
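Expanded into a runnable sketch (main is a placeholder for the script's real entry point):
#!/usr/bin/env bash
main() { echo "running main with $# argument(s)"; }

# BASH_SOURCE[0] is always this file; $0 is the invoked name (the shell itself when sourced)
if [[ ${BASH_SOURCE[0]} = "$0" ]]; then
    main "$@"                     # executed directly
else
    echo "sourced: skipping main" # sourced into another shell
fi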
I cannot comment yet (Stack Exchange policies), so I am adding my own answer:
This one may work regardless of whether we do:
bash scriptname
scriptname
./scriptname.
on both bash and mksh.
if [ "${0##/*}" == scriptname ] # if the current name is our script
then
echo run
else
echo sourced
fi
If you have a non-altering file path for regular users, then:
if [ "$(/bin/readlink -f "$0")" = "$KNOWN_PATH_OF_THIS_FILE" ]; then
# the file was executed
else
# the file was sourced
fi
(it can also easily be loosened to only check for the filename or whatever).
But your users need to have read permission to be able to source the file, so absolutely nothing can stop them from doing what they want with the file. But it might help them out to not use it in the wrong way.
This solution is not dependent on Bashisms.
Yes it is possible. In general you can do the following:
#! /bin/bash
sourced () {
echo Sourced
}
executed () {
echo Executed
}
if [[ ${0##*/} == -* ]]; then
sourced
else
executed "$@"
fi
Giving the following output:
$ ./myscript
Executed
$ . ./myscript
Sourced
Based on Kurt Pfeifle’s answer, this works for me
if [ $SHLVL = 1 ]
then
echo 'script was sourced'
fi
Example
Since all of our machines have history, I did this:
check_script_call=$(history |tail -1|grep myscript.sh )
if [ -z "$check_script_call" ];then
echo "This file should be called as a source."
echo "Please, try again this way:"
echo "$ source /path/to/myscript.sh"
exit 1
fi
Every time you run a script (without source), your shell creates a new environment without history.
If you care about performance, you can try this:
if ! history |tail -1|grep set_vars ;then
echo -e "This file should be called as a source.\n"
echo "Please, try again this way:"
echo -e "$ source /path/to/set_vars\n"
exit 1
fi
PS: I think Kurt's answer is much more complete but I think this could help.
In the first case, $0 will still be the name of the shell (e.g. "bash" or "-bash"), since sourcing does not change it. In the second case, it will be "./myscript.sh". But, in general, there's no way to tell source was used.
If you tell us what you're trying to do, instead of how you want to do it, a better answer might be forthcoming.

Automatic exit from Bash shell script on error [duplicate]

This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 3 years ago.
I've been writing some shell script and I would find it useful if there was the ability to halt the execution of said shell script if any of the commands failed. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, then it would certainly not want to do a ./configure afterwards if it fails.
Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e -u -x and -o pipefail options like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, -x prints commands before execution, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash)
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when some command that is not part of some test (like in a if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
Chapter 33. Options
Here is how to do it:
#!/bin/sh
abort()
{
echo >&2 '
***************
*** ABORTED ***
***************
'
echo "An error occurred. Exiting..." >&2
exit 1
}
trap 'abort' 0
set -e
# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0
echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
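For instance, a sketch (the function names are my own):
#!/bin/bash
build()       { ./configure --some-flags && make; }
install_pkg() { make install; }

cd some_dir && build && install_pkg || {
    echo "build failed" >&2
    exit 1
}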
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit error traps. The bash shell provides an option for that using set:
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Adam Rosenfield's answer recommendation to use set -e is right in certain cases but it has its own potential pitfalls. See GreyCat's BashFAQ - 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits
if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !.
which means, set -e does not work under the following simple cases (detailed explanations can be found on the wiki)
Using the arithmetic operator let or $((..)) (bash 4.1 onwards) to increment a variable value, as in
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
If the offending command is not the last command of an && or || list. For example, errexit wouldn't fire below when it's expected to
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When used incorrectly in an if statement: the exit code of the if statement is the exit code of the last executed command. In the example below, the last executed command was echo, which wouldn't trigger an exit, even though the test -d failed
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
When used with command substitution, failures inside the substitution are ignored, unless the inherit_errexit shell option is set (available from bash 4.4)
#!/usr/bin/env bash
set -e
foo=$(expr 1-1; true)
echo survived
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here the call to f will not cause an exit, as local has swept away the error code that was set previously. (The g variant avoids this by separating the declaration from the assignment, so the failing command substitution's status is preserved.)
set -e
f() { local var=$(somecommand that fails); }
g() { local var; var=$(somecommand that fails); }
When used in a pipeline, and the offending command is not the last command in it. For example, the command below would still go through. One option is to enable pipefail, which makes the pipeline return the exit status of the rightmost command that failed:
set -e
somecommand that fails | cat -
echo survived
The ideal recommendation is to not use set -e, and to implement your own version of error checking instead. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.
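One possible shape for such hand-rolled checking (the run helper is my own illustration, not from the linked answer):
#!/usr/bin/env bash
# run a command; on failure, report context and stop explicitly
run() {
    "$@" || { echo "$0: failed (status $?): $*" >&2; exit 1; }
}

run cd some_dir    # works: functions run in the current shell, so the cd persists
run make
run make install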

Resources