I have a script like this:
#!/usr/bin/bash
COMMAND='some_command >> some_log_file 2>&1'
until $COMMAND; do
echo "some_command crashed with exit code $?. Respawning.." >&2
sleep 1
done
(I got the until ... done bit from https://stackoverflow.com/a/697064/88821 , FWIW, but changed it a bit.)
While some_command is being run, the problem is that the output is not going to some_log_file. Instead, it goes to the stdout of the shell from which I ran this wrapper script. How can I get the output to go to some_log_file while keeping the entire command (including redirects) in the variable COMMAND? The idea is to use this script as a more general wrapper script that can be used with other programs. (COMMAND is actually being passed as an argument into the script).
You're passing >> some_log_file 2>&1 as arguments to some_command, rather than honoring them as redirections. This happens because parsing shell syntax (such as redirections) happens before parameter expansions are performed (the point in processing where $foo is replaced with the contents of the relevant variable). That's actually desirable behavior -- it would be impossible to write code in shell handling untrusted data otherwise.
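You can see this directly by swapping in a command that merely prints its arguments (args_demo is a hypothetical stand-in for some_command):

args_demo() { printf 'arg: %s\n' "$@"; }
COMMAND='args_demo >> some_log_file 2>&1'
$COMMAND
# arg: >>
# arg: some_log_file
# arg: 2>&1

The redirection tokens arrive as ordinary arguments because they were produced by expansion, after redirection parsing had already happened.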
Don't store code in strings. You can include it literally:
until some_command >> some_log_file 2>&1; do
echo "some_command crashed with exit code $?. Respawning.." >&2
sleep 1
done
...or you can store it in a function:
mycode() { some_command >> some_log_file 2>&1; }
until mycode; do
echo "some_command crashed with exit code $?. Respawning.." >&2
sleep 1
done
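If the goal is a general-purpose wrapper, the function can also take the command and its arguments as parameters. A minimal sketch (run_logged and the log file name are illustrative, not from the original script):

run_logged() {
    # "$@" keeps the command and its arguments as separate words;
    # the redirections here are real shell syntax, not data in a string.
    until "$@" >> some_log_file 2>&1; do
        echo "$1 crashed with exit code $?. Respawning.." >&2
        sleep 1
    done
}
run_logged some_command arg1 arg2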
One way is
#!/usr/bin/bash
COMMAND='some_command >> some_log_file 2>&1'
until bash -c "$COMMAND"; do
echo "some_command crashed with exit code $?. Respawning.." >&2
sleep 1
done
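Since COMMAND is actually passed as an argument into the script, the wrapper can hand that argument straight to bash -c. A sketch under that assumption (wrapper.sh is a hypothetical name; note the string is executed as code, so it must come from a trusted source):

#!/usr/bin/bash
# $1 is the full command line, including redirections
until bash -c "$1"; do
    echo "command '$1' crashed with exit code $?. Respawning.." >&2
    sleep 1
done

invoked as:

./wrapper.sh 'some_command >> some_log_file 2>&1'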
The problem with your approach is that it does not evaluate the variable as bash code, but just as a series of literal strings. It's like how putting "1+1" in a Java string will never cause 2 to be printed without invoking a Java interpreter:
String cmd="1+1";
System.out.println(cmd); // Prints 1+1
System.out.println(1+1); // Prints 2
Related
I have a script which quite often fails. It is crucial that the correct exit code is passed on.
This code works as expected:
#!/usr/bin/env bash
function my_function {
echo "Important output"
exit 1
}
function cleanup {
MY_EXIT_CODE=$?
echo "MY_EXIT_CODE: ${MY_EXIT_CODE}"
}
trap cleanup EXIT
my_function
MY_EXIT_CODE is 1 as expected, and running echo $? after the script gives me 1 as well (as expected).
However, I need to get the complete output of my_function both into a variable and onto the console. In order to do so, as advised in this answer (which itself seems to be based on this answer), I changed my code to:
#!/usr/bin/env bash
function my_function {
echo "Important output"
exit 1
}
function cleanup {
MY_EXIT_CODE=$?
echo "MY_EXIT_CODE: ${MY_EXIT_CODE}"
}
trap cleanup EXIT
exec 5>&1
FF=$(my_function|tee /dev/fd/5)
And now the exit code is wrong: it is 0 while it should be 1. I know this is somewhat connected to subshell handling, but I couldn't figure out how to solve it.
And now the exit code is wrong: it is 0 while it should be 1
No, your assumption is wrong. Assuming tee succeeded, the exit code should be 0. From the POSIX shell manual:
If the reserved word ! does not precede the pipeline, the exit status shall be the exit status of the last command specified in the pipeline.
The "last command" in a pipeline is the rightmost command. Because in your case tee exits with 0, the exit status of FF=$(.... | tee) is zero.
how to solve it.
That depends on the behavior you want to achieve. Usually, in bash you may just set -o pipefail to always catch errors.
It seems that simply stating
set -o pipefail
at the beginning of the script file did the trick.
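A quick way to confirm the effect at the prompt (false stands in for the failing command):

set -o pipefail
false | tee /dev/null; echo $?   # now prints 1 instead of 0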
Yes, it's the subshell and the pipe. Try it like this:
FF=$(my_function|tee /dev/fd/5; exit $PIPESTATUS)
I want to execute a long running command in Bash, and both capture its exit status, and tee its output.
So I do this:
command | tee out.txt
ST=$?
The problem is that the variable ST captures the exit status of tee and not of command. How can I solve this?
Note that command is long running and redirecting the output to a file to view it later is not a good solution for me.
There is an internal Bash variable called $PIPESTATUS; it’s an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
The first option does not work with zsh due to slightly different array syntax.
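For zsh, the closest equivalent is, as far as I know, the lowercase pipestatus array, which is 1-indexed rather than 0-indexed:

# zsh only; pipestatus is distinct from bash's PIPESTATUS
cat x | sed 's///'
echo ${pipestatus[1]}   # exit status of cat, the first command in the pipe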
Dumb solution: Connecting them through a named pipe (mkfifo). Then the command can be run second.
mkfifo pipe
tee out.txt < pipe &
command > pipe
echo $?
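The fifo is left lying around afterwards, so a slightly more careful sketch (paths are illustrative) creates it in a temporary directory and cleans up:

dir=$(mktemp -d)
mkfifo "$dir/pipe"
tee out.txt < "$dir/pipe" &
command > "$dir/pipe"
status=$?            # command's exit status, unaffected by tee
wait                 # let the backgrounded tee finish writing
rm -rf "$dir"
echo "$status"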
using bash's set -o pipefail is helpful
pipefail: the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
There's an array that gives you the exit status of each command in a pipe.
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo $?
0
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo ${PIPESTATUS[*]}
1 0
$ touch x
$ cat x| sed 's'
sed: 1: "s": substitute pattern can not be delimited by newline or backslash
$ echo ${PIPESTATUS[*]}
0 1
This solution works without using bash specific features or temporary files. Bonus: in the end the exit status is actually an exit status and not some string in a file.
Situation:
someprog | filter
you want the exit status from someprog and the output from filter.
Here is my solution:
((((someprog; echo $? >&3) | filter >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?
See my answer for the same question on unix.stackexchange.com for a detailed explanation and an alternative without subshells and some caveats.
By combining PIPESTATUS[0] and the result of executing the exit command in a subshell, you can directly access the return value of your initial command:
command | tee ; ( exit ${PIPESTATUS[0]} )
Here's an example:
# the "false" shell built-in command returns 1
false | tee ; ( exit ${PIPESTATUS[0]} )
echo "return value: $?"
will give you:
return value: 1
So I wanted to contribute an answer like lesmana's, but I think mine is perhaps a little simpler and a slightly more advantageous pure-Bourne-shell solution:
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; printf $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out - command1 will execute and print its regular output on stdout (file descriptor 1); then, once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.
While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 - we want file descriptor 1 free for a little bit later, when we bring the printf output on file descriptor 3 back down into file descriptor 1, because that's what the command substitution (the backticks) will capture and that's what will get placed into the variable.
The final bit of magic is that first exec 4>&1 we did as a separate command - it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it - but since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it - however once it gets "out" of the command substitution it is effectively still going to the script's overall file descriptor 1.
(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write to a file descriptor inside a command substitution, that is opened in the "external" command that is using the substitution. So this is the simplest portable way to do it.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command 2 so that command2 doesn't catch it, and then command 2's output jumps over and out of the command substitution just as printf lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its merry way being written to the standard output, just as in a normal pipe.
Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out - this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.
Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; printf $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though they may be less efficient.
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
I'm not sure how often things use file descriptor three and four directly - I think most of the time programs use syscalls that return not-used-at-the-moment file descriptors, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently accordingly if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.
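As a concrete check of the technique, with false standing in for command1 and tee for command2 (out.txt is illustrative):

exec 4>&1
exitstatus=`{ { false 3>&-; printf $? 1>&3; } 4>&- | tee out.txt 1>&4; } 3>&1`
exec 4>&-
echo "$exitstatus"   # prints 1, false's status, even though tee exited with 0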
(command | tee out.txt; exit ${PIPESTATUS[0]})
Unlike #cODAR's answer, this returns the original exit code of the first command, not just 0 for success and 127 for failure. But as #Chaoran pointed out, you can just call ${PIPESTATUS[0]}. It is important, however, that everything is put into parentheses.
In Ubuntu and Debian, you can apt-get install moreutils. This contains a utility called mispipe that returns the exit status of the first command in the pipe.
Outside of bash, you can do:
bash -o pipefail -c "command1 | tee output"
This is useful for example in ninja scripts where the shell is expected to be /bin/sh.
The simplest way to do this in plain bash is to use process substitution instead of a pipeline. There are several differences, but they probably don't matter very much for your use case:
When running a pipeline, bash waits until all processes complete.
Sending Ctrl-C to bash makes it kill all the processes of a pipeline, not just the main one.
The pipefail option and the PIPESTATUS variable are irrelevant to process substitution.
Possibly more
With process substitution, bash just starts the process and forgets about it; it's not even visible in jobs.
Mentioned differences aside, consumer < <(producer) and producer | consumer are essentially equivalent.
If you want to flip which one is the "main" process, you just flip the commands and the direction of the substitution to producer > >(consumer). In your case:
command > >(tee out.txt)
Example:
$ { echo "hello world"; false; } > >(tee out.txt)
hello world
$ echo $?
1
$ cat out.txt
hello world
$ echo "hello world" > >(tee out.txt)
hello world
$ echo $?
0
$ cat out.txt
hello world
As I said, there are differences from the pipe expression. The process may never stop running, unless it is sensitive to the pipe closing. In particular, it may keep writing things to your stdout, which may be confusing.
PIPESTATUS[@] must be copied to an array immediately after the pipe command returns. Any subsequent command (including one that merely reads PIPESTATUS[@]) will overwrite the contents, so copy it to another array if you plan on checking the status of all pipe commands. "$?" is the same value as the last element of "${PIPESTATUS[@]}", and reading it seems to destroy "${PIPESTATUS[@]}", but I haven't absolutely verified this.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
This will not work if the pipe is in a sub-shell. For a solution to that problem,
see bash pipestatus in backticked command?
Based on #brian-s-wilson's answer, this bash helper function:
pipestatus() {
local S=("${PIPESTATUS[@]}")
if test -n "$*"
then test "$*" = "${S[*]}"
else ! [[ "${S[*]}" =~ [^0\ ] ]]
fi
}
used thus:
1: get_bad_things must succeed and it should produce no output, but we want to see any output that it does produce:
get_bad_things | grep '^'
pipestatus 0 1 || return
2: the whole pipeline must succeed:
thing | something -q | thingy
pipestatus || return
Pure shell solution:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (cat || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
hello world
And now with the second cat replaced by false:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (false || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
Some command failed:
Second command failed: 1
First command failed: 141
Please note the first cat fails as well, because its stdout gets closed. The order of the failed commands in the log is correct in this example, but don't rely on it.
This method allows for capturing stdout and stderr for the individual commands so you can then dump that as well into a log file if an error occurs, or just delete it if no error (like the output of dd).
It may sometimes be simpler and clearer to use an external command, rather than digging into the details of bash. pipeline, from the minimal process scripting language execline, exits with the return code of the second command*, just like a sh pipeline does, but unlike sh, it allows reversing the direction of the pipe, so that we can capture the return code of the producer process (the below is all on the sh command line, but with execline installed):
$ # using the full execline grammar with the execlineb parser:
$ execlineb -c 'pipeline { echo "hello world" } tee out.txt'
hello world
$ cat out.txt
hello world
$ # for these simple examples, one can forego the parser and just use "" as a separator
$ # traditional order
$ pipeline echo "hello world" "" tee out.txt
hello world
$ # "write" order (second command writes rather than reads)
$ pipeline -w tee out.txt "" echo "hello world"
hello world
$ # pipeline execs into the second command, so that's the RC we get
$ pipeline -w tee out.txt "" false; echo $?
1
$ pipeline -w tee out.txt "" true; echo $?
0
$ # output and exit status
$ pipeline -w tee out.txt "" sh -c "echo 'hello world'; exit 42"; echo "RC: $?"
hello world
RC: 42
$ cat out.txt
hello world
Using pipeline has the same differences to native bash pipelines as the bash process substitution used in answer #43972501.
* Actually pipeline doesn't exit at all unless there is an error. It executes into the second command, so it's the second command that does the returning.
Why not use stderr? Like so:
(
# Our long-running process that exits abnormally
( for i in {1..100} ; do echo ploop ; sleep 0.5 ; done ; exit 5 )
echo $? 1>&2 # We pass the exit status of our long-running process to stderr (fd 2).
) | tee ploop.out
So ploop.out receives the stdout. stderr receives the exit status of the long running process. This has the benefit of being completely POSIX-compatible.
(Well, with the exception of the range expression in the example long-running process, but that's not really relevant.)
Here's what this looks like:
...
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
5
Note that the return code 5 does not get output to the file ploop.out.
The (t)csh manual says:
exit [expr]
The shell exits either with the value of the specified expr (an expression, as described under Expressions) or,
without expr, with the value 0.
But if we run tcsh -c 'exit 5; echo after exit'; echo $?, we get the following output (tested with tcsh on Ubuntu/CentOS and FreeBSD 10.3):
after exit
0
It seems like the exit command is skipped. How can I get the same behavior as in a POSIX shell such as bash?
What are you trying to do in a POSIX shell? If you want to print something and exit normally just do printf "%s\n" 'my message'; exit 0. (See https://unix.stackexchange.com/questions/65803/why-is-printf-better-than-echo for reasons not to use echo)
This is probably a bug in csh that's kept in tcsh for backwards compatibility reasons.
(t)csh is line-oriented in a way that other shells are not, and it generally processes input a line at a time. For instance:
This script:
#!/bin/tcsh -f
exit 5
echo 'after exit'
exits with status 5 when executed.
This script, on the other hand, prints after exit.
#!/bin/tcsh -f
exit 5; echo 'after exit'
Somewhat more surprisingly, a string containing a newline passed to /bin/tcsh is treated like the single-line script, not the multi-line one.
% perl -e 'system("/bin/tcsh -f -c \"exit 5\necho after\"")'
Editing this post; the original is at the bottom, beneath the "Thanks!"
command='a.out arg1 arg2 &'
eval ${command}
if [ $? -ne 0 ]; then
printf "Command \'${command}\' failed\n"
exit 1
fi
wait
Here is a test script that demonstrates the problem, which I oversimplified in the original post. Notice the ampersand in line 2 and the wait command; these more faithfully represent my script. In case it matters, the ampersand is sometimes there and sometimes not: its presence is determined by a user-specified flag that indicates whether or not to background a long arithmetic calculation. And, also in case it matters, I'm actually backgrounding many (twelve) processes, i.e., ${command[0..11]}. I want the script to die if any fail. I use 'wait' to synchronize the successful return of all processes. Happy (sort of) to use another approach, but this almost works.
The ampersand (for backgrounding) seems to cause the problem. When ${command} omits the ampersand, the script runs as expected: the executable a.out is not found, a complaint to that effect is issued, and $? is non-zero, so the host script exits. When ${command} includes the ampersand, the same complaint is issued but $? is zero, so the script continues to execute. I want the script to die immediately when a.out fails, but how do I obtain the non-zero return value from a backgrounded process?
Thanks!
(original post):
I have a bash script that uses commands of the form
eval ${command}
if [ $? -ne 0 ]; then
printf "Command ${command} failed"
exit 1
fi
where ${command} is a string of words, e.g., "a.out arg1 ... argn". The problem is that the return code from eval (i.e., $?) is always zero even when ${command} fails. Removing the "eval" from the above snippet allows the correct return code ($?) to be returned and thus halt the script. I need to keep my command string in a variable (${command}) in order to manipulate it elsewhere, and simply running ${command} without the eval doesn't work well for other reasons. How do I catch the correct return code when using eval?
Thanks!
Charlie
The ampersand (for backgrounding) seems to cause the problem.
That is correct.
The shell cannot know a command's exit code until the command completes. When you put a command in background, the shell does not wait for completion. Hence, it cannot know the (future) return status of the command in background.
This is documented in man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
In other words, the return code after putting a command in background is always 0 because the shell cannot predict the future return code of a command that has not yet completed.
If you want to find the return status of commands in the background, you need to use the wait command.
Examples
The command false always sets a return status of 1:
$ false ; echo status=$?
status=1
Observe, though, what happens if we background it:
$ false & echo status=$?
[1] 4051
status=0
The status is 0 because the command was put in background and the shell cannot predict its future exit code. If we wait a few moments, we will see:
$
[1]+ Exit 1 false
Here, the shell is notifying us that the background task completed and its return status was just as it should be: 1.
In the above, we did not use eval. If we do, nothing changes:
$ eval 'false &' ; echo status=$?
[1] 4094
status=0
$
[1]+ Exit 1 false
If you do want the return status of a backgrounded command, use wait. For example, this shows how to capture the return status of false:
$ false & wait $!; echo status=$?
[1] 4613
[1]+ Exit 1 false
status=1
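For the many-background-processes case in the question, one hedged sketch (assuming the twelve command strings live in an array, here called commands) is to remember each PID and wait on them individually, so each command's real status is recovered:

pids=()
for cmd in "${commands[@]}"; do
    eval "${cmd}" &    # backgrounded, as in the original script
    pids+=("$!")       # $! is the PID of the most recent background job
done
for pid in "${pids[@]}"; do
    wait "$pid" || { printf "Command failed\n" >&2; exit 1; }
done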
From the man page on my system:
eval [arg ...]
    The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
If your system documentation is 'the same', then, most likely, whatever commands you are running are the problem, i.e. 'a.out' is returning '0' on exit instead of a non-zero value. Add appropriate 'exit return code' to your compiled program.
You might also try using $(), which will 'run' your binary (capturing its output via command substitution) instead of 'evaluating' it, i.e.:
STATUS=$(a.out var var var)
As long as only one command is in the stream, the value of $? afterwards is its exit code; otherwise, $? is the return code of the last command in a multi-command pipe...
:)
Dale
I'm writing a script in Bash to test some code. However, it seems silly to run the tests if compiling the code fails in the first place, in which case I'll just abort the tests.
Is there a way I can do this without wrapping the entire script inside of a while loop and using breaks? Something like a dun dun dun goto?
Try this statement:
exit 1
Replace 1 with appropriate error codes. See also Exit Codes With Special Meanings.
Use set -e
#!/bin/bash
set -e
/bin/command-that-fails
/bin/command-that-fails2
The script will terminate after the first line that fails (returns nonzero exit code). In this case, command-that-fails2 will not run.
If you were to check the return status of every single command, your script would look like this:
#!/bin/bash
# I'm assuming you're using make
cd /project-dir
make
if [[ $? -ne 0 ]] ; then
exit 1
fi
cd /project-dir2
make
if [[ $? -ne 0 ]] ; then
exit 1
fi
With set -e it would look like:
#!/bin/bash
set -e
cd /project-dir
make
cd /project-dir2
make
Any command that fails will cause the entire script to fail and return an exit status you can check with $?. If your script is very long or you're building a lot of stuff it's going to get pretty ugly if you add return status checks everywhere.
A SysOps guy once taught me the Three-Fingered Claw technique:
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
These functions are *NIX OS and shell flavor-robust. Put them at the beginning of your script (bash or otherwise), try() your statement and code on.
Explanation
(based on flying sheep's comment).
yell: print the script name and all arguments to stderr:
$0 is the path to the script;
$* is all the arguments.
>&2 redirects stdout to file descriptor 2, i.e. stderr; file descriptor 1 is stdout itself.
die does the same as yell, but exits with a non-0 exit status, which means “fail”.
try uses the || (boolean OR), which only evaluates the right side if the left one failed.
$@ is all arguments again, but different: unlike $*, it preserves each argument as a separate word.
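For example (the paths are made up):

try mkdir /tmp/some_dir     # script carries on if mkdir succeeds
try cd /nonexistent_dir     # prints "<scriptname>: cannot cd /nonexistent_dir" to stderr and exits with 111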
If you invoke the script with source, you can use return <x>, where <x> will be the script's exit status (use a non-zero value for error or false). But if you invoke an executable script (i.e., directly with its filename), the return statement will result in a complaint (error message "return: can only `return' from a function or sourced script").
If exit <x> is used instead, when the script is invoked with source, it will result in exiting the shell that started the script, but an executable script will just terminate, as expected.
To handle either case in the same script, you can use
return <x> 2> /dev/null || exit <x>
This will handle whichever invocation may be suitable. That is assuming you will use this statement at the script's top level. I would advise against directly exiting the script from within a function.
Note: <x> is supposed to be just a number.
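A minimal sketch of the guard in context (the file name and condition are made up):

#!/usr/bin/env bash
if [ ! -f required_file ]; then
    # Works when sourced (return) and when executed (exit);
    # the error from return in the executed case is silenced.
    return 1 2> /dev/null || exit 1
fi
echo "required_file found, continuing"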
I often include a function called run() to handle errors. Every call I want to make is passed to this function so the entire script exits when a failure is hit. The advantage of this over the set -e solution is that the script doesn't exit silently when a line fails, and can tell you what the problem is. In the following example, the 3rd line is not executed because the script exits at the call to false.
function run() {
cmd_output=$(eval $1)
return_value=$?
if [ $return_value != 0 ]; then
echo "Command $1 failed"
exit -1
else
echo "output: $cmd_output"
echo "Command succeeded."
fi
return $return_value
}
run "date"
run "false"
run "date"
Instead of an if construct, you can leverage short-circuit evaluation:
#!/usr/bin/env bash
echo $[1+1]
echo $[2/0] # division by 0 but execution of script proceeds
echo $[3+1]
(echo $[4/0]) || exit $? # script halted with code 1 returned from `echo`
echo $[5+1]
Note the pair of parentheses, which is necessary because of the precedence of the alternation operator. $? is a special variable set to the exit code of the most recently called command.
I have the same question but cannot ask it because it would be a duplicate.
The accepted answer, using exit, does not work when the script is a bit more complicated. If you use a background process to check for the condition, exit only exits that process, as it runs in a sub-shell. To kill the script, you have to explicitly kill it (at least that is the only way I know).
Here is a little script on how to do it:
#!/bin/bash
boom() {
while true; do sleep 1.2; echo boom; done
}
f() {
echo Hello
N=0
while
((N++ <10))
do
sleep 1
echo $N
# ((N > 5)) && exit 4 # does not work
((N > 5)) && { kill -9 $$; exit 5; } # works
done
}
boom &
f &
while true; do sleep 0.5; echo beep; done
This is a better answer but still incomplete, as I really don't know how to get rid of the boom part.
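One way to take boom down with everything else, assuming the script is its own process-group leader (which it normally is when launched from an interactive shell, since background jobs in a script share the script's process group), is to signal the whole group instead of just $$:

((N > 5)) && kill -- -$$   # negative PID: signal the entire group (boom, f, and the main loop)

Alternatively, a trap such as trap 'kill 0' EXIT near the top of the script signals every process in the current group when the script terminates.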
You can close your program by name in the following way:
For a soft exit, do:
pkill -15 -x programname # SIGTERM; replace "programname" with your program's name
For a hard exit, do:
pkill -9 -x programname # SIGKILL; replace "programname" with your program's name
If you would like to know how to evaluate a condition for closing a program, you need to clarify your question.
#!/bin/bash -x
# exit and report the failure if any command fails
exit_trap () { # ---- (1)
local lc="$BASH_COMMAND" rc=$?
echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT # ---- (2)
set -e # ---- (3)
Explanation:
This question is also about how to write clean code. Let's divide the above script into multiple parts:
Part - 1:
exit_trap is a function that gets called when any step fails; it captures the last executed step using $BASH_COMMAND and the return code of that step. This is the function that can be used for any clean-up, similar to shutdown hooks.
The command currently being executed or about to be executed, unless the shell is executing a command as the result of a trap, in which case it is the command executing at the time of the trap.
Doc.
Part - 2:
trap [action] [signal]
Registers the trap action (here, the exit_trap function) for the EXIT signal.
Part - 3:
Exit immediately if a sequence of one or more commands returns a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command’s return status is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits.
Doc.
Part - 4:
You can create a common.sh file and source it in all of your scripts.
source common.sh
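Putting the pieces together, a minimal sketch of that file and a consuming script (names follow the suggestion above; the cd/make body is just the earlier example):

# common.sh
exit_trap () {
    local lc="$BASH_COMMAND" rc=$?
    echo "Command [$lc] exited with code [$rc]"
}
trap exit_trap EXIT
set -e

Then each script only needs:

#!/bin/bash
source common.sh
cd /project-dir
make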