Exit script only AFTER running all commands [duplicate] - bash

This question already has answers here:
How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?
(35 answers)
Closed 2 months ago.
I would like my shell script to fail if a specific command fails, BUT in any case run the entire script. So I thought about using return 1 at the end of the command I want to "catch", and maybe adding a condition at the end like: if return 1; then exit 1. I'm a bit lost as to what this should look like.
#!/bin/bash
command 1
# I want THIS command to make the script fail.
# It runs test and in parallel regex
command 2 &
# bash regex that HAS to run in parallel with command 2
regex='^ID *\| *([0-9]+)'
while ! [[ $(command 3) =~ $regex ]] && jobs -rp | awk 'END{exit(NR==0)}'
do
    sleep 1
done
...
# Final command for creation of a report
# Script has to run this command also if command 2 fails
command 4

trap is your friend, but you have to be careful of those background tasks.
$: cat tst
#!/bin/bash
trap 'let err++' ERR
{ trap 'let err++' ERR; sleep 1; false; } & pid=$!
for str in foo bar baz ;do echo $str; done
wait $pid
echo happily done
exit $err
$: ./tst && echo ok || echo no
foo
bar
baz
happily done
no
Just make sure you test all your logic.
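Applied to the question's skeleton, the same collect-and-exit idea might look like the sketch below; command1 through command4 stand for the asker's numbered commands, and only command2's status decides the final exit code:
#!/bin/bash
err=0
command1                      # allowed to fail without failing the script
command2 & pid=$!             # the command whose failure should fail the script
regex='^ID *\| *([0-9]+)'
while ! [[ $(command3) =~ $regex ]] && jobs -rp | awk 'END{exit(NR==0)}'
do
    sleep 1
done
wait "$pid" || err=$?         # record command2's status, but keep going
command4                      # the report is generated unconditionally
exit "$err"                   # nonzero exactly when command2 failed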

Related

Process Substitution and Exit Codes

I would like to monitor the output of a process for a given period of time. The following does everything I want except give me the return value of the command that ran.
cmd='cat <<EOF
My
Three
Lines
EOF
exit 2
'
perl -pe "if (/Hello/) { print \$_; exit 1 }" <(echo "$cmd" | timeout 5 bash)
Does anyone have a way to get that return value? I've looked at other questions here, but none of the answers apply in this situation.
Bash 4.4-Only Answer: Use $! to collect PID, and wait on that PID
Bash only made it possible to collect the exit status of a process substitution in version 4.4. Since we need to have that version anyhow, might as well use automatic FD allocation too. :)
exec {psfd}< <(echo "hello"; exit 3); procsub_pid=$!
cat <&$psfd # read from the process substitution so it can exit
exec {psfd}<&- # close the FD
wait "$procsub_pid" # wait for the process to collect its exit status
echo "$?"
...properly returns:
3
In the context of your code, that might look like:
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
exec {psfd}< <(timeout 5 bash -c cmd); ps_pid=$!
perl -pe "if (/Hello/) { print \$_; exit 1 }" <&$psfd
echo "Perl exited with status $?"
wait "$ps_pid"; echo "Process substitution exited with status $?"
...emitting as output:
Perl exited with status 0
Process substitution exited with status 2
Easy Answer: Do Something Else
While it's possible to work around this in very recent shell releases, in general, process substitutions eat exit status. More to the point, there's just no need for them in the example given.
If you set the pipefail shell option, exit status from any component in a pipeline -- not just the last -- will be reflected in the pipeline's exit status; thus, you don't need to use a process substitution to have perl's exit status be honored as well.
#!/usr/bin/env bash
set -o pipefail
cmd() { printf '%s\n' My Three Lines; exit 2; }
export -f cmd
timeout 5 bash -c 'cmd' | perl -pe "if (/Hello/) { print \$_; exit 1 }"
printf '%s\n' \
"Perl exited with status ${PIPESTATUS[1]}" \
"Process substitution exited with status ${PIPESTATUS[0]}"

Get exit code of process substitution with pipe into while loop

The following script calls another program reading its output in a while loop (see Bash - How to pipe input to while loop and preserve variables after loop ends):
while read -r col0 col1; do
    # [...]
done < <(other_program [args ...])
How can I check for the exit code of other_program to see if the loop was executed properly?
Note: ls -d / /nosuch is used as an example command below, because it fails (exit code 1) while still producing stdout output (/) (in addition to stderr output).
Bash v4.2+ solution:
ccarton's helpful answer works well in principle, but by default the while loop runs in a subshell, which means that any variables created or modified in the loop will not be visible to the current shell.
In Bash v4.2+, you can change this by turning the lastpipe option on, which makes the last segment of a pipeline run in the current shell;
as in ccarton's answer, the pipefail option must be set to have $? reflect the exit code of the first failing command in the pipeline:
shopt -s lastpipe # run the last segment of a pipeline in the current shell
shopt -so pipefail # reflect a pipeline's first failing command's exit code in $?
ls -d / /nosuch | while read -r line; do
    result=$line
done
echo "result: [$result]; exit code: $?"
The above yields (stderr output omitted):
result: [/]; exit code: 1
As you can see, the $result variable, set in the while loop, is available, and the ls command's (nonzero) exit code is reflected in $?.
Bash v3+ solution:
ikkachu's helpful answer works well and shows advanced techniques, but it is a bit cumbersome.
Here is a simpler alternative:
while read -r line || { ec=$line && break; }; do # Note the `|| { ...; }` part.
    result=$line
done < <(ls -d / /nosuch; printf $?) # Note the `; printf $?` part.
echo "result: [$result]; exit code: $ec"
By appending the value of $? (the ls command's exit code) to the output without a trailing \n (printf $?), read still reads that value in the last loop iteration, but it reports failure (exit code 1) because of the missing newline, which would normally exit the loop.
We can detect this case with ||, assign the exit code (which was nevertheless read into $line) to variable $ec, and only then exit the loop.
On the off chance that the command's output doesn't have a trailing \n, more work is needed:
while read -r line ||
      { [[ $line =~ ^(.*)/([0-9]+)$ ]] && ec=${BASH_REMATCH[2]} && line=${BASH_REMATCH[1]};
        [[ -n $line ]]; }
do
    result=$line
done < <(printf 'no trailing newline'; ls /nosuch; printf "/$?")
echo "result: [$result]; exit code: $ec"
The above yields (stderr output omitted):
result: [no trailing newline]; exit code: 1
At least one way would be to redirect the output of the background process through a named pipe. This lets us pick up its PID and then get the exit status by waiting on that PID.
#!/bin/bash
mkfifo pipe || exit 1
(echo foo ; exit 19) > pipe &
pid=$!
while read x ; do echo "read: $x" ; done < pipe
wait $pid
echo "exit status of bg process: $?"
rm pipe
If you can use a direct pipe (i.e. don't mind the loop being run in a subshell), you could use Bash's PIPESTATUS, which contains the exit codes of all commands in the pipeline:
(echo foo ; exit 19) | while read x ; do
    echo "read: $x"
done
echo "status: ${PIPESTATUS[0]}"
A simple way is to use the bash pipefail option to propagate the first error code from a pipeline.
set -o pipefail
other_program | while read x; do
echo "Read: $x"
done || echo "Error: $?"
Another way is to use coproc (requires 4.0+).
coproc other_program [args ...]
while read -r -u ${COPROC[0]} col0 col1; do
    # [...]
done
wait $COPROC_PID || echo "Error exit status: $?"
coproc frees you from having to set up the asynchronous execution and stdin/stdout redirection that you'd otherwise need with an equivalent mkfifo approach.
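For instance, with the ls -d / /nosuch stand-in used earlier, a runnable sketch might look like this (assuming bash 4.1+ for the automatic {fd} redirection; the PID and FD are saved up front because bash deallocates a coproc as soon as it terminates):
#!/usr/bin/env bash
coproc ls -d / /nosuch            # stand-in command: fails, but still prints "/"
pid=$COPROC_PID                   # save the PID; bash unsets it when the coproc exits
exec {fd}<&"${COPROC[0]}"         # duplicate the read end so it outlives deallocation
while read -r -u "$fd" line; do
    echo "read: $line"
done
exec {fd}<&-                      # close our copy of the read end
wait "$pid" || echo "Error exit status: $?"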

Customized progress message for tasks in bash script

I'm currently writing a bash script to do tasks automatically. In my script, I want it to display a progress message while it is doing a task.
For example:
user#ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress message should appear in the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the later script lines?
If so, how can I echo a message while the exec of configure something is taking place?
How do I know when exec has finished the previous command and returned true? Use $??
Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1;
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above, use $? to determine the result of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
echo "ERROR Removing directory: ${TMP_DIR}"
exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
    echo -en "\b\b${i}"
    sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
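If counting backspaces gets fiddly, an alternative (assuming a VT100-compatible terminal) is to jump back to the start of the line and erase it with an ANSI escape sequence:
printf 'Configure something .'
sleep 1
printf '\r\033[K'                 # \r returns to column 0; ESC[K erases to end of line
printf 'Configure something ..'
echo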
You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 & # Start some command
pid=$! # Save the pid of the command
while sleep 1; do # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" & # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
        while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
        done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so that it is printed at the very beginning in any case), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how to get do-things' exit status from inside the pipeline; $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in this case I also don't know how to get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax that achieves the same task without quoting the command to be executed, as in my example.
I greatly hope for the community's help.
UPD1: As the question seems not to be clear enough, let me paraphrase it:
I want a bash function that can be fed a command; the command will execute in parallel with the function, and the function will receive its stdout+stderr, wait for its completion, and get its exit status.
Example implementation using eval:
progress_meter() {
    local output
    local errcode
    echo -n -e "$1"
    output=$( { eval "$2"; } 2>&1 )
    errcode=$?
    if (( errcode )); then
        echo '[FAIL]'
        echo "Output was: ${output}"
    else
        echo '[ OK ]'
    fi
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$@" # this just executes a command made up of
     # arguments 2, 3, ... of the script
     # the real script should actually read its input,
     # display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
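Building on that last suggestion, a hypothetical progress-meter could buffer the piped input and peel the status off the final line. This is only a sketch: it assumes bash 4.3+ for negative array indices, and do-things is stubbed out for demonstration.
# Stand-in for the real do-things; writes to stdout and stderr, fails with 3.
do-things() { echo "working on it"; echo "oops" >&2; return 3; }
progress-meter() {
    local line status
    local -a output=()
    printf '%s' "$1"
    while IFS= read -r line; do   # buffer everything arriving through the pipe
        output+=("$line")
    done
    status=${output[-1]}          # the trailing line added by echo $?
    unset 'output[-1]'
    if [[ $status == 0 ]]; then
        echo ' [ OK ]'
    else
        echo ' [FAIL]'
        printf '%s\n' "${output[@]}"   # dump the recorded output
    fi
}
# 2>&1 folds stderr into the recorded output, per the requirements.
(do-things arg1 arg2 2>&1; echo $?) | progress-meter "Doing things..."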
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the
value of the last (rightmost) command to exit
with a non-zero status, or zero if all commands
in the pipeline exit successfully. This option
is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard pattern: one process writes on the blackboard, another reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command line switches if you like so as not to hardcode the name of the blackboard file, for reporting the exit status of the program that it is reporting the progress for.
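A minimal sketch of that blackboard idea, with the temp file and the stub commands as illustrative placeholders:
#!/usr/bin/env bash
blackboard=$(mktemp)              # the shared "blackboard"
do_things() { echo "working"; return 7; }   # stand-in for the real do-things
# Producer: do the work, then write the exit status on the blackboard.
{ do_things; echo "$?" > "$blackboard"; } | cat   # cat stands in for progress-meter
# Consumer: once the pipe reaches EOF, read the status back.
status=$(< "$blackboard")
echo "do_things exited with status $status"
rm -f "$blackboard"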

Exit Shell Script Based on Process Exit Code [duplicate]

This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 1 year ago.
I have a shell script that executes a number of commands. How do I make the shell script exit if any of the commands exit with a non-zero exit code?
After each command, the exit code can be found in the $? variable so you would have something like:
ls -al file.ext
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
You need to be careful with piped commands, since $? only gives you the return code of the last element in the pipe. So, the code:
ls -al file.ext | sed 's/^/xx: /'
will not return an error code if the file doesn't exist (since the sed part of the pipeline actually works, returning 0).
The bash shell actually provides an array which can assist in that case, that being PIPESTATUS. This array has one element for each of the pipeline components, that you can access individually like ${PIPESTATUS[0]}:
pax> false | true ; echo ${PIPESTATUS[0]}
1
Note that this is getting you the result of the false command, not the entire pipeline. You can also get the entire list to process as you see fit:
pax> false | true | false; echo ${PIPESTATUS[*]}
1 0 1
If you wanted to get the largest error code from a pipeline, you could use something like:
true | true | false | true | false
rcs=${PIPESTATUS[*]}; rc=0; for i in ${rcs}; do rc=$(($i > $rc ? $i : $rc)); done
echo $rc
This goes through each of the PIPESTATUS elements in turn, storing it in rc if it was greater than the previous rc value.
If you want to work with $?, you'll need to check it after each command, since $? is updated after each command exits. This means that if you execute a pipeline, you'll only get the exit code of the last process in the pipeline.
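For example:
false
echo $?   # prints 1: the exit status of false
echo $?   # prints 0: $? now refers to the previous echo, which succeeded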
Another approach is to do this:
set -e
set -o pipefail
If you put this at the top of the shell script, it looks like Bash will take care of this for you. As a previous poster noted, "set -e" will cause Bash to exit with an error on any simple command. "set -o pipefail" will cause Bash to exit with an error on any command in a pipeline as well.
The Bash manual's section on the set builtin covers both options in more detail.
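For example, with both options set, the script below stops at the pipeline even though tr, the last command in it, succeeds:
#!/usr/bin/env bash
set -e
set -o pipefail
echo "about to run a failing pipeline"
false | tr a-z A-Z                # pipefail makes the pipeline return 1; set -e then exits
echo "never reached"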
"set -e" is probably the easiest way to do this. Just put that before any commands in your program.
If you just call exit in Bash without any parameters, it will return the exit code of the last command. Combined with ||, Bash will only invoke exit if the previous command fails. But I haven't tested this.
command1 || exit;
command2 || exit;
Bash will also store the exit code of the last command in the variable $?.
rc=$?; [ $rc -eq 0 ] || exit $rc # Exit for nonzero return code; save $? first, since the [ ] test itself resets it
http://cfaj.freeshell.org/shell/cus-faq-2.html#11
How do I get the exit code of cmd1 in cmd1|cmd2
First, note that cmd1's exit code could be non-zero and still not mean an error. This happens for instance in
cmd | head -1
You might observe a 141 (or 269 with ksh93) exit status for cmd, but that's because cmd was interrupted by a SIGPIPE signal when head -1 terminated after having read one line.
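You can watch that benign 141 happen on a typical Linux system:
$ yes | head -1; echo "${PIPESTATUS[@]}"
y
141 0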
To know the exit status of the elements of a pipeline
cmd1 | cmd2 | cmd3
a. with Z shell (zsh):
The exit codes are provided in the pipestatus special array.
cmd1 exit code is in $pipestatus[1], cmd3 exit code in
$pipestatus[3], so that $? is always the same as
$pipestatus[-1].
b. with Bash:
The exit codes are provided in the PIPESTATUS special array.
cmd1 exit code is in ${PIPESTATUS[0]}, cmd3 exit code in
${PIPESTATUS[2]}, so that $? is always the same as
${PIPESTATUS[@]: -1}.
...
For more details see Z shell.
For Bash:
# This will trap any errors or commands with non-zero exit status
# by calling function catch_errors()
trap catch_errors ERR;
#
# ... the rest of the script goes here
#
function catch_errors() {
    # Do whatever on errors
    echo "script aborted, because of errors";
    exit 1;   # exit non-zero so the failure is visible to callers
}
In Bash this is easy. Just tie them together with &&:
command1 && command2 && command3
You can also use the nested if construct:
if command1
then
    if command2
    then
        do_something
    else
        exit
    fi
else
    exit
fi
#
#------------------------------------------------------------------------------
# purpose: to run a command, log cmd output, exit on error
# usage:
# set -e; do_run_cmd_or_exit "$cmd" ; set +e
#------------------------------------------------------------------------------
do_run_cmd_or_exit(){
    cmd="$*"    # the full command line passed to this function
    do_log "DEBUG running cmd or exit: \"$cmd\""
    msg=$($cmd 2>&1)
    export exit_code=$?
    # If an error occurred during the execution, exit with that error
    error_msg="Failed to run the command:
\"$cmd\" with the output:
\"$msg\" !!!"
    if [ $exit_code -ne 0 ] ; then
        do_log "ERROR $msg"
        do_log "FATAL $msg"
        do_exit "$exit_code" "$error_msg"
    else
        # If no errors occurred, just log the message
        do_log "DEBUG : cmdoutput : \"$msg\""
    fi
}
