Cannot stop bash script when using && and || operators - bash

I would like my bash script to stop if any of the commands fail.
make clean || ( echo "ERROR!!" && echo "ERROR!!" >> log_file && exit 1 )
But it seems like my script still keeps going. How do I make exit 1 work in these one-line operators?
I am very new to bash; any help is appreciated!

exit 1 exits from the subshell created by (), not the original shell. Use {} to keep the command group in the same shell.
Don't use && between commands unless you want to stop as soon as one of them fails. Use ; to separate commands on the same line.
make clean || { echo "ERROR!!" ; echo "ERROR!!" >> log_file ; exit 1 ;}
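To see why the grouping matters, here is a minimal illustration (a hypothetical snippet, not part of the original answer):
( exit 1 )            # runs in a subshell; only the subshell exits
echo "still running"  # this line is reached
{ exit 1; }           # runs in the current shell; the script stops here
echo "never reached"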
Or just use if to make it easier to understand.
if ! make clean
then
    echo "ERROR!!"
    echo "ERROR!!" >> log_file
    exit 1
fi

You have the direct solution in Barmar's answer. An alternative, if you want to check multiple commands in a similar way, is to define a reusable function:
die() {
    echo "ERROR: $*"
    echo "ERROR: $*" >> log_file
    exit 1
}
make clean || die "I left it unclean"
make something || die "something went wrong"
Or, if you want the script to end at the first sign of trouble, you could use set -e:
set -e
make clean # stops here unless successful
make something # or here if this line fails etc.
You may want to log an error message too, so you could install a trap on ERR. Here, errfunc would be called before the script exits, and the line number where it failed would be logged:
errfunc() {
    echo "ERROR on line $1"
    echo "ERROR on line $1" >> log_file
}
trap 'errfunc $LINENO' ERR
set -e
make clean
make something

Related

SHELL general function for action state

How can I make the code below into a general function that can be used throughout an entire bash script?
if [[ $? = 0 ]]; then
    echo "success " >> $log
else
    echo "failed" >> $log
fi
You might write a wrapper for command execution:
function exec_cmd {
    "$@"
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
And then execute commands in your script using the function:
exec_cmd command1 arg1 arg2 ...
exec_cmd command2 arg1 arg2 ...
...
If you don't want to wrap the original calls, you could use an explicit call like the following:
function check_success {
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
ls && check_success
ls non-existant
check_success
There's no really clean way to do that, but this is simple and might be good enough:
PS4='($?)[$LINENO]'
exec 2>>"$log"
That will show every command run in the log (provided tracing is enabled with set -x, since PS4 is the xtrace prefix), and each entry will start with the exit code of the previous command...
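Put together, a minimal sketch (the log path and the commands are illustrative):
#!/bin/bash
log=/tmp/cmd.log
PS4='($?)[$LINENO]'   # xtrace prefix: (previous exit status)[line number]
exec 2>>"$log"        # xtrace writes to stderr, which now goes to the log
set -x                # enable tracing
true
false
ls /nonexistent       # each of these appears in the log with the prior status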
You could put this in .bashrc and call it whenever you like:
function log_status { [ $? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status; }
If you want it after every command, you could make the prompt write to the log (note that the original PS1 value is appended).
export PS1="\$([ \$? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status)$PS1"
(I'm not experienced with this, perhaps PROMPT_COMMAND is a more appropriate place to put it)
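For instance, a minimal sketch using PROMPT_COMMAND, which bash runs just before displaying each prompt (the status file name is illustrative):
PROMPT_COMMAND='[ $? -eq 0 ] && echo success >>/tmp/status || echo fail >>/tmp/status'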
Or even get more fancy and see the result with colours.
I guess you could also play with getting the last executed command:
How do I get "previous executed command" in a bash script?
Get name of last run program in Bash
BASH: echoing the last command run

Writing try catch finally in shell

Is there a Linux bash construct like the Java try/catch/finally? Or does the shell always just go on?
try {
    `executeCommandWhichCanFail`
    mv output
} catch {
    mv log
} finally {
    rm tmp
}
Based on your example, it looks like you are trying to do something akin to always deleting a temporary file, regardless of how the script exits. In Bash, try the trap builtin command to trap the EXIT signal.
#!/bin/bash
trap 'rm tmp' EXIT
if executeCommandWhichCanFail; then
    mv output
else
    mv log
    exit 1 # Exit with failure
fi
exit 0 # Exit with success
The rm tmp statement in the trap is always executed when the script exits, so the script will always try to delete the file "tmp".
Installed traps can also be reset; a call to trap with only a signal name will reset the signal handler.
trap EXIT
For more details, see the bash manual page: man bash
Well, sort of:
{ # your 'try' block
    executeCommandWhichCanFail &&
    mv output
} || { # your 'catch' block
    mv log
}
rm tmp # finally: this will always happen
I found success in my script with this syntax:
# Try, catch, finally
(echo "try this") && (echo "and this") || echo "this is the catch statement!"
# this is the 'finally' statement
echo "finally this"
If either try statement throws an error or ends with exit 1, then the interpreter moves on to the catch statement and then the finally statement.
If both try statements succeed (and/or end with exit), the interpreter will skip the catch statement and then run the finally statement.
Example_1:
goodFunction1(){
    # this function works great
    echo "success1"
}
goodFunction2(){
    # this function works great
    echo "success2"
    exit
}
(goodFunction1) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_1
success1
success2
Now this happens!
Example_2
functionThrowsErr(){
    # this function returns an error
    ech "halp meh"
}
goodFunction2(){
    # this function works great
    echo "success2"
    exit
}
(functionThrowsErr) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_2
main.sh: line 3: ech: command not found
Oops, that didn't work!
Now this happens!
Example_3
functionThrowsErr(){
    # this function returns an error
    echo "halp meh"
    exit 1
}
goodFunction2(){
    # this function works great
    echo "success2"
}
(functionThrowsErr) && (goodFunction2) || echo "Oops, that didn't work!"
echo "Now this happens!"
Output_3
halp meh
Oops, that didn't work!
Now this happens!
Note that the order of the functions will affect output. If you need both statements to be tried and caught separately, use two try catch statements.
(functionThrowsErr) || echo "Oops, functionThrowsErr didn't work!"
(goodFunction2) || echo "Oops, good function is bad"
echo "Now this happens!"
Output
halp meh
Oops, functionThrowsErr didn't work!
success2
Now this happens!
mv takes two parameters, so maybe you really wanted to cat the output file's contents:
echo `{ execCommand && cat output ; } || cat log`
rm -f tmp
Another way to do it would be:
set -e; # stop on errors
mkdir -p "$HOME/tmp/whatevs"
exit_code=0
(
    set +e;
    (
        set -e;
        echo 'foo'
        echo 'bar'
        echo 'biz'
    )
    exit_code="$?"
)
rm -rf "$HOME/tmp/whatevs"
if [[ "$exit_code" != '0' ]]; then
    echo 'failed';
fi
although the above doesn't really offer any benefit over the following (note, too, that the assignment to exit_code above happens inside a subshell, so it never actually reaches the enclosing shell):
set -e; # stop on errors
mkdir -p "$HOME/tmp/whatevs"
exit_code=0
(
    set -e;
    echo 'foo'
    echo 'bar'
    echo 'biz'
    exit 44;
    exit 43;
) || {
    exit_code="$?" # exit code of last command, which is 44
}
rm -rf "$HOME/tmp/whatevs"
if [[ "$exit_code" != '0' ]]; then
    echo 'failed';
fi
Warning: exit traps are not always executed. Since writing this answer I have run into situations where my exit trap would not be executed, causing loss of files; I haven't yet found the reason.
The issue occurred when I stopped a Python script with Ctrl+C, which in turn had executed a bash script using exit traps -- which actually should cause the exit traps to be executed, since exit traps run on SIGINT in bash.
So, while trap .. EXIT is useful for cleanup, there are still scenarios where it won't be executed, the most obvious ones being power outages and receiving SIGKILL.
I often end up with bash scripts becoming quite large, as I add additional options, or otherwise change them. When a bash-script contains a lot of functions, using 'trap EXIT' may become non-trivial.
For instance, consider a script invoked as
dotask TASK [ARG ...]
where each TASK may consist of substeps, between which it is desirable to perform cleanup.
In this case, it is helpful to work with subshells to produce scoped exit traps, e.g.
function subTask (
    local tempFile=$(mktemp)
    trap "rm '${tempFile}'" exit
    ...
)
However, working with subshells can be tricky, as they can't set global variables of the parent shell.
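For example (a trivial illustration of this limitation):
x=1
( x=2 )     # the assignment happens in the subshell's copy of the environment
echo "$x"   # prints 1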
Additionally, it is often inconvenient to write a single exit trap. For instance, the cleanup steps may depend on how far a function came before encountering an error. It would be nice to be able to make RAII style cleanup declarations:
function subTask (
    ...
    onExit 'rm tmp.1'
    ...
    onExit 'rm tmp.2'
    ...
)
It would seem obvious to use something like
handlers=""
function onExit { handlers+="$1;"; trap "$handlers" exit; }
to update the trap. But this fails for nested subshells, as it would cause premature execution of the parent shell's handlers. The client code would have to explicitly reset the handlers variable at the beginning of the subshell.
Solutions discussed in [multiple bash traps for the same signal], which patch the trap by using the output from trap -p EXIT, will equally fail: even though subshells don't inherit the EXIT trap, trap -p EXIT will display the parent shell's handler, so, again, manual resetting is needed.
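One possible workaround (a sketch, assuming bash's BASH_SUBSHELL variable, which is incremented in each nested subshell) is to have onExit detect that it is running in a new subshell and start with a fresh handler list:
handlers=""
handlers_level=-1
function onExit {
    if [[ $BASH_SUBSHELL != "$handlers_level" ]]; then
        handlers=""                   # new scope: drop the parent shell's handlers
        handlers_level=$BASH_SUBSHELL
    fi
    handlers+="$1;"
    trap "$handlers" exit
}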

Customized progress message for tasks in bash script

I'm currently writing a bash script to do tasks automatically. I want it to display a progress message while it is performing a task.
For example:
user#ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress messages should appear on the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the later lines of the script?
If so, how can I echo the progress message while the exec of configure something is taking place?
How do I know when exec has finished the command and returned successfully? Use $??
Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
    CODE=$1
    COMMAND=$2
    MODULE=$3
    if [ ${CODE} -ne 0 ]; then
        echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
        echo "Exiting."
        exit 1;
    fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above, use $? to determine the result of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
    echo "ERROR Removing directory: ${TMP_DIR}"
    exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
    echo -en "\b\b${i}"
    sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
You cannot call exec like that; exec replaces the shell with the command it is given, so it never returns and the lines after it will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 & # Start some command
pid=$!    # Save the pid of the command
while sleep 1; do # Produce progress reports
    printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
    test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
    eval "$@" & # Execute the command
    pid=$!
    # Invoke a shell to print status. If you just invoke
    # the while loop directly, killing it will generate a
    # notification. By trapping SIGTERM, we suppress the notice.
    sh -c 'trap exit SIGTERM
           while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
           done' 0 "$@" &
    last_report=$!
    if wait $pid; then echo done; else echo failed; fi
    kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1

Trouble with errexit in bash

I'm writing a bash script and I'd like it to crash on the first error. However, I can't get it to do this in a specific circumstance I simplified below:
#!/bin/bash
set -Exu
bad_command() {
    false
    #exit 1
    echo "NO!!"
}
(set -o pipefail; bad_command | cat; echo "${PIPESTATUS[@]}; $?") || false
echo "NOO!!"
The expected behaviour would be a crash of the bad_command subshell, propagated to a crash of the () subshell, propagated to a crash of the outer shell. But none of them crash, and both NOs get printed(!?)
If I uncomment the exit 1 statement, then the NO is no longer printed, but NOO still is(!?)
I tried using set -e explicitly inside each of the 3 shells (first line in the function, first statement after the opening parenthesis), but there's no change.
Note: I need to execute the pipe inside the () subshell, because this is a simplification of a more elaborate script. Without the () subshell, everything works as expected, no NOs whatsoever with either false or exit 1.
This seems to be a bash or even POSIX bug: https://groups.google.com/forum/?fromgroups=#!topic/gnu.bash.bug/NCK_0GmIv2M
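For what it's worth, this also matches behavior documented in the bash manual: when a compound command or shell function runs in a context where -e is being ignored (such as the left-hand side of ||), none of the commands inside it are affected by -e. A small demo of that rule (a hypothetical snippet, not from the linked thread):
set -e
f() { false; echo "still runs"; }
f || echo "f failed" # -e is ignored inside f here: "still runs" is printed and f returns 0
f                    # not a tested context: -e applies, the script stops at `false`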
After hitting the same problem, I have found a workaround. Actually three, depending on what you want to achieve.
First, a small rewrite of the OP's example code, since handling the exit code requires some extra work down the line:
#!/bin/bash
set -eEu
bad_command_extra() {
    return 42
}
bad_command() {
    bad_command_extra
    echo "NO!!"
}
if bad_command; then echo "NOO!!"; else echo "errexit worked: $?"; fi
If you only need the errexit behavior, the following way of calling bad_command is sufficient. The trick is to launch bad_command in the background:
(bad_command) &
bc_pid=$!
if wait $bc_pid; then echo "NOO!!"; else echo "errexit worked: $?"; fi
If you want to work with the output as well (similar to abc=$(bad_command)), capture it in a temporary file as usual:
tmp_out=$(mktemp)
tmp_err=$(mktemp)
(bad_command >$tmp_out 2>$tmp_err) &
bc_pid=$!
if wait $bc_pid; then echo "NOO!!"; else echo "errexit worked: $?"; fi
cat $tmp_out $tmp_err
rm -f $tmp_out $tmp_err
Finally, I found in my testing that the wait command returned either 0 or 1, but not the actual exit code of bad_command (bash 4.3.42). This requires some more work:
tmp_out=$(mktemp)
tmp_err=$(mktemp)
tmp_exit=$(mktemp)
echo 0 > $tmp_exit
(
    get_exit () {
        echo $? > $tmp_exit
    }
    trap get_exit ERR
    bad_command >$tmp_out 2>$tmp_err
) &
bc_pid=$!
if wait $bc_pid
then echo "NOO!!"
else
    bc_exit=$(cat $tmp_exit) # read only after wait, once the subshell has written it
    echo "errexit worked: $bc_exit"
fi
cat $tmp_out $tmp_err
rm -f $tmp_out $tmp_err $tmp_exit
For some strange reason, putting the if on one line as before got me exit code 0 in this case!

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2, or in parallel with it (so it is printed at the very beginning either way), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I do this using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how to get do-things' exit status from inside the pipeline; $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in that case I also don't know how to get the exit status of do-things.
I'd be happy to hear of some other neat syntax that achieves the same task without quoting the command to be executed, as in my example.
I'm hoping the community can help.
UPD1: As the question seems not to be clear enough, let me paraphrase it:
I want a bash function that can be fed a command which executes in parallel with the function; the function should receive the command's stdout+stderr, wait for its completion, and get its exit status.
Example implementation using eval:
progress_meter() {
    local cmd="$2"; # the command string to run (this assignment is missing in the original snippet)
    local output;
    local errcode;
    echo -n -e "$1";
    output=$( {
        eval "${cmd}";
    } 2>&1; );
    errcode=$?;
    if (( errcode )); then {
        echo '[FAIL]';
        echo "Output was: ${output}"
    } else {
        echo '[ OK ]';
    }; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$@" # this just executes a command made up of
     # arguments 2, 3, ... of the script
     # the real script should actually read its input,
     # display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
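A sketch of a progress-meter that consumes that convention (it assumes the trailing line produced by echo $? is the exit status and that the command's output ends with a newline; names are illustrative):
progress_meter() {
    printf '%s' "$1"
    local output status
    output=$(cat)              # capture everything arriving through the pipe
    status=${output##*$'\n'}   # the last line is the exit code from `echo $?`
    output=${output%"$status"} # strip that status line from the captured output
    if [ "$status" -eq 0 ]; then
        echo ' [ OK ]'
    else
        echo ' [FAIL]'
        printf '%s' "$output"
    fi
}
# Usage: (do-things arg1 arg2; echo $?) 2>&1 | progress_meter "Doing stuff..."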
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
    If set, the return value of a pipeline is the
    value of the last (rightmost) command to exit
    with a non-zero status, or zero if all commands
    in the pipeline exit successfully. This option
    is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: one process writes on the blackboard, another reads it), as sketched below:
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file (using command-line switches if you like, so as not to hardcode the name of the blackboard file) to report the exit status of the program whose progress it is tracking.
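A minimal sketch of that idea (the status file name and the 2>&1 plumbing are illustrative):
status_file=$(mktemp)
{ do-things arg1 arg2; echo $? > "$status_file"; } 2>&1 | progress-meter "Doing things..."
# After the pipeline, anyone who knows the file name (including progress-meter,
# if it is passed the name) can read the real exit status of do-things:
if [ "$(cat "$status_file")" -eq 0 ]; then echo '[ OK ]'; else echo '[FAIL]'; fi
rm -f "$status_file"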
