Could someone explain this try/catch alternative in bash?

So I found out that bash does not handle exceptions (there is no try/catch).
For my script, I would like to know if a command was successful or not.
This is the part of my code right now:
command="scp -p$port $user:$password#$host:$from $to"

$command 2>/dev/null

if (( $? == 0 )); then
    echo 'command was successful'
else
    echo 'damn, there was an error'
fi
The things I don't understand are:
line 3, why do I have to put the 2 behind the $command?
line 5, what exactly is it with this $?

$? is the return code of the last executed command.
2> redirects stderr (the standard error stream); sending it to /dev/null discards any error messages.
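A minimal demonstration of both, assuming /nonexistent is a path that does not exist on your machine:
ls /nonexistent 2>/dev/null   # the error message is discarded
echo "exit status: $?"        # prints a non-zero status, because ls failed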

Just FYI, this will also work:
if some_command 2>/dev/null ; then
    echo 'command was successful'
else
    echo 'damn, there was an error'
fi

Related

Cannot stop BASH when using && and || operators

I would like to stop my BASH script if a command has an error.
make clean || ( echo "ERROR!!" && echo "ERROR!!" >> log_file && exit 1 )
But it seems like my script still keeps going. How do I make exit 1 work in this one-liner?
I am very new to BASH, any help is appreciated!
exit 1 exits from the subshell created by (), not the original shell. Use {} to keep the command group in the same shell.
Don't use && between commands unless you want to stop as soon as one of them fails. Use ; to separate commands on the same line.
make clean || { echo "ERROR!!"; echo "ERROR!!" >> log_file; exit 1; }
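A quick way to see the difference between ( ) and { } for yourself:
( exit 1 )                                            # runs in a subshell: only the subshell exits
echo "still alive after ( exit 1 )"
{ echo "about to exit the current shell"; exit 1; }   # runs in the current shell
echo "never reached"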
Or just use if to make it easier to understand.
if ! make clean
then
    echo "ERROR!!"
    echo "ERROR!!" >> log_file
    exit 1
fi
You have the direct solution in Barmar's answer. An alternative, if you want to check multiple commands in a similar way, is to define a reusable function:
die() {
    echo "ERROR: $*"
    echo "ERROR: $*" >> log_file
    exit 1
}
make clean || die "I left it unclean"
make something || die "something went wrong"
Or, if you want the script to end at the first sign of trouble, you could use set -e:
set -e
make clean      # stops here unless successful
make something  # or here if this line fails, etc.
You may want to log an error message too, so you could install a trap on ERR. errfunc would then be called before the script exits, and the line number where it failed would be logged:
errfunc() {
    echo "ERROR on line $1"
    echo "ERROR on line $1" >> log_file
}
trap 'errfunc $LINENO' ERR
set -e
make clean
make something
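One caveat, assuming bash: the ERR trap is not inherited by shell functions or subshells unless errtrace is also enabled, so a failure inside a function could slip past errfunc:
set -Ee   # -E (errtrace) lets functions and subshells inherit the ERR trap
trap 'errfunc $LINENO' ERR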

Bash script for searching a specific word in terminal output

I'm trying to implement a bash script that is supposed to search for a word in a Python script's terminal output.
The Python script doesn't stop, so "&" at the end of the command is needed, but the "if [ $? == 0 ] ; then" condition doesn't work.
How can it be solved?
Thanks, Gal.
#!/bin/bash
# Check if Pixhawk is connected
PORT=/dev/ttyPixhawk
end=$((SECONDS+3))
not_exists=f
/usr/local/bin/mavproxy.py --daemon --non-interactive --master=$PORT | grep 'Failed' &> /dev/null &
while [ $SECONDS -lt $end ] ; do
    if [ $? == 0 ] ; then
        not_exists=t
    fi
    sleep 1
done
if [ $not_exists=t ] ; then
    echo "Not Exists"
else
    echo "Exists"
fi
kill $(pgrep -f '/usr/local/bin/mavproxy.py')
Bash doesn't know anything about the exit status of background commands. Check for yourself with [ 5444 -lt 3 ] & echo $?.
Your if statement wouldn't work in any case, because $? holds the return value of the most recent command, which inside your loop is the while condition (or sleep), not grep.
You have a few different options. If you're waiting for some output, and you know how far into the output the target you're looking for occurs, you can have the Python script write to a file and keep checking the file, with a timeout for failure (a sketch follows below).
You can also continue with a simple timed approach, as you have now, where you just check the output after a few seconds and decide success or failure based on that.
You can make your Python script actually end, or provide more error messages, or write only the relevant parts to a file.
Furthermore, you really should run your script through shellcheck.net to catch more problems.
You'll need to define your goal and use case more clearly to get real help; all we can really say is "your approach will not work, but there are definitely approaches which will work".
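As a rough sketch of that file-based approach (the temporary log file and polling loop are assumptions; the 3-second window and the paths come from the original script):
#!/bin/bash
PORT=/dev/ttyPixhawk
logfile=$(mktemp)

# Let the Python script write to a file instead of piping into a backgrounded grep.
/usr/local/bin/mavproxy.py --daemon --non-interactive --master="$PORT" > "$logfile" 2>&1 &
pid=$!

found=f
end=$((SECONDS+3))
while [ "$SECONDS" -lt "$end" ]; do
    # grep runs in the foreground here, so its exit status can be tested directly.
    if grep -q 'Failed' "$logfile"; then
        found=t
        break
    fi
    sleep 1
done

kill "$pid"
rm -f "$logfile"

if [ "$found" = t ]; then
    echo "Not Exists"
else
    echo "Exists"
fi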
You are checking the status of the grep command inside the while loop using $?. That only works if $? is read immediately after grep, and grep is not a background process. But in your script, $? returns the status of while [ $SECONDS -lt $end ]. You can redirect the output to a temp file and test the file instead:
/usr/local/bin/mavproxy.py --daemon --non-interactive --master=$PORT | grep 'Failed' &> tmp.txt &
sleep 3
# If the file exists and its size is greater than 0, [ -s file ] returns true
if [ -s tmp.txt ]; then
    echo 'pattern exists'
else
    echo 'pattern does not exist'
fi

Shell Script won't fail in Jenkins

I have a simple shell script which I want to set up as a periodic Jenkins job rather than a cronjob for visibility and usability for less experienced users.
Here is the script:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt
if [ "grep -sq 'Unable' $outputfile" == "0" ]; then
    echo -e "ERROR MESSAGE FOUND\n"
    exit 1
else
    echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
    exit 0
fi
My script always prints "NO ERROR MESSAGES HAVE BEEN FOUND", regardless of whether or not 'Unable' is in $outputfile. What am I doing wrong?
I also need my Jenkins job to class this as a success if 'Unable' isn't found (e.g. if the script returns "0" then pass; everything else is a fail).
Execute the grep command and check the exit status instead:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt
grep -sq 'Unable' $outputfile
if [ "$?" == "0" ]; then
    echo -e "ERROR MESSAGE FOUND\n"
    exit 1
else
    echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
    exit 0
fi
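For the Jenkins half of the question: a shell build step is marked as failed exactly when the script exits with a non-zero status, so the exit 1 above is what fails the job, and exit 0 is what passes it.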
You are comparing two different strings. The outcome will always be false, i.e. the else part is always taken.
Also, there is no need to explicitly query the status code. Do it like this:
if grep -sq 'Unable' $outputfile
then
    ....
else
    ....
fi
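Putting both points together, the whole script could read:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt
if grep -sq 'Unable' "$outputfile"; then
    echo -e "ERROR MESSAGE FOUND\n"
    exit 1
else
    echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
    exit 0
fi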

How to execute a bash script line by line? [duplicate]

This question already has answers here:
Automatic exit from Bash shell script on error [duplicate]
(8 answers)
Closed 6 years ago.
#Example Script
wget http://file1.com
cd /dir
wget http://file2.com
wget http://file3.com
I want to execute the bash script line by line and test the exit code ($?) of each execution and determine whether to proceed or not:
It basically means I need to add the following script below every line in the original script:
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
and the original script becomes:
#Example Script
wget http://file1.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
cd /dir
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
wget http://file2.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
wget http://file3.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
But the script becomes bloated.
Is there a better method?
One can use set -e, but it's not without its own pitfalls. Alternatively, one can bail out on errors:
command || exit 1
And your if-statement can be written less verbosely:
if command; then
The above is the same as:
command
if test "$?" -eq 0; then
set -e makes the script fail on non-zero exit status of any command. set +e removes the setting.
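Applied to the example script from the question, that is simply:
#!/bin/bash
set -e   # abort at the first command that returns non-zero
wget http://file1.com
cd /dir
wget http://file2.com
wget http://file3.com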
There are many ways to do that.
For example, you can use set to automatically stop on a "bad" rc, simply by putting
set -e
at the top of your script. Alternatively, you could write a "check_rc" function; see here for some starting points.
Or, you could start with this:
check_error () {
    if [ "$RET" -eq 0 ]; then
        echo "DONE"
        echo ""
    else
        echo "ERROR"
        exit 1
    fi
}
To be used with:
echo "some example command"
RET=$? ; check_error
As said; many ways to do this.
Your best bet is to use set -e to terminate the script as soon as any non-zero return code is observed. Alternatively, you can write a handler function and install it as an ERR trap, so it runs automatically after any failing command; this removes the repeated if...else blocks and you can print any message before exiting.
errorsRead() {
    echo "Some non-zero return code observed.."
    exit 1
}
trap errorsRead ERR

somecommand   # command of your need; errorsRead runs automatically if it fails
You can do this contraption:
wget http://file1.com || exit 1
This will terminate the script with exit code 1 if the command returns a non-zero (failed) result.
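Applied to the original script:
wget http://file1.com || exit 1
cd /dir || exit 1
wget http://file2.com || exit 1
wget http://file3.com || exit 1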

Error handling in unix Script for subshell script

I have a wrapper.sh script which calls another script, run_workflow.sh, which eventually calls a workflow. I would like to handle errors for run_workflow.sh, i.e., if the workflow is executed successfully then I need to call another script, run_workflow2.sh, which triggers another workflow.
Here is the sample code. Please suggest how to handle the errors.
wrapper.sh
sh run_workflow.sh  # trigger workflow1
if [ $? -ne 0 ]; then
    echo "Workflow Failed"
else
    echo "Workflow Success"
    sh run_workflow2.sh  # trigger workflow2
    if [ $? -ne 0 ]; then
        echo "Workflow2 Failed"
    else
        echo "Workflow2 Success"
    fi
fi
However, when I try this code, I'm not able to return a failed status.
Here is my suggestion. You don't need to explicitly test $?; the syntax is that if is followed by a command ([ is the test command).
exit_value=1  # default failure
if sh run_workflow.sh  # trigger workflow1
then
    echo "Workflow Success"
    if sh run_workflow2.sh  # trigger workflow2
    then
        echo "Workflow2 Success"
        exit_value=0
    else
        echo "Workflow2 Failed" >&2
    fi
else
    echo "Workflow Failed" >&2
fi
exit $exit_value
Note that I echo error messages to stderr (>&2). The exit command returns a status code, which is an integer between 0 and 255. By convention we return 0 on success and 1 on error.
I also indented my code, which all experienced programmers do.
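As a usage sketch (the caller here is hypothetical), the wrapper's exit status can now be tested with the same if-takes-a-command idiom:
if sh wrapper.sh; then
    echo "both workflows succeeded"
else
    echo "at least one workflow failed" >&2
fi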
