Catching errors in Bash with glassfish commands [return code in pipes] - bash

I am writing a bash script to manage deployments to a GF server for several environments. What I would like to know is how I can get the result of a GF command and then decide whether to continue or exit.
For example
Say I want to redeploy, I have this script
$GF_ASADMIN --port $GF_PORT redeploy --name $EAR_FILE_NAME --keepstate=true $EAR_FILE | tee -a $LOG
The variables are already defined. So GF will start to redeploy and either succeed or fail. I want to check whether it does and act accordingly. I have this right after it:
RC=$?
if [[ $RC -eq 0 ]]; then
    echoInfo "Application Successfully redeployed!" | tee -a $LOG
else
    echoError "Failed to redeploy application!"
    exit 1
fi
However, it doesn't really seem to work.

The problem is the pipe
$GF_ASADMIN ... | tee -a $LOG
$? reflects the return code of tee.
You are looking for PIPESTATUS. See man bash:
PIPESTATUS
An array variable (see Arrays below) containing a list of exit
status values from the processes in the most-recently-executed
foreground pipeline (which may contain only a single command).
See also this example to clarify PIPESTATUS:
false | true
echo "${PIPESTATUS[@]}"
Output is: 1 0
The corrected code is:
RC=${PIPESTATUS[0]}
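Applied to the redeploy snippet from the question, a minimal sketch (reusing the OP's variables and the echoInfo/echoError helpers) would look like:

"$GF_ASADMIN" --port "$GF_PORT" redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE" | tee -a "$LOG"
RC=${PIPESTATUS[0]}    # capture immediately: the very next command overwrites PIPESTATUS
if [[ $RC -eq 0 ]]; then
    echoInfo "Application Successfully redeployed!" | tee -a "$LOG"
else
    echoError "Failed to redeploy application!"
    exit 1
fi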

Or try using a code block redirect, for example:
{
    if "$GF_ASADMIN" --port "$GF_PORT" redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE"
    then
        echoInfo "Application Successfully redeployed!"
    else
        echoError "Failed to redeploy application!" >&2
        exit 1
    fi
} | tee -a "$LOG"
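One caveat with this variant: because the braced group runs as the left-hand side of a pipeline (i.e. in a subshell), the exit 1 only terminates that subshell, and the pipeline's overall status is tee's. If the outer script must also stop on failure, check ${PIPESTATUS[0]} after the pipeline or enable set -o pipefail and test $?.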

Related

Trying to exit main command from a piped grep condition

I'm struggling to find a good solution for what I'm trying to do.
So I have a CreateReactApp instance that is booted through yarn run start:e2e. As soon as the output from that command contains "Compiled successfully", I want to run the next command in the bash script.
Different things I tried:
if yarn run start:e2e | grep "Compiled successfully"; then
exit 0
fi
echo "THIS NEEDS TO RUN"
This does appear to stop the logs, but it does not run the next command.
yarn run start:e2e | while read -r line;
do
    echo "$line"
    if [[ "$line" == *"Compiled successfully!"* ]]; then
        exit 0
    fi
done
echo "THIS NEEDS TO RUN"
yarn run start:e2e | grep -q "Compiled successfully";
echo $?
echo "THIS NEEDS TO RUN"
I've read about the differences between pipes and process substitutions, but I don't see a practical implementation for my use case.
Can someone enlighten me on what I'm doing wrong?
Thanks in advance!
EDIT: Because I got multiple proposed solutions and none of them worked, I'll redefine my main problem a bit.
The yarn run start:e2e boots up a React app that has a sort of "watch" mode: it keeps spewing out logs after the "Compiled successfully" part whenever changes occur to the source code, typechecks, and so on.
After the React part is booted (i.e. once the Compiled successfully log is output), the logs no longer matter, but the localhost:3000 that yarn serves must remain active.
Then I run other commands after the yarn run to do some testing against localhost:3000.
So basically what I want to achieve in pseudo-code (the pipe stuff in command A is very abstract and may not even look like the correct solution, but I'm trying to explain thoroughly):
# command A
yarn run dev | cmd_to_watch_the_output "Compiled successfully" | exit 0   -> localhost:3000 stays active but the shell is back in 'this' window
-> keep watching the output until "Compiled successfully" occurs
-> if it occurs, the logs no longer matter and I want to run command B
# command B
echo "I WANT TO SEE THIS LOG"
... do other stuff ...
I hope this clears it up a bit more :D
Thanks already for the propositions!
If you want yarn run to keep running even after Compiled successfully, you can't just pipe its stdout to another program that exits after that line: that stdout needs to have somewhere to go so yarn's future attempts to write logs don't fail or block.
#!/usr/bin/env bash
case $BASH_VERSION in
''|[0-3].*|4.[012].*) echo "Error: bash 4.3+ required" >&2; exit 1;;
esac
exec {yarn_fd}< <(yarn run start:e2e); yarn_pid=$!
while IFS= read -r line <&"$yarn_fd"; do
    printf '%s\n' "$line"
    if [[ $line = *"Compiled successfully!"* ]]; then
        break
    fi
done
# start a background process that reads future stdout from `yarn run`
cat <&"$yarn_fd" >/dev/null & cat_pid=$!
# close our copy of the FD so `cat` has the only one
exec {yarn_fd}<&-
echo "Doing other things here!"
echo "When ready to shut down yarn, kill $yarn_pid and $cat_pid"

jenkins stuck after restarting remote service agent

This is part of a bigger script that checks the OS version and chooses the correct condition for it.
(After checking the OS version, it will go to this if condition:)
if [ "$AK" = "$OS6" ]
then
if [ "$(ls -la /etc/init.d/discagent 2>/dev/null | wc -l)" == 1 ]
then
/etc/init.d/discagent restart 2>&1 > /dev/null
/etc/init.d/discagent status |& grep -qe 'running'
if [ $? -eq 0 ] ; then
echo " Done "
else
echo " Error "
fi
fi
fi
If I comment out the service discagent restart line, the pipeline passes.
But if I don't comment it out, the script hangs without giving any errors, and the output file shows only the second server (out of a few), which it hangs on without moving to the next server.
What could be the issue?
P.S. While running it directly on the server, it works.
Run this script manually on the servers that it will be running on.
You can use xtrace (-x), which shows each statement before it is executed. If you combine -v with -x (i.e. -xv), each line is printed as it is read, and then printed again with the variables substituted just before it is executed.
Using -u would show you an exact error as soon as an unset variable is used.
Another tool is the trap command, which you can also use to debug your bash script.
More information on the following webpage,
https://linuxconfig.org/how-to-debug-bash-scripts
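A minimal sketch of those debugging options applied to a hypothetical script:

#!/usr/bin/env bash
set -xvu   # -x: trace each command before it runs; -v: echo lines as read; -u: error on unset variables
/etc/init.d/discagent restart > /dev/null 2>&1
# alternatively, a DEBUG trap prints each command just before it executes:
# trap 'echo "about to run: $BASH_COMMAND" >&2' DEBUG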

Not able to fetch the exit status of a multiple commands (separated by PIPE) which got assigned to a variable

Below is a sample script which I am trying to execute, but it fails to fetch the exit status of $cmd. Is there any other way to fetch its exit status?
cmd="curl -mddddddd google.com"
status=$($cmd | wc -l)
echo ${PIPESTATUS[0]}
I know that if I replace status=$($cmd | wc -l) with $cmd | wc -l, I could fetch the exit status of $cmd using PIPESTATUS. But in my case I have to assign the output to a variable (status in the example above).
Please help me here!
Regards,
Rohith
What you're assigning to the status variable is not a status, but what the $cmd | wc -l pipeline prints to standard output.
Why do you echo anyway? Try realstatus=${PIPESTATUS[0]}.
EDIT (After some digging and RTFMing...):
Just this -- realstatus=${PIPESTATUS[0]} -- doesn't seem to help, since $(command_substitution), which is in your code, is done "in a subshell environment", while PIPESTATUS refers to "the most-recently-executed foreground pipeline".
If what you're trying to do in this particular case is to ensure the curl (aka $cmd) command was successful in the pipeline, you should probably make use of the pipefail option (see here).
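A minimal sketch of that, reusing the OP's $cmd variable: pipefail is inherited by the command substitution's subshell, so $? right after the assignment reflects the whole pipeline rather than just wc.

set -o pipefail
status=$($cmd | wc -l)
rc=$?                   # nonzero if curl (or wc) failed
echo "line count: $status, pipeline status: $rc"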
If the output of the command is text and not excessively large, the simplest way to get the status of the command is to not use a pipe:
cmd_output=$($cmd)
echo "'$cmd' exited with $?"
linecount=$(wc -l <<<"$cmd_output")
echo "'wc' exited with $?"
What counts as "excessively large" depends on the system, but I successfully tested the code above with a command that generated 50 megabytes (over one million lines) of output on an old Linux system.
If the output of the command is too big to store in memory, another option is to put it in a temporary file:
$cmd >tmpfile
echo "'$cmd' exited with $?"
linecount=$(wc -l <tmpfile)
echo "'wc' exited with $?"
You need to be careful when using temporary files though. See Creating temporary files in Bash and How create a temporary file in shell script?.
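For example, a sketch using mktemp and an EXIT trap so the file is removed however the script ends:

tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT   # clean up on normal exit, errors, and interrupts
$cmd >"$tmpfile"
echo "'$cmd' exited with $?"
linecount=$(wc -l <"$tmpfile")
echo "'wc' exited with $?"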
Note that, as with the OP's example code, the unquoted $cmd in the code examples above is dangerous. It should not be used in real code.
If you just want to echo the pipe status, you can redirect that to stderr. But you have to do it in the subshell.
status=$($cmd | wc -l; echo ${PIPESTATUS[0]} >&2)
Or you can capture both variables from the subshell using read
read -rd $'\0' status pstatus <<<$($cmd | wc -l; echo ${PIPESTATUS[0]})

Abort bash script if git pull fails [duplicate]

I have a Bash shell script that invokes a number of commands.
I would like to have the shell script automatically exit with a return value of 1 if any of the commands return a non-zero value.
Is this possible without explicitly checking the result of each command?
For example,
dosomething1
if [[ $? -ne 0 ]]; then
    exit 1
fi
dosomething2
if [[ $? -ne 0 ]]; then
    exit 1
fi
Add this to the beginning of the script:
set -e
This will cause the shell to exit immediately if a simple command exits with a nonzero exit value. A simple command is any command not part of an if, while, or until test, or part of an && or || list.
See the bash manual on the "set" internal command for more details.
It's really annoying to have a script stubbornly continue when something fails in the middle and breaks assumptions for the rest of the script. I personally start almost all portable shell scripts with set -e.
If I'm working with bash specifically, I'll start with
set -Eeuo pipefail
This covers more error handling in a similar fashion; I consider these options sane defaults for new bash programs. Refer to the bash manual for more information on what they do.
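For reference, a line-by-line reading of those flags (my annotations, paraphrasing the manual):

#!/usr/bin/env bash
set -E          # ERR traps are inherited by functions, command substitutions, and subshells
set -e          # exit immediately when a command fails outside a tested context
set -u          # treat expansion of an unset variable as an error
set -o pipefail # a pipeline fails if any component fails, not just the last one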
To add to the accepted answer:
Bear in mind that set -e sometimes is not enough, especially if you have pipes.
For example, suppose you have this script
#!/bin/bash
set -e
./configure > configure.log
make
... which works as expected: an error in configure aborts the execution.
Tomorrow you make a seemingly trivial change:
#!/bin/bash
set -e
./configure | tee configure.log
make
... and now it does not work. This is explained here, and a workaround (Bash only) is provided:
#!/bin/bash
set -e
set -o pipefail
./configure | tee configure.log
make
The if statements in your example are unnecessary. Just do it like this:
dosomething1 || exit 1
If you take Ville Laurikari's advice and use set -e then for some commands you may need to use this:
dosomething || true
The || true will make the command pipeline have a true return value even if the command fails, so the -e option will not kill the script.
If you have cleanup you need to do on exit, you can also use 'trap' with the pseudo-signal ERR. This works the same way as trapping INT or any other signal; bash throws ERR if any command exits with a nonzero value:
# Create the trap with
# trap COMMAND SIGNAME [SIGNAME2 SIGNAME3...]
trap "rm -f /tmp/$MYTMPFILE; exit 1" ERR INT TERM
command1
command2
command3
# Partially turn off the trap.
trap - ERR
# Now a control-C will still cause cleanup, but
# a nonzero exit code won't:
ps aux | grep blahblahblah
Or, especially if you're using "set -e", you could trap EXIT; your trap will then be executed when the script exits for any reason, including a normal end, interrupts, an exit caused by the -e option, etc.
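For instance, a minimal sketch of that (command1 and command2 are placeholders), where cleanup runs on a normal end, a set -e abort, or an interrupt alike:

set -e
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT   # fires however the script ends
command1 > "$tmpfile"
command2 < "$tmpfile"
# no explicit cleanup needed at the end of the script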
The $? variable is rarely needed. The pseudo-idiom command; if [ $? -eq 0 ]; then X; fi should always be written as if command; then X; fi.
The cases where $? is required are when it needs to be checked against multiple values:
command
case $? in
    (0) X;;
    (1) Y;;
    (2) Z;;
esac
or when $? needs to be reused or otherwise manipulated:
if command; then
    echo "command successful" >&2
else
    ret=$?
    echo "command failed with exit code $ret" >&2
    exit $ret
fi
Run it with -e or set -e at the top.
Also look at set -u.
On error, the below script will print a RED error message and exit.
Put this at the top of your bash script:
# BASH error handling:
# exit on command failure
set -e
# keep track of the last executed command
trap 'LAST_COMMAND=$CURRENT_COMMAND; CURRENT_COMMAND=$BASH_COMMAND' DEBUG
# on error: print the failed command
trap 'ERROR_CODE=$?; FAILED_COMMAND=$LAST_COMMAND; tput setaf 1; echo "ERROR: command \"$FAILED_COMMAND\" failed with exit code $ERROR_CODE"; tput sgr0;' ERR INT TERM
An expression like
dosomething1 && dosomething2 && dosomething3
will stop processing when one of the commands returns with a non-zero value. For example, the following command will never print "done":
cat nosuchfile && echo "done"
echo $?
1
#!/bin/bash -e
should suffice.
I am just throwing in another one for reference, since there was a follow-up question to Mark Edgar's answer; here is an additional example that touches on the topic overall:
[[ `cmd` ]] && echo success_else_silence
Note that this tests whether cmd produced any output rather than its exit status, so it is not quite the same as the cmd || exit errcode form shown earlier; it works when the command's output appears exactly when it succeeds.
For example, I want to make sure a partition is unmounted if mounted:
[[ `mount | grep /dev/sda1` ]] && umount /dev/sda1
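If you want to branch on the exit status directly instead of the output, a grep -q variant of the same check (a sketch, not a drop-in replacement for the line above) would be:

if mount | grep -q '/dev/sda1'; then
    umount /dev/sda1
fi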

false | true; echo $? [duplicate]

I currently have a script that does something like
./a | ./b | ./c
I want to modify it so that if any of a, b, or c exit with an error code I print an error message and stop instead of piping bad output forward.
What would be the simplest/cleanest way to do so?
In bash you can use set -e and set -o pipefail at the beginning of your file. A subsequent command ./a | ./b | ./c will fail when any of the three scripts fails. The return code will be the return code of the first failed script.
Note that pipefail isn't available in standard sh.
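A minimal sketch of that setup, using the question's three scripts:

#!/bin/bash
set -e
set -o pipefail
./a | ./b | ./c    # the script aborts here if a, b, or c fails
echo "all three succeeded"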
You can also check the ${PIPESTATUS[@]} array after the full execution, e.g. if you run:
./a | ./b | ./c
Then ${PIPESTATUS[@]} will be an array of error codes from each command in the pipe, so if the middle command failed, echo "${PIPESTATUS[@]}" would print something like:
0 1 0
and something like this run after the command:
test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0
will allow you to check that all commands in the pipe succeeded.
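If you need per-command statuses, copy the array first, since PIPESTATUS is replaced by the very next command you run; a sketch:

./a | ./b | ./c
rc=("${PIPESTATUS[@]}")    # copy immediately
for i in "${!rc[@]}"; do
    if [ "${rc[$i]}" -ne 0 ]; then
        echo "command $((i + 1)) in the pipe failed with status ${rc[$i]}" >&2
    fi
done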
If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:
tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b <$tmp.1 >$tmp.2
    then
        if ./c <$tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]
The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1' so I've used that unambiguous notation for reliability for ages.
This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15
The first trap line says: run the commands rm -f $tmp.[12]; exit 1 when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE, or 15 SIGTERM occurs, or on 0 (when the shell exits for any reason).
If you're writing a shell script, the final trap only needs to remove the trap on 0, which is the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.
If you want to detect which command(s) fail, you can use:
(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)
This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.
Simple experimentation with 'set -e' didn't help.
Unfortunately, the answer by Jonathan requires temporary files and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before later processes are started. All processes are started at once and will thus all run before any errors can be communicated. But the title of the question was also asking about error codes. These can be retrieved and investigated after the pipe has finished to figure out whether any of the involved processes failed.
Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.
res=$( (./a 2>&1 || echo "1st failed with $?" >&2) |
(./b 2>&1 || echo "2nd failed with $?" >&2) |
(./c 2>&1 || echo "3rd failed with $?" >&2) > /dev/null 2>&1)
if [ -n "$res" ]; then
echo pipe failed
fi
To detect whether anything failed, an echo command prints to standard error in case any command fails. Then the combined standard error output is saved in $res and investigated later. This is also why standard error of all processes is redirected to standard output. You can also send that output to /dev/null, or leave it as yet another indicator that something went wrong. You can replace the last redirect to /dev/null with a file if you need to store the output of the last command somewhere.
To play more with this construct and to convince yourself that this really does what it should, I replaced ./a, ./b and ./c by subshells which execute echo, cat and exit. You can use this to check that this construct really forwards all the output from one process to another and that the error codes get recorded correctly.
res=$( (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
(sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
(sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2) > /dev/null 2>&1)
if [ -n "$res" ]; then
echo pipe failed
fi
This answer is in the spirit of the accepted answer, but using shell variables instead of temporary files.
if TMP_A="$(./a)"
then
    if TMP_B="$(echo "$TMP_A" | ./b)"
    then
        if TMP_C="$(echo "$TMP_B" | ./c)"
        then
            echo "$TMP_C"
        else
            echo "./c failed"
        fi
    else
        echo "./b failed"
    fi
else
    echo "./a failed"
fi
