Why is the exit status always 0? What is the solution? - shell

The code below is part of my shell script, but I am not able to understand why the exit status (sshStatus) is always coming back as 0.
I want the ssh output as well as the exit status.
Please help me find the solution.
local output="$(ssh -q -o ConnectTimeout=10 \
-o BatchMode=yes \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
$user@$host "$command" 2>&1)"
local sshStatus=$?
command can be:
command="[ ! -d /home/upendra/dfs ]"
command="cat /home/upendra/a.txt"
command="sh /home/upendra/dfs/bin/start-datanode.sh"
Whenever I call the command like below directly at the shell prompt:
ssh -q -o ConnectTimeout=10 \
-o BatchMode=yes \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
upendra@172.20.20.2 "[ ! -d /home/upendra/dfs ]" 2>&1
Then the exit status (echo $?) is 1. This is correct, because this directory does not exist on the host.

I got the solution on this page: bash shell - ssh remote script capture output and exit code? It is due to "local output". – Upendra
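For completeness, a minimal sketch of that fix (reusing the names from the question): declare the variable with local first, then assign in a separate statement, so $? reflects the ssh command rather than the local builtin.
local output
output="$(ssh -q -o ConnectTimeout=10 \
-o BatchMode=yes \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
"$user@$host" "$command" 2>&1)"
local sshStatus=$?   # now the exit status of ssh, not of the local builtin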

You are always getting an exit status of 0 because your command executed successfully.
The exit status you obtain on the local machine is the exit status of the last command in the ssh session, for example:
$ ssh localhost
$ exit 5
$ echo $? #on local system
5
Consider a case without an explicit exit command:
$ ssh localhost
$ ls
# will list files and exit successfully
ctrl+d
$ echo $? #on local system
0
Here the exit status of the ls command, which is 0, is what gets printed.
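The same holds when the command is passed on the ssh command line: ssh exits with the remote command's status (255 is reserved for ssh's own connection errors). A quick check, assuming key-based login to localhost works:
$ ssh localhost 'exit 42'; echo $?
42
$ ssh localhost '[ -d /no/such/dir ]'; echo $?
1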

Every command returns an exit status (sometimes referred to as a return status or exit code). A successful command returns a 0, while an unsuccessful command returns a non-zero value that usually can be interpreted as an error code. Well-behaved UNIX commands, programs, and utilities return a 0 exit code upon successful completion, though there are some exceptions.
Likewise, functions within a script and the script itself return an exit status. The last command executed in the function or script determines the exit status. Within a script, an exit nnn command may be used to deliver an nnn exit status to the shell (nnn must be an integer in the 0 - 255 range).
#!/bin/bash
echo hello
echo $? # Exit status 0 returned because command executed successfully.
lskdf # Unrecognized command.
echo $? # Non-zero exit status returned -- command failed to execute.
echo
exit 113 # Will return 113 to shell.
# To verify this, type "echo $?" after script terminates.
Your code returns exit code 0, which means your shell script executed successfully.

Related

Return the exact returned value if wget fails

I'd love to return the exact value if the wget command fails, in an efficient way, without changing it.
Can exit #? output the value returned by wget?
Ex.
# If it succeeds, then wget returns zero instead of non-zero
## 0 No problems occurred.
## 1 Generic error code.
## 2 Parse error—for instance, when parsing command-line options, the ‘.wgetrc’ or ‘.netrc’...
## 3 File I/O error.
## 4 Network failure.
## 5 SSL verification failure.
## 6 Username/password authentication failure.
## 7 Protocol errors.
## 8 Server issued an error response.
wget https://www.google.co.jp/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png -o test.img
if [ $? -ne 0 ]
then
# exit 16 # failed ends1 <== This doesn't tell anything
exit #?
fi
wget https://www.google.co.jp/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png -o test.img
# grab wget's exit code
exit_code=$?
# if exit code is not 0 (failed), then return it
test $exit_code -eq 0 || exit $exit_code
The -e option of Bash may do what you want:
Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR), exits with a non-zero status.
It's also important to know that
Bash's exit status is the exit status of the last command executed in the script.
My experiments with Bash 4.4 suggest that the exit status of the failing command is returned, even if a trap handler is invoked:
$ ( trap 'echo $?' ERR; set -e; ( exit 3 ) ; echo true ) ; echo $?
3
3
So you can write:
#!/bin/bash
url=https://www.google.co.jp/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png
set -e
wget -o test.img "$url"
set +e # if you no longer want exit on fail
For just one command in your script, you might prefer an explicit test and exit like this:
wget -o test.img "$url" || exit $?
Further, exit with no argument is the same as exit $?, so that can be simplified to just
wget -o test.img "$url" || exit

bash: get exit code of su script execution

I have a shell script which needs to run as a particular user, so I call that script as below:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log"
After this, when I check the exit code of the last execution, it always returns 0, even if that script fails.
I also tried something like below, which didn't help:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log && echo $? || echo $?"
Is there a way to get the exit code of whatever command is run through su?
The problem here is not su, but tee: by default, a pipeline's exit status is that of its last component; in your code, that component is not check_package.sh, but tee.
If your /bin/sh is provided by bash (as opposed to ash, dash, or another POSIX-baseline shell), use set -o pipefail to cause the entire pipeline to fail if any component of it does:
su - testuser -c "set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log"
Alternatively, you can do the tee out-of-band with a redirection to a process substitution (though this requires your current user to have permission to write to check_package.log):
su - testuser -c "/root/check_package.sh" > >(tee -a /var/log/check_package.log)
Both su and sudo exit with the exit status of the command they execute (if authentication succeeded):
$ sudo false; echo $?
1
$ su -c false; echo $?
1
Your problem is that the command su runs is a pipeline. The exit status of your pipeline is that of the tee command (which succeeds), but what you really want is that of the first command in the pipeline.
If your shell is bash, you have a couple of options:
set -o pipefail before your pipeline, which will make it return the exit status of the rightmost command that exited non-zero, if any of them fail
Examine the specific member of the PIPESTATUS array variable - this can give you the exit status of the first command whether or not tee succeeds.
Examples:
$ sudo bash -c "false | tee -a /dev/null"; echo $?
0
$ sudo bash -c "set -o pipefail; false | tee -a /dev/null"; echo $?
1
$ sudo bash -c 'false | tee -a /dev/null; exit ${PIPESTATUS[0]}'; echo $?
1
You will get similar results using su -c, if your system shell (in /bin/sh) is Bash. If not, then you'd need to explicitly invoke bash, at which point sudo is clearly simpler.
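For example, one way to force that (a sketch, reusing the paths from the question) is to have su's shell hand the pipeline to an explicit bash:
su - testuser -c 'bash -c "set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log"'
echo $? # status of check_package.sh if it failed, otherwise tee's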
I was facing a similar issue today; in case the topic is still open, here is my solution, otherwise just ignore it...
I wrote a bash script (let's say my_script.sh) which looks more or less like this:
### FUNCTIONS ###
<all functions listed in the main script which do what I want...>
### MAIN SCRIPT ### calls the functions defined in the section above
main_script() {
log_message "START" 0
check_env
check_data
create_package
tar_package
zip_package
log_message "END" 0
}
main_script | tee -a ${var_log}  # executes the script and writes info into the log file
var_sta=${PIPESTATUS[0]}         # captures the exit status of main_script (the first pipeline element)
exit ${var_sta}                  # exits with that status
It works when you call the script directly or via sudo.
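A quick check (hypothetical run, with the script saved as my_script.sh as above):
$ sudo ./my_script.sh; echo $? # prints main_script's status, not tee's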

start-stop-daemon always returns success

I have the following piece of code in a BASH script:
start-stop-daemon --start --quiet --background \
--startas /bin/bash -- -c "$CMD; exec echo $?> $FILE"
where CMD is some command string and FILE is an output file location. For some reason though, the output in FILE is always 0, even when it should give a different number (1 for error, etc).
Why is that, and how do I fix it?
Your $? gets expanded in the context of the current shell together with $CMD and $FILE.
To fix this you'll have to escape it like \$?, i.e.
... /bin/bash -c "$CMD; exec echo \$? >$FILE"
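To see the difference between the two expansions in isolation, here is a hypothetical test independent of start-stop-daemon:
$ true
$ bash -c "false; echo $?" # $? expanded by the current shell (status of true): prints 0
0
$ bash -c "false; echo \$?" # \$? expanded inside the child shell: prints 1
1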
exec echo $? will print 0 if your last command ran successfully, and that is why it prints 0 in the file. What are you trying to achieve?

How to check a command's status while redirecting its standard error output to a file?

I have a bash script containing the following command:
rm ${thefile}
In order to ensure the command executes successfully, I use the $? variable to check the status, but this variable doesn't show the exact error. To capture it, I redirect the standard error output to a log file using the following command:
rm ${file} >> ${LOG_FILE} 2>&1
With this command I can't use the $? variable to check the status of the rm command, because the command behind the rm command is executed successfully, thus the $? variable is kind of useless here.
May I know whether there is a solution that combines both features, where I'm able to check the status of the rm command and at the same time redirect the output?
With this command I can't use the $? variable to check the status of the rm command, because the command behind the rm command is executed successfully, thus the $? variable is kind of useless here.
That is simply not true. All of the redirections are part of a single command, and $? contains its exit status.
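A quick way to see this (with a hypothetical missing file and log path):
$ rm /no/such/file >> /tmp/rm.log 2>&1
$ echo $?
1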
What you may be thinking of is cases where you have multiple commands arranged in a pipeline:
command-1 | command-2
When you do that, $? is set to the exit status of the last command in the pipeline (in this case command-2), and you need to use the PIPESTATUS array to get the exit status of other commands. (In this example ${PIPESTATUS[0]} is the exit status of command-1 and ${PIPESTATUS[1]} is equivalent to $?.)
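For example (hypothetical paths; note that PIPESTATUS refers only to the most recently executed pipeline, so read it immediately):
$ rm /no/such/file 2>&1 | tee -a /tmp/rm.log # rm fails, tee succeeds
$ echo "$? ${PIPESTATUS[0]}"
0 1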
What you probably need is the shell option pipefail in bash (from man bash):
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
> shopt -s -o pipefail
> true | false
> echo $?
1
> false | true
> echo $?
1
> true | true
> echo $?
0

How to keep the make log and check it live?

I want to redirect make's log and also watch what make is doing. Here is the script:
make |& tee make.log # bash syntax
# make 2>&1 | tee make.log # or, sh syntax
[ $? -ne 0 ] && echo "Error: stopped" && exit 1
echo "Done"
I found that it won't execute the error exit when make fails.
I guess this is caused by the pipe, but how can I refine the build script?
Since you're already restricting yourself to using bash, not portable POSIX sh, you can just use bash's pipefail shell option; run set -o pipefail. The man page for bash says:
If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
e.g.:
#!/bin/bash
set -o pipefail
if make |& tee make.log ; then
echo "Done"
else
echo "Error: stopped"
exit 1
fi
