How to get return status of a command in subshell into the main shell? - bash

I want to retrieve the return status of a command which is being executed in a subshell.
I am running the script below from Unix Box A, which has passwordless SSH access to Box B, whose IP appears in the script as ip_addr.
I want to get, in my current environment, the return status of the command that runs in the subshell.
That is, if the command below fails:
echo "cmd" | system_program 2>> /dev/null
then echo $? should print a non-zero value, and I should be able to use that value to decide further action.
Snippet of my script is:
sample.sh :
ip_addr="xxx.xxx.xx.xx"
status=$(ssh -q -T $ip_addr << EOF
rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null; echo $?
EOF
)

You don't need the here-doc or the echo; ssh exits with the exit status of the last command it ran remotely. Try:
ssh -q -T $ip_addr 'rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null'
Or, if you want to keep the here-doc, set errexit so the first failing command determines the exit status:
status=$(ssh -q -T $ip_addr << EOF
set -o errexit
rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null
EOF
)
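With either form, $? in the calling shell is the remote command's exit status (ssh reserves 255 for its own connection errors), so you can branch on it directly. A minimal sketch, assuming the same ip_addr and command as above:
ssh -q -T "$ip_addr" 'rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null'
status=$?   # remote exit status; 255 would mean ssh itself failed
if [ "$status" -ne 0 ]; then
    echo "remote command failed with status $status" >&2
    # take whatever corrective action is needed here
fi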

Related

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called keycloak by using a shell script. To do that, export.sh is run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay and the export works perfectly.
But when I exit from the pod with exit, the whole shell script ends right there and control moves back to the pipeline host instead of staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu@example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops and the remaining commands never run; it does not stay on ubuntu@example1.com.
Is there any solution?
Run the commands inside the pod without an interactive shell by feeding them through a here-document (heredoc).
Note that the delimiter is 'EOF', not EOF: quoting it prevents variable and glob expansion in the current shell,
so /tmp/export/master-* in the inner script expands inside the pod, as you expect.
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
<put the commands here that you would otherwise type interactively>
EOF
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested code: run the in-pod commands through the here-doc
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
EOF
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether the scp succeeds or not, this script reaches the end and exits back to the pipeline host.
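One related detail, stated as an assumption about kubectl rather than something from the original answer: kubectl exec normally exits with the status of the command it ran in the pod, which here is the status of the last command in the here-doc. Since export.sh uses set -e, adding set -e inside the heredoc as well lets a failed in-pod step abort the whole export, in keeping with the exit-status theme of this page:
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
set -e                     # stop at the first failing in-pod command
mkdir -p /tmp/export
# ... the export commands from above go here ...
EOF
# with set -e in export.sh, reaching this line means the in-pod steps succeeded
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export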

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
echo "Not started"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
if [ $NUM -lt 1 ]
then
echo "Not working"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
ps ax | grep $(cat $NPID) | grep -v grep
echo "All Ok"
fi
fi
websrv gets JSON from the user and then runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Usage of the full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv & it works as well.
However, when this whole chain of tools is started by cron at boot, work.sh cannot perform any command except cp, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces an exit 1 status of the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log that might explain the particular errors.
Try adding the path from the session that runs the script correctly to the cron entry (or inside the script).
Get the current path (where the script runs fine) with echo $PATH and add it to the crontab, replacing the placeholder below with that output:
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
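The "or inside the script" option mentioned above can also be done directly in run.sh (or work.sh), so the fix does not depend on the crontab entry. A minimal sketch; in the example output above the only directory cron was missing is /home/ubuntu/.local/bin, so adjust to whatever your own diff shows:
#!/usr/bin/env bash
# prepend the directories an interactive session has but cron lacks
export PATH="/home/ubuntu/.local/bin:$PATH"
# ... rest of run.sh (or work.sh) unchanged ...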

Return Status of first command before Pipe in ksh

The following command line gives the exit status of the tee command. How do I get the return value of make in the following command line?
$ make -f Makefile_64bit 2>&1 | tee -a logfile
$ echo $?
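No answer is quoted for this one, but the same techniques discussed in the su/tee answers below apply, assuming ksh93 (older ksh88 has no pipefail) or bash:
# option 1: make the pipeline fail if any component fails
set -o pipefail
make -f Makefile_64bit 2>&1 | tee -a logfile
echo $?    # non-zero if make (or tee) failed

# option 2: avoid the pipe entirely via process substitution
make -f Makefile_64bit > >(tee -a logfile) 2>&1
echo $?    # make's own exit status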

bash get exitcode of su script execution

I have a shell script that needs to run as a particular user, so I call that script as below:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log"
After this, when I check the exit code of the last execution, it always returns 0, even if the script fails.
I also tried the following, which didn't help:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log && echo $? || echo $?"
Is there a way to get the exit code of a command run through su?
The problem here is not su, but tee: By default, the shell exits with the exit status of the last pipeline component; in your code, that component is not check_package.sh, but instead is tee.
If your /bin/sh is provided by bash (as opposed to ash, dash, or another POSIX-baseline shell), use set -o pipefail to cause the entire pipeline to fail if any component of it does:
su - testuser -c "set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log"
Alternately, you can do the tee out-of-band with redirection to a process substitution (though this requires your current user to have permission to write to check_package.log):
su - testuser -c "/root/check_package.sh" > >(tee -a /var/log/check_package.log
Both su and sudo exit with the exit status of the command they execute (if authentication succeeded):
$ sudo false; echo $?
1
$ su -c false; echo $?
1
Your problem is that the command su runs is a pipeline. The exit status of your pipeline is that of the tee command (which succeeds), but what you really want is that of the first command in the pipeline.
If your shell is bash, you have a couple of options:
set -o pipefail before your pipeline, which will make it return the rightmost failure value of all the commands if any of them fail
Examine the specific member of the PIPESTATUS array variable - this can give you the exit status of the first command whether or not tee succeeds.
Examples:
$ sudo bash -c "false | tee -a /dev/null"; echo $?
0
$ sudo bash -c "set -o pipefail; false | tee -a /dev/null"; echo $?
1
$ sudo bash -c 'false | tee -a /dev/null; exit ${PIPESTATUS[0]}'; echo $?
1
You will get similar results using su -c, if your system shell (in /bin/sh) is Bash. If not, then you'd need to explicitly invoke bash, at which point sudo is clearly simpler.
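A sketch of that "explicitly invoke bash" route through su (not from the original answer; the quoting levels are the fiddly part):
su - testuser -c "bash -c 'set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log'"
echo $?    # now reflects a failure of check_package.sh, not just tee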
I was facing a similar issue today; in case the topic is still open, here is my solution, otherwise just ignore it...
I wrote a bash script (let's say my_script.sh) which looks more or less like this:
### FUNCTIONS ###
<all functions listed in the main script which do what I want...>
### MAIN SCRIPT ### calls the functions defined in the section above
main_script() {
log_message "START" 0
check_env
check_data
create_package
tar_package
zip_package
log_message "END" 0
}
main_script | tee -a ${var_log} # executes the script and writes info into the log file
var_sta=${PIPESTATUS[0]} # captures status of pipeline
exit ${var_sta} # exits with value of status
It works whether you call the script directly or in sudo mode.
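A quick check (the path to my_script.sh is a placeholder, following the answer above):
su - testuser -c "/path/to/my_script.sh"
echo $?    # propagated from exit ${var_sta}, i.e. the status of main_script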

How can I run this command line in the background?

I have this
script -q -c "continuously_running_command" /dev/null > out
When I have the above command line running I can stop it by pressing CTRL+C.
However, I'd like to run the above command line in the background so that I can stop it with kill -9 %1.
But when I try to run this
script -q -c "continuously_running_command" /dev/null > out &
I get
[2]+ Stopped (tty output) script -q -c "continuously_running_command" /dev/null 1>out
Question:
How can I run the above command line in the background?
In order to background a process with redirection to a file, you must also redirect stderr. With stdout and stderr redirected, you can then background the process:
script -q -c "continuously_running_command" /dev/null > out 2>&1 &
Fully working example:
#!/bin/bash
i=$((0+0))
while test "$i" -lt 100; do
((i+=1))
echo "$i"
sleep 1
done
Running the script and tail of output file while backgrounded:
alchemy:~/scr/tmp/stack> ./back.sh > outfile 2>&1 &
[1] 31779
alchemy:~/scr/tmp/stack> tailf outfile
10
11
12
13
14
...
100
[1]+ Done ./back.sh > outfile 2>&1
In the case of:
script -q -c "continuously_running_command" /dev/null
The problem in this case is that script itself redirects all dialog with the command to FILE, here /dev/null. So either issue the command without sending the typescript to /dev/null, or just redirect stderr to out:
script -q -c "continuously_running_command" out 2>&1 &
or
script -q -c "continuously_running_command" /dev/null/ 2>out &
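Once it is backgrounded, the job can be stopped as the question intended; a plain kill (SIGTERM) is usually enough, with -9 as a last resort:
jobs         # list background jobs and their numbers
kill %1      # stop job 1, or: kill -9 %1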
