How to check that the `exit` command executed in an ssh session? - shell

I want to check whether an `exit` command executed successfully in an ssh session.
My first attempt was the command `exit && echo $? || echo $?`. The `echo $? || echo $?` part should print 0 if `exit` succeeds.
The problem is that `echo` never executes when `exit` succeeds, because the connection is closed and the later command is lost.
My second attempt was to split this into two commands:
$ exit
$ echo $?
`echo $?` should print 0 if `exit` executed successfully.
But another problem is that `echo $?` may be swallowed: it is sent so quickly that it arrives at the remote host before `exit` has executed.
So, how can I ensure that `exit` has executed on the remote host before sending the next command?
UPDATE
I am executing shell commands from a programming language and sending them over an ssh pipe stream, so I don't know when the `exit` command completes. If `exit` has not completed, my subsequent commands are swallowed because they are sent to the exiting host.
That is why I care about when the `exit` command has executed.

If your main concern is knowing that you are back on your local machine, you could define a variable before running ssh that is known only on your local machine. After exiting, test for the existence of that variable: if it exists you are back on the local machine; if it does not, try to exit again, because you are not back yet.
#define this before ssh
uniqueVarName333=1
Then in your script:
# ssh stuff
exit
if [ -z ${uniqueVarName333+x} ]; then exit; else echo "Back on local machine"; fi
Or you could simply issue exit several times to make it very likely that the remote session is closed:
exit || exit || exit # repeat as many times as you like to push the probability of not exiting close to 0

Inspired by #9Breaker.
I solved this by sending an `echo 'END-COMMAND'` flag repeatedly at a short interval, such as 15 ms.
To clarify with a shell channel example:
echo '' && echo 'BEGIN-COMMAND' && exit
echo 'END-COMMAND'
// if no response, resend `echo 'END-COMMAND'`
echo $? && echo 'END-COMMAND'
// if no response, resend `echo 'END-COMMAND'`
echo $? && echo 'END-COMMAND'
We can read characters from the IO stream and parse them until we match BEGIN-COMMAND and END-COMMAND.
The response on success may be:
BEGIN-COMMAND
0
END-COMMAND // may need to be sent multiple times before the response arrives
or, on failure when the network breaks while connecting:
BEGIN-COMMAND
ssh: connect to host 192.168.1.149 port 22: Network is unreachable
255
END-COMMAND
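As a minimal sketch of this marker protocol, a local `sh` can stand in for the remote ssh shell channel (the markers mirror the answer above; with a real `exit` the lines after it would be lost, which is exactly what the repeated END-COMMAND probes detect):

```shell
# Hypothetical sketch: "driver" plays the role of the program writing
# commands into the ssh pipe; the local `sh` plays the remote shell.
driver() {
    echo "echo BEGIN-COMMAND"   # flag: the remote started our command
    echo "true"                 # the wrapped command (here a no-op)
    echo 'echo $?'              # its exit status
    echo "echo END-COMMAND"     # flag: the remote is still responding
}

result=$(driver | sh)
# The exit status sits on the line between the two markers.
status=$(printf '%s\n' "$result" | sed -n '2p')
echo "captured exit status: $status"
```

If the wrapped command were `exit`, the channel would close before the trailing echoes ran, and the driver would keep resending `echo 'END-COMMAND'` until it either sees the marker or concludes the session is gone.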

Related

Bash script not exiting once a process is running

How should I modify my bash script logic so that it exits the while loop, and the script itself, once a process named custom_app is running on my local Ubuntu 18.04? I've tried using break and exit inside an if statement with no luck.
Once custom_app is running from, say, the 1st attempt and I then quit the app, run_custom_app.sh lingers in the background and resumes retrying a 2nd, 3rd, 4th and 5th time. It should do nothing at that point, since the app already ran successfully and the user quit it intentionally.
Below is run_custom_app.sh, which runs my custom app when triggered from a website button click.
Script logic
Check whether the custom_app process is already running. If so, don't run the commands in the while block; do nothing and exit run_custom_app.sh.
While custom_app process is NOT running, retry up to 5 times.
Once custom_app process is running, stop while loop and exit run_custom_app.sh as well.
If 5 run retries have been attempted but the custom_app process is still not running, display a message to the user.
#!/bin/sh
RETRYCOUNT=0
PROCESS_RUNNING=`ps cax | grep custom_app`
# Try to connect until process is running. Retry up to 5 times. Wait 10 secs between each retry.
while [ ! "$PROCESS_RUNNING" ] && [ "$RETRYCOUNT" -le 5 ]; do
RETRYCOUNT="`expr $RETRYCOUNT + 1`"
commands
sleep 10
PROCESS_RUNNING=`ps cax | grep custom_app`
if [ "$PROCESS_RUNNING" ]; then
break
fi
done
# Display an error message if not connected after 5 connection attempts
if [ ! "$PROCESS_RUNNING" ]; then
echo "Failed to connect, please try again in about 2 minutes" # I need to modify this later so it opens a Terminal window displaying the echo statement, not yet sure how.
fi
I have tested this code using VirtualBox as a replacement for your custom_app; the previous revision of this answer used an until loop with pgrep instead of ps. As suggested by DavidC.Rankin, pidof is more correct, but if you want to use ps then I suggest ps -C custom_app -o pid=
#!/bin/sh
retrycount=0
until my_app_pid=$(ps -C VirtualBox -o pid=); do ##: save the output of ps in a variable so we can check/test it for later.
echo commands ##: Just echoed the command here not sure which commands you are using/running.
if [ "$retrycount" -eq 4 ]; then ##: We started at 0 so the fifth count is 4
break ##: exit the loop
fi
sleep 10
retrycount=$((retrycount+1)) ##: increment by one using shell syntax without expr
done
if [ -n "$my_app_pid" ]; then ##: if $my_app_pid is not empty
echo "app is running"
else
echo "Failed to connect, please try again in about 2 minutes" >&2 ##: print the message to stderr
exit 1 ##: exit with a failure which is not 0
fi
The my_app_pid=$(ps -C VirtualBox -o pid=) assignment has a useful exit status, so we can use it directly as the loop condition.
Basically the until loop is just the opposite of the while loop.
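A minimal illustration of that equivalence: `until CMD` loops while CMD *fails*, so it behaves exactly like `while ! CMD`.

```shell
# Count to 3 both ways; the two loops are logically identical.
i=0
until [ "$i" -ge 3 ]; do    # loops as long as the test fails
    i=$((i + 1))
done

j=0
while ! [ "$j" -ge 3 ]; do  # same logic: while + negation
    j=$((j + 1))
done
echo "until: i=$i, while !: j=$j"
```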

bash run multiple files exit condition

I have a function like so
function generic_build_a_module(){
move_to_the_right_directory
echo 'copying the common packages'; ./build/build_sdist.sh;
echo 'installing the api common package'; ./build/cache_deps.sh;
}
I want to exit the function if ./build/build_sdist.sh doesn't finish successfully.
Here is the content of ./build/build_sdist.sh:
... multiple operations....
echo "installing all pip dependencies from $REQUIREMENTS_FILE_PATH and placing their tar.gz into $PACKAGES_DIR"
pip install --no-use-wheel -d $PACKAGES_DIR -f $PACKAGES_DIR -r $REQUIREMENTS_FILE_PATH $PACKAGES_DIR/*
In other words, how does the main function generic_build_a_module "know" whether ./build/build_sdist.sh finished successfully?
You can check the exit status of a command by using it as the condition of an if. ! inverts the exit status. Use return 1 to exit your function with exit status 1.
generic_build_a_module() {
move_to_the_right_directory
echo 'copying the common packages'
if ! ./build/build_sdist.sh; then
echo "Aborted due to error while executing build."
return 1
fi
echo 'installing the api common package'
./build/cache_deps.sh;
}
If you don't want to print an error message, the same function can be written more concisely using ||.
generic_build_a_module() {
move_to_the_right_directory
echo 'copying the common packages'
./build/build_sdist.sh || return 1
echo 'installing the api common package'
./build/cache_deps.sh;
}
Alternatively, you could use set -e. This makes your script exit immediately when any command exits with a non-zero status.
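A minimal sketch of set -e behavior, with a subshell standing in for the build script:

```shell
# Under set -e the subshell stops at the first failing command,
# so "after" is never printed.
out=$(sh -c '
    set -e
    echo "before"
    false          # non-zero exit: execution stops here
    echo "after"   # never reached
') || true         # ignore the subshell's failure status
echo "$out"
```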
You have to do the following:
Run both scripts in the background and store their respective process IDs in two variables.
Keep checking whether the scripts have completed, at an interval of, say, 1 to 2 seconds.
Kill any process that has not completed after a specific time, say 30 seconds.
Example:
sdist=$(ps -fu $USER|grep -v "grep"|grep "build_sdist.sh"| awk '{print $2}')
OR
sdist=$(ps -fu $USER|grep [b]uild_sdist.sh| awk '{print $2}')
deps=$(ps -fu $USER|grep -v "grep"|grep "cache_deps.sh"| awk '{print $2}')
Now use a while loop to check the status at a certain interval, or just check directly after 30 seconds, like below:
sleep 30
if [ -n "$sdist" ] && kill -0 "$sdist" 2>/dev/null; then
kill "$sdist"
fi
if [ -n "$deps" ] && kill -0 "$deps" 2>/dev/null; then
kill "$deps"
fi
(kill -0 sends no signal; it only tests whether the process is still alive.)
You can check the exit status of the last executed command via the $? variable. Exit code 0 is the typical indication that the command completed successfully.
Exit codes can be set by using exit followed by the code number within a script.
Here's a previous question regarding the use of $? with more detail, but to simply check this value try:
echo "test";echo $?
# Example
echo 'copying the common packages'; ./build/build_sdist.sh;
if [ $? -ne 0 ]; then
echo "The last command exited with a non-zero code"
fi
[ $? -ne 0 ] checks whether the last executed command's exit code is not equal to 0. Note that shell exit statuses are limited to the range 0–255, so a program that returns a negative code such as -1 shows up as 255 and is still caught by this check.
The caveat of this approach is that we have only checked the last command executed, not the ... multiple operations.... you mentioned, so we may have missed an error from a command that ran before pip install.
Depending on the situation you could set -e within a subsequent script, which instructs the shell to exit the script at the first instance a command exits with a non-zero status.
Another option would be to perform a similar check inside ./build/build_sdist.sh after each command. This gives you the most control over when and how the script finishes, and allows the script to set its own exit code.

Handling of error: program run from shell doesn't return

I have a shell script which updates different firmwares with different executables.
I need to know if one of the executables has hung and is not returning to the shell.
Can I introduce some kind of timeout?
Sample shell script below. How do I handle the case where the updatefw command hangs and does not return?
#!/bin/sh
updatefw -c config.cfg
if [ $? != 0 ]; then
echo "exec1 failed"
exit 1
fi
exit 0
I suggest using timeout from the GNU core utilities:
#!/bin/bash
timeout 30 updatefw -c config.cfg
if [[ $? == 124 ]]; then
echo "update failed"
exit 1
fi
When timeout kills updatefw, the return code is 124.
I assume here that the update will never take longer than 30 seconds.
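A sketch distinguishing a timeout from an ordinary failure, with `sleep 5` standing in for a hung updatefw (this assumes GNU timeout is available):

```shell
# Exit code 124 means timeout fired; any other non-zero code came
# from the command itself. Capture the status without tripping errexit.
timeout 1 sleep 5 && status=0 || status=$?
if [ "$status" -eq 124 ]; then
    echo "update timed out"
elif [ "$status" -ne 0 ]; then
    echo "update failed with $status"
else
    echo "update ok"
fi
```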

How to write process ID and get exit code of the next to last command

I want to run a command, write the process id to a file as soon as the command starts, and afterwards get the exit status of the command. That is, the process id has to be written immediately, but I want the exit status only once the initial command has finished.
The following statement runs the command and writes the process id instantly, but it won't wait for the command to finish, and I will only get the exit status of the echo command, not of the initial command.
The command in my case is rdiff-backup.
How do I need to modify the statement?
<command> & echo $! > "/pid_file"
RESULT=$?
if [ "$RESULT" -ne "0" ]; then
echo "Finished with errors"
fi
You need to wait on the background process to get its exit status:
_command_for_background_ & echo $! > pid_file
: ... do other things, if any ...
#
# it is better to grab $? on the same line to prevent any
# future modifications inadvertently breaking the strict sequence
#
wait $(< pid_file); child_status=$?
if [[ $child_status != 0 ]]; then
echo "Finished with errors"
fi

how to assign a value to variable and get the return value of output in single line

I have below line in my script
script_list=`ssh user@hostip ls -A /directory 2>/dev/null`
Is there a way to use that in an if condition, so that I can either get the script_list variable assigned or handle the failure scenario in an else branch?
Thanks in advance
You can simply check the automatic variable $? on the next line:
script_list=$( ssh ... )
rc=$?
if [[ $rc -ne 0 ]]; then
...something is wrong...
fi
This works because the exit code of ssh is the exit code of the command it ran remotely, provided ssh itself executed successfully. But usually you don't care which part of the chain failed; it's enough to know that some part (the local ssh or the remote command) failed.
No problem, just do it. An assignment is perfectly fine as a command (by command I mean the thing which can come after an if).
if asdf=$(echo test1; exit 1); then
echo "SUCCESS1: $asdf"
fi
if asdf=$(echo test0; exit 0); then
echo "SUCCESS0: $asdf"
fi
