BASH SSH Redirect on Fail - bash

I have a BASH script that logs in to multiple servers and runs a series of commands. Occasionally a server is unavailable (regular maintenance, etc.). How can I exit the SSH session cleanly without it producing an error like this:
bash-3.2$ ssh myserver3
Disconnecting: Bad packet length.

You may be looking for -q option:
ssh -q user@host
And you can check the return code $? afterwards.
From man ssh:
-q      Quiet mode. Causes all warning and diagnostic messages to be suppressed.
You can find more info about this topic in How to create a bash script to check the SSH connection?.
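As a hedged sketch of that suggestion (the server names, probe command, and timeout below are placeholders, not from the original post), a loop that probes each host quietly and skips the ones that are down might look like this:
#!/bin/bash
servers=(myserver1 myserver2 myserver3)
for host in "${servers[@]}"; do
    # -q suppresses ssh's warning/diagnostic messages; BatchMode avoids
    # hanging on a password prompt; ConnectTimeout bounds the wait for a
    # host that is down for maintenance.
    ssh -q -o BatchMode=yes -o ConnectTimeout=5 "$host" true
    rc=$?
    if [ "$rc" -ne 0 ]; then
        # ssh exits with 255 on connection/authentication failure,
        # otherwise with the remote command's exit status.
        echo "Skipping $host (ssh exited with $rc)" >&2
        continue
    fi
    # ...run the real series of commands against $host here...
done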

Related

SSH Remote Forwarding - Send to Background & Save Output as Variable

I'm working on a bash script to connect to a server via SSH that is running sish (https://github.com/antoniomika/sish). This will essentially create a port forward on the internet like ngrok using only SSH. Here is what happens during normal usage.
The command:
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10
The response:
Starting SSH Forwarding service for tcp:5900. Forwarded connections can be accessed via the following methods:
TCP: tun.domain.tld:43345
Now I need to send the ssh command to the background and figure out some way of capturing the response from the server as a variable so that I can grab the port that sish has assigned and send that somewhere (probably a webhook). I've tried a few things like using -f and piping to a file or named pipe and trying to cat it, but the issue is that the piping to the file never works and although the file is created, it's always empty. Any assistance would be greatly appreciated.
If you're running a single instance of sish (and of the tunnel you're attempting to define), you can actually have sish bind the specific port you want (in this case 5900).
You just set the --bind-random-ports=false flag on your server command in order to tell sish that it's okay to not use random ports.
If you don't want to do this (or you have multiple clients that will expose this same port), you can use a simple script like the following:
#!/bin/bash
ADDR=""
# Start the tunnel. Use a phony command to get ssh to clean up its output.
exec 3< <(ssh -R 5900:localhost:5900 tun.domain.tld foobar 2>&1 | grep --line-buffered TCP | awk '{print $2; system("")}')
# Read the buffered output, which is now just the address sish has given us.
for i in 1; do
    read <&3 line
    ADDR="$line"
done
# Here is where you'd call the webhook
echo "Do something with $ADDR"
# If you want the ssh command to keep running in the background, omit the
# following. It waits until the ssh command exits, or until this script dies,
# and then kills the ssh command.
PIDS=($(pgrep -P $(pgrep -P $$)))
function killssh() {
    kill ${PIDS[0]}
}
trap killssh EXIT
while kill -0 ${PIDS[0]} 2> /dev/null; do sleep 1; done
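Where the script says "Here is where you'd call the webhook", a minimal sketch could be a curl call; the endpoint URL and JSON field name below are made up for illustration:
# Hypothetical webhook endpoint; replace with your own.
WEBHOOK_URL="https://example.com/hooks/tunnel"
# Post the address sish assigned (e.g. tun.domain.tld:43345) as JSON.
curl -fsS -X POST -H 'Content-Type: application/json' \
    -d "{\"tunnel\":\"$ADDR\"}" "$WEBHOOK_URL"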
sish also has an admin api which you can scrape. The information on that is available here.
References: I build and maintain sish and use it myself (as well as a similar type of script).

Parsing SSH stream from batch

Using a batch script to run ssh, I'm finding that all the output is dumped into standard error... so my command:
ssh -i keyfile user@host "commands" 2> error.log
captures the remote server's prompt for a password if there are no matching keys in the local known_hosts...
This leaves me no way to capture the output for error processing or logging without leaving the user of my batch script stuck, not knowing what the blank prompt is.
My other thought is to do a simple ssh first to test the connection and establish the password prompt if it's needed, then move on to the command of interest. But I feel like if the first one passes, then the only thing left to error is my remote command.
I've tried
>CON 2> error.log
... seems to do the same thing.
Unfortunately there's no TEE command by default in Windows.
My best solution is to:
1) echo Enter password
2) ssh "params" 2>error.log
Suggestions?
If the remote system supports the syntax, you could do something like this:
ssh -i keyfile user@host "commands 2>&1" > output.log 2> error.log
This redirects the remote command's error output to its standard output. ssh's own standard output and standard error aren't affected. The 2>&1 part is Bourne shell syntax to redirect standard error (descriptor 2) to standard output (descriptor 1). It should work if the remote shell is sh, bash, or ksh.
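If the local side were a POSIX-ish shell rather than a Windows batch file (the file names below are just placeholders), the effect is that output.log collects everything the remote command printed, while error.log should only contain ssh's own messages (host key prompts, authentication errors, connection failures), which you can then test for:
ssh -i keyfile user@host "commands 2>&1" > output.log 2> error.log
if [ -s error.log ]; then
    # A non-empty error.log means ssh itself complained, not the remote command.
    echo "ssh-level problem, see error.log" >&2
fi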

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session with a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script is a command that will prompt for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but have never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX reported more than 2 MB where I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
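As a hedged sketch of the same base64 trick wrapped in a reusable function (the function name, host, and script path are invented for illustration; the remote login shell must be bash for the process substitution to work):
#!/bin/bash
# run_remote HOST SCRIPT [ARGS...]
# Encodes a local script with base64 so none of its contents need quoting or
# escaping on the ssh command line, then has the remote bash decode and run it.
run_remote() {
    local host=$1 script=$2
    shift 2
    local payload
    payload=$(base64 < "$script" | tr -d '\n')
    # The double quotes keep the local shell from performing the process
    # substitution; the remote shell (which must be bash) performs it instead.
    ssh -t "$host" /bin/bash "<(echo $payload | base64 --decode)" "$@"
}

# Example call (placeholders):
run_remote user@remote.example.com ./my_script data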

SSH timeout over remote login inspite of ServerAliveInterval / nohup

For reasons beyond my control, I have a setup like this:
Local: script1.sh, which calls script2.sh on the remote server on Wikitech over ssh, and then waits for script2.sh to finish
Remote: script2.sh, which executes an SQL query on the Wikidb and writes the result into a file "file.txt".
Inside my local script1.sh, I have:
nohup ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=1000 user@remote "path/to/script2.sh $ARG1 $ARG2 $FILENAME"
Inside my remote script2.sh, I have a query which takes a LONG, LONG time to execute. Think hours. I don't have much leeway to optimize the query much.
nohup sql enwiki "$QUERY" > $FILENAME
After the query in script2.sh executes, the output is redirected to "file.txt".
script1.sh, which was waiting for this file, then sftps the "file.txt" down to local, and sends it for processing downstream.
The whole thing keeps breaking down with a Write failed: Broken pipe error in "nohup.out" on the local shell.
I had put in the nohups and the ServerAliveInterval and ServerAliveCountMax to try and fix the problem, but that doesn't seem to have helped.
The remote files have nothing written to them, if they are created at all.
Please help?
I guess you should redirect the output of the following line too:
nohup ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=1000 user@remote "path/to/script2.sh $ARG1 $ARG2 $FILENAME" &>out.log
If that doesn't help, execute the script with bash -x so we can see exactly where the problem occurs.
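Concretely, the two suggestions might look like this inside script1.sh (host and paths are the question's own placeholders):
# Redirect the ssh invocation's stdout and stderr so the "Broken pipe" message
# and anything script2.sh prints end up in out.log rather than nohup.out.
nohup ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=1000 \
    user@remote "path/to/script2.sh $ARG1 $ARG2 $FILENAME" > out.log 2>&1

# For debugging, run the remote script under bash -x and keep the trace:
ssh user@remote "bash -x path/to/script2.sh $ARG1 $ARG2 $FILENAME" 2> trace.log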

starting remote script via ssh containing nohup

I want to start a script remotely via ssh like this:
ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'
The script does various things which work fine until it comes to a line with nohup:
nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:
Connection to remote.org closed.
What is the problem here?
Thanks for any help
Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:
nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &
so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors - for instance mutt may have other issues with the terminal not being around - or the quoting may need tweaking depending on the parameters - but give that a try for now...)
This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:
Redirect all I/O on the remote nohup'ed command
Tell your local SSH command to exit as soon as it's done starting the remote process(es).
Quoting the answer I already mentioned, in turn quoting wikipedia:
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
UPDATE
I've just had success with this pattern:
ssh -f user#host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
I managed to solve this for a use case where I needed to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:
https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs
https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21
http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
Compared with the more widely used but (IMHO) hackier "redirect to/from /dev/null", the result is the deceptively simple:
nohup script.sh >&- 2>&- <&-&
2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly more clear. ;) Most people might have a space preceding the final "background job" ampersand, but since it is not required (as the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
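Putting the pieces from these answers together, a minimal sketch for kicking off a long-running remote job and returning immediately (host, script name, and log file are placeholders):
# -f: ssh puts itself in the background once the remote command has started.
# The "( ( ... ) & )" double fork detaches the job from the remote shell, and
# redirecting all three streams lets the SSH session log out cleanly.
ssh -f user@host 'sh -c "( (nohup ./long_job.sh > job.log 2>&1 < /dev/null) & )"'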
