What's the best way to send multiple commands via ssh whilst running a script? - bash

I'm making a script that occasionally sends commands to another machine over ssh. At the moment, I'm just calling this one function, which opens a new shell for each command it sends, like so:
#!/bin/bash
port='22'
user='user'
host='hostname'
send_ssh() {
    ssh -oBatchMode=yes -oConnectTimeout=5 -p "$1" -tq "$2"@"$3" "$4" || exit 1
}
send_ssh "$port" "$user" "$host" true
# it worked so do some stuff locally blah blah
send_ssh "$port" "$user" "$host" anothercommand
# rest of script
I've looked at keeping a tunnel open in the background and/or using a control socket so I can speed things up without having to open a new connection for each command, but I can't work out whether it's worth doing that in this case.

Setting up a control socket is so easy that I would argue it is absolutely worth doing, even with only two calls to send_ssh.
send_ssh() {
    ssh -o ControlMaster=auto -o ControlPersist=10 -oBatchMode=yes -oConnectTimeout=5 -p "$1" -tq "$2"@"$3" "$4" || exit 1
}
ControlMaster=auto means that an existing connection will be used if available, otherwise one will be opened for you.
The setting ControlPersist=10 means that the connection to the remote host will remain open for 10 seconds after the client exits; you can adjust this as desired. You can set it to 0 or yes to keep it open permanently, in which case you'll want to make sure something like
ssh -O exit -p "$1" "$2@$3"
executes before your script exits to close the master connection.
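Putting it together, a minimal sketch of the whole pattern (same variables as the question; the ControlPath location and the EXIT trap are my additions, not part of the original script):

#!/bin/bash
port='22'; user='user'; host='hostname'

send_ssh() {
    ssh -o ControlMaster=auto -o ControlPath="$HOME/.ssh/cm_%r@%h:%p" -o ControlPersist=yes \
        -oBatchMode=yes -oConnectTimeout=5 -p "$1" -tq "$2"@"$3" "$4" || exit 1
}

# Close the master connection however the script exits.
trap 'ssh -o ControlPath="$HOME/.ssh/cm_%r@%h:%p" -O exit -p "$port" "$user@$host" 2>/dev/null' EXIT

send_ssh "$port" "$user" "$host" true
# it worked so do some stuff locally blah blah
send_ssh "$port" "$user" "$host" anothercommand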

Related

script to check if I can access multiple servers

I made this script to check if I can connect to a list of many servers:
for SERVER in $(cat servers.txt); do
ssh root@$SERVER && echo OK $SERVER || echo ERR $SERVER
done
The problem is that if it's the first time I'm connecting to a server, it asks the classic question "The authenticity of host 'x.x.x.x' can't be established... bla bla bla". Then I have to respond yes or no, which defeats the purpose of making it a script. Is there any way to bypass that so I can add it to the script?
Also, there are some servers where I don't have my keys installed but which offer password login. In that case the script waits until I try a password before continuing, so I was wondering if there is a way to improve this script so that if a server asks for a password it is marked ERR $SERVER and the script continues?
Thank you for your help.
Do you actually need to establish an SSH connection?
Or is simply opening a socket connection to the host on the SSH port enough for you to determine that the server is online?
servers.txt
hostA.example.com
hostB.example.com
hostC.example.com
port-probe.sh
#!/bin/bash
PORT=22
TIMEOUT=3
for SERVER in $(cat servers.txt); do
    # Open a socket and send a char
    echo "-" | nc -w $TIMEOUT $SERVER $PORT &> /dev/null
    # Check exit code of nc
    if [ $? -eq 0 ]; then
        echo "$SERVER is Available."
    else
        echo "$SERVER is Unavailable."
    fi
done
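Sample output (hypothetical results for the hosts in servers.txt above):

hostA.example.com is Available.
hostB.example.com is Unavailable.
hostC.example.com is Available.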
You can use the -o flag to set options in SSH:
for SERVER in $(cat servers.txt); do
    ssh -o StrictHostKeyChecking=no -o BatchMode=yes root@$SERVER exit && echo OK $SERVER || echo ERR $SERVER
done
Check out the manpage for ssh_config: man ssh_config for all of the options available with the -o flag.
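If your client is OpenSSH 7.6 or newer, StrictHostKeyChecking=accept-new is a safer alternative to no: unknown hosts are added to known_hosts automatically, but connections to hosts whose key has changed are still refused:

# accept-new: auto-add new hosts, still reject changed host keys (OpenSSH >= 7.6)
ssh -o StrictHostKeyChecking=accept-new -o BatchMode=yes root@$SERVER exit && echo OK $SERVER || echo ERR $SERVER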
If you have quite a few servers, you may want to connect to all of them simultaneously. This way the script waits at most 2 minutes (the default TCP connect timeout) even if several servers are unresponsive.
Connect to each server without allocating a terminal and execute an echo . command, redirecting the output into a named file. Issue these commands in a loop, asynchronously. Then wait until all commands complete, iterate over the log files, and check which ones contain a dot. Finally, report the servers whose log files do not contain a dot.
E.g.:
#!/bin/bash
servers="$@"
for server in $servers; do
    ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -Tn $server echo . >$server.log 2>$server.error.log &
done
wait # After 2 minutes all connection attempts have timed out.
for server in $servers; do
    [[ -s $server.log ]] || echo "Failed to connect to $server" >&2
done
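Usage would look something like this (check-all.sh is a hypothetical name for the script above; note it takes hosts as arguments rather than reading servers.txt, and the failure line is illustrative):

$ ./check-all.sh $(cat servers.txt)
Failed to connect to hostB.example.com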

Authenticating with user/password *once* for multiple commands? (session multiplexing)

I got this trick from Solaris documentation, for copying ssh public keys to remote hosts.
note: ssh-copy-id isn't available on Solaris
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically I wanted some_shell_command to be a script sent from the local host to execute on the remote host... a script would interact with the local keyboard (e.g. prompt user when the script was running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in a local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but after some research and tweaking I got it working, documented the results, and posted them as an answer to this question.
Without that solution, having to run multiple ssh, and scp commands by default entails being prompted for password multiple times, which is a major drag.
I can't expect all the users of a script I write in a multi-user environment to configure public key authorization, nor expect they will put up with having to enter a password over and over.
OpenSSH Session Multiplexing
This solution works even when using earlier versions of OpenSSH where the ControlPersist option isn't available. (Working bash example at the end of this answer.)
Note: OpenSSH 3.9 introduced Session Multiplexing over a "control master connection" (in 2005); however, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
ssh session multiplexing allows a script to authenticate once and perform multiple ssh transactions over that authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master' session, identified by the location of its named socket in the filesystem.
The following one-time-password approach is useful when a script has to perform multiple ssh operations and you want to avoid users having to password authenticate more than once. It is especially useful in cases where public key authentication isn't viable - e.g. not permitted, or at least not configured.
Most solutions I've seen entail using ControlPersist to tell ssh to keep the control master connection open, either indefinitely or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option (and upgrading them might not be feasible), and there doesn't seem to be much documentation or discussion of that limitation online.
Reading through old release notes, I discovered that ControlPersist arrived late to the ssh session multiplexing scene, implying there must have been a way to configure session multiplexing before the ControlPersist option existed.
When I initially tried to configure a persistent session using command-line options rather than config parameters, I ran into problems: either the ssh session terminated prematurely, closing the control connection's client sessions with it, or the connection was held open (keeping the ssh control master alive) but terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option          ssh flag           Purpose
---------------------   ----------------   -------------------------------------------
-o ControlMaster=yes    -M                 Establishes sharable connection
-o ControlPath=path     -S path            Specifies path of connection's named socket
-o ControlPersist=600                      Keep sharable connection open 10 min.
-o ControlPersist=yes                      Keep sharable connection open indefinitely
                        -N                 Don't create shell or run a command
                        -f                 Go into background after authenticating
                        -O exit            Closes persistent connection

ControlPersist form     Equivalent         Purpose
---------------------   ----------------   -------------------------------------------
-o ControlPersist=yes   ssh -Nf            Keep control connection open indefinitely
-o ControlPersist=300   ssh -f sleep 300   Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and the -M flag not at all, so for those commands the -o option form is always required.
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path to dir to store named socket>
ssh -fNMS $ctl user@host          # open control master connection
ssh -S $ctl …                     # example of ssh over connection
scp -o ControlPath=$ctl …         # example of scp over connection
sftp -o ControlPath=$ctl …        # example of sftp over connection
ssh -S $ctl -O exit               # close control master connection
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it quicker than reading it, and it is fascinating.
Note: If you lack access to remote host, just enter localhost at the "Host...?" prompt if you want to try this demo script
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, removes socket
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores the named socket for the ctrl conn here
ssh -fNMS $ctl $user@$host # Control Master: prompts for passwd then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it is a local script that you want copied over to the remote host. The simplest thing to do is just copy it over and run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, naming a (virtual) file that contains the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This also prints just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is: how do you get the output of your echo abc into a <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def
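As an aside, the same result can be had with cat's conventional - argument, which names standard input explicitly, so the pipe and the process substitution can be ordered however you like:

$ echo abc | cat - <(echo def)
abc
def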

Temporarily remove the ssh private key password in a shell script

I am required to deploy some files from server A to server B. I connect to server A via SSH and from there, connect via ssh to server B, using a private key stored on server A, the public key of which resides in server B's authorized_keys file. The connection from A to B happens within a Bash shell script that resides on server A.
This all works fine, nice and simple, until a security-conscious admin pointed out that my SSH private key stored on server A is not passphrase protected, so that anyone who might conceivably hack into my account on server A would also have access to server B, as well as C, D, E, F, and G. He has a point, I guess.
He suggests a complicated scenario under which I would add a passphrase, then modify my shell script to add a line at the beginning that calls
ssh-keygen -p -f {private key file}
answering the prompt for my old passphrase with the passphrase, and the (two) prompts for my new passphrase with just return, which gets rid of the passphrase; and then at the end, after my scp command, calling
ssh-keygen -p -f {private key file}
again, to put the passphrase back.
To which I say "Yecch!".
Well I can improve that a little by first reading the passphrase ONCE in the script with
read -s PASS_PHRASE
then supplying it as needed using the -N and -P parameters of ssh-keygen.
It's almost usable, but I hate interactive prompts in shell scripts. I'd like to get this down to one interactive prompt, but the part that's killing me is the part where I have to press enter twice to get rid of the passphrase.
This works from the command line:
ssh-keygen -p -f {private key file} -P {pass phrase} -N ''
but not from the shell script. There, it seems I must remove the -N parameter and accept the need to type two returns.
That is the best I am able to do. Can anyone improve this? Or is there a better way to handle this? I can't believe there isn't.
Best would be some way of handling this securely without ever having to type in the passphrase but that may be asking too much. I would settle for once per script invocation.
Here is a simplified version of the whole script in skeleton form:
#! /bin/sh
KEYFILE=$HOME/.ssh/id_dsa
PASSPHRASE=''
unset_passphrase() {
    # params
    # oldpassword keyfile
    echo "unset_key_password()"
    cmd="ssh-keygen -p -P $1 -N '' -f $2"
    echo "$cmd"
    $cmd
    echo
}
reset_passphrase() {
    # params
    # oldpassword keyfile
    echo "reset_key_password()"
    cmd="ssh-keygen -p -N '$1' -f $2"
    echo "$cmd"
    $cmd
    echo
}
echo "Enter passphrase:"
read -s PASSPHRASE
unset_passphrase $PASSPHRASE $KEYFILE
# do something with ssh
reset_passphrase $PASSPHRASE $KEYFILE
Check out ssh-agent. It caches the passphrase so you can use the keyfile during a certain period regardless of how many sessions you have.
Here are more details about ssh-agent.
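A minimal usage sketch (assuming the key from the question at ~/.ssh/id_dsa):

# Start an agent for this script and load the key;
# ssh-add prompts for the passphrase once.
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_dsa"

# ... run as many ssh/scp commands as needed; the agent supplies the key ...

# Kill the agent when the script is done (uses SSH_AGENT_PID set above).
ssh-agent -k >/dev/null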
OpenSSH supports what's called a "control master" mode, where you can connect once, leave it running in the background, and then have other ssh instances (including scp, rsync, git, etc.) reuse that existing connection. This makes it possible to only type the password once (when setting up the control master) but execute multiple ssh commands to the same destination.
Search for ControlMaster in man ssh_config for details.
Advantages over ssh-agent:
You don't have to remember to run ssh-agent
You don't have to generate an ssh public/private key pair, which is important if the script will be run by many users (most people don't understand ssh keys, so getting a large group of people to generate them is a tiring exercise)
Depending on how it is configured, ssh-agent might time out your keys part-way through the script; this approach won't
Only one TCP session is started, so it is much faster if you're connecting over and over again (e.g., copying many small files one at a time)
Example usage (forgive Stack Overflow's broken syntax highlighting):
REMOTE_HOST=server
log() { printf '%s\n' "$*"; }
error() { log "ERROR: $*" >&2; }
fatal() { error "$*"; exit 1; }
try() { "$@" || fatal "'$@' failed"; }

controlmaster_start() {
    CONTROLPATH=/tmp/$(basename "$0").$$.%l_%h_%p_%r
    # same as CONTROLPATH but with special characters (quotes,
    # spaces) escaped in a way that rsync understands
    CONTROLPATH_E=$(
        printf '%s\n' "${CONTROLPATH}" |
        sed -e 's/'\''/"'\''"/g' -e 's/"/'\''"'\''/g' -e 's/ /" "/g'
    )
    log "Starting ssh control master..."
    ssh -f -M -N -S "${CONTROLPATH}" "${REMOTE_HOST}" \
        || fatal "couldn't start ssh control master"
    # automatically close the control master at exit, even if
    # killed or interrupted with ctrl-c
    trap 'controlmaster_stop' 0
    trap 'exit 1' HUP INT QUIT TERM
}

controlmaster_stop() {
    log "Closing ssh control master..."
    ssh -O exit -S "${CONTROLPATH}" "${REMOTE_HOST}" >/dev/null \
        || fatal "couldn't close ssh control master"
}

controlmaster_start
try ssh -S "${CONTROLPATH}" "${REMOTE_HOST}" some_command
try scp -o ControlPath="${CONTROLPATH}" \
    some_file "${REMOTE_HOST}":some_path
try rsync -e "ssh -S ${CONTROLPATH_E}" -avz \
    some_dir "${REMOTE_HOST}":some_path
# the control master will automatically close once the script exits
I could point out an alternative solution for this. Instead of having the key stored on server A I would keep the key locally. Now I would create a local port forward to server B on port 4000.
ssh -L 4000:B:22 username@A
And then in a new terminal connect through the tunnel to server B.
ssh -p 4000 -i key_copied_from_a user_on_b@localhost
I don't know how feasible this is to you though.
Building up commands as a string is tricky, as you've discovered. Much more robust to use arrays:
cmd=( ssh-keygen -p -P "$1" -N "" -f "$2" )
echo "${cmd[@]}"
"${cmd[@]}"
Or even use the positional parameters
passphrase="$1"
keyfile="$2"
set -- ssh-keygen -p -P "$passphrase" -N "" -f "$keyfile"
echo "$@"
"$@"
The empty argument won't be echoed surrounded by quotes, but it's there
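To see why the string-building version fails, compare what each form actually passes (a quick printf test; the <...> brackets just delimit arguments):

$ cmd="ssh-keygen -p -N '' -f key"; printf '<%s> ' $cmd; echo
<ssh-keygen> <-p> <-N> <''> <-f> <key>
$ cmd=( ssh-keygen -p -N "" -f key ); printf '<%s> ' "${cmd[@]}"; echo
<ssh-keygen> <-p> <-N> <> <-f> <key>

In the first case the quotes are passed literally as a two-character argument; in the second, a genuinely empty argument is preserved.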

Bash script to set up a temporary SSH tunnel

On Cygwin, I want a Bash script to:
Create an SSH tunnel to a remote server.
Do some work locally that uses the tunnel.
Then shut down the tunnel.
The shutdown part has me perplexed.
Currently, I have a lame solution. In one shell I run the following to create a tunnel:
# Create the tunnel - this works! It runs forever, until the shell is quit.
ssh -nNT -L 50000:localhost:3306 jm@sampledomain.com
Then, in another shell window, I do my work:
# Do some MySQL stuff over local port 50000 (which goes to remote port 3306)
Finally, when I am done, I close the first shell window to kill the tunnel.
I'd like to do this all in one script like:
# Create tunnel
# Do work
# Kill tunnel
How do I keep track of the tunnel process, so I know which one to kill?
You can do this cleanly with an ssh 'control socket', which lets you talk to an already-running SSH process to get its pid, kill it, etc. Use the control socket (-M for master, -S for socket) as follows:
$ ssh -M -S my-ctrl-socket -fNT -L 50000:localhost:3306 jm@sampledomain.com
$ ssh -S my-ctrl-socket -O check jm@sampledomain.com
Master running (pid=3517)
$ ssh -S my-ctrl-socket -O exit jm@sampledomain.com
Exit request sent.
Note that my-ctrl-socket will be an actual file that is created.
I got this info from a very RTFM reply on the OpenSSH mailing list.
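Wrapped into a single script, the lifecycle looks like this (a sketch; the socket path is an arbitrary choice):

#!/bin/bash
SOCKET=/tmp/tunnel-ctrl-socket

# Open the tunnel in the background, with a control socket for later management.
ssh -M -S "$SOCKET" -fNT -L 50000:localhost:3306 jm@sampledomain.com || exit 1

# ... do the MySQL work over local port 50000 ...

# Shut the tunnel down cleanly.
ssh -S "$SOCKET" -O exit jm@sampledomain.com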
You can tell SSH to background itself with the -f option but you won't get the PID with $!.
Also instead of having your script sleep an arbitrary amount of time before you use the tunnel, you can use -o ExitOnForwardFailure=yes with -f and SSH will wait for all remote port forwards to be successfully established before placing itself in the background. You can grep the output of ps to get the PID. For example you can use
...
ssh -Cfo ExitOnForwardFailure=yes -N -L 9999:localhost:5900 $REMOTE_HOST
PID=$(pgrep -f 'N -L 9999:')
[ "$PID" ] || exit 1
...
and be pretty sure you're getting the desired PID
You can tell ssh to go into background with & and not create a shell on the other side (just open the tunnel) with a command line flag (I see you already did this with -N).
Save the PID with PID=$!
Do your stuff
kill $PID
EDIT: Fixed $? to $! and added the &
I prefer to launch a new shell for separate tasks and I often use the following command combination:
$ sudo bash; exit
or sometimes:
$ : > sensitive-temporary-data.txt; bash; rm -f sensitive-temporary-data.txt; exit
These commands create a nested shell where I can do all my work; when I'm finished I hit CTRL-D and the parent shell cleans up and exits as well. You could easily throw bash; into your ssh tunnel script just before the kill part so that when you log out of the nested shell your tunnel will be closed:
#!/bin/bash
ssh -nNT ... &
PID=$!
bash
kill $PID
You could launch ssh with a & at the end to put it in the background and grab its id as you do. Then you just have to kill that id when you're done.
A simple bash script to solve your problem.
# Download then put in $PATH
wget https://raw.githubusercontent.com/ijortengab/bash/master/commands/command-keep-alive.sh
mv command-keep-alive.sh -t /usr/local/bin
# open tunnel, put script in background
command-keep-alive.sh "ssh -fN -o ServerAliveInterval=10 -o ServerAliveCountMax=2 -L 33306:localhost:3306 myserver" /tmp/my.pid &
# do something
mysql --port 33306
# close tunnel
kill $(cat /tmp/my.pid)
https://github.com/aronpc/remina-ssh-tunnel
#!/usr/bin/env sh
scriptname="$(basename $0)"
actionname="$1"
tunnelname=$(echo "$2" | iconv -t ascii//TRANSLIT | sed -E 's/[^a-zA-Z0-9-]+/-/g' | sed -E 's/^-+|-+$//g' | tr A-Z a-z)
remotedata="$3"
tunnelssh="$4"
if [ $# -lt 4 ]
then
    echo "Usage: $scriptname start|stop TUNNELNAME LOCAL_PORT:RDP_IP:RDP_PORT SSH_NODE_IP"
    exit
fi
case "$actionname" in
    start)
        echo "Starting tunnel to $tunnelssh"
        ssh -M -S ~/.ssh/sockets/$tunnelname.control -fnNT -L $remotedata $tunnelssh
        ssh -S ~/.ssh/sockets/$tunnelname.control -O check $tunnelssh
        ;;
    stop)
        echo "Stopping tunnel to $tunnelssh"
        ssh -S ~/.ssh/sockets/$tunnelname.control -O exit $tunnelssh
        ;;
    *)
        echo "Did not understand your argument, please use start|stop"
        ;;
esac
usage example
Edit or create new remmina server connection
schema
~/.ssh/rdp-tunnel.sh ACTION TUNNELNAME LOCAL_PORT:REMOTE_SERVER:REMOTE_PORT TUNNEL_PROXY
name            description
ACTION          start|stop
TUNNELNAME      "string identifying the socket", slugified to create the socket file ~/.ssh/sockets/string-identify-socket.control
LOCAL_PORT      the port that will be exposed locally; if we use the same port for two connections it will fail
REMOTE_SERVER   the IP of the server you want to reach, as seen from the proxy server
REMOTE_PORT     the service port that runs on that server
TUNNEL_PROXY    the connection you are going to use as a proxy; it needs to be in your ~/.ssh/config, preferably using access keys

I use the combination (%g-%p) of the remmina group name and connection name as my TUNNELNAME (this needs to be unique, since it becomes the socket name).
pre-command
~/.ssh/rdp-tunnel.sh start "%g-%p" 63394:192.168.8.176:3389 tunnel-name-1
post-command
~/.ssh/rdp-tunnel.sh stop "%g-%p" 63394:192.168.8.176:3389 tunnel-name-1
You can and should use this script to access anything; I use it constantly to access systems and services that have no public IP, going through 1, 2, 3, 4, 5 or more ssh proxies.
See more:
ssh config
ssh Match
ssh jump hosts
sshuttle python ssh
Refs:
https://remmina.org/remmina-rdp-ssh-tunnel/
https://kgibran.wordpress.com/2019/03/13/remmina-rdp-ssh-tunnel-with-pre-and-post-scripts/
Bash script to set up a temporary SSH tunnel
https://gist.github.com/oneohthree/f528c7ae1e701ad990e6

How do I kill a backgrounded/detached ssh session?

I am using the program synergy together with an ssh tunnel.
It works; I just have to open a console and type these two commands:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
Because I'm lazy I made a Bash script which is run with one mouse click on an icon:
#!/bin/bash
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
The Bash script above works as well, but now I also want to kill synergy and the ssh tunnel with one mouse click, so I have to save the PIDs of synergy and ssh into a file to kill them later:
#!/bin/bash
mkdir -p /tmp/synergyPIDs || exit 1
rm -f /tmp/synergyPIDs/ssh || exit 1
rm -f /tmp/synergyPIDs/synergy || exit 1
[ ! -e /tmp/synergyPIDs/ssh ] || exit 1
[ ! -e /tmp/synergyPIDs/synergy ] || exit 1
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
echo $! > /tmp/synergyPIDs/ssh
synergyc localhost
echo $! > /tmp/synergyPIDs/synergy
But the files of this script are empty.
How do I get the PIDs of ssh and synergy?
(I try to avoid ps aux | grep ... | awk ... | sed ... combinations, there has to be an easier way.)
With all due respect to the users of pgrep, pkill, ps | awk, etc, there is a much better way.
Consider that if you rely on ps -aux | grep ... to find a process you run the risk of a collision. You may have a use case where that is unlikely, but as a general rule, it's not the way to go.
SSH provides a mechanism for managing and controlling background processes. But like so many SSH things, it's an "advanced" feature, and many people (it seems, from the other answers here) are unaware of its existence.
In my own use case, I have a workstation at home on which I want to leave a tunnel that connects to an HTTP proxy on the internal network at my office, and another one that gives me quick access to management interfaces on co-located servers. This is how you might create the basic tunnels, initiated from home:
$ ssh -fNT -L8888:proxyhost:8888 -R22222:localhost:22 officefirewall
$ ssh -fNT -L4431:www1:443 -L4432:www2:443 colocatedserver
These cause ssh to background itself, leaving the tunnels open. But if a tunnel goes away, I'm stuck, and if I want to find it, I have to parse my process list and hope I've got the "right" ssh (in case I've accidentally launched multiple ones that look similar).
Instead, if I want to manage multiple connections, I use SSH's ControlMaster config option, along with the -O command-line option for control. For example, with the following in my ~/.ssh/config file,
host officefirewall colocatedserver
    ControlMaster auto
    ControlPath ~/.ssh/cm_sockets/%r@%h:%p
the ssh commands above, when run, will leave spoor in ~/.ssh/cm_sockets/ which can then provide access for control, for example:
$ ssh -O check officefirewall
Master running (pid=23980)
$ ssh -O exit officefirewall
Exit request sent.
$ ssh -O check officefirewall
Control socket connect(/home/ghoti/.ssh/cm_socket/ghoti@192.0.2.5:22): No such file or directory
And at this point, the tunnel (and controlling SSH session) is gone, without the need to use a hammer (kill, killall, pkill, etc).
Bringing this back to your use-case...
You're establishing the tunnel through which you want syngergyc to talk to syngergys on TCP port 12345. For that, I'd do something like the following.
Add an entry to your ~/.ssh/config file:
Host otherHosttunnel
    HostName otherHost
    User otherUser
    LocalForward 12345 otherHost:12345
    RequestTTY no
    ExitOnForwardFailure yes
    ControlMaster auto
    ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Note that the command line -L option is handled with the LocalForward keyword, and the Control{Master,Path} lines are included to make sure you have control after the tunnel is established.
Then, you might modify your bash script to something like this:
#!/bin/bash
if ! ssh -f -N otherHosttunnel; then
    echo "ERROR: couldn't start tunnel." >&2
    exit 1
else
    synergyc localhost
    ssh -O exit otherHosttunnel
fi
The -f option backgrounds the tunnel, leaving a socket on your ControlPath to close the tunnel later. If the ssh fails (which it might due to a network error or ExitOnForwardFailure), there's no need to exit the tunnel, but if it did not fail (else), synergyc is launched and then the tunnel is closed after it exits.
You might also want to look in to whether the SSH option LocalCommand could be used to launch synergyc from right within your ssh config file.
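For completeness, that would look roughly like this in ~/.ssh/config (a sketch; LocalCommand runs on the local machine after the connection is established, and it is ignored unless PermitLocalCommand is enabled):

Host otherHosttunnel
    # ... settings as above ...
    PermitLocalCommand yes
    LocalCommand synergyc localhost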
Quick summary: Will not work.
My first idea is that you need to start the processes in the background to get their PIDs with $!.
A pattern like
some_program &
some_pid=$!
wait $some_pid
might do what you need... except that then ssh won't be in the foreground to ask for passphrases any more.
Well then, you might need something different after all. ssh -f probably spawns a new process your shell can never know from invoking it anyway. Ideally, ssh itself would offer a way to write its PID into some file.
Just came across this thread and wanted to mention the pidof Linux utility:
$ pidof init
1
You can use lsof to show the pid of the process listening to port 12345 on localhost:
lsof -t -i @localhost:12345 -sTCP:listen
Examples:
PID=$(lsof -t -i @localhost:12345 -sTCP:listen)
lsof -t -i @localhost:12345 -sTCP:listen >/dev/null && echo "Port in use"
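So shutting down the tunnel becomes a one-liner (-t prints bare PIDs, which kill accepts directly):
kill $(lsof -t -i @localhost:12345 -sTCP:listen)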
Well, I don't want to add an & at the end of the commands, as the connection will die if the console window is closed... so I ended up with a ps-grep-awk-sed combo:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@otherHost
echo `ps aux | grep -F 'ssh -f -N -L localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/ssh
synergyc localhost
echo `ps aux | grep -F 'synergyc localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/synergy
(You could integrate grep into awk, but I'm too lazy now.)
You can drop the -f, which makes it run it in background, then run it with eval and force it to the background yourself.
You can then grab the pid. Make sure to put the & within the eval statement.
eval "ssh -N -L localhost:12345:otherHost:12345 otherUser@OtherHost &"
tunnelpid=$!
Another option is to use pgrep to find the PID of the newest ssh process
ssh -fNTL 8073:localhost:873 otherUser#OtherHost
tunnelPID=$(pgrep -n -x ssh)
synergyc localhost
kill -HUP $tunnelPID
This is more of a special case for synergyc (and most other programs that try to daemonize themselves). Using $! would work, except that synergyc does a clone() syscall during execution that will give it a new PID other than the one that bash thought it has. If you want to get around this so that you can use $!, then you can tell synergyc to stay in the forground and then background it.
synergyc -f -n mydesktop remoteip &
synergypid=$!
synergyc also does a few other things like autorestart that you may want to turn off if you are trying to manage it.
Based on the very good answer of @ghoti, here is a simpler script (for testing) utilising the SSH control sockets without the need for extra configuration:
#!/bin/bash
if ssh -fN -MS /tmp/mysocket -L localhost:12345:otherHost:12345 otherUser@otherHost; then
    synergyc localhost
    ssh -S /tmp/mysocket -O exit otherHost
fi
synergyc will only be started if the tunnel has been established successfully, and the tunnel will be closed as soon as synergyc returns.
The solution lacks proper error reporting, though.
You could look out for the ssh proceess that is bound to your local port, using this line:
netstat -tpln | grep 127\.0\.0\.1:12345 | awk '{print $7}' | sed 's#/.*##'
It returns the PID of the process using port 12345/TCP on localhost. So you don't have to filter all ssh results from ps.
If you just need to check, if that port is bound, use:
netstat -tln | grep 127\.0\.0\.1:12345 >/dev/null 2>&1
Returns 1 if none bound or 0 if someone is listening to this port.
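Putting the two together, killing whatever owns the local forward port becomes (Linux netstat output assumed; -p only shows PIDs for your own processes unless run as root):

kill "$(netstat -tpln 2>/dev/null | grep 127\.0\.0\.1:12345 | awk '{print $7}' | sed 's#/.*##')"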
There are many interesting answers here, but nobody mentioned that the manpage of SSH does describe this exact case! (see TCP FORWARDING section). And the solution they offer is much simpler:
ssh -fL 12345:localhost:12345 user@remoteserver sleep 10
synergyc localhost
Now in details:
First we start SSH with a tunnel; thanks to -f it will initiate the connection and only then fork to the background (unlike solutions with ssh ... &; pid=$!, where ssh is sent to the background and the next command is executed before the tunnel is created). On the remote machine it will run sleep 10, which waits 10 seconds and then ends.
Within 10 seconds, we should start our desired command, in this case synergyc localhost. It will connect to the tunnel and SSH will then know that the tunnel is in use.
After 10 seconds pass, sleep 10 command will finish. But the tunnel is still in use by synergyc, so SSH will not close the underlying connection until the tunnel is released (i.e. until synergyc closes socket).
When synergyc is closed, it will release the tunnel, and SSH in turn will terminate itself, closing a connection.
The only downside of this approach is that if the program we use closes and re-opens its connection for some reason, then SSH will close the tunnel right after the connection is closed, and the program won't be able to reconnect. If this is an issue, you should use the approach described in @doak's answer, which uses a control socket to properly terminate the SSH connection and -f to make sure the tunnel is created before SSH forks to the background.
