I have a script that fires remote commands on several different machines over SSH. The script goes something like:
for server in list; do
echo "output from $server"
ssh to server execute some command
done
The problem with this is obviously the time: for each server it needs to establish the SSH connection, fire the command, wait for the answer, and print it. What I would like is a script that tries to establish all the connections at once and prints "output from $server" plus the command's output as soon as it arrives, so not necessarily in list order.
I've been googling this for a while but haven't found an answer. I cannot cancel the SSH session right after the command runs, as one thread suggested, because I need the output; and I cannot use GNU parallel as suggested in other threads. I also cannot use any other tool; I cannot bring/install anything onto this machine, and the only usable tool is GNU bash, version 4.1.2(1)-release.
Another question: how are SSH sessions like this limited? If I simply paste 5 or more lines of "ssh connect, run some command", it doesn't do anything, or executes only the first one from the list (it works if I paste 3-4 lines). Thank you.
Have you tried this?
for server in list; do
ssh user@server "command" &
done
wait
echo finished
Update: Start subshells:
for server in list; do
(echo "output from $server"; ssh user#server "command"; echo End $server) &
done
wait
echo All subshells finished
There are several parallel SSH tools that can handle that for you:
http://code.google.com/p/pdsh/
http://sourceforge.net/projects/clusterssh/
http://code.google.com/p/sshpt/
http://code.google.com/p/parallel-ssh/
You might also be interested in configuration deployment solutions such as Chef, Puppet, Ansible, Fabric, etc. (see this summary).
A third option is to use a terminal broadcaster such as pconsole.
If you can only use GNU commands, you can write your script like this:
for server in $servers ; do
( { echo "output from $server" ; ssh user#$server "command" ; } | \
sed -e "s/^/$server:/" ) &
done
wait
and then sort the output to reconcile the lines.
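If you capture the loop's combined output in a file, a stable sort on the prefix regroups each server's lines afterwards. A minimal sketch, assuming the output was redirected to a placeholder file named all_output.txt:
# Regroup interleaved lines by their "server:" prefix.
# -s (stable) keeps each server's own lines in their original relative order.
sort -s -t: -k1,1 all_output.txt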
I started with the shell hacks mentioned in this thread, then proceeded to something somewhat more robust: https://github.com/bearstech/pussh
It's my daily workhorse, and I basically run anything against 250 servers in 20 seconds (it's actually rate limited, otherwise the connection rate kills my ssh-agent). I've been using this for years.
See for yourself from the man page (clone it and run 'man ./pussh.1'): https://github.com/bearstech/pussh/blob/master/pussh.1
Examples
Show all servers' rootfs usage in descending order:
pussh -f servers df -h / |grep /dev |sort -rn -k5
Count the number of processors in a cluster:
pussh -f servers grep ^processor /proc/cpuinfo |wc -l
Show the processor models, sorted by occurrence:
pussh -f servers sed -ne "s/^model name.*: //p" /proc/cpuinfo |sort |uniq -c
Fetch a list of installed packages in one file per host:
pussh -f servers -o packages-for-%h dpkg --get-selections
Mass copy a file tree (broadcast):
tar czf files.tar.gz ... && pussh -f servers -i files.tar.gz tar -xzC /to/dest
Mass copy several remote file trees (gather):
pussh -f servers -o '|(mkdir -p %h && tar -xzC %h)' tar -czC /src/path .
Note that the pussh -u feature (upload and execute) was the main reason why I programmed this; no other tool seemed to be able to do it. I still wonder if that's the case today.
You may like the parallel-ssh project with the pssh command:
pssh -h servers.txt -l user command
It will output one line per server when the command is successfully executed. With the -P option you can also see the output of the command.
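For instance, a run that streams each host's output as it arrives might look roughly like this (servers.txt and the uptime command are placeholders):
pssh -h servers.txt -l user -P uptime    # -P prints each host's output as it arrives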
Related
I have a perl script that pulls hostnames from a database and ssh's to them en masse with bash scripts piped to those ssh sessions for monitoring purposes.
These hosts overwhelmingly have root keys and in addition to monitoring I'm trying to justify further use of root keys to better facilitate this monitoring method.
Here is an example (the context is a while loop in the perl script):
system("cat /usr/local/bin/filesystem_scan.sh | ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root\#$name '/bin/bash; uname -a &> /dev/null' ");
There is another bash script that runs the perl one and gets result codes from the script that was piped and edits them into a report.
#!/bin/bash
perl /usr/local/bin/readonly-fs-scan.pl | grep -v " 0$\| 127$" | column -t > report.txt
The problem is that the script takes forever to run (over 500 hosts) because of approximately 50 hosts that don't have root keys. As a stop-gap until I get approval for keys on those servers, I'm trying to find a way to work expect into this process so that a return is sent any time a password is requested (otherwise the script stalls for over a minute waiting for input on each of those 50 hosts, which turns a script that should take about 5 minutes into one that takes about an hour). I've been able to do this in a simple bash script that ssh's to one server, like so:
#!/bin/bash
expect -c "set timeout 5
spawn ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$name
expect \"*password:\"
send \"\r\""
However, I'm struggling to grasp how I can insert this into the perl script.
This is a work assignment, so I'm limited in the large-scope changes I can make (changing the language, installing extra utilities, etc.).
I have 4 shell commands I need to run and they do not depend on each other.
I have 4 slave machines. So, I want to run one of the 4 commands on each of the 4 machines, and then I want to wait until all 4 of them are finished.
How do I distribute this processing? This is what I tried:
$1 is a file listing the IP addresses of the slave machines.
for host in $(cat $1)
do
echo $host
# ssh into each machine and launch command
ssh username@$host <command>;
done
But this seems as if it is waiting for the command to finish before moving on to the next host and launching the next command.
How do I accomplish this distributed processing, given that the commands don't depend on each other?
I would use GNU Parallel like this - running hostname in parallel on each of 4 servers:
parallel -j 4 --nonall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 hostname
If you need to pass parameters, use --onall and put arguments after :::
parallel -j 4 --onall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 echo ::: hello
Add --tag if you want the output lines tagged by the hostname/IP.
Add -k if you want to keep the output in order.
Add : to the server list to run on local host too.
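Putting those options together, a combined invocation might look like this (the IPs are placeholders):
# Tagged, order-preserved run of 'hostname' on the local host plus three remotes
parallel -j 4 --nonall --tag -k -S :,192.168.0.1,192.168.0.2,192.168.0.3 hostname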
If you aren't concerned about how many commands run concurrently, just put each one in the background with &, then wait on them as a group.
while IFS= read -r host; do
ssh username@$host <command> &
done < "$1"
wait
Note the use of a while loop instead of a for loop; see Bash FAQ 001.
The ssh part of your script needs to be like:
$ ssh -f user@host "sh -c 'sleep 30 ; nohup ls > foo 2>&1 &'"
This one sleeps for 30 seconds and writes the output of ls to the file foo; 30 seconds is enough time for you to go and see it for yourself. Just build your loop around that.
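A rough sketch of such a loop, under the assumption that each remote command writes its own output file (the hosts, user, and some_command are placeholders):
for host in host1 host2 host3; do
    # -f backgrounds ssh after authentication; the remote command keeps
    # running under nohup with its output captured on the remote side
    ssh -f user@$host "sh -c 'nohup some_command > /tmp/some_command.out 2>&1 &'"
done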
I got this trick from Solaris documentation, for copying ssh public keys to remote hosts.
note: ssh-copy-id isn't available on Solaris
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically, I wanted some_shell_command to be a script sent from the local host to execute on the remote host... a script that would interact with the local keyboard (e.g. prompt the user while the script was running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in a local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but after some research and tweaking I got it working, documented the results, and posted them as an answer to this question.
Without that solution, having to run multiple ssh and scp commands by default entails being prompted for a password multiple times, which is a major drag.
I can't expect all the users of a script I write in a multi-user environment to configure public key authorization, nor expect them to put up with having to enter a password over and over.
OpenSSH Session Multiplexing
This solution works even when using earlier versions of OpenSSH where the ControlPersist option isn't available. (A working bash example is at the end of this answer.)
Note: OpenSSH 3.9 introduced session multiplexing over a "control master connection" in 2005; however, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
SSH session multiplexing allows a script to authenticate once and do multiple ssh transactions over that authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master session', which is referred to by the location of its named socket in the filesystem.
The following authenticate-only-once approach is useful when running a script that has to perform multiple ssh operations and you want to avoid users having to authenticate with a password more than once; it is especially useful where public key authentication isn't viable, e.g. not permitted, or at least not configured.
Most solutions I've seen entail using ControlPersist to tell ssh to keep the control master connection open, either indefinitely, or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option (and upgrading them might not be feasible). There also doesn't seem to be much documentation or discussion about that limitation online.
Reading through old release docs, I discovered that ControlPersist arrived late to the ssh session multiplexing scene, implying there must have been a way to configure session multiplexing without relying on the ControlPersist option before it existed.
Initially, trying to configure persistent sessions via command-line options rather than the config parameter, I ran into the problem of the ssh session either terminating prematurely, closing the control connection's client sessions with it, or being held open (keeping the ssh control master alive) while terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option ssh flag Purpose
------------------- --------- -----------------------------
-o ControlMaster=yes -M Establishes sharable connection
-o ControlPath=path -S path Specifies path of connection's named socket
-o ControlPersist=600 Keep shareable connection open 10 min.
-o ControlPersist=yes Keep shareable connection open indefinitely
-N Don't create shell or run a command
-f Go into background after authenticating
-O exit Closes persistent connection
ControlPersist form Equivalent Purpose
------------------- ---------------- -------------------------
-o ControlPersist=yes ssh -Nf Keep control connection open indefinitely
-o ControlPersist=300 ssh -f sleep 300 Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and the -M flag not at all, so for those commands the -o option form is always required.
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path to dir to store named socket>
ssh -fNMS $ctl user@host # open control master connection
ssh -S $ctl … # example of ssh over connection
scp -o ControlPath=$ctl … # example of scp over connection
sftp -o ControlPath=$ctl … # example of sftp over connection
ssh -S $ctl -O exit # close control master connection
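Between establishing and closing the connection, you can also verify that the control master is still alive; -O check reports its status (same incomplete-sketch caveat as above):
ssh -S $ctl -O check user@host # prints "Master running (pid=...)" if the control master is alive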
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it quicker than reading it, and it is fascinating.
Note: If you lack access to a remote host, just enter localhost at the "Host...?" prompt to try this demo script.
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, removes socket
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores named socket ctrl conn here
ssh -fNMS $ctl $user@$host # Control Master: Prompts passwd then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it is a local script that you want copied over to the remote host. The simplest thing to do is to just copy it over and run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, that names a (virtual) file containing the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This also prints just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read from its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is, how do you get the output of your echo abc into the <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def
Is there an app that can, given a command and options, execute for the lifetime of the process and ping a given URL indefinitely on a specific interval?
If not, could this be done on the terminal as a bash script? I'm almost positive it's doable through terminal, but am not fluent enough to whip it up within a few minutes.
I found this post that has a portion of the solution, minus the ping bits. On Linux, ping runs indefinitely until it's actively killed. How would I kill it from bash after, say, two pings?
General Script
As others have suggested, use this in pseudo code:
execute command and save PID
while PID is active, ping and sleep
exit
This results in following script:
#!/bin/bash
# execute command, use '&' at the end to run in background
<command here> &
# store pid
pid=$!
while ps | awk '{ print $1 }' | grep -q "^$pid$"; do
ping <address here>
sleep <timeout here in seconds>
done
Note that the stuff inside <> should be replaced with actual values, be it a command or an IP address.
Break from Loop
To answer your second question: that depends on the loop. In the loop above, simply track the loop count using a variable: add a ((count++)) inside the loop, and then do [[ $count -eq 2 ]] && break. Now the loop will break when we're pinging for the second time.
Something like this:
...
while ...; do
...
((count++))
[[ $count -eq 2 ]] && break
done
ping twice
To ping only a few times, use the -c option:
ping -c <count here> <address here>
Example:
ping -c 2 www.google.com
Use man ping for more information.
Better practice
As hek2mgl noted in a comment below, the current solution may not suffice to solve the real problem: it answers the question, but the core problem persists. To address that, a cron job is suggested in which a simple wget or curl HTTP request is sent periodically. This results in a fairly easy script containing but one line:
#!/bin/bash
curl <address here> > /dev/null 2>&1
This script can be added as a cron job. Leave a comment if you would like more information on how to set up such a scheduled job. Special thanks to hek2mgl for analyzing the problem and suggesting a sound solution.
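For reference, a crontab entry for such a job might look roughly like this (the script path and the five-minute interval are placeholders; add it with crontab -e):
*/5 * * * * /path/to/curl_script.sh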
Say you want to start a download with wget and, while it is running, ping the URL:
wget http://example.com/large_file.tgz & #put in background
pid=$!
while kill -s 0 $pid #test if process is running
do
ping -c 1 127.0.0.1 # ping your address once
sleep 5 #and sleep for 5 seconds
done
A nice little generic utility for this is Daemonize. Its relevant options:
Usage: daemonize [OPTIONS] path [arg] ...
-c <dir> # Set daemon's working directory to <dir>.
-E var=value # Pass environment setting to daemon. May appear multiple times.
-p <pidfile> # Save PID to <pidfile>.
-u <user> # Run daemon as user <user>. Requires invocation as root.
-l <lockfile> # Single-instance checking using lockfile <lockfile>.
Here's an example of starting/killing in use: flickd
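As a hedged sketch, wrapping a ping/curl script with daemonize might look something like this (all paths here are hypothetical):
# Start the script detached, record its PID, and guard against double starts
daemonize -p /var/run/pinger.pid -l /var/run/pinger.lock -c /tmp /usr/local/bin/pinger.sh
# Stop it later using the recorded PID
kill "$(cat /var/run/pinger.pid)"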
To get more sophisticated, you could turn your ping script into a systemd service, now standard on many recent Linuxes.
I am using the program synergy together with an SSH tunnel.
It works; I just have to open a console and type these two commands:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
Because I'm lazy, I made a bash script which is run with one mouse click on an icon:
#!/bin/bash
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
The bash script above works as well, but now I also want to kill synergy and the ssh tunnel with one mouse click, so I have to save the PIDs of synergy and ssh to a file to kill them later:
#!/bin/bash
mkdir -p /tmp/synergyPIDs || exit 1
rm -f /tmp/synergyPIDs/ssh || exit 1
rm -f /tmp/synergyPIDs/synergy || exit 1
[ ! -e /tmp/synergyPIDs/ssh ] || exit 1
[ ! -e /tmp/synergyPIDs/synergy ] || exit 1
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
echo $! > /tmp/synergyPIDs/ssh
synergyc localhost
echo $! > /tmp/synergyPIDs/synergy
But the files created by this script are empty.
How do I get the PIDs of ssh and synergy?
(I'd like to avoid ps aux | grep ... | awk ... | sed ... combinations; there has to be an easier way.)
With all due respect to the users of pgrep, pkill, ps | awk, etc, there is a much better way.
Consider that if you rely on ps aux | grep ... to find a process, you run the risk of a collision. You may have a use case where that is unlikely, but as a general rule, it's not the way to go.
SSH provides a mechanism for managing and controlling background processes. But like so many SSH things, it's an "advanced" feature, and many people (it seems, from the other answers here) are unaware of its existence.
In my own use case, I have a workstation at home on which I want to leave a tunnel that connects to an HTTP proxy on the internal network at my office, and another one that gives me quick access to management interfaces on co-located servers. This is how you might create the basic tunnels, initiated from home:
$ ssh -fNT -L8888:proxyhost:8888 -R22222:localhost:22 officefirewall
$ ssh -fNT -L4431:www1:443 -L4432:www2:443 colocatedserver
These cause ssh to background itself, leaving the tunnels open. But if a tunnel goes away, I'm stuck; and if I want to find it, I have to parse my process list and hope I've got the "right" ssh (in case I've accidentally launched multiple ones that look similar).
Instead, if I want to manage multiple connections, I use SSH's ControlMaster config option, along with the -O command-line option for control. For example, with the following in my ~/.ssh/config file,
host officefirewall colocatedserver
ControlMaster auto
ControlPath ~/.ssh/cm_sockets/%r@%h:%p
the ssh commands above, when run, will leave their control sockets in ~/.ssh/cm_sockets/, which can then provide access for control, for example:
$ ssh -O check officefirewall
Master running (pid=23980)
$ ssh -O exit officefirewall
Exit request sent.
$ ssh -O check officefirewall
Control socket connect(/home/ghoti/.ssh/cm_socket/ghoti@192.0.2.5:22): No such file or directory
And at this point, the tunnel (and controlling SSH session) is gone, without the need to use a hammer (kill, killall, pkill, etc).
Bringing this back to your use-case...
You're establishing the tunnel through which you want synergyc to talk to synergys on TCP port 12345. For that, I'd do something like the following.
Add an entry to your ~/.ssh/config file:
Host otherHosttunnel
HostName otherHost
User otherUser
LocalForward 12345 otherHost:12345
RequestTTY no
ExitOnForwardFailure yes
ControlMaster auto
ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Note that the command line -L option is handled with the LocalForward keyword, and the Control{Master,Path} lines are included to make sure you have control after the tunnel is established.
Then, you might modify your bash script to something like this:
#!/bin/bash
if ! ssh -f -N otherHosttunnel; then
echo "ERROR: couldn't start tunnel." >&2
exit 1
else
synergyc localhost
ssh -O exit otherHosttunnel
fi
The -f option backgrounds the tunnel, leaving a socket on your ControlPath so you can close the tunnel later. If the ssh fails (which it might, due to a network error or ExitOnForwardFailure), there's no tunnel to exit; but if it did not fail (the else branch), synergyc is launched, and the tunnel is closed after it exits.
You might also want to look into whether the SSH option LocalCommand could be used to launch synergyc from right within your ssh config file.
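A sketch of what that might look like in ~/.ssh/config (untested; PermitLocalCommand must be enabled for LocalCommand to run, and the command runs on the local machine after the connection is established):
Host otherHosttunnel
HostName otherHost
User otherUser
LocalForward 12345 otherHost:12345
PermitLocalCommand yes
LocalCommand synergyc localhost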
Quick summary: this will not work.
My first idea is that you need to start the processes in the background to get their PIDs with $!.
A pattern like
some_program &
some_pid=$!
wait $some_pid
might do what you need... except that then ssh won't be in the foreground to ask for passphrases any more.
Well then, you might need something different after all. With -f, ssh forks off a new process that your shell can never learn about just from having invoked it. Ideally, ssh itself would offer a way to write its PID to some file.
I just came across this thread and wanted to mention the pidof Linux utility:
$ pidof init
1
You can use lsof to show the pid of the process listening to port 12345 on localhost:
lsof -t -i @localhost:12345 -sTCP:listen
Examples:
PID=$(lsof -t -i @localhost:12345 -sTCP:listen)
lsof -t -i @localhost:12345 -sTCP:listen >/dev/null && echo "Port in use"
Well, I don't want to add an & at the end of the commands, as the connection will die if the console window is closed... so I ended up with a ps-grep-awk-sed combo:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@otherHost
echo `ps aux | grep -F 'ssh -f -N -L localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/ssh
synergyc localhost
echo `ps aux | grep -F 'synergyc localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/synergy
(You could integrate the grep into awk, but I'm too lazy now.)
You can drop the -f, which makes ssh run in the background, and instead run the command with eval and force it into the background yourself.
You can then grab the PID. Make sure to put the & inside the eval statement.
eval "ssh -N -L localhost:12345:otherHost:12345 otherUser#OtherHost & "
tunnelpid=$!
Another option is to use pgrep to find the PID of the newest ssh process
ssh -fNTL 8073:localhost:873 otherUser@OtherHost
tunnelPID=$(pgrep -n -x ssh)
synergyc localhost
kill -HUP $tunnelPID
This is more of a special case for synergyc (and most other programs that try to daemonize themselves). Using $! would work, except that synergyc does a clone() syscall during execution that gives it a new PID other than the one bash thinks it has. If you want to get around this so that you can use $!, you can tell synergyc to stay in the foreground and then background it yourself:
synergyc -f -n mydesktop remoteip &
synergypid=$!
synergyc also does a few other things like autorestart that you may want to turn off if you are trying to manage it.
Based on the very good answer by @ghoti, here is a simpler script (for testing) utilising SSH control sockets without the need for extra configuration:
#!/bin/bash
if ssh -fN -MS /tmp/mysocket -L localhost:12345:otherHost:12345 otherUser@otherHost; then
synergyc localhost
ssh -S /tmp/mysocket -O exit otherHost
fi
synergyc will only be started if the tunnel has been established successfully, and the tunnel itself will be closed as soon as synergyc returns.
Admittedly, the solution lacks proper error reporting.
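A slightly expanded sketch that at least reports a failed tunnel setup might look like this (same placeholder hosts and socket path as above):
#!/bin/bash
if ssh -fN -MS /tmp/mysocket -L localhost:12345:otherHost:12345 otherUser@otherHost; then
    synergyc localhost
    ssh -S /tmp/mysocket -O exit otherHost
else
    echo "ERROR: could not establish the SSH tunnel" >&2
    exit 1
fi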
You could look for the ssh process that is bound to your local port, using this line:
netstat -tpln | grep 127\.0\.0\.1:12345 | awk '{print $7}' | sed 's#/.*##'
It returns the PID of the process using port 12345/TCP on localhost. So you don't have to filter all ssh results from ps.
If you just need to check whether that port is bound, use:
netstat -tln | grep 127\.0\.0\.1:12345 >/dev/null 2>&1
It returns 1 if nothing is bound, or 0 if someone is listening on this port.
There are many interesting answers here, but nobody mentioned that the ssh man page describes this exact case (see the TCP FORWARDING section)! And the solution it offers is much simpler:
ssh -fL 12345:localhost:12345 user@remoteserver sleep 10
synergyc localhost
Now in details:
First we start SSH with a tunnel; thanks to -f it will initiate the connection and only then fork to the background (unlike solutions with ssh ... & ; pid=$!, where ssh is sent to the background and the next command is executed before the tunnel is created). On the remote machine it will run sleep 10, which waits 10 seconds and then ends.
Within 10 seconds, we should start our desired command, in this case synergyc localhost. It will connect to the tunnel and SSH will then know that the tunnel is in use.
After the 10 seconds pass, the sleep 10 command finishes. But the tunnel is still in use by synergyc, so SSH will not close the underlying connection until the tunnel is released (i.e. until synergyc closes its socket).
When synergyc is closed, it releases the tunnel, and SSH in turn terminates itself, closing the connection.
The only downside of this approach is that if the program we use closes and re-opens its connection for some reason, SSH will close the tunnel right after the connection is closed and the program won't be able to reconnect. If this is an issue, you should use the approach described in @doak's answer, which uses a control socket to properly terminate the SSH connection and uses -f to make sure the tunnel is created before SSH forks to the background.