Can I make SSH Authentication Lightning Fast? - macos

I use SSH in a terminal window many times a day.
I remember reading about a way to reuse a single connection so that the TCP and SSH handshakes don't have to happen every time I open another connection to the same host.
Can someone point me to a link or describe how to establish a shared ssh connection so that subsequent connections to the same host will connect quickly?
Thanks.

Answering my own question: "Improving SSH (OpenSSH) connection speed with shared connections" describes using the ControlPath configuration setting.
UPDATE: For connections that are opened and closed often, add a setting like ControlPersist 4h to your ~/.ssh/config. See this post about SSH productivity tips.
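For reference, a minimal ~/.ssh/config sketch that enables the shared connection (the host name is a placeholder; the Control* options are standard OpenSSH):

Host myserver.example.com
    ControlMaster auto                 # first connection becomes the master
    ControlPath ~/.ssh/cm-%r@%h:%p     # socket file used to share the connection
    ControlPersist 4h                  # keep the master up 4h after last use

The first ssh to the host performs the full TCP and SSH handshake; later sessions to the same host multiplex over the existing connection and open almost instantly.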

If you want to keep the terminal open, the easiest way is to generate periodic I/O ("tail -f" a log file, or "while [ -d . ]; do cat /proc/loadavg; sleep 3; done").
If you want to speed up the connection handshake, one option I use is adding "UseDNS no" to the server's sshd_config, which skips the reverse-DNS lookup sshd performs on each incoming connection.
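A sketch of applying that on a Linux server (the config path and service name vary by distribution; back up the file first):

echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh    # the service is named 'sshd' on some distros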

Related

Enable TCP keepalive on port open by another program

On a Debian machine I'm running an OPC UA server, https://github.com/FreeOpcUa/opcua-asyncio. The server does not offer a way to enable TCP keepalive on the port it opens.
Basically, I want to know whether it's possible to start the server and then, from another script, enable TCP keepalive on that port.
I also found some information from Red Hat: https://access.redhat.com/solutions/19029 and https://access.redhat.com/solutions/25773 (both require signing up to read). But again, I'm still lost as to what to do exactly.
I'll keep reading up on this, but so far I've spent about 10 hours trying to figure out whether it's even possible. So I thought I should ask for some help.
Any advice is welcome, thanks!
To operate on a socket owned by another process, the socket must be shared from that process (https://docs.python.org/3/library/socket.html#socket.socket.share) or duplicated.
It's easier to patch your server to enable keepalive.
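For illustration, a minimal sketch of the "patch your server" approach in Python. The option names below are Linux-specific, and it assumes you can get hold of the server's listening or per-connection socket object (where opcua-asyncio exposes that is not covered here):

import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    # turn keepalive on, then tune the Linux-specific probe timing
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)      # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval) # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)      # failed probes before the kernel drops the connection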

bash ncat / nc / netcat connection won't close / stays open too long

I'm using Ubuntu 18.04 LTS with bash 4.4.20.
I'm trying to create a little daemon to schedule data transmission between threads.
On the server I am doing this:
ncat -l 2001 -k -c 'xargs -n1 ./atc-worker.sh'
On the client I am doing this:
echo "totally-legit-login-token" | nc 127.0.0.1 2001 -w 1
And it works well!
Here is the response:
LaunchCode=1589323120.957093305 = Now=1589323120.957093305 = URL=https://totally-legit-url.com/ = AuthToken=totally-legit-auth-token = LastID=167
When the server receives a request from a client, it calls my little atc-worker.sh script. The server spits out a single line of text and it is back to business, serving other clients.
Thanks to the -k option, the server listens continuously, and multiple clients can connect at the same time. The only problem is that I cannot end the connection programmatically. I need the -k daemon behaviour on the server to answer requests from the clients, but I need each client to stop listening after receiving its response and get on with its other work.
Is there an EOF signal/character I can send from my atc-worker.sh script that would tell nc on the client side to disconnect?
On the client, I use the -w 1 option to tell the client to connect for no more than a second.
But this -w 1 option has some drawbacks.
Maybe a second is too long. The connection should take only ~150 milliseconds, and waiting out the rest of the second slows each client down even after it already has its answer. And, as I said before, the client has chores to do! It shouldn't be wasting time after it has its answer!
Bad actors. Rogue clients with no intention of closing in a timely manner could connect, and I want the server to have better control so it can shut them down.
Maybe a second is too short. atc-worker.sh has a mechanism to wait for a lock file to be removed if there is one. If that lock file is there for more than a second, the connection will close before the client can receive its response.
Possible solutions.
The atc-worker.sh script could send a magic character set to terminate the connection. Problem solved.
On the client side, maybe curl would be a suitable choice instead of nc? But that would not address my concern about bad actors. Maybe these are two different problems: closing the connection client-side as soon as an answer is received, and dealing server-side with bad actors who will use whatever clients they choose.
Maybe use expect? I'm investigating that now.
Thanks in advance!
OK. After a lot of digging, I found someone else with a similar problem. Here is a link to his answer.
Original question: Client doesn't close connection to server after receiving all responses
Original Answer: https://stackoverflow.com/a/50528286/3055756
Thanks @Unyxos.
I modified my daemon to send an "ENDRESPONSE" line when it is done, even though it does not drop the connection, and I modified the client to look for that "ENDRESPONSE" line. When the client sees it, it drops the connection using the logic @Unyxos uses in his answer below.
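For illustration, a sketch of that server-side change (hypothetical; the real atc-worker.sh builds a much richer response line):

#!/bin/bash
# atc-worker.sh (sketch): xargs -n1 passes the client's line as $1
token="$1"
echo "response-for ${token}"   # placeholder for the real one-line response
echo "ENDRESPONSE"             # sentinel: tells the client it can disconnect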
Here is his simple and elegant answer:
Finally found a working way (maybe not the best but at least it's perfectly doing what I want :D)
After each response I send an "ENDRESPONSE" message, and on my client I test whether I have received it:
function sendMessage {
    while read -r line; do
        if [[ $line == "ENDRESPONSE" ]]; then
            break
        else
            echo "$line"
        fi
    done < <(netcat "$ipAddress" "$port" <<< "$*")
}
Anyway, thanks for your help. I'll try to implement other solutions later!

How to create a vpn connection on a port and only use it selectively

This feels like a basic question; I'm sure other people have needed something like this at some point. However, I couldn't find anything clear on the topic, and I'm not very familiar with networking, so I hope the following makes sense (and sorry if I'm butchering the terminology).
I often need to connect to a VPN server at work. At the moment I am using Cisco AnyConnect, which upon connection asks me the host server, my username, my password and routes all my traffic through the VPN afterwards.
The problem is that, depending on what I'm doing, I often need to jump on and off the VPN (some applications need the local network and others don't).
What would be perfect is to create one VPN connection and just keep it on a port without routing anything through it by default. Then I could use it as a proxy to selectively route my traffic through the VPN (e.g. override http_proxy locally in one terminal instance and run the applications that need the VPN from there, without having to jump back and forth). Furthermore, if I create this connection from the terminal, I can automate most of the process with something like:
function callExecutableThroughVPN() {
    if ! is_connected_to_vpn; then
        echo "couldn't find the vpn connection, will attempt to connect. enter password:"
        # get password input here
        setup_vpn_on_port_9876 # pass password input here
        echo "setting proxy to 127.0.0.1:9876"
        export http_proxy=127.0.0.1:9876/
        export https_proxy=127.0.0.1:9876/
    fi
    ./executable_that_need_vpn
}
Then I could simply stay on my own network and use a wrapper like the above for the few processes that need their traffic re-routed.
So, in summary, my question is: is it possible to create a single VPN process from the terminal that listens on a local port, so that I don't have to route all my traffic at once and can simply kill the process when I'm done?
I recommend using an SSH tunnel as a SOCKS proxy (see ssh -D) together with the tsocks wrapper. For HTTP(S) proxies I recommend the proxychains tool.
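A minimal sketch of that setup (the host alias and port are placeholders):

# open a SOCKS5 proxy on local port 1080; -N runs no remote command, -f backgrounds it
ssh -f -N -D 1080 user@work-gateway.example.com

# run a single non-SOCKS-aware program through it with proxychains,
# after pointing /etc/proxychains.conf at: socks5 127.0.0.1 1080
proxychains ./executable_that_need_vpn

Killing that ssh process tears the proxy down; everything else on the machine keeps using the normal network.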

Proxify an application via loopback adapters and SSH

This is part programming, part sysadmin, so please excuse me if you feel this should be over on Server Fault.
I have an application that is not SOCKS aware and that we need to use through a firewall. We cannot modify the application to have SOCKS support either.
At the moment, we do this by aliasing the IPs the application talks to onto the loopback adapter on the host, then creating SSH tunnels out to another host. The IPs the application uses are hardcoded. Our SSH connections look like:
ssh -L 1.2.3.4:9999:1.2.3.4:9999 user@somehost
where the 1.2.3.x addresses are aliases on the loopback.
So the application connects to the open port on the loopback, which gets sent out to the SSH host and onto the real 1.2.3.4.
It works, but the problem is that this application connects to quite a few IPs (50+), so we end up with 50+ ssh connections out from the box.
We've tried several 'proxifying' apps, like tsocks and others, but have had a lot of issues with them (the app runs on OS X, and tsocks doesn't work so well there, even with the patches).
Our idea was to write a daemon that listens on all interfaces on the specified port. It would take the incoming packets from the application, scrape the packet info (dst IP, port, payload), recreate the packet, and proxy it through a single SSH SOCKS connection (ssh -D 1080 user@somehost). That way, only one SSH connection carries all the proxied ports.
My question is: is this feasible? Is there something I'm missing? I've been combing through the pfctl, ipfw, and iptables docs, but I don't see an option to do this with them, and it doesn't seem like it would be the most difficult thing to code. The daemon would recreate each packet based on its original destination IP and port, connect to the local SOCKS proxy, and resend the packet as if it were the original application, but now with SOCKS support.
If I'm missing something that someone knows about that already does this, please let me know. I don't know socket programming or SOCKS very well, but this doesn't seem like too big a project to tackle. Still, I'd like some opinions on whether I'm biting off way more than I should.
Thanks
If you can add SOCKS client support to your application, you can simply run ssh -D local_socks_port remote_machine, which opens local_socks_port as a SOCKS server on localhost; through it you can reach any host accessible from the remote machine.
Example: imagine you are using an untrusted wifi network without encryption. You can simply launch ssh -D 1080 home, then configure your web browser to use localhost:1080 as its SOCKS server. Of course, you need a SOCKS-enabled client. All the traffic then appears to come from your gateway, and the connection is opaque to anyone snooping the wifi.
You can also open a single ssh client with an arbitrary number of LocalForward requests, all tunneled over a single ssh session.
Moreover, you can multiplex additional ssh sessions over an already-established connection by using the ControlMaster and ControlPath options of ssh.
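A sketch combining the two ideas (the host alias "gateway" is a placeholder, and it assumes ControlMaster auto plus a ControlPath are configured for that host in ~/.ssh/config):

# start the shared master connection in the background
ssh -f -N gateway

# later, attach extra forwards to the live connection, one per aliased IP
ssh -O forward -L 1.2.3.4:9999:1.2.3.4:9999 gateway

Forwards can likewise be removed from the running connection with ssh -O cancel, so one TCP/SSH connection can carry all 50+ tunnels.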

Avoid ssh session time out

I am remotely working on a server that automatically logs me out after 5 minutes of inactivity. Here's the message that it usually provides when it does so:
Read from remote host XXXXXXX: Operation timed out
I typically have several sessions open, which I use at roughly 30-minute intervals, so I wonder what I could do to avoid getting disconnected. I've already tried:
[a] hiring a monkey to hit some keys before the session logs me out
[b] running the top command
[c] threatening the server administrator :)
Any other suggestions? Thanks.
This has been answered on Stack Overflow; I add the link here for people who don't want to go to a third-party forum when they search for this answer (as I did):
https://stackoverflow.com/questions/13390710/mac-terminals-how-to-keep-alive
Add to ~/.ssh/config:
ServerAliveInterval 30
or start your ssh session using:
ssh -o ServerAliveInterval=30 username@hostname
And BTW: the answer is not specific to Mac.
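For reference, a minimal ~/.ssh/config sketch that applies the setting to every host:

Host *
    ServerAliveInterval 30    # client sends a keepalive probe after 30 s of silence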
You might consider using vi or more to edit a file.
