How to access website on machine behind firewall accessed via reverse SSH tunnel? - ssh-tunnel

I’ve set up a permanent reverse SSH tunnel between a machine behind a firewall (fixed local IP 192.168.0.7) and a server (Jelastic without root access).
I can log in to the server and then SSH from there to the machine behind the firewall:
ssh jelastic@jelastic-server.com -p 3001
followed by
ssh -p 5001 user@localhost
Additionally, I would like to access websites on the machine (served over HTTPS via NGINX). How can I achieve that?
My setup for the tunnel (tunnel.service run via systemctl):
[Unit]
Description=Persistent SSH reverse Tunnel (for encrypted traffic)
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=60
ExecStart=/usr/bin/ssh -qNn -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -o ExitOnForwardFailure=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /etc/sshtunnel/id_rsa -R :5001:localhost:22 jelastic@jelastic-server.com -p 3001
[Install]
WantedBy=multi-user.target
What I’ve tried so far:
setting up a second reverse SSH tunnel from port 443 instead of 22
ssh with the -L option:
ssh -L 8090:192.168.0.7:443 user@192.168.0.7
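Since the reverse tunnel already exposes the machine's sshd on the server's port 5001, one approach (a sketch, untested against this setup) is to chain a ProxyJump hop with a local forward; `-J` needs OpenSSH 7.3+, and 8443 is an arbitrary local port, not something from the question:

```shell
# Jump through the Jelastic server, land on the reverse-tunnel listener
# (localhost:5001 as seen from the server), then forward a local port
# to NGINX on the machine itself.
ssh -J jelastic@jelastic-server.com:3001 \
    -p 5001 user@localhost \
    -L 8443:localhost:443 -N
```

With that running, browsing https://localhost:8443 should reach NGINX, assuming it listens on the machine's loopback; expect a certificate-name warning, since the certificate was not issued for "localhost".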

Related

Complex SSH tunnel with Go

Need some help with a tricky SSH tunnel through a bastion host.
I want to port forward Postgres on the remote server, through the bastion. Our company setup only allows communication over SSH, so we have to port forward everything.
Currently, I use the CLI command to set up the SSH tunnel, then use psql shell command on my laptop to query the remote Postgres. I want to write this same connection in Go, so I can create reports, graphs, etc.
The following command line works, but I can't figure out how to do this with Go SSH.
ssh -o ProxyCommand="ssh -l username1 -i ~/.ssh/privKey1.pem bastionIP -W %h:%p" -i ~/.ssh/privKey2.pem -L 8080:localhost:5432 -N username2@PsqlHostIP
psql -h localhost -p 8080 -U user -W

Docker Desktop Windows and VPN - no network connection inside container

I'm trying to use Docker on Windows while being connected to VPN.
When VPN is not connected, everything works OK.
But when I connect to our corporate VPN using the Cisco AnyConnect client, the network inside the Docker container stops working:
docker run alpine ping www.google.com
ping: bad address 'www.google.com'
docker run alpine ping -c 5 216.58.204.36
PING 216.58.204.36 (216.58.204.36): 56 data bytes
--- 216.58.204.36 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
How to fix this issue and make it work?
My setup is:
Windows 10 Version 1809 (OS Build 17763.1098)
Docker Desktop Community 2.2.0.4 (43472): Engine 19.03.8, Compose 1.25.4, Kubernetes 1.15.5, Notary 0.6.1, Credential Helper 0.6.3
Docker is in Windows containers mode with experimental features enabled (needed to run Windows and Linux images at the same time)
While my VPN (AnyConnect) was running, I had to run the following from PowerShell (admin mode):
Get-NetAdapter | Where-Object {$_.InterfaceDescription -Match "Cisco AnyConnect"} | Set-NetIPInterface -InterfaceMetric 6000
Actually, I did it using Docker Desktop and Hyper-V virtual machines, using OpenConnect, but I think it can be done for most VPN clients with minor adaptations.
The fully explained instructions are in Docker Desktop, Hyper-V and VPN, with the settings for Docker containers, Windows VMs and Linux VMs.
I created a new internal Virtual Switch (let's call it "Internal") and assigned to it a static IP address (let's say 192.168.4.2)
I created a new VM with Ubuntu server and OpenConnect, connected to both the default Virtual Switch and the "Internal"
On the OpenConnect VM
Assigned to "Internal" a fixed ip (192.168.4.3)
Added a new persistent tun interface and told openconnect to use it (adding "-i tun0" as an openconnect start parameter)
sudo ip tuntap add name tun0 mode tun
Installed iptables-persistent
Forced the ip forwarding
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Setup the routing
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j ACCEPT
After connecting the VPN, I permanently added the DNS servers to resolv.conf,
and retrieved the VPN's address range (like 10.*.*.*)
On the Docker containers
Added on Dockerfile the basic route
RUN route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.4.3
Then, when running the container, I added the DNS servers and granted the NET_ADMIN and SYS_MODULE capabilities:
--dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it
--cap-add=NET_ADMIN --cap-add=SYS_MODULE
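Put together, the run command might look like this; "my-image" is a placeholder, and the 10.x addresses are the answerer's VPN resolvers, not general-purpose values:

```shell
# Hypothetical assembly of the flags above into one invocation.
docker run -it \
    --dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 \
    --dns-search test.dns.it \
    --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
    my-image
```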

How to tunnel a URL that is hidden behind an AWS security group through a tunnel server that has access?

I have a tunnel server within the security group of an AWS ELB, and an ingress that resolves domain names and directs requests to the correct service.
If I ssh into the tunnel server, I can do curl internal-url.mycompany.com/ping and it will work.
I would like to do something like: sudo ssh -i key.pem -N -L 80:localhost:80 user@tunnel-server
(sudo because it's a privileged port)
Then on my local machine invoke curl internal-url.mycompany.com/ping but this is not working.
What you have in your question is almost exactly what you need. sudo ssh -i key.pem -N -L 80:internal-url.mycompany.com:80 user@tunnel-server should proxy requests to your machine's localhost:80 on to internal-url.mycompany.com:80 via the SSH tunnel through your tunnel-server.
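One gap worth noting: even with the forward in place, running `curl internal-url.mycompany.com/ping` on the laptop still resolves the name through public DNS, so the request never hits the tunnel. A workaround (a sketch, untested against this setup) is to pin the name to the local end of the tunnel, which also preserves the Host header the ingress uses for routing:

```shell
# Force the internal name to resolve to the tunnel's local end;
# --resolve keeps the original Host header intact for the ingress.
curl --resolve internal-url.mycompany.com:80:127.0.0.1 \
     http://internal-url.mycompany.com/ping
```

Alternatively, add `127.0.0.1 internal-url.mycompany.com` to /etc/hosts for the duration of the tunnel.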

Script to check if SSH service in Docker is up

My docker container has two services: a web service and an SSH server.
The SSH server is openssh-server and I need to run the command docker exec -it my-container sudo service ssh restart from outside the container to start the SSH server.
However, the command doesn't succeed every time. Each time, I need to manually check whether the SSH server is up in the container using the command ssh root@localhost:
1) If the SSH server fails to start, the result is ssh_exchange_identification: Connection closed by remote host
2) Otherwise, it asks for the password. (Which indicates that the SSH server is up)
Since I have to deploy multiple containers at the same time, it is unrealistic to check every container manually. Therefore, I want to retry the docker exec -it my-container sudo service ssh restart command automatically if the SSH server fails to start. But I am not sure how to write the bash script to achieve this. It should basically work like this:
while (ssh_server_fails_to_start):
docker exec -it my-container sudo service ssh restart
Any comments or ideas are appreciated. Thanks in advance!
If sshd is up and running, it will accept connections on its configured port. Otherwise, the connection attempt will fail.
If you run the following command:
ssh -o PasswordAuthentication=No root@localhost true
This will fail either way, but the output will be different. If the server is running and accepting connections, the explicit switch-off of password authentication will make it fail with this message:
Permission denied (publickey,password).
Otherwise it will print out a message like this:
ssh: connect to host localhost port 22: Connection refused
So I propose to scan the error message for a hint like this:
if ssh -o PasswordAuthentication=No root@localhost true \
   |& grep -q "Connection refused"
then
    echo "No server reachable!"
else
    echo "Server reachable."
fi
So you could write your script like this:
while ssh -o PasswordAuthentication=No root@localhost true \
      |& grep -q "Connection refused"
do
    docker exec -it my-container sudo service ssh restart
done
You might want to add some sleep delays to avoid hurried restarts. Maybe the ssh server just needs some time to accept connections after being restarted.
To test the SSH connection, we can use the sshpass package to provide a password on the command line.
while : ; do
    docker exec -it my-container sudo service ssh restart
    sleep 5s
    if sshpass -p 'root' ssh -q root@localhost -p 2222 exit; then
        echo "SSH server is running."
        break
    fi
    echo "SSH server is not running. Restarting..."
done

Automate a ssh response

I have a bash script running on a host with IP1. The script does an SSH to a remote host with IP2:
ssh ubuntu@IP2 "ls -l ~"
The ssh replies with a
The authenticity of host 'IP2 (IP2)' can't be established.
ECDSA key fingerprint is SHA256:S9ESYzoNs9dv/i/6T0aqXQoSXHM.
Are you sure you want to continue connecting (yes/no)?
I want to automate the response "yes" to the above ssh command. How can I do that from the bash script?
IP2 is a random IP so I cannot add it to the known hosts list on host IP1.
If you don't want to verify/check the fingerprint you could use something like:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@IP2 "ls -l ~"
This is how it works:
-o UserKnownHostsFile=/dev/null
The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).
By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.
-o StrictHostKeyChecking=no
The StrictHostKeyChecking parameter specifies whether SSH automatically adds new host keys to the host key database file. By setting it to no, the host key is added automatically, without user confirmation, for all first-time connections.
For more details: How to disable SSH host key checking
Have you tested the "StrictHostKeyChecking" option?
ssh -o "StrictHostKeyChecking no" root@10.x.x.x
