How to enable security for forwarded ports?

I have two hosts: homehost and a DigitalOcean host.
I want to access my homehost remotely via ssh.
Homehost is behind a firewall with a dynamic IP.
I am using a reverse SSH tunnel to initiate a connection from homehost to the DigitalOcean host,
by running this on the homehost:
ssh -R *:1234:localhost:22 -i "~/path/to/privatekeyfile" root@digitalocean_host
This way, I can log in to my DigitalOcean host with the private key file, and then ssh to port 1234 on localhost to reach my homehost from anywhere outside the network.
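Spelled out, that two-step login looks like this (using the same placeholder names as above):
# step 1: log in to the DigitalOcean host
ssh -i "~/path/to/privatekeyfile" root@digitalocean_host
# step 2: from the DigitalOcean host, reach the homehost through the reverse tunnel
ssh useronhomehost@localhost -p 1234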
Now I have forwarded port 1235 to port 1234 on the DigitalOcean host by running this:
ssh -L *:1235:localhost:1234 root@localhost
This way, I can connect to my homehost from outside the network, by:
ssh useronhomehost@digitaloceanhost -p 1235
However, this connection does not require any key-pair validation, so my homehost is completely open to attacks.
How can I enable key-pair authentication for port 1235 of my DigitalOcean host?


Set up SSH tunnel with PgAdmin 4

I am new to pgAdmin and to SSH tunnels. I am trying to establish a connection to a Postgres DB through an SSH tunnel. I am on Windows 10. I was given the following instructions (I changed all the names and ports below):
Add the following to your SSH config (~/.ssh/config):
Host prod
    Hostname myorg.org.uk
    User sshusername
    IdentityFile idef.pem
    LocalForward 9999 localforward.amazonaws.com:8888
Now you can tunnel your way through to PostgreSQL:
ssh -N prod
And now psql et al can connect (You must open a new Terminal window while the SSH tunnel is running):
psql -h localhost -p 9999 -U connectionusername -d dproduction
I was also given the password for the dproduction database I am trying to connect to: dproduction_pwd
I don't understand where everything goes in pgAdmin. I did the following:
Create Server:
    Name: test
Connection:
    Host Name/Address: localhost
    Port: 9999
    Maintenance database: postgres
    Username: connectionusername
SSH Tunnel:
    Tunnel host: myorg.org.uk
    Tunnel port: 9999
    Username: sshusername
    Identity file: C:\idef.pem
    Password: dproduction_pwd
I must be doing something wrong, as I don't use LocalForward from the SSH config above; where does this go? Putting it in Tunnel host does not work.
I managed to use an SSH tunnel to access my database with Windows 10 SSH and pgAdmin's SSH Tunnel. It did take a while; pgAdmin's documentation isn't very clear on this. Here's the difference I found:
When setting up an SSH tunnel with Windows 10's SSH client, you need to forward a local port (9999 in your case) to the remote port (8888).
In pgAdmin, that local port is no longer needed. My guess is that since it already knows which service you want to access through which tunnel, it takes care of the local port in the background. The tunnel port, in the most common cases, should be the SSH port 22.
My suggested changes to your current settings would be:
in the SSH Tunnel tab, set Tunnel port to 22
in the Connection tab, set Port to 8888
This should work.
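Putting those suggestions together, the pgAdmin settings would look roughly like this. Note that the Host Name/Address value below is my assumption, not part of the answer above: with pgAdmin's built-in tunnel, it should be the database host as seen from the tunnel host (localforward.amazonaws.com in the question's config) rather than localhost:
Connection:
    Host Name/Address: localforward.amazonaws.com    (assumption, see above)
    Port: 8888
    Maintenance database: postgres
    Username: connectionusername
SSH Tunnel:
    Tunnel host: myorg.org.uk
    Tunnel port: 22
    Username: sshusername
    Identity file: C:\idef.pem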

access host's ssh tunnel from docker container

Using Ubuntu Trusty, there is a service running on a remote machine that I can access at localhost:9999 via port forwarding through an SSH tunnel.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was up, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this, paragraph 2, but there was no perceived communication, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (from the host's tunnel to the remote machine, presumably).
So my questions are
1 - How do I achieve the connection? Do I need to set up an SSH tunnel to the host, or can this be achieved with Docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network for your containers via --net=host (or in docker-compose via network_mode: host) is one option, but it has the unwanted side effects that (a) you now expose the container's ports on your host system and (b) you can no longer connect to those containers that are not mapped to your host network.
In your case, a quick and cleaner solution would be to make your SSH tunnel "available" to your Docker containers (e.g. by binding SSH to the docker0 bridge) instead of exposing your Docker containers on your host environment (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the IP your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0   Link encap:Ethernet  HWaddr 03:41:4a:26:b7:31
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
Now you need to tell ssh to bind to this IP to listen for traffic directed towards port 9000:
ssh -L 172.17.0.1:9000:host-ip:9999 user@remote-host
Without setting the bind_address, port 9000 would only be available on your host's loopback interface and not to your Docker containers.
Side note: you could also bind your tunnel to 0.0.0.0, which will make ssh listen on all interfaces.
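For instance, with the same placeholders as above:
ssh -L 0.0.0.0:9000:host-ip:9999 user@remote-host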
Setting up your application:
In your containerized application use the same docker0 ip to connect to the server: 172.17.0.1:9000. Now traffic being routed through your docker0 bridge will also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote DB located at :9000, your connection string would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a Docker container needs to connect to multiple remote DBs via tunnels), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening for traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind_address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your Docker containers just send the traffic to different ports of your docker0 bridge and then create several ssh tunnel commands (one for each port you are listening on) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
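As a sketch with two hypothetical remote databases (all hosts and ports below are made up for illustration; containers would connect to 172.17.0.1:9001 and 172.17.0.1:9002 respectively):
ssh -L 172.17.0.1:9001:db1.example.com:5432 user@remote-host1
ssh -L 172.17.0.1:9002:db2.example.com:3306 user@remote-host2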
On macOS (tested with Docker v19.03.2):
1) Create a tunnel on the host:
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) From the container, you can use host.docker.internal, docker.for.mac.localhost, or docker.for.mac.host.internal to reference the host.
For example:
mysql -h host.docker.internal -P 3336 -u admin -p
A note from the official Docker for Mac docs:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host, and I needed one of the containers from my stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack that creates the tunnel, so the other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
    HostName remote.server.host
    User sshuser
    Port 2200
    ForwardAgent yes
    TCPKeepAlive yes
    ConnectTimeout 5
    ServerAliveCountMax 10
    ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key on that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem but with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine using ssh -L, I was creating a remote-forward SSH tunnel from the remote machine to the Docker host machine using ssh -R.
In this setup, sshd does NOT allow gateway ports by default, i.e. in the file /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be uncommented and set to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and created the remote-forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should then be able to connect to the Docker host on 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
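Putting those steps together as a sketch (dockerHostPort and sshClientPort are this answer's placeholders):
# on the Docker host, in /etc/ssh/sshd_config:
GatewayPorts clientspecified
# then: sudo systemctl restart sshd

# on the remote machine (the SSH client), bind the remote forward to the docker0 bridge:
ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost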
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04: a very simple answer, with no need for extra tunnels, extra containers, extra Docker options, or exposing the host.
Simply, when creating a reverse tunnel, make sure ssh binds to all interfaces, as by default it binds the ports of the reverse tunnel to localhost only. For example, in PuTTY make sure that the option Connection -> SSH -> Tunnels -> "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me until I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
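For those not using PuTTY, a minimal OpenSSH sketch of the same setting (the port 9999 and the dockerhost name are hypothetical):
# reverse tunnel whose listening end binds to all interfaces of the SSH server
# (the Docker host here); requires the GatewayPorts setting described above
ssh -R 0.0.0.0:9999:localhost:9999 user@dockerhost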
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest Docker version).
However, I found out that I could directly use the host IP inside my Docker container.
Get your host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5: icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5: icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5: icmp_seq=3 ttl=37 time=1.68 ms
Apparently, on Windows, you can directly connect from within containers to the host using the host's regular IP address.
In case anyone needs it (like I did): the solution for Windows and WSL is the same as the one @prayagupd mentioned for macOS.
Create an SSH tunnel to your remote service with whatever tool you prefer, on whatever port you prefer, for example 3300.
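For example, with OpenSSH (the host names here are hypothetical placeholders):
ssh -N -L 3300:db.example.com:3306 user@jumphost.example.com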
Then, from the Docker container, you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
An easy example to reproduce the situation and ssh to the host:
Run a container, using --network="host":
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost.
Your host machine is a Linux machine with a public-private key pair for sshing into it, so copy the contents of your private key file and recreate the key file inside the container. (This is just a demonstration; it is not a good way to handle key files.)
Now ssh into your host, using localhost to access it:
ssh -i key_file.pem ec2-user@localhost

Connecting to Heroku using port 443

I'm a university student and all ports except 80 and 443 are blocked. I'm able to connect to GitHub via:
Host github.com
    Hostname ssh.github.com
    Port 443
git push heroku master gives me this error:
ssh: connect to host heroku.com port 22: Connection refused
fatal: The remote end hung up unexpectedly
I've tried the solutions posted here on SO but I still haven't got it working. Is there a way I can connect to Heroku to deploy my websites?
Thanks a lot
Heroku SSH only works over port 22. There is, however, a plugin that lets you push via HTTP. It does not use git, though; instead, you run heroku push.
https://github.com/ddollar/heroku-push
If your SSH port has been blocked and you wish to push to Heroku over an alternate port, you may consider tunneling.
For tunneling you need an additional PC or server that resides outside the blocked network and has access to port 22.
For the scenario below, we can use a house PC to tunnel into the Heroku server. Since the university network only allows ports 80 and 443, we can set the house PC to receive connections on port 443 and tunnel them to port 22.
At the house PC:
Configure the SSH server on the house PC to run on port 443. Refer here to configure an SSH server on multiple ports.
At the university PC:
Configure the university PC to resolve the git_tunnel alias to localhost on port 9001. Edit ~/.ssh/config and add the following:
# ~/.ssh/config
Host git_tunnel
    Hostname 127.0.0.1
    User git
    Port 9001
Add a new remote alias named tunnel, which points to git@git_tunnel:{app name}.git:
git remote add tunnel git@git_tunnel:{app name}.git
From the university PC, establish a tunnel to the house PC, which is listening on port 443:
ssh -L 9001:heroku.com:22 -p 443 root@housepc.com
Deploy to Heroku using the tunnel alias created earlier:
git push tunnel master

Forward network connection over ssh so that my outbound IP changes

A service, for example an FTP server, only accepts connections from a specific network, where all users have the same external IP address.
I want to connect to this service, but I'm currently not inside the allowed network.
I have ssh access to a server inside the network.
How do I use ssh to tunnel a certain port from my local machine, through a machine on the internal network, to the final service, so that any client opening the correct port won't notice any difference?
You can create an SSH tunnel to your specific network using the following command.
For instance, let's say you want to reach a web service on computer "mywebserver" (port 80).
Under Linux or BSD, using OpenSSH, you can use the following commandline:
ssh -f mysshserver -L 1234:mywebserver:80 -N
Under Windows, you can use MobaXterm, which includes a simple graphical SSH tunnel builder.
This will open an SSH tunnel between local port 1234 and the remote web server on port 80. You can then open your web browser and connect directly to your web server by typing "http://localhost:1234" in the address bar.

ssh to amazon EC2 over proxy

I have a problem connecting to my Amazon EC2 server over SSH through a proxy.
I have my username and password for the HTTP proxy on port 8080 (I don't have control over the proxy).
I also have my connection command, which would work without the proxy:
ssh -i key.pem root@xx.compute.amazonaws.com
When I try to connect, I get a "No route to host" error.
I tried to use PuTTY and configured the proxy plus the authentication file, but then I get this error:
"Unable to use this key file (OpenSSH SSH-2 private key)"
Also, I don't know how PuTTY inserts my proxy config into the SSH connection string, so that I could try it in the terminal.
I was facing the same problem, and this is what I used to connect, using corkscrew. My config file looks like this:
Host AWS
    Hostname <Public DNS>
    Port 443
    # Write the appropriate username depending on your AMI, e.g. ubuntu, ec2-user
    User ubuntu
    IdentityFile </path to key file>
    ProxyCommand /usr/bin/corkscrew 10.10.78.61 3128 %h %p
Then I simply use this command to connect:
ssh AWS
and it works flawlessly.
Note: you must edit the sshd_config file on the server so that it listens for SSH connections on port 443 (in addition to 22), and restart the SSH daemon.
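Since the question mentions proxy authentication: corkscrew also accepts an optional authentication file as a fifth argument. A sketch (the file path is a hypothetical example; the file contains a single username:password line and should be kept private, e.g. chmod 600):
# same config as above, with the proxy credentials file added
ProxyCommand /usr/bin/corkscrew 10.10.78.61 3128 %h %p ~/.ssh/proxyauth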
Are you sure you can login as root? Try logging in as ec2-user instead.
Also, if you have assigned an Elastic IP to your instance, the public DNS has probably changed. Log in to the AWS console and select your instance, then scroll down to check the public DNS again and double-check that you are using the correct xx.compute.amazonaws.com address.
