Forward host port to guest - Vagrant

I have MySQL running locally on my host machine and for reasons™ I can't run it inside of my Vagrant machine. I know that there's a way to address this issue with iptables by forwarding all traffic to 3306 on the guest to the host's IP address and port, but this complicates things a lot for me as I'll have to play around with iptables rules and probably get into TCP masquerading, which would be nice to avoid.
Is there a way in Vagrant (VirtualBox VM) to forward a host TCP port to the guest so that the guest can access 127.0.0.1:3306 and have all traffic forwarded to host:3306 seamlessly? If not, how exactly would I set this up in iptables?
According to this answer, Docker provides a way to do this natively without having to screw around with iptables rules. Do VirtualBox and Vagrant provide a way to mimic this functionality?

I have two solutions, one involving iptables hacking and one more straightforward using SSH.
Tunnel a Host Port to the Guest over SSH
When connecting to the guest using vagrant ssh, pass the port along as an argument:
vagrant ssh -- -R 3306:localhost:3306
This opens port 3306 on the guest and forwards its traffic back through the tunnel to port 3306 on the host, so the guest sees the host's MySQL at 127.0.0.1:3306.
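A quick check from inside the guest while the tunnel is up (assuming the mysql client is installed there, and using your host's MySQL credentials):
mysql -h 127.0.0.1 -P 3306 -u root -p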
iptables Hackery
We can use iptables on the guest to forward all traffic arriving at a local port on the guest to a remote port on the host. We need the host and guest to have more or less static IP addresses relative to each other so that everything keeps working, and we'll also need to open a port on the host's firewall to allow the guest to do this.
Give the Guest a Static IP
In your Vagrantfile, set a static IP address for the guest:
config.vm.network "private_network", ip: "10.10.10.10"
Now, when you hit 10.10.10.10, you'll always be hitting your guest.
Configure iptables in the Guest
Found in this awesome answer in Server Fault:
$ remote_ip=10.0.2.2    # the host's address on VirtualBox's default NAT network
$ mysql_port=3306
$ # rewrite traffic arriving on the guest's MySQL port so it goes to the host
$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $mysql_port \
-j DNAT --to $remote_ip:$mysql_port
$ sudo iptables -N INET-PRIV
$ sudo iptables -A FORWARD -i eth0 -o eth1 -j INET-PRIV
$ sudo iptables -A FORWARD -j DROP
$ sudo iptables -A INET-PRIV -p tcp -d $remote_ip --dport $mysql_port \
-j ACCEPT
$ sudo iptables -A INET-PRIV -j DROP
Then, enable port forwarding:
$ echo "1" | sudo tee /proc/sys/net/ipv4/ip_forward
First, test it out; when you're sure it works, persist the rules. Note that iptables-save by itself only prints the current rules to stdout, so save its output somewhere your distro restores rules from (on Debian/Ubuntu, the iptables-persistent package restores /etc/iptables/rules.v4):
$ sudo iptables-save | sudo tee /etc/iptables/rules.v4
The /proc/sys/net/ipv4/ip_forward setting also won't survive a reboot on its own, so you'll want to persist that too.
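A minimal way to do that, using the standard sysctl.conf mechanism:
$ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p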
Which Should I Use?
SSH is definitely easier to do, but there's a bit of performance overhead from encrypting the tunneled traffic and forwarding it back to the host.
iptables feels like black magic, but once you get it working, it's really nice and fairly seamless.

Port forwarding (using the NAT network backend) doesn't fit this use case well, since Vagrant's forwarded ports map host ports to guest ports, not the reverse.
In your use case, Public Network (Bridged Networking) is a better choice. Create a second network in the Vagrantfile and do a vagrant reload.
Vagrant.configure("2") do |config|
config.vm.network "public_network"
end
Basically this will add an extra virtual NIC in the VM, which will get an IP from the same DHCP server on your network. Get its IP by using ifconfig -a or ip addr.
The host and the VM will then be able to communicate directly, and the VM should be able to connect to MySQL running on the host via port 3306.
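Note that MySQL on the host must be listening on that interface rather than only on 127.0.0.1 (check bind-address in my.cnf). Assuming the host got 192.168.1.50 from your DHCP server (substitute your actual address), a quick connectivity check from the guest:
nc -zv 192.168.1.50 3306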
HTH

Related

Docker Desktop Windows and VPN - no network connection inside container

I'm trying to use Docker on Windows while being connected to VPN.
When VPN is not connected, everything works OK.
But when I connect to our corporate VPN using the Cisco AnyConnect client, the network inside Docker containers no longer works:
docker run alpine ping www.google.com
ping: bad address 'www.google.com'
docker run alpine ping -c 5 216.58.204.36
PING 216.58.204.36 (216.58.204.36): 56 data bytes
--- 216.58.204.36 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
How to fix this issue and make it work?
My setup is:
Windows 10 Version 1809 (OS Build 17763.1098)
Docker Desktop Community 2.2.0.4 (43472): Engine 19.03.8, Compose 1.25.4, Kubernetes 1.15.5, Notary 0.6.1, Credential Helper 0.6.3
Docker is in Windows containers mode with experimental features enabled (needed to run Windows and Linux images at the same time)
While my VPN (AnyConnect) was running, I had to run the following from PowerShell (admin mode):
Get-NetAdapter | Where-Object {$_.InterfaceDescription -Match "Cisco AnyConnect"} | Set-NetIPInterface -InterfaceMetric 6000
Actually I did it using Docker Desktop and Hyper-V virtual machines, with OpenConnect, but I think it can be done for most VPN clients with minor adaptations.
The fully explained instructions are here: Docker Desktop, Hyper-V and VPN, with the settings for Docker containers, Windows VMs and Linux VMs.
I created a new internal Virtual Switch (let's call it "Internal") and assigned it a static IP address (let's say 192.168.4.2)
I created a new VM with Ubuntu Server and OpenConnect, connected to both the default Virtual Switch and "Internal"
On the OpenConnect VM
Assigned "Internal" a fixed IP (192.168.4.3)
Added a new "persistent" tun interface and told openconnect to use it (by adding "-i tun0" to the openconnect start parameters):
sudo ip tuntap add name tun0 mode tun
Installed iptables-persistent (so the rules below survive reboots)
Forced IP forwarding:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Set up the routing:
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j ACCEPT
After connecting the VPN, I permanently added the VPN's DNS servers to resolv.conf
and retrieved the VPN's address range (like 10.*.*.*)
On the Docker containers
Added the basic route in the Dockerfile (note that the route command requires net-tools in the image; ip route add is the modern equivalent):
RUN route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.4.3
Then, when running the container, I added the DNS servers and granted the NET_ADMIN and SYS_MODULE capabilities:
--dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it
--cap-add=NET_ADMIN --cap-add=SYS_MODULE
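Putting those flags together, a full docker run invocation might look like this (the image name is a placeholder; the DNS addresses are the ones from this particular setup):
docker run --dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 \
  --dns-search test.dns.it \
  --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
  -it my-image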

What is the best practice for using IPTABLES with an Amazon Linux server?

Amazon has its own port security (security groups), and iptables is not running by default. Do I need to configure and enable iptables?
Only Whitelists
Amazon effectively only gives you whitelisting ability.
Their documentation points this out directly:
Security group rules are always permissive; you can't create rules that deny access.
If you want fine-grained control over blacklists or you want to set up port forwarding, using iptables is one way to go.
Examples
Perhaps you want to drop packets from a bot scanning your box:
$ iptables -I INPUT -s 174.132.223.252 -j DROP
You also might want to run an application as a non-root user on an unprivileged port and redirect traffic arriving on port 80 to it:
$ iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
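You can verify that the rules are in place with the usual listing commands; to persist them across reboots on Amazon Linux, the iptables-services package provides a save command:
$ sudo iptables -t nat -L -n -v
$ sudo service iptables save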

How to connect to Docker API from another machine?

I'm trying to use the Docker API to connect to the Docker daemon from another machine. I am able to run this command successfully:
docker -H=tcp://127.0.0.1:4243 images
But NOT when I use the real IP address:
docker -H=tcp://192.168.2.123:4243 images
2013/08/04 01:35:53 dial tcp 192.168.2.123:4243: connection refused
Why can't I connect when using a non-local IP?
I'm using a Vagrant VM with the following in Vagrantfile: config.vm.network :private_network, ip: "192.168.2.123"
The following is iptables:
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*filter
:INPUT ACCEPT [1974:252013]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1511:932565]
-A INPUT -p tcp -m tcp --dport 4243 -j ACCEPT
COMMIT
# Completed on Sun Aug 4 01:24:46 2013
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*nat
:PREROUTING ACCEPT [118:8562]
:INPUT ACCEPT [91:6204]
:OUTPUT ACCEPT [102:7211]
:POSTROUTING ACCEPT [102:7211]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.16.42.0/24 ! -d 172.16.42.0/24 -j MASQUERADE
I came across a similar issue. One thing I don't see mentioned here: you need to start Docker listening on both the network and a Unix socket, because all the regular docker command-line invocations on the host assume the socket.
sudo docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d &
This will start Docker listening on any IP address on your host, as well as on the typical Unix socket.
You need to listen on 0.0.0.0; if you listen on 127.0.0.1, no one outside the host will be able to connect.
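A quick smoke test of both endpoints, assuming the daemon was started as above:
# Docker Remote API over TCP
curl http://127.0.0.1:2375/version
# the local CLI still works over the Unix socket
docker info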
Please note that in doing this, you have given anyone who can reach that port (including any malicious URL sent to you by email) access to your Docker API, and thus root permission on the host.
You should, at minimum, secure the socket using TLS: http://docs.docker.com/articles/https/
There are two ways to configure the Docker daemon port:
1) In the /etc/default/docker file:
DOCKER_OPTS="-H tcp://127.0.0.1:5000 -H unix:///var/run/docker.sock"
2) In the /etc/docker/daemon.json file:
{
"hosts": ["tcp://<IP-ADDRESS>:<PORT>", "unix:///var/run/docker.sock"]
}
IP-ADDRESS: any reachable address of the host can be used.
Restart the Docker service after changing the configuration.
The reason for adding both the user-specified port (tcp://127.0.0.1:5000) and the default Docker socket (unix:///var/run/docker.sock) is that the TCP port enables access to the Docker APIs over the network, while the default socket keeps the local CLI working.
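After restarting, a quick test from the remote machine (using the placeholders from the daemon.json example above):
docker -H tcp://<IP-ADDRESS>:<PORT> version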

iptables nat FTP

I have a computer running Ubuntu Server. The server has internet access and shares it with the local network. One computer on the local network runs an FTP server. How can I get access to that FTP server from the internet?
A good site to take a look at: Linux Firewalls using iptables
iptables -t nat -A PREROUTING -p tcp -i ppp0 --dport 21 -j DNAT --to 192.168.1.1:21
ppp0 = the internet-facing (WWW) interface
192.168.1.1 = the server which provides FTP in your LAN
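One caveat: FTP opens separate data connections, so a DNAT rule on port 21 alone usually isn't enough. The kernel's FTP conntrack/NAT helpers handle the data channel; a sketch, assuming a default-DROP FORWARD policy (on some kernels you may also need to enable automatic helper assignment):
# load the FTP connection-tracking and NAT helper modules
sudo modprobe nf_conntrack_ftp
sudo modprobe nf_nat_ftp
# let the forwarded control connection and any related data connections through
sudo iptables -A FORWARD -p tcp -d 192.168.1.1 --dport 21 -j ACCEPT
sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT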

Is it possible to forward ssh requests that come in over a certain port to another machine?

I have a small local network. Only one of the machines is available to the outside world (this is not easily changeable). I'd like to be able to set it up such that ssh requests that don't come in on the standard port go to another machine. Is this possible? If so, how?
Oh and all of these machines are running either Ubuntu or OS X.
Another way to go would be to use ssh tunneling (which happens on the client side).
You'd do an ssh command like this:
ssh -L 8022:myinsideserver:22 paul@myoutsideserver
That connects you to the machine that's accessible from the outside (myoutsideserver) and creates a tunnel through that ssh connection to port 22 (the standard ssh port) on the server that's only accessible from the inside.
Then you'd do another ssh command like this (leaving the first one still connected):
ssh -p 8022 paul@localhost
That connection to port 8022 on your localhost will then get tunneled through the first ssh connection, taking you to myinsideserver.
There may be something you have to do on myoutsideserver to allow forwarding of the ssh port. I'm double-checking that now.
Edit
Hmmm. The ssh manpage says this: "Only the superuser can forward privileged ports."
That sort of implies to me that the first ssh connection has to be as root. Maybe somebody else can clarify that.
It looks like superuser privileges aren't required as long as the forwarded port (in this case, 8022) isn't a privileged port (like 22). Thanks for the clarification Mike Stone.
@Mark Biek
I was going to say that, but you beat me to it! Anyways, I just wanted to add that there is also the -R option:
ssh -R 8022:myinsideserver:22 paul@myoutsideserver
The difference is what machine you are connecting to/from. My boss showed me this trick not too long ago, and it is definitely really nice to know... we were behind a firewall and needed to give external access to a machine... he got around it by ssh -R to another machine that was accessible... then connections to that machine were forwarded into the machine behind the firewall, so you need to use -R or -L based on which machine you are on and which you are ssh-ing to.
Also, I'm pretty sure you are fine to use a regular user as long as the port you are forwarding (in this case, 8022) is not below the restricted range (which I think is 1024, but I could be mistaken), because those are the "reserved" ports. It doesn't matter that you are forwarding it to a "restricted" port, because that port is not being opened; the machine just has traffic sent to it through the tunnel and has no knowledge of the tunnel. The 8022 port IS being opened, and so is restricted as such.
EDIT: Just remember, the tunnel is only open so long as the initial ssh remains open, so if it times out or you exit it, the tunnel will be closed.
(In this example, I am assuming port 2222 will go to your internal host. $externalip and $internalip are the IP addresses or hostnames of the visible and internal machines, respectively.)
You have a couple of options, depending on how permanent you want the proxying to be:
Some sort of TCP proxy. On Linux, the basic idea is that before the incoming packet is processed, you want to change its destination—i.e. prerouting destination NAT:
iptables -t nat -A PREROUTING -p tcp -i eth0 -d $externalip \
    --dport 2222 --sport 1024:65535 -j DNAT --to $internalip:22
Using SSH to establish temporary port forwarding. From here, you have two options again:
Transparent proxy, where the client thinks that your visible host (on port 2222) is just a normal SSH server and doesn't realize that it is passing through. While you lose some fine-grained control, you get convenience (especially if you want to use SSH to forward VNC or X11 all the way to the inner host).
From the internal machine: ssh -g -R 2222:localhost:22 $externalip
Then from the outside world: ssh -p 2222 $externalip
Notice that the "internal" and "external" machines do not have to be on the same LAN. You can port forward all the way around the world this way.
Forcing login to the external machine first. This is true "forwarding," not "proxying"; but the basic idea is this: you force people to log in to the external machine (so you control who can log in and when, and you get logs of the activity), and from there they can SSH through to the inside. It sounds like a chore, but if you set up simple shell scripts on the external machine with the names of your internal hosts, coupled with password-less SSH keypairs, it is very straightforward for a user to log in. So:
On the external machine, you make a simple script, /usr/local/bin/internalhost, which simply runs ssh $internalip (a minimal sketch follows below)
From the outside world, users do: ssh $externalip internalhost and once they log in to the first machine, they are immediately forwarded through to the internal one.
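A minimal sketch of that script (the address is a placeholder for your internal host's IP):
#!/bin/sh
# /usr/local/bin/internalhost -- hop from the gateway to the internal machine
exec ssh 10.0.0.5 "$@"    # placeholder address; use your internal host's IP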
Another advantage to this approach is that people don't run into host-key management problems, since running two SSH services behind one IP address will make the SSH client complain about changed host keys.
FYI, if you want to SSH to a server and you do not want to worry about keys, do this
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
I have an alias in my shell called "nossh", so I can just do nossh somehost and it will ignore all key errors. Just understand that you are ignoring security information when you do this, so there is a theoretical risk.
Much of this information is from a talk I gave at Barcamp Bangkok all about fancy SSH tricks. You can see my slides, but I recommend the text version as the S5 slides are kind of buggy. Check out the section called "Forward Anything: Simple Port Forwarding" for info. There is also information on creating a SOCKS5 proxy with OpenSSH. Yes, you can do that. OpenSSH is awesome like that.
(Finally, if you are doing a lot of traversing into the internal network, consider setting up a VPN. It sounds scary, but OpenVPN is quite simple and runs on all OSes. I would say it's overkill just for SSH; but once you start port-forwarding through your port-forwards to get VNC, HTTP, or other stuff happening; or if you have lots of internal hosts to worry about, it can be simpler and more maintainable.)
You can use port forwarding to do this. Take a look here:
http://portforward.com/help/portforwarding.htm
There are instructions on how to set up your router to forward requests on this page:
http://www.portforward.com/english/routers/port_forwarding/routerindex.htm
In Ubuntu, you can install Firestarter and then use its Forward Service feature to forward the SSH traffic from a non-standard port on your machine with external access to port 22 on the machine inside your network.
On OS X you can edit the /etc/nat/natd.plist file to enable port forwarding.
Without messing around with firewall rules, you can set up a ~/.ssh/config file.
Assume 10.1.1.1 is the 'gateway' system and 10.1.1.2 is the 'client' system.
Host gateway
    Hostname 10.1.1.1
    LocalForward 8022 10.1.1.2:22

Host client
    Hostname localhost
    Port 8022
You can open an ssh connection to 'gateway' via:
ssh gateway
In another terminal, open a connection to the client.
ssh client
