I'm trying to use Docker on Windows while connected to a VPN.
When the VPN is not connected, everything works fine.
But when I connect to our corporate VPN using the Cisco AnyConnect client, the network inside Docker containers stops working:
docker run alpine ping www.google.com
ping: bad address 'www.google.com'
docker run alpine ping -c 5 216.58.204.36
PING 216.58.204.36 (216.58.204.36): 56 data bytes
--- 216.58.204.36 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
How can I fix this issue and make it work?
My setup is:
Windows 10 Version 1809 (OS Build 17763.1098)
Docker Desktop Community 2.2.0.4 (43472): Engine 19.03.8, Compose 1.25.4, Kubernetes 1.15.5, Notary 0.6.1, Credential Helper 0.6.3
Docker is in Windows containers mode with experimental features enabled (needed to run Windows and Linux images at the same time).
While my VPN (AnyConnect) was running, I had to run the following from PowerShell (admin mode):
Get-NetAdapter | Where-Object {$_.InterfaceDescription -Match "Cisco AnyConnect"} | Set-NetIPInterface -InterfaceMetric 6000
I actually did this using Docker Desktop and Hyper-V virtual machines, with OpenConnect, but I think it can be done for most VPN clients with minor adaptations.
The fully explained instructions are here: Docker Desktop, Hyper-V and VPN, with the settings for Docker containers, Windows VMs and Linux VMs.
I created a new internal Virtual Switch (let's call it "Internal") and assigned to it a static IP address (let's say 192.168.4.2)
I created a new VM with Ubuntu server and OpenConnect, connected to both the default Virtual Switch and the "Internal"
On the OpenConnect VM
Assigned to "Internal" a fixed ip (192.168.4.3)
Added a new tun interface "persistent" telling openconnect to use that tun (adding the "-i tun0" parameter as openconnect start parameter)
sudo ip tuntap add name tun0 mode tun
Installed iptables-persistent so the rules below survive reboots (a sketch for saving them follows at the end of these steps)
Enabled IP forwarding:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Setup the routing
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j ACCEPT
After connecting the VPN, I permanently added the VPN's DNS servers to resolv.conf,
and noted the VPN's address range (e.g. 10.0.0.0/8).
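For the iptables persistence step, a minimal sketch of saving the rules on Ubuntu (assuming the iptables-persistent package; names and paths may differ on other distributions):
# Save the current IPv4 rules so they are restored at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
# Or use the helper service shipped with iptables-persistent
sudo netfilter-persistent save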
On the Docker containers
Added the basic route in the Dockerfile:
RUN route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.4.3
Then, when running the container, I added the DNS servers and granted the NET_ADMIN and SYS_MODULE capabilities (see the combined example below):
--dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it
--cap-add=NET_ADMIN --cap-add=SYS_MODULE
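Putting those flags together, the docker run invocation might look something like this (the image name and trailing command are placeholders, not from the original setup):
docker run -it \
  --dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it \
  --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
  my-image:latest bash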
I have the following script I'm running in cloud-init on my cloud provider. It grabs a container from another host on my network, starts it, and then attempts to forward a port on the host to the container:
lxc init ...
lxc remote add gateway 10.132.98.1:8099 --accept-certificate --password securpwd
lxc copy gateway:build-slave build-slave
lxc start build-slave
CONTAINER_IP=$(lxc list "build-slave" -c 4 | awk '!/IPV4/{ if ( $2 != "" ) print $2}')
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 2200 -j DNAT --to ${CONTAINER_IP}
The only problem is that there is an arbitrary delay between when lxc start returns and when the IPV4 info is available. My current solution is to add sleep 5s after the lxc start command, but I'm worried that if my server is under load, it might actually be longer than 5 seconds before the container is initialized.
Is there a better solution that doesn't rely on an arbitrary wait period?
As Lawrence pointed out in the comments, LXD provides a "proxy" device that can be set on the container. This way, I don't have to know the container's IP address in order to set up the correct iptables entry; LXD sets up the proxy rule for me when the container I specify starts.
I configured this like so:
DROPLET_PUB_IP=$(ip -f inet addr show ens3 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
lxc config device add build-slave ssh-slave proxy listen=tcp:${DROPLET_PUB_IP}:2200 connect=tcp:localhost:22
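For reference, the cloud-init sequence from the question could then look roughly like this (a sketch reusing the remote, password, interface and ports from above):
DROPLET_PUB_IP=$(ip -f inet addr show ens3 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
lxc remote add gateway 10.132.98.1:8099 --accept-certificate --password securpwd
lxc copy gateway:build-slave build-slave
# The proxy device makes LXD forward host port 2200 to the container's SSH port
# as soon as the container is running, so no sleep or IP lookup is needed
lxc config device add build-slave ssh-slave proxy listen=tcp:${DROPLET_PUB_IP}:2200 connect=tcp:localhost:22
lxc start build-slave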
Note this is different from How to expose a service running inside a docker container, bound to localhost, which can be addressed in multiple ways on Docker for Linux, say through --net=host or even by bind-mounting my Linux client into the container with -v. My problem is specific to Docker for Mac, so it's not as straightforward.
I have a TCP server binding to localhost:5005 running inside Docker for Mac. (For security reasons, I must not bind to 0.0.0.0:5005.)
I have a TCP client sending request to this server from my Mac (not inside the docker container).
My question is, how do I make it work?
With Docker on Linux, I would simply use --net=host so the server binds to my host's lo interface, but Docker for Mac runs the engine in a managed VM, so host networking behaves differently there.
To illustrate my point:
On MacBook
It simply does not work:
[me@MacBook App]$ docker run -v `pwd`:/App -p 127.0.0.1:5005:5005 nitincypher/docker-ubuntu-python-pip /App/server.py
[me@MacBook App]$ ./client.py
Client received data:
On Linux
In comparison, it is trivial on Linux using host network mode, since the container shares my Linux host's lo interface.
[me@Linux App]$ docker run -v `pwd`:/App --net=host nitincypher/docker-ubuntu-python-pip /App/server.py
Server Connection address: ('127.0.0.1', 52172)
Server received data: Hello, World!
[me@Linux App]$ ./client.py
Client received data: Hello, World!
My Simulated Server Code
Requirement: It MUST bind to localhost, and nothing else. So I cannot change it to 0.0.0.0.
#!/usr/bin/env python
import socket
TCP_IP = 'localhost'
TCP_PORT = 5005
BUFFER_SIZE = 20 # Normally 1024, but we want fast response
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
conn, addr = s.accept()
print 'Server Connection address:', addr
while 1:
    data = conn.recv(BUFFER_SIZE)
    if not data: break
    print "Server received data:", data
    conn.send(data)  # echo
conn.close()
My Simulated Client Code
Requirement: It MUST be run on the MacBook, since the real client is written in C++ and compiled to run only on the MacBook.
#!/usr/bin/env python
import socket
TCP_IP = 'localhost'
TCP_PORT = 5005
BUFFER_SIZE = 1024
MESSAGE = "Hello, World!"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
s.close()
print "Client received data:", data
Here's a working solution. The basic idea is to use SSH tunneling to do the port forwarding.
High Level Idea
You first need to build a Docker image that supports SSH access, because
the ubuntu image doesn't ship with sshd out of the box, and
you will need to know the root password of your running container.
Then you spin up your container as you normally would, except that you base it on the new image you created.
You create an SSH tunneling session from your MacBook, then run your client on the MacBook as you normally would.
For reference, the command for SSH tunneling can be found here, the process of creating an sshd Docker image is explained here, and how to ssh into a Docker container is explained here.
Steps
Create a Dockerfile
#Use whatever image you are using on Docker Linux , say "FROM ubuntu:16.04"
FROM nitincypher/docker-ubuntu-python-pip
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Create a Docker Image from the Dockerfile
[me@MacBook App]$ docker build -t my_ssh_python .
Spin up your server container
[me@MacBook App]$ docker run -d -P -v `pwd`:/App --name myserver my_ssh_python
Start your server inside the container
[me@MacBook App]$ docker exec myserver /App/server.py
Create a SSH tunnel
[me@MacBook App]$ ssh root@`hostname` -p `docker port myserver 22 | awk -F ":" '{print $2}'` -L 5005:localhost:5005 -N
#The password is "screencast", as set in the Dockerfile
Note that
a. You have to use the IP address of your MacBook, not your Docker container's IP address.
b. You use the host port that the container's default SSH port 22 is mapped to.
c. In the tunnel -L 5005:localhost:5005, you are saying: forward anything arriving at your MacBook's port 5005 (the first 5005) to localhost port 5005 inside the Docker container.
Now you can use your client locally:
[me@MacBook App]$ ./client.py
Client received data: Hello, World!
And on the server side, you can see:
Server Connection address: ('127.0.0.1', 55396)
Server received data: Hello, World!
I have MySQL running locally on my host machine and for reasons™ I can't run it inside of my Vagrant machine. I know that there's a way to address this issue with iptables by forwarding all traffic to 3306 on the guest to the host's IP address and port, but this complicates things a lot for me as I'll have to play around with iptables rules and probably get into TCP masquerading, which would be nice to avoid.
Is there a way in Vagrant (VirtualBox VM) to forward a host TCP port to the guest so that the guest can access 127.0.0.1:3306 and have all traffic forwarded to host:3306 seamlessly? If not, how exactly would I set this up in iptables?
According to this answer, Docker provides a way to do this natively without having to screw around with iptables rules. Do VirtualBox and Vagrant provide a way to mimic this functionality?
I have two solutions: one involving iptables hackery and a more straightforward one using SSH.
Tunnel a Host Port to the Guest over SSH
When connecting to the guest using vagrant ssh, pass the port along as an argument:
vagrant ssh -- -R 3306:localhost:3306
This makes the guest listen on port 3306 and forward it back to port 3306 on the host, so from inside the guest, 127.0.0.1:3306 reaches the host's MySQL.
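To check that the forward works, run something like this from inside the guest; it should reach the host's MySQL (assuming the mysql client is installed and you have valid credentials):
# Inside the guest: 127.0.0.1:3306 is now tunnelled back to the host's MySQL
mysql -h 127.0.0.1 -P 3306 -u root -p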
iptables Hackery
We can use iptables on the guest to forward all traffic to a local port on the guest to a remote port on the host. The host and guest need more or less static IP addresses relative to each other for this to keep working, and we'll also need to open a port on the host's firewall so the guest can get through.
Give the Guest a Static IP
In your Vagrantfile, set a static IP address for the guest:
config.vm.network "private_network", ip: "10.10.10.10"
Now, when you hit 10.10.10.10, you'll always* be hitting your guest.
Configure iptables in the Guest
Found in this awesome answer in Server Fault:
$ remote_ip=10.0.2.2
$ mysql_port=3306
$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $mysql_port \
-j DNAT --to $remote_ip:$mysql_port
$ sudo iptables -N INET-PRIV
$ sudo iptables -A FORWARD -i eth0 -o eth1 -j INET-PRIV
$ sudo iptables -A FORWARD -j DROP
$ sudo iptables -A INET-PRIV -p tcp -d $remote_ip --dport $mysql_port \
-j ACCEPT
$ sudo iptables -A INET-PRIV -j DROP
Then, enable port forwarding:
$ echo "1" | sudo tee /proc/sys/net/ipv4/ip_forward
First, test it out, then when you're sure it works, run:
$ sudo iptables-save
I'm not sure that /proc/sys/net/ipv4/ip_forward will remember settings on boot, so you might want to add that to a startup script.
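A sketch of making both survive a reboot on a Debian/Ubuntu guest (note that iptables-save on its own only prints the rules; the iptables-persistent package restores a saved rules file at boot):
# Persist IP forwarding across reboots
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Persist the iptables rules
sudo apt-get install -y iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'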
Which Should I Use?
SSH is definitely easier to do, but there's a bit of a performance overhead of having to encrypt that port's traffic and forward it back to the host.
iptables feels like black magic, but once you get it working, it's really nice and fairly seamless.
Port forwarding (using the NAT network backend) doesn't fit this use case well: Vagrant's forwarded ports map host ports into the guest, whereas here the guest needs to reach a service on the host.
In your use case, a Public Network (Bridged Networking) is a better choice. Create a 2nd network in the Vagrantfile and do a vagrant reload.
Vagrant.configure("2") do |config|
config.vm.network "public_network"
end
Basically this will add an extra virtual NIC to the VM, which will get an IP from the same DHCP server as the rest of your network. Get its IP by using ifconfig -a or ip addr.
The host <=> VM will then be able to communicate, and the VM should be able to connect to MySQL running on the host via port 3306.
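To confirm the guest can actually reach the host's MySQL over the bridged network, a quick check from inside the VM (192.168.1.10 stands in for whatever LAN address your host has):
# MySQL on the host must be listening on that interface, not only on 127.0.0.1
nc -vz 192.168.1.10 3306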
HTH
I'm trying to use the Docker API to connect to the Docker daemon from another machine. I am able to do this command successfully:
docker -H=tcp://127.0.0.1:4243 images
But NOT when I use the real IP address:
docker -H=tcp://192.168.2.123:4243 images
2013/08/04 01:35:53 dial tcp 192.168.2.123:4243: connection refused
Why can't I connect when using a non-local IP?
I'm using a Vagrant VM with the following in Vagrantfile: config.vm.network :private_network, ip: "192.168.2.123"
The following is iptables:
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*filter
:INPUT ACCEPT [1974:252013]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1511:932565]
-A INPUT -p tcp -m tcp --dport 4243 -j ACCEPT
COMMIT
# Completed on Sun Aug 4 01:24:46 2013
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*nat
:PREROUTING ACCEPT [118:8562]
:INPUT ACCEPT [91:6204]
:OUTPUT ACCEPT [102:7211]
:POSTROUTING ACCEPT [102:7211]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.16.42.0/24 ! -d 172.16.42.0/24 -j MASQUERADE
I came across a similar issue; one thing I don't see mentioned here is that you need to start Docker listening on both the network and the Unix socket. All regular docker (command-line) commands on the host assume the socket.
sudo docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d &
This will start Docker listening on any IP address on your host, as well as on the typical Unix socket.
You need to listen on 0.0.0.0. When you listen on 127.0.0.1, no one outside your host will be able to connect.
Please note that in doing this you have given anyone who can reach that port (or trick your browser into reaching it, e.g. via a URL sent to you by email) access to your Docker API, and thus root permission.
You should, at minimum, secure your socket using HTTPS: http://docs.docker.com/articles/https/
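As a rough illustration of the TLS-protected variant (the certificate paths are placeholders, the daemon binary is dockerd on current releases, and the certificates are generated as described in the linked docs):
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock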
There are two ways to configure the Docker daemon port:
1) In the /etc/default/docker file:
DOCKER_OPTS="-H tcp://127.0.0.1:5000 -H unix:///var/run/docker.sock"
2) In /etc/docker/daemon.json:
{
"hosts": ["tcp://<IP-ADDRESS>:<PORT>", "unix:///var/run/docker.sock"]
}
IP-ADDRESS - any address that is reachable from the client machines can be used.
Restart the docker service after configuring the port.
The reason for adding both the user-defined TCP port [tcp://127.0.0.1:5000] and the default Docker socket [unix:///var/run/docker.sock] is that the TCP port enables access to the Docker API from the network, whereas the default socket keeps the local CLI working.
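Once the daemon is listening on the TCP port, you can sanity-check it from another machine, for example with the address and port from the question:
# Through the docker CLI...
docker -H tcp://192.168.2.123:4243 images
# ...or directly against the remote API
curl http://192.168.2.123:4243/version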
I have a computer (an Ubuntu server). The server has internet access and shares it with the local network. One computer on the local network runs an FTP server. How can I access this FTP server from the internet?
A good site to take a look at: Linux Firewalls using iptables
iptables -t nat -A PREROUTING -p tcp -i ppp0 --dport 21 -j DNAT --to 192.168.1.1:21
ppp0 = the internet-facing (WAN) interface
192.168.1.1 = the server that provides FTP on your LAN
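Depending on your existing FORWARD policy, you may also need to let the DNATed traffic through, and FTP's extra data connections need the conntrack helper to work; a sketch, assuming the LAN host is 192.168.1.1 as above:
# Allow the DNATed FTP control connection to be forwarded to the LAN host
iptables -A FORWARD -i ppp0 -p tcp -d 192.168.1.1 --dport 21 -j ACCEPT
# Track FTP data connections so active/passive transfers also work
modprobe nf_conntrack_ftp
modprobe nf_nat_ftp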