Note this is different from How to expose a service running inside a docker container, bound to localhost, which can be addressed in multiple ways on Docker for Linux, say through --net=host or even by bind-mounting a Linux build of the client into the container with -v. My problem is specific to Docker for Mac, so it's not as straightforward.
I have a TCP server binding to localhost:5005 running inside Docker for Mac. (For security reasons, I must not bind to 0.0.0.0:5005.)
I have a TCP client sending requests to this server from my Mac (not inside the docker container).
My question is, how do I make it work?
On Docker for Linux, I would simply use --net=host so the server binds to my host's lo interface, but Docker for Mac runs the daemon inside a managed VM, so the host networking behavior is different.
To illustrate my point:
On MacBook
It simply would not work
[me@MacBook App]$ docker run -v `pwd`:/App -p 127.0.0.1:5005:5005 nitincypher/docker-ubuntu-python-pip /App/server.py
[me@MacBook App]$ ./client.py
Client received data:
On Linux
In comparison, it is trivial on Linux using host network mode, since the container then shares my Linux host's lo interface.
[me@Linux App]$ docker run -v `pwd`:/App --net=host nitincypher/docker-ubuntu-python-pip /App/server.py
Server Connection address: ('127.0.0.1', 52172)
Server received data: Hello, World!
[me@Linux App]$ ./client.py
Client received data: Hello, World!
My Simulated Server Code
Requirement: It MUST bind to localhost, and nothing else. So I cannot change it to 0.0.0.0.
#!/usr/bin/env python
import socket
TCP_IP = 'localhost'
TCP_PORT = 5005
BUFFER_SIZE = 20 # Normally 1024, but we want fast response
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
conn, addr = s.accept()
print 'Server Connection address:', addr
while 1:
    data = conn.recv(BUFFER_SIZE)
    if not data: break
    print "Server received data:", data
    conn.send(data)  # echo
conn.close()
My Simulated Client Code
Requirement: It MUST be run on the MacBook, since the real client is written in C++ and compiled to run only on the MacBook.
#!/usr/bin/env python
import socket
TCP_IP = 'localhost'
TCP_PORT = 5005
BUFFER_SIZE = 1024
MESSAGE = "Hello, World!"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
s.close()
print "Client received data:", data
Here's a working solution. The basic idea is to use SSH tunneling to do the port forwarding.
High Level Idea
You first need to build a docker image that supports SSH access, because
the ubuntu image doesn't ship with sshd out of the box, and
you will need to know the root password of your running container.
Then you spin up your container as you normally would, except that it is based on the new image you created.
You create an SSH tunnel from your MacBook, then run your client on the MacBook as you normally would.
For reference, the command for SSH tunneling can be found here, the process of creating an sshd docker image is explained here, and how to ssh into a docker container is explained here.
Steps
Create a Dockerfile
# Use whatever image you are using on Docker for Linux, say "FROM ubuntu:16.04"
FROM nitincypher/docker-ubuntu-python-pip
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Create a Docker Image from the Dockerfile
[me@MacBook App]$ docker build -t my_ssh_python .
Spin up your server container
[me@MacBook App]$ docker run -d -P -v `pwd`:/App --name myserver my_ssh_python
Start your server inside the container
[me@MacBook App]$ docker exec myserver /App/server.py
Create an SSH tunnel
[me@MacBook App]$ ssh root@`hostname` -p `docker port myserver 22 | awk -F ":" '{print $2}'` -L 5005:localhost:5005 -N
# Password is "screencast", as set in the Dockerfile
Note that
a. You have to use the IP address of your MacBook instead of your docker container's IP address.
b. You connect to the host port that the container's SSH port 22 was mapped to (shown by docker port myserver 22).
c. In the tunnel option -L 5005:localhost:5005, you are saying: forward anything arriving on your MacBook's port 5005 (the first 5005) to localhost:5005 as seen from inside the Docker container.
Now you can use your client locally
[me@MacBook App]$ ./client.py
Client received data: Hello, World!
And on server side, you can see
Server Connection address: ('127.0.0.1', 55396)
Server received data: Hello, World!
Related
I'm trying to use Docker on Windows while connected to a VPN.
When the VPN is not connected, everything works OK.
But when I connect to our corporate VPN using the Cisco AnyConnect client, the network inside the docker container stops working:
docker run alpine ping www.google.com
ping: bad address 'www.google.com'
docker run alpine ping -c 5 216.58.204.36
PING 216.58.204.36 (216.58.204.36): 56 data bytes
--- 216.58.204.36 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
How can I fix this issue and make it work?
My setup is:
Windows 10 Version 1809 (OS Build 17763.1098)
Docker Desktop Community 2.2.0.4 (43472): Engine 19.03.8, Compose 1.25.4, Kubernetes 1.15.5, Notary 0.6.1, Credential Helper 0.6.3
Docker is in Windows containers mode with experimental features enabled (needed to run windows and linux images at the same time)
While my VPN (AnyConnect) was running, I had to run the following from PowerShell (admin mode):
Get-NetAdapter | Where-Object {$_.InterfaceDescription -Match "Cisco AnyConnect"} | Set-NetIPInterface -InterfaceMetric 6000
Actually I did it using Docker Desktop and Hyper-V virtual machines, using OpenConnect, but I think it can be done for most VPN clients with minor adaptations.
The fully explained instructions are here: Docker Desktop, Hyper-V and VPN, with the settings for Docker containers, Windows VMs and Linux VMs.
I created a new internal Virtual Switch (let's call it "Internal") and assigned it a static IP address (let's say 192.168.4.2).
I created a new VM with Ubuntu server and OpenConnect, connected to both the default Virtual Switch and "Internal".
On the OpenConnect VM
Assigned to "Internal" a fixed ip (192.168.4.3)
Added a new tun interface "persistent" telling openconnect to use that tun (adding the "-i tun0" parameter as openconnect start parameter)
sudo ip tuntap add name tun0 mode tun
Installed iptables-persistent so the rules below survive a reboot
Enabled IP forwarding:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Set up the routing:
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i tun0 -j ACCEPT
After connecting the VPN, I permanently added the DNS servers to /etc/resolv.conf
and retrieved the range of addresses used by the VPN (something like 10.*.*.*).
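If you need to check which networks the VPN actually pushed, one way (run on the OpenConnect VM, with tun0 being the interface created above) is:
ip route show dev tun0    # lists the routes the VPN pushed, e.g. a 10.0.0.0/8 block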
On the Docker containers
Added the basic route in the Dockerfile:
RUN route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.4.3
Then, when running the container, I added the DNS servers and gave it NET_ADMIN and SYS_MODULE permissions:
--dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it
--cap-add=NET_ADMIN --cap-add=SYS_MODULE
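Put together, the docker run invocation might look roughly like this (my-image and its command are placeholders; adjust the DNS servers and search domain to your own VPN):
docker run -it \
    --dns 8.8.8.8 --dns 10.1.77.21 --dns 10.4.52.21 --dns-search test.dns.it \
    --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
    my-image    # placeholder image name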
I'm trying to deploy a docker container on a Mac machine. I ran the command:
docker run -P -it clickstream-collector_csapi -c "test_config.yml"
Output:
ts=2020-02-24T17:25:43Z lvl=info msg="Starting Collector"
ts=2020-02-24T17:25:43Z lvl=info msg="Start producer" service=collector
brokers=kafka.dev:9102
ts=2020-02-24T17:25:44Z lvl=info msg="Starting HTTP service"
ts=2020-02-24T17:25:44Z lvl=info msg="Starting server on" addr=0.0.0.0:13425
However I can't open 0.0.0.0:13425 on my Mac; it shows me "This site can't be reached. 0.0.0.0 refused to connect." It looks like my local machine can't see the docker container. I know that Mac has some peculiarities, but I passed -P (as I thought it should be enough). Thanks a lot beforehand.
The docker run -P (capital “P”) option asks Docker to pick a host port. That will almost always be a different port number from the one inside the container. You can print out the port number by using docker ps to find the container ID, and then docker port 0123456789ab to print out the actual port mapping. Once you’ve found the port number, you can use the special hostname localhost or the matching special IP address 127.0.0.1 and that port number to reach your container (not 0.0.0.0, a special address that means “everywhere”).
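A minimal sketch of that lookup (the container ID and the chosen host port below are illustrative; use whatever docker ps and docker port print for your container):
docker ps                            # note the container ID, e.g. 0123456789ab (illustrative)
docker port 0123456789ab 13425       # prints the mapped host port, e.g. 0.0.0.0:32768
curl http://127.0.0.1:32768/         # reach the service via localhost and that port, not 0.0.0.0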
In typical use you’ll explicitly specify both host and container ports with a -p (little “p”) option, and also specify a --name so that you can find the container later.
docker run \
-it \
-p 13425:13425 \
--name clickstream_collector \
clickstream-collector_csapi \
...
I have the following script I'm running in cloud-init on my cloud provider. It grabs a container from another host on my network, starts it, and then attempts to forward a port on the host to the container:
lxc init ...
lxc remote add gateway 10.132.98.1:8099 --accept-certificate --password securpwd
lxc copy gateway:build-slave build-slave
lxc start build-slave
CONTAINER_IP=$(lxc list "build-slave" -c 4 | awk '!/IPV4/{ if ( $2 != "" ) print $2}')
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 2200 -j DNAT --to ${CONTAINER_IP}
The only problem is that there is an arbitrary delay between when lxc start returns and when the IPV4 info is available. My current solution is to add sleep 5s after the lxc start command, but I'm worried that if my server is under load, it might actually be longer than 5 seconds before the container is initialized.
Is there a better solution that doesn't rely on an arbitrary wait period?
As Lawrence pointed out in the comments, LXD provides a "proxy" device that can be set on the container. This way, I don't have to know the container's IP address in order to set up the correct iptables entry; LXD will instead set up the proxy rule for me when the container I specify starts.
I configured this like so:
DROPLET_PUB_IP=$(ip -f inet addr show ens3 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
lxc config device add build-slave ssh-slave proxy listen=tcp:${DROPLET_PUB_IP}:2200 connect=tcp:localhost:22
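To double-check the result, something along these lines should work (the device and container names match the command above; <user> and the droplet's public IP are placeholders, and the test assumes sshd is listening on port 22 inside the container):
lxc config device show build-slave        # shows the ssh-slave proxy device that was just added
ssh -p 2200 <user>@<droplet-public-ip>    # from outside, reach the container's sshd via the proxy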
I use docker on OSX with boot2docker.
I want to get an SSH connection from my terminal into a running container.
But I can't do this :(
I think it's because Docker is running in a virtual machine.
There are several things you must do to enable ssh'ing to a container running in a VM:
install and run sshd in your container (example). sshd is not there by default because containers typically run only one process, though they can run as many as you like.
EXPOSE a port as part of creating the image, typically 22, so that when you run the container, the daemon can map the EXPOSE'd port inside the container to a port on the outside of the container.
When you run the container, you need to decide how to map that port. You can let Docker do it automatically or be explicit. I'd suggest being explicit: docker run -p 42222:22 ... which maps port 42222 on the VM to port 22 in the container.
Add a portmap to the VM to expose the port to your host. e.g. when your VM is not running, you can add a mapping like this: VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
Then from your host, you should be able to ssh to port 42222 on the host to reach the container's ssh daemon.
Here's what happens when I perform the above steps:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,42222,,42222"
$ ./boot2docker start
[2014-04-11 12:07:35] Starting boot2docker-vm...
[2014-04-11 12:07:55] Started.
$ docker run -d -p 42222:22 dhrp/sshd
Unable to find image 'dhrp/sshd' (tag: latest) locally
Pulling repository dhrp/sshd
2bbfe079a942: Download complete
c8a2228805bc: Download complete
8dbd9e392a96: Download complete
11d214c1b26a: Download complete
27cf78414709: Download complete
b750fe79269d: Download complete
cf7e766468fc: Download complete
082189640622: Download complete
fa822d12ee30: Download complete
1522e919ec9f: Download complete
fa594d99163a: Download complete
1bd442970c79: Download complete
0fda9de88c63: Download complete
86e22a5fdce6: Download complete
79d05cb13124: Download complete
ac72e4b531bc: Download complete
26e4b94e5a13b4bb924ef57548bb17ba03444ca003128092b5fbe344110f2e4c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26e4b94e5a13 dhrp/sshd:latest /usr/sbin/sshd -D 6 seconds ago Up 3 seconds 0.0.0.0:42222->22/tcp loving_einstein
$ ssh root@localhost -p 42222
The authenticity of host '[localhost]:42222 ([127.0.0.1]:42222)' can't be established.
RSA key fingerprint is ....
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:42222' (RSA) to the list of known hosts.
root@localhost's password: screencast
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.12.1-tinycore64 x86_64)
* Documentation: https://help.ubuntu.com/
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@26e4b94e5a13:~# exit
logout
So that shows ssh->localhost 42222->VM port 42222->container port 22.
Docker added the docker exec command in Docker 1.3.0. You can connect to a running container using the following:
docker exec -it <container id> /bin/bash
That will connect to a bash prompt on the running container.
If you just want to get into the running container, you may consider using nsenter. Here is a simple bash script (suggested by Chris Jones) that you can use to enter a docker container. Save it somewhere in your $PATH as docker-enter and chmod +x it:
#!/bin/bash
set -e

# Check for nsenter. If not found, install it
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'

# Use bash if no command is specified
args=("$@")
if [[ $# = 1 ]]; then
    args+=(/bin/bash)
fi

boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "${args[@]}"
Then you can run docker-enter 89af3d (or whatever container ID you want to enter).
A slightly modified variant of Michael's answer that just requires the container you want to enter be named (APPNAME):
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'
boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter $(docker ps | grep $APPNAME | awk '{ print $1 }')
I've tested this for an Ubuntu 16.04 image running on a host with the same OS and Docker 18.09.2; it should also work for boot2docker with minor modifications.
Build the image.
Run it in background container (youruser may be root):
$ docker run -ditu <youruser> <imageId>
Attach to it with a shell:
$ docker exec -it <containerId> /bin/bash
Install the openssh-server (sudo only needed if youruser is not root, the command may differ for boot2Docker):
$ sudo apt-get install -y openssh-server
Run it:
$ sudo service ssh start
(The following step is optional: if youruser has a password, you can skip it and provide the password at each ssh connection.)
Create a RSA key on the client host:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/youruser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/youruser/.ssh/id_rsa.
Your public key has been saved in /home/youruser/.ssh/id_rsa.pub.
On the docker image, create a directory $HOME/.ssh:
$ cd
$ mkdir .ssh && cd .ssh
$ vi authorized_keys
Copy and paste the content of $HOME/.ssh/id_rsa.pub on the client machine to authorized_keys on the docker image and save the file.
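As an alternative to pasting by hand, you could push the key in from the client host; a rough sketch, reusing the <youruser> and <containerId> placeholders from the steps above:
# Append the client's public key to youruser's authorized_keys inside the container,
# creating ~/.ssh with the permissions sshd expects.
docker exec -i -u <youruser> <containerId> sh -c \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
    < ~/.ssh/id_rsa.pub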
(End of optional step).
Jot down your container's IP address:
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 63448863ac39
^^^^^^^^^^ this
Now the connection from the client host should be effective:
$ ssh 172.17.0.2
Enter passphrase for key '/home/youruser/.ssh/id_rsa':
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-46-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Fri Apr 5 09:50:30 2019 from 172.17.0.1
Of course you can apply the above procedure non-interactively in your Dockerfile.
I'm trying to use the Docker API to connect to the docker daemon from another machine. I am able to run this command successfully:
docker -H=tcp://127.0.0.1:4243 images
But NOT when I use the real IP address:
docker -H=tcp://192.168.2.123:4243 images
2013/08/04 01:35:53 dial tcp 192.168.2.123:4243: connection refused
Why can't I connect when using a non-local IP?
I'm using a Vagrant VM with the following in Vagrantfile: config.vm.network :private_network, ip: "192.168.2.123"
The following is iptables:
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*filter
:INPUT ACCEPT [1974:252013]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1511:932565]
-A INPUT -p tcp -m tcp --dport 4243 -j ACCEPT
COMMIT
# Completed on Sun Aug 4 01:24:46 2013
# Generated by iptables-save v1.4.12 on Sun Aug 4 01:24:46 2013
*nat
:PREROUTING ACCEPT [118:8562]
:INPUT ACCEPT [91:6204]
:OUTPUT ACCEPT [102:7211]
:POSTROUTING ACCEPT [102:7211]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.16.42.0/24 ! -d 172.16.42.0/24 -j MASQUERADE
I came across a similar issue. One thing I don't see mentioned here is that you need to start docker listening on both the network and a unix socket, since all regular docker (command-line) commands on the host assume the socket.
sudo docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d &
will start docker listening on any IP address on your host, as well as on the typical unix socket.
You need to listen on 0.0.0.0. When you listen on 127.0.0.1, no one outside your host will be able to connect.
Please note that in doing this, you have given anyone (and any URL sent to you by email) access to your Docker API, and thus root permission.
You should, at minimum, secure your socket using HTTPS: http://docs.docker.com/articles/https/
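On current Docker releases the daemon binary is dockerd, and a TLS-protected variant might look roughly like this (the certificate paths are placeholders you create by following the linked HTTPS guide; 2376 is the conventional port for the TLS-protected API):
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock
Clients would then connect with their own certificates, e.g. docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://<host>:2376 images.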
There are 2 ways of configuring the docker daemon port:
1) Configuring it in the /etc/default/docker file:
DOCKER_OPTS="-H tcp://127.0.0.1:5000 -H unix:///var/run/docker.sock"
2) Configuring it in /etc/docker/daemon.json:
{
"hosts": ["tcp://<IP-ADDRESS>:<PORT>", "unix:///var/run/docker.sock"]
}
IP-ADDRESS: any address that is reachable on the host can be used.
Restart the docker service after configuring the port.
The reason for adding both the user port [tcp://127.0.0.1:5000] and the default docker socket [unix:///var/run/docker.sock] is that the user port enables access to the docker API over TCP, whereas the default socket keeps the local CLI working.
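To apply and sanity-check the configuration above (assuming a systemd-based distro and the example port 5000):
sudo systemctl restart docker          # restart the daemon so the new hosts take effect
docker info                            # the CLI still works through the unix socket
curl http://127.0.0.1:5000/version     # the TCP endpoint answers Docker API calls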