Docker on macOS: expose container ports to the host machine

At work I use Docker, and the option --net=host works like a charm, forwarding the container ports straight to the machine. This allows me to add Grunt tasks that use certain ports, for example:
A task serving my coverage report on port 9001
A locally deployed version of my app served on port 9000
A live-reload watch task on port 35729
The unit-test runner on port 9876
When I started using Docker on Mac, the first problem I ran into was that the --net=host option no longer works.
I did some research and I understand why this is not possible (Docker on Mac runs in its own virtual machine). My stopgap solution is to use the -p option to expose the ports, but this limits how many port-using tasks I can add, because I need to pass an explicit -p flag for every port I want to expose.
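For illustration, the current workaround looks roughly like this (same ports as the task list above; the image name is just a placeholder):
docker run -p 9000:9000 -p 9001:9001 -p 35729:35729 -p 9876:9876 my-dev-image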
Has anyone else run into this? How do you deal with it?

Your issue is most probably that you are using Docker Toolbox, dinghy/dlite, or anything else that provides a full-fledged Linux VM, which then hosts Docker and runs your containers inside that VM. This VM has, of course, its own network stack and its own IP on the host, and that is where your tools run into trouble: the exposed ports of the container are not exposed on the OSX host's localhost, but rather on the Docker VM's IP.
To solve those issues elegantly
Expose ports to OSX localhost from the container
First, install and use Docker for Mac (https://docs.docker.com/engine/installation/mac/) instead of Docker Toolbox or the others. It is based on a special xhyve stack which reuses your host's network stack.
When you now do docker run -p 3306:3306 percona, it will bind 3306 on the OSX host's localhost, so every other OSX tool trying to attach to localhost:3306 will work (very useful), just as you were used to when you installed MySQL with brew install mysql or the like.
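For example, a quick check from the Mac could look like this (assuming a MySQL client is installed on the host; the password is only an example):
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret percona
mysql -h 127.0.0.1 -P 3306 -u root -p    # reaches the container through the published port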
If you experience performance issues with code shares on OSX with Docker containers, check http://docker-sync.io - it is compatible with Docker for Mac (hint: I am biased on this one).
Export ports from the OSX host to a container
You do not really export anything in particular; rather, you make all ports of the OSX host's localhost accessible as a whole from all containers.
If you want to attach, from within a container, to a port you offered on the OSX host - e.g. during an Xdebug session where your IDE listens on port 9000 on the OSX host's localhost and the container running FPM/PHP should attach to this osx-localhost:9000 on the Mac - you need to do this: https://gist.github.com/EugenMayer/3019516e5a3b3a01b6eac88190327e7c
So you create a dummy loopback IP, which lets you access your OSX host ports from within containers using 10.254.254.254:9000 - this is portable and basically gives you everything you need to develop the way you are used to.
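The gist essentially boils down to adding an alias address to the loopback interface on the Mac, roughly like this (10.254.254.254 is just the convention used there; any otherwise unused address works):
sudo ifconfig lo0 alias 10.254.254.254
# inside a container, the IDE listening on the Mac's port 9000 is now reachable as 10.254.254.254:9000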
So the first gives apps running on the Mac that try to connect to localhost:port connectivity to container-exposed ports,
and the second does the inverse, for when something in the container wants to attach to a port on the host.

One workaround, mentioned in "Bind container ports to the host", would be to use -P:
(or --publish-all=true|false) with docker run. This is a blanket operation that identifies every port with an EXPOSE line in the image's Dockerfile or an --expose <port> command-line flag, and maps it to a host port somewhere within an ephemeral port range.
The docker port command then needs to be used to inspect the created mappings.
So if your app can use docker port <CONTAINER> to retrieve the mapped port, you can add as many containers as you want and get the mapped ports that way (without needing an "explicit -p command for each port").
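As an illustrative sketch (the image name and the ephemeral host port are only examples):
docker run -d -P --name web nginx    # publish every EXPOSEd port to a random host port
docker port web                      # prints e.g.: 80/tcp -> 0.0.0.0:32768
docker port web 80                   # prints only the mapping for container port 80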

I am not sure whether Docker for Mac will support bi-directional connections between host and containers later: https://forums.docker.com/t/will-docker-for-mac-support-bi-directional-connection-between-host-and-container-in-the-future/19871
I have two solutions:
You can write a simple wrapper script and pass the ports you want to expose to the script (see the sketch after this list).
Use Vagrant to start a virtual machine with the network under your control.
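A minimal sketch of such a wrapper, assuming every listed port should simply be published 1:1 (script name and arguments are made up):
#!/bin/sh
# run-with-ports.sh - turn a space-separated port list into -p flags for docker run
# usage: ./run-with-ports.sh "9000 9001 35729 9876" my-image
PORTS=$1
shift
FLAGS=""
for PORT in $PORTS; do
  FLAGS="$FLAGS -p $PORT:$PORT"
done
exec docker run $FLAGS "$@"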

Related

Docker Desktop on Mac issue with ssh to centos container on localhost

I know there are similar questions on SO, but many of the suggestions have not worked for me. I'm running Docker Desktop for Mac, and I start up a Docker container I've built that has ssh configured and running (I use these to connect to AWS, Azure, etc.). I start the container with something like the following (the ubc/jlbase/jlbase image has ssh configured, and all of this works on a Linux machine with the docker0 network in place):
docker run -P --name test -d ubc/jlbase/jlbase
docker inspect test |grep IP
ping -c *the_ip_from_above*
does not connect. From what I can find, this is a known issue with Docker on Mac... but the help and links I've found don't seem to solve the problem. Can someone tell me what I've missed?
You could say that this is a known feature of Docker on Mac, not an issue. Docker on Mac runs in a virtual machine inside macOS, so the IP address you receive is the IP of the container inside the VM, not an address on macOS.
To address the two issues from the question:
How to enable ssh
To be able to ssh into your container, you need sshd running in the container and you need to publish port 22. Check here to see how you can try this with a container that is already prepared.
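As a rough sketch (the image name is a placeholder for any image that runs sshd):
docker run -d -p 2222:22 --name sshtest some-image-with-sshd
ssh -p 2222 root@localhost    # from the Mac, through the published port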
How to ping
Since Docker is running inside a VM, to be able to route traffic to the containers you will need to set up the network layer to route that traffic. One approach is to create a tunnel between the VM and the machine.
This is a much more complex setup and will require the help of a CNF (Containerized Network Function). One of the simplest CNFs created just for this problem is soctun, which creates a tunnel between the host and the Docker network layer.

Connect to a MariaDB Docker container in its own Docker network remotely

Hi, what I am actually trying to do is connect remotely from a MySQL client in the Windows Subsystem for Linux with mysql -h 172.18.0.2 -P 3306 -u root -p, after having started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which is supposed to communicate with this db. But whenever I try to connect from the built-in mysql client in my Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
What I have already tried in order to solve the problem on my own is to check whether the bind-address line in the configuration file is commented out, but that doesn't help. Interestingly, I had already managed to set up a Docker container with MariaDB and connect from the outside; now that I try exactly the same thing, the only difference being that the container is in its own existing network, it won't work.
Hopefully there is someone out there who is able to help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently: it is by design, container isolation. Usually only the main host (e.g. the httpd service) is accessible externally, hiding the internal connections (the hosts it talks to in order to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (e.g. 172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped port ("-p 3306:3306"), i.e. 3306. But of course that won't work if several running db containers map to the same host port.
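In this case that means connecting like this (credentials taken from the run command in the question):
mysql -h 127.0.0.1 -P 3306 -u root -p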
Isolation is done using the firewall, iptables. You can list the rules (iptables -L) to see that, from the Docker host level.
You can modify firewall to allow external access to internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible using the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to run this after every Docker start - the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping; check it from the Docker host (the Linux subsystem in this case) first, and from the Windows cmd later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
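The script itself (the path comes from the unit file line above) only needs to re-apply the rule, roughly:
#!/bin/sh
# /etc/iptables/accept172_16.sh - re-applied on every docker.service start
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT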
For Docker on an external host (shared in the LAN) you should use route add (or the hosts file on your machine or router) to forward 172.x.x.x addresses to the LAN Docker host.
Hint: use the Portainer project (with restart policy: always) to manage Docker containers. It makes config errors easier to spot, too.

How to access a port on the host machine when running docker container on MacOS with --network=host?

I have set up a couple of containers that interact with each other. The main application container runs with --network=host because it queries several MySQL containers running on different ports exposed on the host network.
I am trying to hit the application on the host but get an error:
curl: (7) Failed to connect to 0.0.0.0 port 36081: Connection refused
I am working with Docker installed on macOS.
I have read several questions indicating that Docker on macOS runs in a VM. But what is the workaround to access the application from the host? Is there any way to get the IP of said VM?
You cannot use --network=host on Mac to connect via host ports, but binding to a host port using the -p option works.
https://docs.docker.com/docker-for-mac/networking/#/there-is-no-docker0-bridge-on-osx
I WANT TO CONNECT TO A CONTAINER FROM THE MAC
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the host.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
For your use case,
You need to create a Docker network and attach both the DB and application containers to it. The containers will then be able to talk to each other by name. You can also publish the application container's port so that you can access it from your host.
https://docs.docker.com/network/bridge/
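A rough sketch of that setup (network, image and container names are placeholders; 36081 is the port from the question):
docker network create app-net
docker run -d --network app-net --name db1 -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --network app-net --name app -p 36081:36081 my-app-image
# the app reaches the database as db1:3306; the host reaches the app on localhost:36081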
Instead of creating the network, attaching the containers to it, and so on manually, you can use docker-compose.
https://docs.docker.com/compose/

Docker Mac alternative to --net=host

According to the docker documentation here
https://docs.docker.com/network/host/
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
On Mac what alternatives do people use?
My scenario
I want to run a docker container that'll host a micro-service
The micro-service has dependencies upon databases that I'm also running via docker
I thought I'd be able to use --net=host on Mac when running the micro-service
But the micro-service port is not exposed
I can override the db addresses (they default to localhost) on the microservice.
But that involves robust --env usage
What's the simplest / most elegant solution?
The simplest and most elegant solution is to use a named Docker bridge network.
You can create a custom bridge network (default is bridge) like this:
docker network create my-network
Every container deployed inside this network can communicate with the others by using the container name.
$ docker run --network=my-network --name my-app ...
$ docker run --network=my-network --name my-database...
In the example above, you can connect to your database from inside your application by using my-database:port. If the container port is exposed in the Dockerfile, you don't need to map it on your host and you can keep all communication internal to your custom Docker bridge network.
In most cases the application's port is mapped (example: -p 80:80), so localhost:80 is mapped to container:80 and you can access the app on your localhost. If the app needs to communicate with a db, you don't need to expose the db's port and you don't have to map it to localhost, as explained above.
Just keep the communication between app and db internal in your custom bridge network.

Map port of Elasticsearch in Docker

I want to start an Elasticsearch container in Docker. By default I see nearly everywhere something like:
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
Now my question is: why are we mapping the port onto our host network? I understand port mapping, but I don't see the big advantage of it.
In my opinion I would always do something like this:
$ docker network create logging
20aa4c7bf2d8289d8cbd485c3e384f9371eed87204625998687c61e4bad27f14
$ docker run -d --name es --net logging docker.elastic.co/elasticsearch/elasticsearch:5.5.1
And then connect to ES by using its name (es in this case) and deploying the other containers in the same network. I would think my ES is more secure in its private Docker network.
I see there is an advantage to port mapping when the containers that need to connect to Elasticsearch aren't in the same network. But are there other advantages, or why is this always shown with port mapping?
Host mapping is more about accessibility. If you are running Docker on your local machine and you want to access the app only on that machine, then host mapping is not needed.
If you need to access the app from an external computer other than your Docker host, then you do need that port mapping.
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
This maps host port 9200 (left side) to port 9200 inside the container (right side). The listening interface is 0.0.0.0, which means all interfaces, and hence it is accessible to anyone who has access to this machine.
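For example (the host IP is a placeholder):
curl http://localhost:9200           # from the docker host itself
curl http://<docker-host-ip>:9200    # from any other machine that can reach the host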
If you want to make it more secure, you do it like below:
docker run -d -p 127.0.0.1:9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
This would listen on localhost only, so only you can access it on the machine itself. But if you need to access it from somewhere else, you would use an SSH tunnel:
ssh -L 9200:127.0.0.1:9200 <user>@<HOSTIP>
And on your machine you can then access it at 127.0.0.1:9200.
The next level of security is added when you use a firewall like ufw, firewalld, etc.
What you did with the network command
docker network create logging
basically creates a new network and keeps other Docker containers on the host from accessing it. But as far as external accessibility is concerned, you still need to map it to a host port.
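In other words, the two approaches combine; a sketch along the lines of your own example:
docker network create logging
docker run -d --name es --net logging -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.5.1
# containers in the "logging" network reach it as es:9200, external clients via <host>:9200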
Hope this answers your question
