Docker UCP DTR used ports - docker-ucp

When trying to install UCP and DTR, I see in the installation requirements that a series of ports need to be opened. On the other hand, the install is just a container to run.
So, why do we have to open some ports?
In the docker run command I didn't see any port mapping (host/container), so how can we access the UCP web UI?
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp install
PS: docker-ee version: 18.03.1-ee-1

The set of ports that UCP (Universal Control Plane) and DTR (Docker Trusted Registry) will check for availability during the installation is listed here.
These ports will be used by Docker EE components, Swarm and external users connecting to the platform.
Why do we have to open some ports?
On some operating systems, a software firewall is active and running by default.
It follows that the firewall process blocks all networking traffic and makes the installation fail. So you have to configure this firewall process specifically, or in some very specific cases you can also deactivate it. CentOS is an example of such an OS.
In completely isolated environments and offline installations of UCP, you usually run the following commands to deactivate firewalld on CentOS:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
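If the hosts are not completely isolated and you want to keep firewalld running, you can open the required ports instead of disabling the firewall. A minimal sketch with firewall-cmd; the ports shown here are only illustrative examples, so check the UCP/DTR requirements page for the exact list your version needs:
# open a few of the ports UCP typically needs, then reload the rules
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload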
How can we access the UCP web UI?
After a successful installation of UCP, you can access the UCP web UI using the IP address of any of your Swarm managers. For example, open a tab in your web browser and type https://ip-of-a-swarm-manager. Any HTTP traffic will be redirected to HTTPS.
In case you put a load balancer in front of your Swarm managers, use the VIP of your load balancer as the IP.
Very good materials about architecting and installing UCP and DTR can be found on
https://success.docker.com/; for example this reference architecture for Docker EE 17.06.

Related

Connect to a MariaDB Docker container in its own Docker network remotely

Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux with mysql -h 172.18.0.2 -P 3306 -u root -p, and before that I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which is supposed to communicate with this db. But whenever I try to connect from my built-in mysql client in Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
What I have already tried in order to solve the problem on my own is to check whether the configuration file line (bind-address) is commented out. But it won't work. Interestingly, I had already managed to set up a Docker container with MariaDB and connect from the outside; but now, when I try exactly the same thing with the only difference being that the container is in its own network, it won't work.
Hopefully there is someone out there who is able to help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently - this is by design: container isolation. Usually only the main host (e.g. the one running the httpd service) is accessible externally, hiding the internal connections (the hosts it communicates with to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (172.19.0.0/16).
Your container should be accessible on the docker host address (127.0.0.1 if run locally) and the mapped ("-p 3306:3306") port - 3306. But of course it won't work if several running db containers map to the same host port.
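As a quick check from the docker host, connecting through the published port instead of the container IP should work; this sketch reuses the credentials from the question:
mysql -h 127.0.0.1 -P 3306 -u root -p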
Isolation is done using a firewall - iptables. You can list the rules (iptables -L) to see that - from the docker host level.
You can modify firewall to allow external access to internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible using the internal address 172.18.0.2 and the source (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to run this after every docker start - the rules are created/modified by docker on the fly;
not every docker container will respond to ping; check it from the docker host (the Linux subsystem in this case) first, and from the Windows cmd later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
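The script itself is not shown in the original answer; a minimal sketch of what it could contain is just the rule from above:
#!/bin/sh
# /etc/iptables/accept172_16.sh - re-apply the DOCKER chain rule after the daemon starts
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT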
For docker on an external (LAN-shared) host you should use route add (or the hosts file on your machine or router) to forward 172.x.x.x addresses to the LAN docker host.
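As a hypothetical example, with 192.168.1.10 standing in for the docker host's LAN IP, the route for the question's 172.18.0.0/16 network could be added like this:
route add 172.18.0.0 mask 255.255.0.0 192.168.1.10    (Windows cmd, as administrator)
sudo ip route add 172.18.0.0/16 via 192.168.1.10      (Linux)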
Hint: use the Portainer project (with restart policy: always) to manage docker containers. It makes config errors easier to spot, too.

How to access a port on the host machine when running a docker container on MacOS with --network=host?

I have set up a couple of containers that interact with each other. The main application container runs with --network=host because it queries several MySQL containers running on different ports exposed on the host network.
I am trying to hit the application on the host but get an error:
curl: (7) Failed to connect to 0.0.0.0 port 36081: Connection refused
I am working on Docker installed on MacOS.
I have read several questions that indicate that docker on MacOS runs in a VM. But what is the workaround to access the application from the host? Is there any way to get the IP of said VM?
You cannot use --network=host on Mac to connect via host ports, but binding to a host port using the -p option works.
https://docs.docker.com/docker-for-mac/networking/#/there-is-no-docker0-bridge-on-osx
I WANT TO CONNECT TO A CONTAINER FROM THE MAC
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the host.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
For your use case,
you need to create a docker network and attach both the DB and application containers to it. The containers will then be able to talk to each other by name. You can also publish the application container's port so that you can access it from your host.
https://docs.docker.com/network/bridge/
Instead of manually creating the network, attaching the containers to it, and so on, you can use docker-compose.
https://docs.docker.com/compose/
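A minimal docker-compose.yml sketch of that setup - the service names, image, and port numbers below are illustrative, not taken from the question:
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  app:
    build: .
    ports:
      - "8080:8080"   # published so the host can reach the app
    depends_on:
      - db            # the app reaches the database at hostname "db"
With this file, docker-compose up creates the network and attaches both containers to it automatically.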

Docker Mac alternative to --net=host

According to the docker documentation here
https://docs.docker.com/network/host/
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
On Mac what alternatives do people use?
My scenario
I want to run a docker container that'll host a micro-service
The micro-service has dependencies upon databases that I'm also running via docker
I thought I'd be able to use --net=host on Mac when running the micro-service
But the micro-service port is not exposed
I can override the db addresses (they default to localhost) on the microservice.
But that involves heavy --env usage
What's the simplest / most elegant solution?
The simplest and most elegant solution is to use a named docker bridge network.
You can create a custom bridge network (the default one is named bridge) like this:
docker network create my-network
Containers deployed inside this network can communicate with each other by using the container name.
$ docker run --network=my-network --name my-app ...
$ docker run --network=my-network --name my-database...
In the example above you can connect to your database from inside your application by using my-database:port. If the container port is exposed in the Dockerfile, you don't need to map it on your host, and you can keep all your communication internal to your custom docker bridge network.
In most cases the application's port is mapped (example: -p 80:80), so localhost:80 is mapped to container:80 and you can access the app on your localhost. If the app needs to communicate with a db, you don't need to expose the port of the db and you don't have to map it on localhost, as already explained above.
Just keep the communication between app and db internal in your custom bridge network.
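A quick way to verify the in-network name resolution, assuming the my-database container from the example above is running (this busybox check is an addition, not part of the original answer):
docker run --rm --network=my-network busybox nslookup my-database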

Docker: MacOSX Expose Container ports to host machine

In my job I work with docker, and the option --net=host works like a charm, forwarding the docker container ports to the machine. This allows me to add grunt tasks that use certain ports, for example:
A task for serving my coverage report on port 9001
A locally deployed version of my app served on port 9000
A watch live-reload on port 35729
A unit-test runner on port 9876
When I began to use Docker on Mac, the first problem I had was that the option --net=host doesn't work anymore.
I researched and I understand why this is not possible (Docker on Mac runs in its own virtual machine), and my interim solution is to use the -p option to expose the ports, but this limits me as I add more and more tasks that use ports, because I need to pass an explicit -p flag for each port I need to expose.
Has anyone had this same problem? How do you deal with it?
Your issue is most probably that you are using dockertoolbox or dhingy/dlite or anything else providing a full-fledged Linux VM, which then hosts docker to run your containers inside this VM. This VM has, of course, its own network stack and its own IP on the host, and that is where your tools run into issues. The exposed ports of the container are not exposed on the OSX host's localhost, but rather on the OSX Docker-VM IP.
To solve those issues elegantly:
Expose ports to OSX localhost from the container
First, use/install docker-for-mac (https://docs.docker.com/engine/installation/mac/) instead of dockertoolbox or others. It's based on a special xhyve stack which reuses your host's network stack.
When you now do docker run -p 3306:3306 percona, it will bind 3306 on the OSX host's localhost, so every other OSX tool trying to attach to localhost:3306 will work (very useful), just as you were used to when you installed mysql using brew install mysql or the like.
If you experience performance issues with code shares on OSX with docker containers, check http://docker-sync.io - it is compatible with docker-for-mac (hint: I am biased on this one).
Export ports from the OSX host to a container
You do not really export anything in particular; rather, you make them accessible as a whole from all containers (all ports of the OSX host's localhost).
If you want to attach, from within a container, to a port you offered on the OSX host - e.g. during an xdebug session where your IDE listens on port 9000 on the OSX host's localhost and the container running FPM/PHP should attach to this osx-localhost:9000 - you need to do this: https://gist.github.com/EugenMayer/3019516e5a3b3a01b6eac88190327e7c
So you create a dummy loopback IP, and you can then access your OSX host's ports from within containers using 10.254.254.254:9000 - this is portable and basically gives you all you need to develop the way you are used to.
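The core of that loopback trick is usually an alias on lo0; a minimal, non-persistent sketch (the gist linked above adds the pieces needed to survive reboots):
sudo ifconfig lo0 alias 10.254.254.254
After this, a process listening on the Mac's port 9000 is reachable from containers at 10.254.254.254:9000.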
So the first gives apps running on the Mac connectivity to container-exposed ports when they try to connect to localhost:port,
and the second is the inverse, for when something in the container wants to attach to a port on the host.
One workaround, mentioned in "Bind container ports to the host" would be to use -P:
(or --publish-all=true|false) to docker run which is a blanket operation that identifies every port with an EXPOSE line in the image’s Dockerfile or --expose <port> commandline flag and maps it to a host port somewhere within an ephemeral port range.
The docker port command then needs to be used to inspect the created mappings.
So if your app can use docker port <CONTAINER> to retrieve the mapped port, you can add as many containers as you want and get the mapped ports that way (without needing an explicit -p flag for each port).
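For illustration, with hypothetical container and port values:
$ docker run -d -P --name web nginx
$ docker port web 80
0.0.0.0:32768
Here -P mapped the image's exposed port 80 to the ephemeral host port 32768, and docker port reports that mapping.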
Not sure if docker for mac will support bi-directional connections later: https://forums.docker.com/t/will-docker-for-mac-support-bi-directional-connection-between-host-and-container-in-the-future/19871
I have two solutions:
you can write a simple wrapper script and pass the port you want to expose to the script
use vagrant to start a virtual machine with the network under control.

Accessing Hue on Cloudera Docker QuickStart

I have installed the cloudera quickstart using docker based on the instructions given here.
https://blog.cloudera.com/blog/2015/12/docker-is-the-new-quickstart-option-for-apache-hadoop-and-cloudera/
docker run --privileged=true --hostname=quickstart.cloudera -p 7180 -p 8888 -t -i 9f3ab06c7554 /usr/bin/docker-quickstart
You can see that I am doing -p 7180 and -p 8888 for port mapping.
When the container booted successfully, I saw that the hue service startup failed, but I ran it manually using sudo service hue restart and it showed OK.
Now I ran
/home/cloudera/cloudera-manager --express --force
This command was successful, and I got a message to connect to CM using http://cloudera.quickstart:7180
Now on my host machine I ran docker-machine env default and saw the output
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/abhishek.srivastava/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Now in my browser on host machine I did
http://192.168.99.100:7180
http://192.168.99.100:8888
http://quickstart.cloudera:7180
http://quickstart.cloudera:8888
but everything fails to connect to any page. So even after doing port forwarding... I am not able to access either Cloudera Manager or the HUE UI from the host machine.
I am using OSX.
I also went into the VirtualBox manager UI, selected the default VM, went into Settings -> Network -> Port Forwarding, and made the following entries,
but I still cannot access Cloudera Manager and HUE...
When you run docker with -p 7180 and -p 8888, it will allocate random ports on your host. However, if you use -p 7180:7180 and -p 8888:8888, assuming those ports are free on the host, it will map them directly.
Otherwise you can execute docker ps and it will show you which host ports 7180 and 8888 were mapped to. Then in your host browser you can enter
http://192.168.99.100:<docker-allocated-port>
instead of
http://192.168.99.100:7180
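For example, the PORTS column of docker ps might show something like this (the host-side port numbers here are illustrative):
0.0.0.0:32769->7180/tcp, 0.0.0.0:32770->8888/tcp
in which case Cloudera Manager would be at http://192.168.99.100:32769.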
If it's all on your local machine, you shouldn't need the port forwarding.
Since you're running the docker machine inside a VM, you need to open the port on VirtualBox.
You can do this from the Port Forwarding button in the network adapter panel in VirtualBox.
Settings > Network > Advanced > Port Forwarding
You should see an SSH port already being forwarded for docker. Just add any additional ports like that one.
And here are lists of all the ports used by CDH. Of course you don't need all of them. I would suggest at least Cloudera Manager (7180), the namenode and datanode UIs (50070 & 50075), and the job servers like mapreduce (8088, 8042 & 10020) or spark (18080 & 18081). And I personally don't use it, but Hue is 8888.
The same issue happened to me. I was able to start hue successfully after increasing the number of CPUs in VirtualBox.
I had also increased the amount of RAM earlier. The original CPU count was 1; I changed it to 3.
I have encountered the same issue here, and have resolved it now based on the comments and posts above. There are two issues mentioned above:
Failed to start Hue.
In my case, this was caused by the limited resources allocated with the default docker VM settings. According to @Ronald Teo's answer, going to VirtualBox -> 'default' [your docker-machine name] -> Settings -> System and increasing base memory to 8192 MB and processors to at least 3 fixed my problem.
Cannot access Hue from my host machine. Based on the original post, try docker run --privileged=true --hostname=quickstart.cloudera -p 7180:7180 -p 8888:8888 -t -i 9f3ab06c7554 /usr/bin/docker-quickstart, which should solve this problem.
Restart Hue after the container is up
Increase the memory of docker to 8GB if you can. Otherwise, set it to at least 4GB.
Let hue fail while starting the container.
After that, attach to the docker container and access its shell to run the following commands.
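Attaching can be done with docker exec, for example (assuming bash is available in the quickstart image):
docker exec -it <container-name> /bin/bash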
To stop the Hue Server:
$ sudo service hue stop
To start the Hue Server:
$ sudo service hue start
I was just trying to spin up the Cloudera quickstart docker myself, and it turns out this seems to do the trick:
http://127.0.0.1:8888
Note the http, not https, and that I use 127.0.0.1 (or localhost)
Note that this assumes that the internal 8888 port is mapped to your 8888 port.
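If you are unsure of the mapping, one way to pull just the port section out of docker inspect (this --format expression is one option among several):
docker inspect --format '{{json .NetworkSettings.Ports}}' <container-name>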
Suppose docker inspect yields something like
"8888/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32769"
}
Then you would want
http://127.0.0.1:32769
