I am using Docker (on a Unix system) to make containers where I can deploy some web applications (made in Java). I properly installed and configured JOnAS in one of these containers, but I am missing something about the networking.
In fact, I use a Jenkins job which calls Maven (the maven-cargo-plugin, more precisely) to deploy into the container (over the rmi:// protocol).
I guess they cannot communicate properly because the containers aren't on the same network. I am not allowed to change any of the network settings, so I am looking for a solution that bypasses bridges or something like that.
If there isn't one, do you have any ideas for my problem? I made a little drawing of my configuration in case I was not clear enough (http://img15.hostingpics.net/pics/209383DockerExp.png).
Also, sorry for my English mistakes.
It sounds like you want to expose the Cargo port from the Docker container so that Jenkins can connect to the Docker host IP on that port to get into the container. See http://docs.docker.com/reference/run/#expose-incoming-ports for usage. For example, if the Cargo port was 6767, you might run the container with -p 6767:6767 which would allow Jenkins to connect to port 6767 on the Docker host IP (same network).
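For example, a minimal run command for this (the image name jonas-app is hypothetical; 6767 stands in for the actual Cargo/RMI port) might look like:

docker run -d --name jonas -p 6767:6767 jonas-app

Jenkins would then point the Cargo deployment at <docker-host-ip>:6767 instead of at the container's internal address.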
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now I need to make services in the cluster available to a group of applications that are running in Docker containers locally. You could say it's the inverse use case.
I have an app that is running in a Docker container. It accesses services that are deployed using docker-compose. This was done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# app2 uses docker-compose, so myNetwork is defined in its yaml file and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to use a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to put 0.0.0.0 instead to ensure that it listens for traffic from all interfaces.
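For reference, what k9s sets up here can be sketched with plain kubectl (the service name and ports are hypothetical); the important part is overriding the default localhost bind address:

kubectl port-forward --address 0.0.0.0 service/my-cluster-service 8080:80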
Then I changed my containers from the inside, making the services point to my host's IP instead of trying to resolve the service names. Use whichever method fits your case best: since it's not a production environment, I just hardcoded my host IP manually to check whether connectivity was achieved.
To point to a specific service of your cluster, you need to use a different port for each one, since they will all be mapped to your host by different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request will reach your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
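As a quick check of that path (the host IP and forwarded port are hypothetical, and curl must be available in the image), from inside the app1 container:

docker exec -it app1 curl http://192.168.1.20:8080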
I want to run a WebRTC gateway in a Docker container on my Mac.
I need to expose essentially all ports (TCP and UDP) on the container's own IP address (specifying -p does not help because there seems to be a limit on the number of ports). Using --net=host does not work on Mac.
Is there another option?
You can publish all exposed ports using -P (note the uppercase) or --publish-all=true (which is the same) on the docker run command.
Link to docker docs about this.
Then you can check the mappings Docker assigned using:
docker port yourContainerName
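A minimal sketch of the whole flow (image and container names are hypothetical; note that -P only publishes the ports the image declares with EXPOSE):

docker run -d -P --name webrtc-gw my-webrtc-image
docker port webrtc-gw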
This answer is identical to my answer to a similar question about doing essentially the same thing on a different platform (i.e. Windows).
The problems encountered on the two platforms are different (because macOS and Windows have different network stacks), but the workaround is the same.
I think the answer would help anyone encountering the problem in either case.
Working with Docker Windows containers, I want to go beyond running only one Docker container with one app. As described in the Microsoft docs under the heading "Docker Compose and Service Discovery":
Built in to Docker is Service Discovery, which handles service registration and name to IP (DNS) mapping for containers and services; with service discovery, it is possible for all container endpoints to discover each other by name (either container name, or service name).
And because docker-compose lets you define services in its yaml files, these should be discoverable (e.g. pingable) by their names (be sure to keep in mind the difference between services and containers in docker-compose). This blog post by Microsoft provides a complete example with the services web and db, including full source with the needed docker-compose.yml in the GitHub repo.
My problem is: the Docker Windows containers only sometimes "find" each other, and sometimes not at all. I checked them with docker inspect <container-id> and the aliases db and web are present there. But when I PowerShell into one container (e.g. into a web container via docker exec -it myapps_web_1 powershell) and try a ping db, it works only occasionally.
And let me be clear here (because IMHO the docs are not): this problem is the same for non-docker-compose scenarios. Building an example app without Compose, the problem also appears without docker-compose services, with just plain old container names!
Any ideas on this strange behavior? For me, this scenario gets worse as more apps come into play. For more details, have a look at https://github.com/jonashackt/spring-cloud-netflix-docker, where I have an example project with Spring Boot & Spring Cloud Eureka/Zuul and 4 docker-compose services, where weatherbackend and weatherbackend-second are easily scalable, e.g. via docker-compose scale weatherbackend=3.
My Windows Vagrant box is built via packer.io and is based on the latest Windows Server 2016 Evaluation ISO. The necessary Windows features and the Docker/docker-compose installation are set up with Ansible.
Without a fix for this problem, Docker Windows containers are mostly unusable for us at the customer's site.
After a week or two trying to solve this problem, I finally found the solution. Starting with a read of docker/for-win/issues/500, I found a link to this multi-container example application source, where one of the authors documented the solution as a side note, calling it:
Temporary workaround for Windows DNS client weirdness
Putting the following into your Dockerfile(s) will fix the DNS problems:
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
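# the registry tweak for the Windows DNS client weirdness described above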
RUN set-itemproperty -path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord
(to learn how the execution of PowerShell commands inside Dockerfiles works, have a look at the Dockerfile reference)
The problem is also discussed here, and the solution will hopefully find its way into an official Docker image (or at least into the docs).
I found out that I needed to open TCP port 1888 to make DNS work immediately. Without this port open, I had to connect to the container (Windows in my case) and run Clear-DnsClientCache in PowerShell each time the DNS changed (also during the first swarm setup).
I'm currently experimenting with Swarm services with Docker for Windows. The new Win10 Insider build supports overlay networking for Windows containers, and I was pleased to see my IIS service actually starting. The only issue I came across is that I cannot reach the service in the browser, despite trying multiple things such as different ports and networks. The command issued is as follows:
docker service create --name webfarm -p 80:80 microsoft/iis
I have also tried the --network flag to try different networks, and I have made sure to test all IP addresses visible in the output of docker service inspect webfarm.
docker service ps webfarm does indicate that my service is in state RUNNING and does not have any errors, so I don't know what else I can try, especially since these commands worked fine on Linux with Apache.
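For reference, the published ports mentioned above can also be checked directly with a Go-template filter (a sketch, using the service name from the question):

docker service inspect --format "{{json .Endpoint.Ports}}" webfarm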
I was wondering if anyone has been able to successfully create a service using Windows Containers on the Windows Insider build (15046), and if so, how?
Never mind, I found that this actually is not supported yet.
The following source states:
"At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot use to publish ports externally right now. More to come in the near future." (https://stefanscherer.github.io/docker-swarm-mode-windows10/)
And indeed, the blog post states the following:
"Currently, Windows supports DNS Round-Robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can setup an external load balancer (e.g. NGINX) and use Swarm’s publish-port mode to expose container host ports over which to load balance." (https://blogs.technet.microsoft.com/virtualization/2017/02/09/overlay-network-driver-with-support-for-docker-swarm-mode-now-available-to-windows-insiders-on-windows-10/)
I guess I'll have to wait for this feature, in the meantime I will use the alternative.
I am running Windows 7 on my desktop at work and I am signed in to a regular user account on the VPN. To develop software, we normally open a dev VM and work from in there; however, recently I've been assigned a task to research Docker and MongoDB. I have very limited access to what I can install on the main machine.
Here lies my problem:
Is it possible for me to connect, from Windows, to a MongoDB instance inside a container inside the Docker machine and make changes? I would ideally like to use a GUI tool such as Mongo Management Studio to make changes to a Mongo database within a container.
By inspecting the Mongo container, it has the ports listed as: 0.0.0.0:32768 -> 27017/tcp
and docker-machine ip (vm name) returns 192.168.99.111.
I have also commented out the 127.0.0.1 bind IP in the mongod.conf file.
From what I have researched so far, most users resolve their problem by connecting to their docker-machine IP with the port they've set with -p or been given with -P. Unfortunately for me, trying to connect with 192.168.99.111:32768 does not work.
I am pretty stumped and quite new to this environment. I am able to get inside the container with bash and manipulate the database there; however, I'm wondering if I can do this from Windows.
Thank you if anyone can help.
After reading Smutje's advice to ping the VM IP and testing it out to no avail, I attempted to find a pingable IP which would hopefully move me closer to my goal.
By running ifconfig within the Boot2Docker VM (but not inside the container), I was able to locate another IP listed under eth0. This IP looks something like 134.36.xxx.xxx and is pingable. With the Mongo container running, I can now access the database from within Mongo Management Studio by connecting to 134.36.xxx.xxx:32768 and manipulate the data from there.
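The same connection can also be tested from a plain shell with the mongo client (a sketch; substitute the actual eth0 address found above for the xxx parts):

mongo --host 134.36.xxx.xxx --port 32768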
If you have the option of choosing the operating system for your dev VM, go with Ubuntu and set up Docker with all of the containers you want to test on that. Either way, you will need a VM for testing Docker on Windows, since it uses VirtualBox if I'm not mistaken. So instead, set up an Ubuntu VM and do all of your testing on that.