I want to include support for Xdebug in a PHP Docker container. As part of this, I need to specify the IP of the Windows machine running the Docker container via XDEBUG_CONFIG=remote_host=${HOST_IP}. Currently HOST_IP is manually specified in a .env file, but I'd like to automate this to reduce the setup steps for other users.
My problem is that I can't seem to find a way to easily determine the IP of the host machine. It also needs to work on both Windows and Linux Docker hosts, as not all users use Windows as their desktop environment. I also can't use ${HOSTNAME}, as this fails to resolve in DNS.
Does anyone have any suggestions on how to achieve this?
EDIT2: Updating this answer for newer versions of Docker: from 18.03 onward, Docker for Windows and other Docker platforms have been updated to include a cross-platform hostname for the Docker host, host.docker.internal - which is bloody helpful.
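With that hostname, the .env lookup can go away entirely. A minimal sketch (my-php-image is a placeholder image name):

# host.docker.internal resolves to the host from inside the container (Docker 18.03+)
docker run -e XDEBUG_CONFIG="remote_host=host.docker.internal" my-php-image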
You might try a formatted docker info:
https://docs.docker.com/engine/reference/commandline/info/#format-the-output
docker info --format '{{json .}}'
docker info --format '{{json .path.to.ip}}'
E.g. in a (single-host) Docker Swarm you can get the host ip by:
docker info --format '{{json .Swarm.NodeAddr}}'
Via command substitution, stored in a variable:
docker_host_ip=$(docker info --format '{{json .Swarm.NodeAddr}}')
I could not try it on Windows or on Docker without Swarm ... but docker info should work across platforms.
Update (according to comments below):
Though not (syntactically) "beautiful", you can use --format '{{index path arrayIndex "Key"}}' with docker network inspect to access the first element of an array (index 0) and then access the map inside this array via its key ("Gateway"):
docker network inspect docker_dockernet --format '{{index .IPAM.Config 0 "Gateway"}}'
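To automate the original .env setup with this, a small sketch (assuming the network is named docker_dockernet as above, and that the gateway is the address the container should call back to):

# Detect the gateway IP and write it into .env so users don't have to set it by hand
HOST_IP=$(docker network inspect docker_dockernet --format '{{index .IPAM.Config 0 "Gateway"}}')
echo "HOST_IP=${HOST_IP}" > .env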
I'm new to Docker. I have been trying to deploy a Linux container (with Windows as the host) running a Google Cloud image. I'm able to do everything well; at the end the server is running perfectly, but when I check the server using localhost in the browser, I just get a blank page.
This is the Dockerfile:
FROM google/cloud-sdk
ENV PATH /usr/lib/google-cloud-sdk/bin:$PATH
WORKDIR docker_folder
COPY local_folder/ .
RUN pwd
EXPOSE 8080
CMD ["java_dev_appserver.sh", "."]
This is the command I'm using to build my image (in the Windows command prompt):
docker build --tag serverdeploy .
This is the command I'm using to run my container:
docker run -p 8080:8080 serverdeploy
This is the trace I got when I ran the server, which shows that the server is running.
I did some research, and it looks like Docker had a problem with ports when you use a Linux container on Windows (not sure whether it has been solved yet). I've already tried all the possible solutions I found out there (even replacing 'localhost' with all the IPs I get when I run ipconfig in the cmd), but I still get the same error.
And, as a last hope, I need your help to understand what I'm doing wrong, or whether I'm missing something.
You are running your service bound to localhost (127.0.0.1), which means no remote connections are accepted. From your container's point of view, the host is a remote connection.
Change binding to 0.0.0.0 (which I guess is default) and enjoy.
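For example, an untested sketch of the Dockerfile change; I don't know exactly which flags java_dev_appserver.sh accepts, but the legacy Java dev appserver takes an --address flag:

# Bind the dev server to all interfaces so the host's browser can reach it
CMD ["java_dev_appserver.sh", "--address=0.0.0.0", "."]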
Btw sharing your java_dev_appserver.sh would be helpful for answering the question.
I'm struggling with setting up a JMX connection to Tomcat running in a Docker container using Docker for Mac.
I think I understand the basics, and have a setenv.sh in the tomcat/bin directory looking like this:
CATALINA_OPTS="-Dcom.sun.management.jmxremote=true\
-Dcom.sun.management.jmxremote.local.only=false\
-Dcom.sun.management.jmxremote.authenticate=false\
-Dcom.sun.management.jmxremote.ssl=false\
-Djava.rmi.server.hostname=185.83.15.228\
-Dcom.sun.management.jmxremote.port=9999\
-Dcom.sun.management.jmxremote.rmi.port=9999"
I think the problematic part might be the java.rmi.server.hostname property. I've set this to the IP of the host machine, but I've also tried other obvious things. I believe this should be the IP of the machine on which jconsole or jvisualvm will be running, but this is not working for me.
I start the container like this:
docker run -d -v /Users/timbo/tomcat-jmx.sh:/usr/local/tomcat/bin/setenv.sh -p 8080:8080 -p 9999:9999 tomcat:8.0
so port 9999 is exposed.
When I try to connect with jvisualvm to localhost:9999 (which Docker for Mac will route to the container, which is actually on 172.17.0.2) I get the error:
Cannot connect to localhost:9999 using service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi
Any hints on what is wrong?
OK, I think I managed to find it eventually. Setting the value of java.rmi.server.hostname to the hostname of the host (e.g. mymac.local, or whatever is returned by hostname) seems to get it working. All other settings were OK.
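To avoid hard-coding the name, a sketch that passes it in at run time (JMX_HOST is just a variable name I'm inventing here):

# On the host: hand the Mac's hostname to the container as an environment variable
docker run -d -e JMX_HOST=$(hostname) -v /Users/timbo/tomcat-jmx.sh:/usr/local/tomcat/bin/setenv.sh -p 8080:8080 -p 9999:9999 tomcat:8.0

and then in setenv.sh use -Djava.rmi.server.hostname=${JMX_HOST} instead of a fixed value.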
Docker for Mac works in a slightly different way. The port you map actually gets mapped to the Linux VM that Docker runs in the background. This VM usually has the IP 192.168.99.100, so you should try to connect to 192.168.99.100:9999.
To verify the IP of your VM, open the Docker CLI terminal and execute the following:
echo $DOCKER_HOST
tcp://192.168.99.100:2376
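If you just need the IP on its own, docker-machine can print it directly (assuming your machine is named default, which is what Docker Toolbox creates):

docker-machine ip default
192.168.99.100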
I am using Docker on Windows. Using Kitematic, I have created an Ubuntu container. This Ubuntu image has PostgreSQL installed on it.
I'm wondering if there is any way to access the postgres configuration files inside the container from the host (the Windows machine)?
Where exactly does the container store its file system on the host machine?
I assume it would be part of an image file in VMDK format.
Please correct me if I'm wrong.
"I'm wondering if there is any way to access the postgres configuration files inside the container from the host (the Windows machine)"
That is not how Docker expects you to modify a file in a container.
For that, you should mount a host (Windows) folder when starting your container (docker run -v).
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myACcount>/src/webapp:/opt/webapp training/webapp python app.py
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".
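For PostgreSQL specifically, a sketch along those lines (this assumes the official postgres image, where the data directory and postgresql.conf live under /var/lib/postgresql/data; paths will differ for a hand-installed Ubuntu setup):

# Keep the data directory (including postgresql.conf) on the Windows host, editable from there
docker run -d -p 5432:5432 -v /c/Users/<myAccount>/pgdata:/var/lib/postgresql/data postgres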
This question may be similar to other questions already answered, but I could not find any that is specific to OSX.
I'm new to Docker. I'm using Docker version 1.12.1-beta25 (build 11807), with native support for OS X. I wanted to install a Docker Bamboo remote agent, following the instructions at https://confluence.atlassian.com/bamboo/getting-started-with-docker-and-bamboo-687213473.html. My Bamboo server is running on the host.
When running the Docker container with docker run -e HOME=/root/ -e BAMBOO_SERVER=http://hostname:port/bamboo -i -t atlassian/bamboo-java-agent:latest, it failed with Connecting to http://hostname:port/bamboo refused
The problem seems to be that the container could not access the host's http://hostname:port/bamboo. What do I need to do to get this working?
You may try using http://172.17.0.1:port to reach the host from the container. You can find this address with docker inspect 'name'.
Or you can use -p hostPort:containerPort in the docker run command and use http://localhost:containerPort as BAMBOO_SERVER.
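Putting the first suggestion together with the original command, a sketch (8085 is Bamboo's default port; substitute your own):

# 172.17.0.1 is the default docker0 bridge gateway, i.e. the host as seen from the container
docker run -e HOME=/root/ -e BAMBOO_SERVER=http://172.17.0.1:8085/bamboo -i -t atlassian/bamboo-java-agent:latest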
I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
As Docker runs on Windows, I was wondering if it is possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to:
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options for running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything Docker-related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu) instead of a boot2docker-like VM (based on the Tiny Core distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized deployment.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.