Docker networking on Windows: IP confusion

I'm new to Docker and things are a bit confusing so far.
I ran these containers and they work just fine.
docker network inspect dockersymfony_default shows that the IP of the nginx container is 172.19.0.5. I can access nginx at this IP from the other containers on the network (php, db, ...).
BUT from the host machine (Windows) the nginx container is not reachable at 172.19.0.5. On the other hand, nginx is reachable at 127.0.0.1 or localhost.
[
    {
        "Name": "dockersymfony_default",
        "Id": "12b4f5a663450bb44e87f9635860bbcede354f2fea27f93b056d7159251dc465",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "1eef50ee350782654bb96e7d16d09e3b9fe54abca97cc339a89791083e08563c": {
                "Name": "dockersymfony_db_1",
                "EndpointID": "8b48ea4934a01703ac23a7f27f8cee0ff9226b7b5401550859b22fbf17a4c10a",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "1faf199dd6285c6700640b570fd842212962c299d762c531b011205f58598102": {
                "Name": "dockersymfony_nginx_1",
                "EndpointID": "a9be2a542565262d31c93f7d1960a13e373f5a701c299cedf3dc0510c8de9bf4",
                "MacAddress": "02:42:ac:13:00:05",
                "IPv4Address": "172.19.0.5/16",
                "IPv6Address": ""
            },
            "2c2b994c92895e7a83c189d1e5002d2eb7d88f62761f8c324f00ffdece4efb4a": {
                "Name": "dockersymfony_php_1",
                "EndpointID": "98a0ad7bef2f2c963c68561bb2b56bc0feb113c390a29c0dd9aefd6e32b7e5be",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "65ee8088e6bc2405b80c56ae21d83a4f4f7fff252a7e782c4046629d670f7b74": {
                "Name": "dockersymfony_redis_1",
                "EndpointID": "80fcd76b2fe0326be8d30abaffe2310ef58f23935f0f67f7476fa4fad951cba6",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "86665f35aa1599028d8e91aa45f86505a112b44496386e674b7039458dcda45f": {
                "Name": "dockersymfony_elk_1",
                "EndpointID": "c82ef8cc18f1cc345e97c3bdd0e04f0bf398efc0c38f9876b0e922a1e6dd494c",
                "MacAddress": "02:42:ac:13:00:06",
                "IPv4Address": "172.19.0.6/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Why is that?
I suspect this problem is the cause of another one: xdebug doesn't work.

The containers in the project you linked to are Linux containers. Docker on Windows runs Linux containers inside a Linux VM that it creates using Hyper-V. Assuming you started those containers using the docker-compose file in the project, they will all be attached to the default bridge network. I found this description of the docker0 bridge helpful:
By default, the Docker server creates and configures the host system’s docker0 interface as an Ethernet bridge inside the Linux kernel that can pass packets back and forth between other physical or virtual network interfaces so that they behave as a single Ethernet network.
Since the containers are all attached to the same network, they can all reach each other by IP address. If you were able to connect to the VM that Docker for Windows creates, you would be able to reach those containers by their IP addresses as well.
The nginx container is accessible from localhost and from 127.0.0.1 because of port binding. When you create a container and publish a port (as the docker-compose file does on this line), Docker for Windows does some magic [1], and requests made on that port are forwarded to the VM that's running your containers. That VM then forwards the request to the container running nginx, which responds, and you see the response in the browser.
[1] Not actually magic
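For reference, publishing is just a ports: entry in the compose file. The fragment below is illustrative only; the service name and port numbers are assumptions, not the project's actual values:

```yaml
# Hypothetical compose fragment: publish container port 80 on host port 80.
# With this in place, http://localhost:80 on the Windows host is forwarded
# to the nginx container, regardless of the container's bridge IP.
nginx:
  ports:
    - "80:80"
```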

Related

DC/OS stateful app with persistent external storage

I'm trying to set up a stateful app in DC/OS by assigning an external (EBS) volume to the Docker container. I ran the demo app provided in the docs and it created a 100 GB EBS volume in AWS. Is there a way to specify the size of the volume in the marathon.json file? Can I use the same EBS volume for multiple apps? Here's the demo app I tested.
{
    "id": "/test-docker",
    "instances": 1,
    "cpus": 0.1,
    "mem": 32,
    "cmd": "date >> /data/test-rexray-volume/test.txt; cat /data/test-rexray-volume/test.txt",
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "alpine:3.1",
            "network": "HOST",
            "forcePullImage": true
        },
        "volumes": [
            {
                "containerPath": "/data/test-rexray-volume",
                "external": {
                    "name": "my-test-vol",
                    "provider": "dvdi",
                    "options": { "dvdi/driver": "rexray" }
                },
                "mode": "RW"
            }
        ]
    },
    "upgradeStrategy": {
        "minimumHealthCapacity": 0,
        "maximumOverCapacity": 0
    }
}
It turns out you cannot attach one EBS volume to multiple EC2 instances. My bad! I ditched the REX-Ray persistent storage option in favor of EFS.
I had to create an EFS share and attach it to the cluster's VPC. Then I had to SSH into every slave node, mount the share like an NFS share under the same folder on all nodes, and finally mount that folder in the container from marathon.json.
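A sketch of that last step, assuming the EFS share was NFS-mounted at /mnt/efs on every node. The paths here are made up for illustration; containerPath/hostPath/mode follow the same Marathon volume schema as the demo app above:

```json
"container": {
    "type": "DOCKER",
    "volumes": [
        {
            "containerPath": "/data/shared",
            "hostPath": "/mnt/efs/my-app",
            "mode": "RW"
        }
    ]
}
```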

Docker not syncing hosts folder (Windows)

When I share a folder between my host and my containers, files edited in Sublime are not synced into the containers.
I'm using Docker version 1.13.0, build 49bf474, and I tried many fixes suggested in issues on GitHub, but none of them worked for me.
I'm sharing my C: drive with the Docker host, and configuring my compose file like this:
uwsgi:
  build: .
  links:
    - postgres
  command: ./uwsgi.sh
  env_file: .env
  volumes:
    - /static
    - /data/media:/media
    - ./api:/app
My volume ./api:/app works, but when I change something, the change is not reflected in the container, so I can't use it for development.
Here is the inspect output for this container (Mounts/Volumes):
"Mounts": [
{
"Type": "bind",
"Source": "/C/Users/tif/projetos/my/jl.api/api",
"Destination": "/app",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/data/media",
"Destination": "/media",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879",
"Source": "/var/lib/docker/volumes/b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879/_data",
"Destination": "/static",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Volumes": {
"/app": {},
"/media": {},
"/static": {}
},
Things I have already tried:
atomic_save: false (Sublime)
nginx.conf with sendfile off;
Has anyone experienced this?
After some research, I realized I was using uWSGI for the development environment, and I couldn't get my app to reload without py-autoreload.
All I had to do was start uWSGI with py-autoreload set to 2, and my app started reloading.
I'm starting uWSGI with this command in Docker now:
/usr/local/bin/uwsgi --socket :5000 --wsgi-file ......... --py-autoreload 2
Reading this could be useful if you are experiencing this issue: http://chase-seibert.github.io/blog/2014/03/30/uwsgi-python-reload.html
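Equivalently, the option can go in a uWSGI ini file. This is a hedged sketch: the socket matches the command above, everything else is assumed:

```ini
[uwsgi]
socket = :5000
; scan loaded Python modules every 2 seconds and reload on change
; (development convenience only; leave it off in production)
py-autoreload = 2
```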

Docker disconnect all containers from docker network

I have docker network "my_network". I want to remove this docker network with docker network rm my_network. Before it I should disconnect all my containers from this network. I can use docker network inspect and get output like
[
    {
        "Name": "my_network",
        "Id": "aaaaaa",
        "Scope": "some_value",
        "Driver": "another_value",
        "EnableIPv6": bool_value,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "bbb": {
                "Name": "my_container_1",
                "EndpointID": "ENDPOINTID1",
                "MacAddress": "MacAddress1",
                "IPv4Address": "0.0.0.0/1",
                "IPv6Address": ""
            },
            "ccc": {
                "Name": "my_container_2",
                "EndpointID": "ENDPOINTID2",
                "MacAddress": "MacAddress2",
                "IPv4Address": "0.0.0.0/2",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Manually disconnecting is fine if I only have a few containers, but it's a problem if I have 50.
How can I disconnect all containers from this network with a single command, or a few?
docker network inspect has a format option.
That means you can list all Container names with:
docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' network_name
It should then be easy, with a script, to read each name and call docker network disconnect.
wwerner proposed the following command in the comments:
for i in `docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name`; do docker network disconnect -f network_name $i; done;
On multiple lines for readability:
for i in `docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name`; \
do \
    docker network disconnect -f network_name $i; \
done;
Note that, unlike the command at the top of this answer, the format string here ends with a space, so that the names are split by spaces.
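If a shell loop is awkward (e.g. on Windows), the names can also be pulled out of the raw inspect JSON. A minimal Python sketch, with the inspect output embedded as a trimmed sample string instead of being captured from the CLI:

```python
import json

# Trimmed sample of `docker network inspect my_network` output; in practice
# you would capture it, e.g. via subprocess.check_output.
inspect_output = """
[{"Name": "my_network",
  "Containers": {
    "bbb": {"Name": "my_container_1"},
    "ccc": {"Name": "my_container_2"}}}]
"""

network = json.loads(inspect_output)[0]
names = sorted(c["Name"] for c in network["Containers"].values())

# One disconnect command per container, ready to run.
commands = ["docker network disconnect -f my_network " + n for n in names]
for cmd in commands:
    print(cmd)
```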

How to mount HDFS in a Docker container

I made an application Dockerized in a Docker container. I intended to make the application able to access files from our HDFS. The Docker image is to be deployed on the same cluster where we have HDFS installed via Marathon-Mesos.
Below is the json to be POST to Marathon. It seems that my app is able to read and write files in the HDFS. Can someone comment on the safety of this? Would files changed by my app correctly changed in the HDFS as well? I Googled around and didn't find any similar approaches...
{
    "id": "/ipython-test",
    "cmd": null,
    "cpus": 1,
    "mem": 1024,
    "disk": 0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "volumes": [
            {
                "containerPath": "/home",
                "hostPath": "/hadoop/hdfs-mount",
                "mode": "RW"
            }
        ],
        "docker": {
            "image": "my/image",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 8888,
                    "hostPort": 0,
                    "servicePort": 10061,
                    "protocol": "tcp"
                }
            ],
            "privileged": false,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "portDefinitions": [
        {
            "port": 10061,
            "protocol": "tcp",
            "labels": {}
        }
    ]
}
You might have a look at the Docker volume docs.
Basically, the volumes definition in the app.json would trigger the start of the Docker image with the flag -v /hadoop/hdfs-mount:/home:RW, meaning that the host path gets mapped to the Docker container as /home in read-write mode.
You should be able to verify this if you SSH into the node which is running the app and do a docker inspect <containerId>.
See also
https://mesosphere.github.io/marathon/docs/native-docker.html

Trouble hitting a container's exposed port from a separate container and host

I have two Vagrant hosts running locally on my machine, and I am trying to hit a container within one host from a container on the other host.
When I curl from within the container:
curl search.mydomain.localhost:9090/ping
I receive: curl: (7) Failed to connect to search.mydomain.localhost port 9090: Connection refused
However, when I curl without specifying the port:
curl search.mydomain.localhost/ping
OK
I'm certain the port is properly exposed, as trying the same from within the host instead of within the container gives:
curl search.mydomain.localhost:9090/ping
OK
This shows that the service at port 9090 is exposed, but some networking issue prevents the container from reaching it.
A fellow dev running the same versions of VirtualBox/Vagrant/Docker/docker-compose, and using an identical commit of the repos, has no trouble hitting the service from within the container. I'm really stumped as to what to try from here...
I'm using the default bridge network:
sudo brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.02427c9cea3c   no              veth5dc6655
                                                    vethd1867df
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "e4b8df614d4b8c451cd4a26c5dda09d22d77de934a4be457e1e93d82e5321a8b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "1d67a1567ff698694b5f10ece8a62a7c2cdcfcc7fac6bc58599d5992def8df5a": {
                "EndpointID": "4ac99ce582bfad1200d59977e6127998d940a688f4aaf4f3f1c6683d61e94f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "3e8b6cbd89507814d66a026fd9fad26d649ecf211f1ebd72ed4689b21e781e2c": {
                "EndpointID": "2776560da3848e661d919fcc24ad3ab80e00a0bf96673e9c1e0f2c1711d6c609",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
I'm on Docker version 1.9.0, build 76d6bc9, and docker-compose version 1.5.0.
Any help would be appreciated.
I resolved my issue, which seems like it might be a bug. Essentially, the container was inheriting /etc/hosts from my local MacBook, bypassing the /etc/hosts on the Vagrant host actually running the container. My entry "127.0.0.1 search.mydomain.localhost" therefore made all connection attempts from within the container redirect back to the container itself.
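To see the loop-back in miniature: any resolver that honours an /etc/hosts entry like that one maps the name straight to 127.0.0.1, so every connection attempt lands on the container itself. The hosts contents below are assumed for illustration:

```python
# Hypothetical /etc/hosts as seen inside the container after it
# inherited the host machine's entries.
hosts_contents = """\
127.0.0.1   localhost
127.0.0.1   search.mydomain.localhost
"""

def resolve(name, hosts_text):
    """Return the first /etc/hosts address mapped to `name`, or None."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

print(resolve("search.mydomain.localhost", hosts_contents))  # prints 127.0.0.1
```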
