I have a Docker network "my_network". I want to remove it with docker network rm my_network. Before that, I have to disconnect all my containers from the network. I can use docker network inspect and get output like:
[
    {
        "Name": "my_network",
        "Id": "aaaaaa",
        "Scope": "some_value",
        "Driver": "another_value",
        "EnableIPv6": bool_value,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "bbb": {
                "Name": "my_container_1",
                "EndpointID": "ENDPOINTID1",
                "MacAddress": "MacAddress1",
                "IPv4Address": "0.0.0.0/1",
                "IPv6Address": ""
            },
            "ccc": {
                "Name": "my_container_2",
                "EndpointID": "ENDPOINTID2",
                "MacAddress": "MacAddress2",
                "IPv4Address": "0.0.0.0/2",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Manually disconnecting is fine when I have only a few containers, but with 50 containers it becomes a problem.
How can I disconnect all containers from this network with a single command, or a few commands?
docker network inspect has a format option.
That means you can list all container names with:
docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' network_name
It should then be easy to script: read each name and call docker network disconnect on it.
wwerner proposed the following command in the comments:
for i in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name); do docker network disconnect -f network_name "$i"; done
On multiple lines for readability:
for i in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' network_name); do
    docker network disconnect -f network_name "$i"
done
One addition:
Note that, unlike the format string earlier in this answer, this one has a space after {{.Name}} so that the names come out separated by spaces.
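Putting it all together, here is a minimal sketch of the full cleanup, using the network name my_network from the question (everything in it is standard Docker CLI):
# Disconnect every container, then remove the network.
# -f forces the disconnect even for running containers.
for c in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' my_network); do
    docker network disconnect -f my_network "$c"
done
docker network rm my_network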
Related
I have encountered an issue during provisioning with HashiCorp Packer for virtualbox-iso on Alpine Linux v3.16.
The provisioning script runs OK, and it logs that the build has finished; however, when I open the output OVF file in VirtualBox, the moved files and Docker are not present.
I would be grateful for any advice.
I run packer build packer-virtualbox-alpine-governator.json
packer-virtualbox-alpine-governator.json file:
{
    "variables": {
        "password": "packer"
    },
    "builders": [
        {
            "type": "virtualbox-iso",
            "memory": 8192,
            "guest_os_type": "Other_64",
            "iso_url": "https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso",
            "iso_checksum": "file:https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso.sha256",
            "ssh_username": "root",
            "ssh_password": "{{user `password`}}",
            "shutdown_command": "poweroff",
            "hard_drive_interface": "sata",
            "boot_command": [
                "root<enter><wait>",
                "setup-alpine<enter><wait>us<enter><wait>us<enter><wait><enter><wait><enter><wait><enter><wait><enter><wait5>{{user `password`}}<enter><wait>{{user `password`}}<enter><wait><enter><wait><enter><wait><enter><wait15><enter><wait>openssh<enter><wait>openssh-full<enter><wait5>test123<enter><wait5>test123<enter><wait><enter><wait><enter><wait>sda<enter><wait>sys<enter><wait>y<enter><wait30>",
                "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config<enter><wait>",
                "/etc/init.d/sshd restart<enter><wait5>"
            ]
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": ["mkdir -p /opt/site/governator"]
        },
        {
            "type": "file",
            "source": "files/docker-compose.yaml",
            "destination": "/opt/site/"
        },
        {
            "type": "file",
            "source": "files/governator.conf",
            "destination": "/opt/site/governator/"
        },
        {
            "type": "shell",
            "scripts": [
                "scripts/alpine/install-docker-on-alpine.sh"
            ]
        }
    ]
}
./scripts/alpine/install-docker-on-alpine.sh
#!/bin/ash
# Point apk at the repositories matching the installed Alpine release.
cat > /etc/apk/repositories << EOF
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/main/
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/community/
https://dl-cdn.alpinelinux.org/alpine/edge/testing/
EOF
apk update
apk add docker
addgroup $USER docker
# Start Docker now and on every boot.
rc-update add docker boot
service docker start
apk add docker-compose
sync
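For debugging, a quick sanity check inside the built VM may help narrow things down. A sketch, assuming you boot the produced OVF and log in as root:
rc-update show | grep docker       # docker should be listed in the boot runlevel
service docker status              # should report that docker is started
ls /opt/site /opt/site/governator  # should show docker-compose.yaml and governator.conf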
I installed Docker on Windows. It's switched to Linux containers.
When I type docker inspect e3a934c54979 in my console, I see information like:
[
    {
        ...
        "Image": "sha256:2359fa12fdedef2af79d9b836a26175808d4b1433b5e7022d2d73c72b2a43b60",
        "ResolvConfPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/hostname",
        "HostsPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/hosts",
        "LogPath": "/var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8-json.log",
        "Name": "/festive_edison",
        ...
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "80"
                    }
                ]
            },
            ...
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44-init/diff:/var/lib/docker/overlay2/028eac1b0f37fd3be798d222f7d1da48a40f0ef9c4470709e63c4c8f322a477f/diff:/var/lib/docker/overlay2/d15e7ce0f29f82d6d3b9537980b766c32e7f6ffc81374cdb26fede3872afed1e/diff:/var/lib/docker/overlay2/efab543606225e581832ef6e2b732a78c82b2f6d9fe662babe09b188f600dd72/diff:/var/lib/docker/overlay2/263366359e8a86cc6c009f70fa00a158dbcbcfd2a4e31d9538c559dd82e29b10/diff:/var/lib/docker/overlay2/32ea6c48b53f4846284e1baac83dffcfb039a53a8d2f33ac2728691160f5d100/diff:/var/lib/docker/overlay2/685745d44609453debf484b2ccf63035532b334e75b9f18a00c5e1253e18841a/diff:/var/lib/docker/overlay2/e30c0a304544255bc9eba90dfb720c332e168b4972df926a79ef27df707889fd/diff:/var/lib/docker/overlay2/a5743532bc060895f0a495249182787322400a1a33fd187b3210895e1ca83129/diff",
                "MergedDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/merged",
                "UpperDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/diff",
                "WorkDir": "/var/lib/docker/overlay2/10f5348d5bfa76612ab30d1a253f17a6989fcd3f7ce23642b313c49f99a95f44/work"
            },
            "Name": "overlay2"
        },
        ...
    }
]
But Windows doesn't have those directories. It only has "MobyLinuxVM.vhdx", which, I think, contains this stuff.
My question is: how do I edit "config.json" and "hostconfig.json" in this case? How do I view the GUID-json.log? How do I view the container's layer hashes (/var/lib/docker/aufs/diff)?
The information below is from https://blog.jongallant.com/2017/11/ssh-into-docker-vm-windows/
In a Windows command prompt, enter:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
chroot /host
From here you'll have access to the /var/lib/docker/containers/ directories for hostconfig.json and the other files.
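Once inside the chroot, reading those files is plain filesystem access. A minimal sketch, using the container ID from the question (note: depending on the Docker version, the container config file is named config.json or config.v2.json):
cd /var/lib/docker/containers/e3a934c549799d9ec45d65ad6aa73bba8fad924215087a9c9c60535ef2a5c2e8
cat hostconfig.json    # host-side settings such as port bindings
cat config.v2.json     # container config (config.json on older Docker versions)
tail *-json.log        # the GUID-json.log; one JSON-encoded log entry per line
If you want to edit hostconfig.json or the config file, stop the Docker daemon first; otherwise it will overwrite your changes when it shuts down.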
I'm new to Docker and things are a bit confusing so far.
I ran these containers and they work just fine.
docker network inspect dockersymfony_default tells me the IP of the nginx container is 172.19.0.5. I can access nginx at this IP from the other containers in the network (php, db, ...).
BUT from the host machine (Windows) the nginx container is not accessible at 172.19.0.5. On the other hand, nginx is accessible at 127.0.0.1 or localhost.
[
    {
        "Name": "dockersymfony_default",
        "Id": "12b4f5a663450bb44e87f9635860bbcede354f2fea27f93b056d7159251dc465",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "1eef50ee350782654bb96e7d16d09e3b9fe54abca97cc339a89791083e08563c": {
                "Name": "dockersymfony_db_1",
                "EndpointID": "8b48ea4934a01703ac23a7f27f8cee0ff9226b7b5401550859b22fbf17a4c10a",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "1faf199dd6285c6700640b570fd842212962c299d762c531b011205f58598102": {
                "Name": "dockersymfony_nginx_1",
                "EndpointID": "a9be2a542565262d31c93f7d1960a13e373f5a701c299cedf3dc0510c8de9bf4",
                "MacAddress": "02:42:ac:13:00:05",
                "IPv4Address": "172.19.0.5/16",
                "IPv6Address": ""
            },
            "2c2b994c92895e7a83c189d1e5002d2eb7d88f62761f8c324f00ffdece4efb4a": {
                "Name": "dockersymfony_php_1",
                "EndpointID": "98a0ad7bef2f2c963c68561bb2b56bc0feb113c390a29c0dd9aefd6e32b7e5be",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "65ee8088e6bc2405b80c56ae21d83a4f4f7fff252a7e782c4046629d670f7b74": {
                "Name": "dockersymfony_redis_1",
                "EndpointID": "80fcd76b2fe0326be8d30abaffe2310ef58f23935f0f67f7476fa4fad951cba6",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "86665f35aa1599028d8e91aa45f86505a112b44496386e674b7039458dcda45f": {
                "Name": "dockersymfony_elk_1",
                "EndpointID": "c82ef8cc18f1cc345e97c3bdd0e04f0bf398efc0c38f9876b0e922a1e6dd494c",
                "MacAddress": "02:42:ac:13:00:06",
                "IPv4Address": "172.19.0.6/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Why is that?
I suspect this problem is the cause of another one: xdebug doesn't work.
The containers in the project you linked to are Linux containers. Docker on Windows runs Linux containers inside a Linux VM it creates using Hyper-V. Assuming you started those containers using the docker-compose file in the project, they will all be attached to the default bridge network. I found this description of the docker0 bridge helpful:
By default, the Docker server creates and configures the host system's docker0 interface as an Ethernet bridge inside the Linux kernel that can pass packets back and forth between other physical or virtual network interfaces so that they behave as a single Ethernet network.
Since the containers are all attached to the same network, they can all reach each other by IP address. If you were able to connect to the VM that Docker for Windows creates, you would be able to reach those containers by their IP addresses as well.
The nginx container is accessible from localhost and from 127.0.0.1 because of port binding. When you create a container and publish a port (as the docker-compose file does on this line), Docker for Windows does some magic1 and requests made on that port are forwarded to the VM that's running your containers; that VM then forwards the request to the container running nginx, which responds, and you see the response in your browser.
1. Not actually magic
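For reference, publishing a port from the command line is the equivalent of the ports: entry in the compose file; a minimal sketch (the plain nginx image stands in for the project's nginx service):
# Map host port 80 to container port 80; on Docker for Windows the
# "magic" above forwards localhost:80 into the VM and on to the container.
docker run -d -p 80:80 nginx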
I Dockerized an application in a Docker container. I intend the application to be able to access files from our HDFS. The Docker image is deployed via Marathon-Mesos on the same cluster where we have HDFS installed.
Below is the JSON to be POSTed to Marathon. It seems that my app is able to read and write files in HDFS. Can someone comment on the safety of this? Would files changed by my app be correctly changed in HDFS as well? I Googled around and didn't find any similar approaches...
{
    "id": "/ipython-test",
    "cmd": null,
    "cpus": 1,
    "mem": 1024,
    "disk": 0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "volumes": [
            {
                "containerPath": "/home",
                "hostPath": "/hadoop/hdfs-mount",
                "mode": "RW"
            }
        ],
        "docker": {
            "image": "my/image",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 8888,
                    "hostPort": 0,
                    "servicePort": 10061,
                    "protocol": "tcp"
                }
            ],
            "privileged": false,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "portDefinitions": [
        {
            "port": 10061,
            "protocol": "tcp",
            "labels": {}
        }
    ]
}
You might have a look at the Docker volume docs.
Basically, the volumes definition in the app.json triggers the start of the Docker image with the flag -v /hadoop/hdfs-mount:/home:rw, meaning that the host path gets mapped into the Docker container as /home in read-write mode.
You should be able to verify this if you SSH into the node which is running the app and do a docker inspect <containerId>.
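For example, a Go template over the Mounts section of the inspect output shows the mapping directly (a sketch; <containerId> is whatever docker ps reports on that node, and the expected output assumes the app.json above):
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}} ({{.Mode}}){{"\n"}}{{end}}' <containerId>
# expected output, roughly:
# /hadoop/hdfs-mount -> /home (rw)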
See also
https://mesosphere.github.io/marathon/docs/native-docker.html
I have two Vagrant hosts running locally on my machine, and I am trying to hit a container within one host from a second container on the other host.
When I curl from within the container:
curl search.mydomain.localhost:9090/ping
I receive:
curl: (7) Failed to connect to search.mydomain.localhost port 9090: Connection refused
However, when I curl without specifying the port:
curl search.mydomain.localhost/ping
OK
I'm certain the port is properly exposed, because if I try the same from the host instead of from within the container I get:
curl search.mydomain.localhost:9090/ping
OK
which shows that the service on port 9090 is exposed; there is, however, some networking issue preventing the container from reaching it.
A fellow dev running the same versions of vbox/vagrant/docker/docker-compose and an identical commit of the repos has no trouble hitting the service from within the container. I'm really stumped as to what to try from here...
I'm using the default bridge network:
sudo brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.02427c9cea3c   no              veth5dc6655
                                                    vethd1867df
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "e4b8df614d4b8c451cd4a26c5dda09d22d77de934a4be457e1e93d82e5321a8b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "1d67a1567ff698694b5f10ece8a62a7c2cdcfcc7fac6bc58599d5992def8df5a": {
                "EndpointID": "4ac99ce582bfad1200d59977e6127998d940a688f4aaf4f3f1c6683d61e94f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "3e8b6cbd89507814d66a026fd9fad26d649ecf211f1ebd72ed4689b21e781e2c": {
                "EndpointID": "2776560da3848e661d919fcc24ad3ab80e00a0bf96673e9c1e0f2c1711d6c609",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
I'm on Docker version 1.9.0, build 76d6bc9, and docker-compose version 1.5.0.
Any help would be appreciated.
I resolved my issue, which seems like it might be a bug. Essentially, the container was inheriting /etc/hosts from my local MacBook, bypassing the /etc/hosts on the actual Vagrant host running the container, so my entry "127.0.0.1 search.mydomain.localhost" made all connection attempts within the container redirect to the container itself.
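For anyone debugging the same thing, the quickest check is to compare the hosts file the container actually sees with the one on the VM (a sketch; <container> is your container's name or ID):
docker exec <container> cat /etc/hosts   # what the container resolves against
cat /etc/hosts                           # run on the Vagrant host, for comparison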