I created two Windows containers (I am running Windows applications, not Docker for Windows in a VM) and I want to add a link from a to b.
docker run -d --name a imageA
docker run -d --link a:a --name b imageB
I can access a from b by IP, but access by name does not work.
Create a docker network first:
docker network create myNetwork
Connect both containers (container1 and container2) to the network as follows:
docker network connect myNetwork container1
docker network connect myNetwork container2
Run docker network inspect to confirm both containers are attached to the network:
docker network inspect myNetwork
Open a bash shell in container1 as follows:
docker exec -it container1 /bin/bash
Now, you can ping container2 by name:
ping container2
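Alternatively (a sketch, reusing the names a, b, imageA and imageB from the question), you can attach the containers to the user-defined network when you create them, and name resolution works the same way:
docker run -d --network myNetwork --name a imageA
docker run -d --network myNetwork --name b imageB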
Hope that helps!
I use the following command to create a container on macOS; my Docker version is "Docker for Mac":
docker run -itd --name dns-mysql1 --network=host -p 192.168.43.178:53:53 brilliance/dns-mysql:latest
but when it starts, the mapping does not take effect; the mapped port and IP address have changed. It does work on Ubuntu and other Linux systems, and I want to know why.
I am using a docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a docker image. The image is pushed to my private registry, pulled by my production EC2 instances and then run. So essentially I will need to run docker within a docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the docker container, so I can't start docker that way. I'm not sure what I should be doing now to start docker, so I'm a bit stuck here; any help is appreciated.
A bit more information
Host machine is running Fedora 20 (it will eventually be running Amazon Linux on an EC2 instance)
Docker container is running CentOS 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running "docker inside docker" at all, and instead driving the Docker daemon on your host from within your docker container? Just mount your docker.sock and docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
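For example (a sketch; centos:7 is just a placeholder for your build image), any docker command run inside such a container is executed by the host daemon, so docker ps inside lists the containers running on the host:
docker run -it -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker centos:7 docker ps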
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers; take a look at that image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
UPDATE (July 2016)
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
While in almost all cases I would suggest following cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple docker versions), use the following to create additional loopback devices:
#!/bin/bash
# create block devices /dev/loop0 through /dev/loop6 (major number 7)
for i in {0..6}
do
    mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
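After creating the devices, a quick sanity check (losetup is part of util-linux) confirms that a free loopback device is now available:
ls -l /dev/loop*
losetup -f   # prints the first unused loopback device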
You can simply run docker inside the docker container using dind. Try this image from Jérôme Petazzoni, as follows:
docker run --privileged -t -i jpetazzo/dind
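Once you are in the interactive shell this gives you (the image's wrapper script starts a nested Docker daemon inside the container), docker commands talk to that inner daemon, for example:
docker ps                # the nested daemon starts with no containers
docker run hello-world   # runs a container inside the container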
Check this page for more details:
https://github.com/jpetazzo/dind
I have Node services which are running in Docker containers.
I am using a shell script to run these services.
I want to run three different instances of the same service on 3 different ports, say 9011, 9022 and 9033.
I also want to configure it in such a way that after every new deployment it stops the previous service and removes it.
I am using docker rm test-service to remove it, but that removes the other instances too.
With this approach only one instance can be running.
Is there any way to remove a Docker container that is running on a specific port?
Here is my shell script:
#!/bin/bash
ORGANISATION="$1"
SERVICE_NAME="$2"
VERSION="$3"
ENVIRONMENT="$4"
INTERNAL_PORT_NUMBER="$5"
EXTERNAL_PORT_NUMBER="$6"
NETWORK="$7"
docker build -t ${ORGANISATION}/${SERVICE_NAME}:${VERSION} --build-arg PORT=${INTERNAL_PORT_NUMBER} --build-arg ENVIRONMENT=${ENVIRONMENT} --no-cache .
docker stop ${SERVICE_NAME}
docker rm ${SERVICE_NAME}
sudo npm install
sudo npm install -g express
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
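For reference, this is how the script is invoked (a sketch; the script name deploy.sh and all the argument values are placeholders matching the positional parameters above):
./deploy.sh my-org test-service 1.0.0 staging 9000 9011 my-network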
I cannot run more than one container with the same name. Can I run the docker service with the same name on 3 different ports? If yes, what modifications do I need to make in the above shell file?
That would be three docker run commands, each using the same internal port but mapped to a different host port, with three different names:
docker run -p ${EXTERNAL_PORT_NUMBER1}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME1}
docker run -p ${EXTERNAL_PORT_NUMBER2}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME2}
docker run -p ${EXTERNAL_PORT_NUMBER3}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME3}
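If you need to stop and remove only the instance that publishes a given host port (a sketch; the publish filter requires a reasonably recent Docker version), you can look the container up by port instead of by name:
CID=$(docker ps -q --filter "publish=9011")
docker stop "$CID" && docker rm "$CID"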
I want to load balance the service.
See docker swarm mode
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm.
The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service. You can specify any unused port. If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
Example:
The following command publishes port 80 in the nginx container to port 8080 on any node in the swarm:
$ docker service create \
--name my-web \
--publish 8080:80 \
--replicas 2 \
nginx
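For the redeployment part of the original question, a swarm service can also be updated in place instead of being stopped and removed (a sketch, reusing the service name from the example above and the image variables from the script):
docker service update --image ${ORGANISATION}/${SERVICE_NAME}:${VERSION} my-web
docker service scale my-web=3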
I've installed the Docker for Mac beta which allows you to use docker commands directly. I want to try to run rethinkdb through docker, so I've followed the instructions of the rethinkdb docker container docs and done the following:
docker run --name some-rethink -v "$PWD:/data" -d rethinkdb
This works, and I can see the container with docker ps and start a shell with docker exec -it some-rethink /bin/bash
However, I can't connect to the admin panel on my Mac directly with their suggestion
$BROWSER "http://$(docker inspect --format \
'{{ .NetworkSettings.IPAddress }}' some-rethink):8080"
This essentially amounts to google-chrome http://172.17.0.2:8080/, but this doesn't work. I asked around and was told
You can't use the docker private ip address space to access the ports
You have to forward them to the mac
However, I'm not sure how to do this, as I don't have any port forwarding tools I'm familiar with, such as ssh, on the container itself. Using the port forwarding command suggested in the rethinkdb container docs (ssh -fNTL ...), but with localhost instead of a remote host, does not work.
How can I connect to the rethinkdb admin panel through http with the docker beta on a Mac?
Try forwarding the container port using the -p flag in the docker run command, e.g.:
docker run -p 8080:8080 --name some-rethink -v "$PWD:/data" -d rethinkdb
and then it should be accessible on localhost,
google-chrome http://127.0.0.1:8080/
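If you also want to reach the database itself from the Mac, and not just the web UI (a sketch; 28015 is RethinkDB's default client driver port), publish that port as well:
docker run -p 8080:8080 -p 28015:28015 --name some-rethink -v "$PWD:/data" -d rethinkdb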
Relevant docker run docs: https://docs.docker.com/engine/reference/run/#/expose-incoming-ports
I'm using Docker Terminal on Windows, running a container from my nginx image, and when I access the docker-machine IP in my browser I get "CONNECTION_REFUSED".
This is the command that I used to run the container:
docker run -it -d -v /home/user/html:/usr/share/nginx/html -p 80:80 myimage
Check if your container is running (docker ps).
Log in to your container to see if there are any error logs (docker exec -it container_name /bin/bash).
Make sure you are using the correct IP address (docker-machine ip machine_name).
It's very important to check the logs with docker logs <container name>.
After that, you'll see whether the connection refused is due to:
An address visibility problem.
An nginx configuration problem.
Port 80 already being in use.
...
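A minimal diagnostic sequence combining the suggestions above (a sketch; my-nginx and the default docker-machine name are assumptions):
docker ps                                      # is the container up, with 0.0.0.0:80->80/tcp in the PORTS column?
docker logs my-nginx                           # any nginx startup errors?
docker-machine ip default                      # the IP to use in the browser
curl -I http://$(docker-machine ip default)/   # expect HTTP/1.1 200 OK from nginx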