I have two containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al, but I get this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand this, because when I do docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command so that it executes my command? (The real command is not a simple ls -al, but it's fine for the demo.)
When you run the containers separately with docker run, they are not attached to the same Docker network, so the mongo container is not reachable from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a defined docker network for both containers to be linked by; this is more complex, but is the recommended architecture
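For example, a minimal sketch of the user-defined network approach, reusing the names from the question (mgmt-net is an assumed network name):
docker network create mgmt-net
docker network connect mgmt-net mgmt-mongo    # attach the already-running mongo container
docker run --network mgmt-net \
  -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" \
  gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
On a user-defined network, Docker's embedded DNS resolves the container name mgmt-mongo, so the MONGO_URL above works as-is.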
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
The docker compose yml file below keeps the container open after I run docker compose up -d, but command: bash does not get executed:
version: "3.8"
services:
service:
container_name: pw-cont
image: mcr.microsoft.com/playwright:v1.30.0-focal
stdin_open: true # -i
tty: true # -t
network_mode: host # --network host
volumes: # Ensures that updates in local/container are in sync
- $PWD:/playwright/
working_dir: /playwright
command: bash
After spinning the container up, I want to visit the running container's terminal in Docker Desktop.
Expectation: Since the file has command: bash, I expect that when I go to the running container's terminal in Docker Desktop, it will show root@docker-desktop:/playwright#.
Actual: The container's terminal in Docker Desktop shows #, and I still need to type bash to see root@docker-desktop:/playwright#.
Can the yml file be updated so that bash gets auto executed when spinning up the container?
docker compose doesn't provide that sort of interactive connection. Your docker-compose.yaml file is fine; once your container is running you can attach to it using docker attach pw-cont to access stdin/stdout for the container.
$ docker compose up -d
[+] Running 1/1
⠿ Container pw-cont Started 0.1s
$ docker attach pw-cont
root@rocket:/playwright#
root@rocket:/playwright#
I'm not sure what you are trying to achieve, but using the run command
docker-compose run service
gives me the prompt you expect.
After developing a Spring Boot project that uses MinIO, I tried to run it in Docker but ran into an issue.
Here is my docker-compose.yaml file:
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ACCESS_KEY: "minioadmin"
      MINIO_SECRET_KEY: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
First, I run docker-compose up -d.
Then I run docker ps -a to check that it shows up as a container. After that, I run docker run <container-id> (a07fdf1ef8c4), and get the message shown below.
Unable to find image 'a07fdf1ef8c4:latest' locally
docker: Error response from daemon: pull access denied for a07fdf1ef8c4, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
I also tried the command shown below, but nothing changed.
C:\Users\host\IdeaProjects\SpringBootMinio>docker run -p 9000:9000 9001:9001 minio/minio:latest
Unable to find image '9001:9001' locally
docker: Error response from daemon: pull access denied for 9001, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Even after running docker login, I couldn't fix it.
How can I solve this?
1st Error
docker run <container-id> - That is not how you run a container with Docker. When you run docker-compose up -d, it already starts the containers; in this case it's MinIO.
The docker run command requires an image name as its argument. So when you do docker run <container-id>, it tries to find an image with that container ID, which doesn't exist.
So when you do docker-compose up -d, it starts minio. You do not need to start it again.
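If the goal is to get a shell or run one-off commands in the container Compose already started, a sketch reusing the container_name minio from the compose file above (and assuming the image ships /bin/sh):
docker start minio       # start the existing container if it is stopped
docker exec -it minio sh # run a command inside the already-running container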
2nd Error
When you run docker run -p 9000:9000 9001:9001 minio/minio:latest, you are basically saying that the image name is 9001:9001. But no such image exists. If you want to expose another port, just do docker run -p 9000:9000 -p 9001:9001 minio/minio:latest. For every single port you want to expose, just do -p and enter the port mapping.
I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, my app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on its default port, 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
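For containers from two separate compose projects, as in your setup, the same idea applies, but both docker-compose.yml files must join the pre-created network explicitly by marking it external. A minimal sketch for the API side, with hypothetical service and image names:
services:
  my-api:                  # hypothetical service name
    image: some/api:latest # placeholder image
    networks:
      - my-network

networks:
  my-network:
    external: true # join the network created with: docker network create my-network
With the Laradock services attached to the same external network in the same way, ping my-api from laradock-workspace-1 should resolve.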
I am new to Docker and I am trying to run Selenium Grid tests on Docker. For this purpose, I created a docker compose file and executed below command
docker-compose -f docker-compose.yaml up
Everything worked fine, but after a few hours I restarted the host machine and executed the above command again. This time I got the error below:
ERROR: for selenium-hub Cannot create container for service selenium-hub: Conflict. The container name "/selenium-hub" is already in use by container "some-hash". You have to remove (or rename) that container to be able to reuse that name.
I tried docker-compose -f docker-compose.yaml run selenium-hub, but this command does not start the selenium nodes. So my questions are:
Do I need to remove the container every time before I run docker-compose again?
Is there any way I can use a docker-compose-like file, so that every time I restart Docker, I can just run the file to start all containers together?
Below is the docker-compose file I used:
version: "3"
services:
selenium-hub:
image: selenium/hub:3.141.59-20200525
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
firefox:
image: selenium/node-firefox:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
opera:
image: selenium/node-opera:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
There are several possible ways:
docker system prune will clean up the build cache, remove dangling intermediate containers, and delete the names of containers that are not actively running. This command has to be used carefully.
docker container prune will delete only dead/stopped containers and free up their names.
docker rm -v $(docker ps -aq -f 'status=exited')
docker rmi $(docker images -aq -f 'dangling=true')
docker-compose rm --force removes one-off containers created by docker-compose up or docker-compose run.
Just use the docker-compose up command to start the already created containers.
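If the name conflict reappears anyway, a common pattern (a sketch, assuming the same compose file) is to tear the stack down and bring it back up:
docker-compose -f docker-compose.yaml down  # stop and remove the containers Compose created
docker-compose -f docker-compose.yaml up -d # recreate and start all services together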
TL;DR: How do I have to change my below docker-compose.yml in order to allow one container to use a service of another over a custom (non-standard) port?
I have a pretty common setup: containers for a web app (Padrino [Ruby]), Postgres, Redis, and a queueing framework (Sidekiq). The web app comes with its custom Dockerfile; the remaining services come either from standard images (Postgres, Redis) or mount the data from the web app (Sidekiq). They are tied together via the following docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: 'bundle exec puma -C config/puma.rb'
    volumes:
      - .:/myapp
    ports:
      - "9000:3000"
    depends_on:
      - postgres
      - redis
  sidekiq:
    build: .
    command: 'bundle exec sidekiq -C config/sidekiq.yml -r ./config/boot.rb'
    volumes:
      - .:/myapp
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: my-postgres-user
      POSTGRES_PASSWORD: my-postgres-pass
    ports:
      - '9001:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
  redis:
    image: redis
    ports:
      - '9002:6379'
    volumes:
      - 'redis:/var/lib/redis/data'
volumes:
  redis:
  postgres:
One key point to notice here is that I am exposing the containers' services on non-standard ports (9000-9002).
If I start the setup with docker-compose up, the Redis and Postgres containers come up fine, but the containers for the web app and Sidekiq fail since they can't connect to Redis at redis:9002. Remarkably enough, the same setup works if I use 6379 (the standard Redis port) instead of 9002.
docker ps also looks fine afaik:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9148566c2509 redis "docker-entrypoint.sh" Less than a second ago Up About a minute 0.0.0.0:9002->6379/tcp rubydockerpadrino_redis_1
e6d47321c939 postgres:9.5 "/docker-entrypoint.s" Less than a second ago Up About a minute 0.0.0.0:9001->5432/tcp rubydockerpadrino_postgres_1
What's even more confusing: I can access the Redis container from the host via redis-cli -h localhost -p 9002 -n 0, but the web app and Sidekiq containers fail to establish a connection.
I am using this docker version on MacOS:
Docker version 1.12.3, build 6b644ec, experimental
Any ideas what I am doing wrong? I'd appreciate any hint how to get my setup running.
When you bind ports like this '9002:6379' you're telling Docker to forward traffic from localhost:9002 -> redis:6379. That's why this works from your host machine:
redis-cli -h localhost -p 9002 -n 0
However, when containers talk to each other, they are all connected to the same network by default (the Docker bridge, docker0). By default, containers can communicate with each other freely on this network, without needing any ports opened. Within this network, your redis container is listening for traffic on its usual port (6379); the host isn't involved at all. That's why your container-to-container communication works on 6379.
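So from the other containers you should connect to redis:6379; the 9002 mapping only matters from the host. A sketch of the corresponding app-side setting (REDIS_URL is an assumed variable name, your app may read a different one):
environment:
  - REDIS_URL=redis://redis:6379/0 # service name + container-internal port, not the host port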