How do I run a docker-compose container in interactive mode such that other containers see it? - debugging

The scenario: I set a breakpoint in code, which is mounted (as a volume) into a container, created by docker-compose. The code is an odoo module, so it's part of the odoo container.
There is also a webapp container, which has a link to odoo in order to use the API.
The odoo container doesn't expose the API port because the host doesn't need it; the webapp container can, of course, see it.
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
  webapp:
    links:
      - odoo
Of course, the purpose of the breakpoint is to allow me to control the application - so I need a TTY attached. I can do this with docker-compose run --rm odoo. However, when I do this, it creates a new container, so the webapp never actually hits it. Plus, it doesn't tell me what the new container is called, so I have to figure out its name manually before I can do anything with it.
I can use docker exec to run another odoo in the odoo container but then I have to run it on a new port, and thus change the webapp config to use this new instance.
Is there a way of achieving what I want, i.e. to run the odoo container in interactive mode, such that the webapp container can see it, and without reconfiguring the webapp container?

The answer is to use tty: true in the docker-compose file, and docker attach to actually get that process attached to your terminal.

Try this and see if it works
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
    tty: true
    stdin_open: true
  webapp:
    links:
      - odoo
I have also added stdin_open in case you need it; if you don't, just remove it.
Edit-1
Also, if you need to attach to the running container, you need to use docker attach, as docker-compose doesn't have that functionality:
docker attach <containername>
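Putting it together, a typical session might look like this (the container name is an assumption; Compose derives it from the project name, so check docker-compose ps for the real one):

docker-compose up -d               # start the stack with tty/stdin_open in place
docker-compose ps                  # look up the odoo container's actual name
docker attach myproject_odoo_1     # attach your terminal; the breakpoint now has a TTY

To detach again without stopping the container, use the Ctrl-p Ctrl-q sequence.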

Related

Connecting to a Mongo container from Spring container

I have a problem here that I really cannot understand. I have already seen a few topics here with the same problem, and those topics were successfully solved. I basically did the same thing and cannot understand what I'm doing wrong.
I have a Spring application container that tries to connect to a Mongo container through the following Docker Compose file:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
In my application.properties:
spring.data.mongodb.uri=mongodb://db:27017/app
Finally, my Dockerfile:
FROM eclipse-temurin:11-jre-alpine
WORKDIR /home/java
RUN mkdir /home/java/bar
COPY ./build/libs/foo.jar /home/java/bar/foo.jar
CMD ["java","-jar", "/home/java/bar/foo.jar"]
When I run docker compose up --build I got:
2022-11-17 12:08:53.452 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server db:27017
Caused by: java.net.UnknownHostException: db
Running docker compose ps I can see the mongo container running well, and I am able to connect to it through Mongo Compass and with this same Spring application outside of a container. The only difference when running outside of a container is the host: spring.data.mongodb.uri=mongodb://db:27017/app becomes spring.data.mongodb.uri=mongodb://localhost:27017/app.
Also, I already tried to change the host to localhost inside of the Spring container and it didn't work.
You need to specify the MongoDB host, port and database as separate parameters, as mentioned here:
spring.data.mongodb.host=db
spring.data.mongodb.port=27017
spring.data.mongodb.authentication-database=admin
As per the official docker-compose documentation, the above docker-compose file should work, since both db and app are on the same network (you can check whether they ended up on different networks, just in case).
If the networking is not working, as a workaround, instead of using localhost inside the Spring container, use the server's IP, i.e. mongodb://<server_ip>:27017/app (and make sure there is no firewall blocking it).
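A quick way to verify that both containers really ended up on the same network (the network name is an assumption; Compose derives <project>_default from the project directory name):

docker compose ps                          # confirm both app and db are up
docker network ls                          # find the <project>_default network
docker network inspect <project>_default   # both containers should be listed under "Containers"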

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, my Laravel app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. Was hoping someone familiar with docker might be able to explain why since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies to your setup.
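For the original two-compose-file setup, the same name-based resolution works once the network is marked external and every service is explicitly attached to it; a minimal sketch (service and image names are illustrative):

# run once, before starting either stack:
#   docker network create my-network
# then, in each docker-compose.yml:
services:
  my-api:
    image: my-api:latest
    networks:
      - my-network

networks:
  my-network:
    external: true

The external network must exist before docker-compose up, and a service only joins it if the network is listed under that service's own networks: key.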

How to reference service in another docker-compose.yml in .env?

I have some containers on the same network, but they are in separate docker-compose.yml files.
service_1 is an API container.
service_2 is a Laravel app using Laradock.
I do this inside both my docker-compose.yml files:
networks:
  default:
    external:
      name: my_network
I check with docker network inspect my_network that both my containers are connected.
Now, inside service_2, in my Laravel env file, I want to reference my_network and grab the IP/host of service_1 dynamically, so I don't have to change the IP every time.
This doesn't work, however: when I try to make a call from my Laravel app, it doesn't swap in the real address and tries to make the request to service_1:443.
Hopefully someone can explain!
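For what it's worth, on a shared network Docker's embedded DNS resolves service/container names directly, so the env entry would normally use the name as the host rather than an IP; a sketch, with a hypothetical variable name (port 443 suggests HTTPS):

# .env of service_2 (Laravel)
API_BASE_URL=https://service_1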

Docker-compose Laravel how to wait for the database to load?

I am trying to set up tests for my Laravel application.
The application runs with Docker compose.
When I try to start my tests with this command:
docker-compose -p tests --env-file .env_tests run --rm myapp ./vendor/bin/phpunit
the tests start to run before the database container is ready.
How can I make my tests wait for the database to become ready?
My docker-compose.yml looks like this:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=my_user
      - MARIADB_DATABASE=my_database
      - MARIADB_PASSWORD=my_password
    ports:
      # connect your dbeaver/workbench to localhost:${WORKBENCH_PORT}
      - ${WORKBENCH_PORT}:3306
    # volumes:
    #   Do not load databases here, as there is no
    #   good way for other containers to wait for this to finish
    #   - ./database:/docker-entrypoint-initdb.d
  myapp:
    tty: true
    image: bitnami/laravel:6-debian-9
    environment:
      - DB_HOST=mariadb
      - DB_USERNAME=my_user
      - DB_DATABASE=my_database
      - DB_PASSWORD=my_password
    depends_on:
      - mariadb
    ports:
      - 3000:3000
    volumes:
      - ./:/app
When I start the application normally (docker-compose up), Laravel waits for the mariadb container to finish loading, but I couldn't find out how this is done.
---- Edit ----
I found that the bitnami/laravel Docker container that I use has a script called wait_for_db() that seems to wait for the database.
What I haven't found out yet is why this script runs in normal mode, but not when I start the tests.
According to the official docs, it is not possible to wait until the database is ready, but only until it has started:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
(...)
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
The difference in your app's behaviour between the general case and the test case may be related to other reasons, such as the test taking less time to load (giving less time to the database to get ready) or test handling connection failure in a different way (not retrying after some time).
EDIT
Using docker-compose run overrides the default command of the container. Therefore, even if the script intended to wait for the database initialization was originally run as part of that command, it will not be run here.
Check the docs of the command:
First, the command passed by run overrides the command defined in the service configuration. For example, if the web service configuration is started with bash, then docker-compose run web python app.py overrides it with python app.py.
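Following the docs' advice to handle this in your own code, one common pattern is a small wrapper that retries the connection before launching the tests. A minimal sketch, assuming the mysql client is available inside the app container (host and credentials match the compose file above):

#!/bin/sh
# wait-for-db.sh -- poll MariaDB until it accepts connections, then run the tests
until mysql -h mariadb -u my_user -pmy_password my_database -e 'SELECT 1' >/dev/null 2>&1; do
  echo "waiting for mariadb..."
  sleep 2
done
exec ./vendor/bin/phpunit "$@"

You would then point the run command at this script instead of calling ./vendor/bin/phpunit directly.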

How can I upload the docker image into GitHub

I wanted to use Kibana docker image.
Below is the docker-compose that I am using.
kibana:
  image: kibana
  links:
    - "elasticsearch"
  ports:
    - "9201:9201"
  networks:
    - cloud
This image runs on 5601 even though I am specifying 9201.
I learned about changing the port for Kibana here.
If I do so, every time I run the container using docker-compose up, it will pull the latest image.
For that reason, I want to put this Docker image into VSTS/Git so that I can modify the content there and use it.
I don't want just a Docker image; I want its configuration, which I can change as per my requirements.
Any help will be appreciated.
Thanks in advance.
The Docker image is a base from which the container is created. When you run docker-compose up, docker-compose will use the Docker engine to start all the services that you have specified in the compose.yml file.
If the containers do not exist, it will create new ones; if they do exist, it will start the existing containers.
The YML file (docker-compose.yml) can be uploaded to whichever repository you wish, but if you want to upload an image, you will have to use a Docker registry.
That is probably not really what you want to do, though. You should rather store all your configuration in a mounted directory that lives on the computer running the Docker containers, instead of in the image or container itself.
You could also create a Docker volume to store the configuration in; it will persist as long as you do not remove it.
Now, when it comes to the port issue: you are binding the container's 9201 to your host's 9201, so if the service in the container is not running on 9201, you will not be able to reach it through that port.
Port mapping is done as host:container, so if you want to bind your computer's 9201 to the container's 5601 you write:
ports:
  - "9201:5601"
And that maps the container's 5601 to your host's 9201.
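A minimal sketch of the mounted-configuration approach suggested above, keeping the Kibana settings in your repository instead of baking them into an image (the in-container config path is an assumption and varies between Kibana image versions):

kibana:
  image: kibana
  ports:
    - "9201:5601"
  volumes:
    # kibana.yml lives in your repo, next to docker-compose.yml;
    # mount it read-only over the image's default config
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
  networks:
    - cloud

With this layout, the compose file and kibana.yml are the only things you need to version-control; the image itself stays untouched.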
