How to reference a service in another docker-compose.yml in .env? - laravel

I have some containers on the same network, but they are in separate docker-compose.yml files.
service_1 is an API container.
service_2 is a Laravel app using Laradock.
I do this inside both my docker-compose.yml files:
networks:
  default:
    external:
      name: my_network
I check with docker network inspect my_network that both of my containers are connected.
Now, inside service_2's Laravel .env file, I want to reference my_network and grab the IP/host of service_1 dynamically so I don't have to change the IP every time.
This doesn't work, however: when I make a call from my Laravel app, it doesn't resolve the URL and tries to make the request to service_1:443.
Hopefully someone can explain!

Related

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file, and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, my app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on its default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
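The same idea carries over to the shared-network setup from the question: create the network once, point both compose files at it, and each service becomes reachable from the other by its service name. A minimal sketch, with illustrative service, image, and network names (not taken from the question):
# Run once on the host: docker network create my-network

# docker-compose.yml of the external API project
services:
  my-api:
    image: my-api-image   # illustrative image name
networks:
  default:
    external:
      name: my-network

# The Laradock docker-compose.yml gets the same networks: block.
# After that, `ping my-api` works from the workspace container, and the
# Laravel .env can simply use the service name as the hostname.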

How to connect my Spring Boot app to a Redis container on Docker?

I have created a Spring app and I want to connect it to a Redis server which is deployed with docker-compose. I put the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a connection exception, so how can I know which host Redis is running on and how to connect to it?
Here is my docker-compose file:
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the Docker Compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name
If you want to access Redis by service name ('redis' in this case), the Spring Boot application must also be deployed as a docker-compose service, but it doesn't appear in the docker-compose file that you've provided in the question, so please add it.
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' in order to access the Redis container.
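For example, a minimal sketch of what adding the app as a compose service could look like (the service name app, the build context, and the port are assumptions for illustration, not taken from the question):
version: '2'
services:
  app:
    build: .            # assumes a Dockerfile for the Spring Boot app in the project root
    ports:
      - '8080:8080'
    depends_on:
      - redis
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
With both services in the same compose file, spring.redis.host=redis resolves to the Redis container over the default network.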
Another approach you can use is "docker network". Below are the steps to follow:
Create a docker network for redis
docker network create redis-docker
Spin up the Redis container in the redis-docker network.
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network
docker network inspect redis-docker
Copy the "IPv4Address" IP and paste it into application.yml.
Now build and start your application.
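For illustration, the application.yml could then look roughly like this (the IP below is a placeholder; use the IPv4Address you copied from the inspect output):
# a minimal sketch; replace the host with the value from `docker network inspect redis-docker`
spring:
  redis:
    host: 172.19.0.2   # hypothetical IPv4Address from the inspect output
    port: 6379
Note that this hardcodes a container IP, which can change when the container is recreated, so the compose-service-name approach above is usually more robust.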

Communication between two ddev projects

I have two ddev projects that need to interact with each other. When I ran into some issues, I checked the resolved IP for the connection.
I did this by sshing into project1 and pinging project2 (ping project2.ddev.local).
The domain resolves to 127.0.0.1.
So every request I send to this domain stays in the current container and is not routed to the other project.
Steps to reproduce:
Start two separate ddev containers and ssh into one of them. Try to ping the other project by using the ddev domain.
Is there a solution that two (or more) projects can interact with each other?
Edit 2019-01-08: It's actually easy to do this with just the docker name of the container, no extra docker-compose config is required. For a db container that's ddev-<projectname>-db. So you can access the db container of a project named "d8composer" by using the hostname ddev-d8composer-db; for example mysql -udb -pdb -h ddev-d8composer-db db
Here's another technique that actually does have two projects communicating with each other.
Let's say that you have two projects named project1 and project2, and you want project2 to have access to the db container from project1.
Add a .ddev/docker-compose.extradb.yaml to project2's .ddev folder with this content:
version: '3.6'
services:
  web:
    external_links:
      - ddev-project1-db:proj1-db
And now project1's database container is accessible from the web container on project2. For example, you can mysql -h proj1-db from within the project2 web container.
Note that this is often a bad idea; it's best not to have two dev projects depend on each other, and better to figure out development environments that are as simple as possible. If you just need an extra database, you might want to try How can I create and load a second database in ddev?. If you just need an additional web container as an API server or whatever, the other answer is better.
A simple example of extra_hosts. I needed to use an HTTPS URL in a Drupal module's UI, entity_share, to cURL another ddev site.
On foo I add a .ddev/docker-compose.hosts.yaml
version: '3.6'
services:
  web:
    extra_hosts:
      - bar.ddev.site:172.18.0.6
I tried this and it worked quite nicely; the basic idea is to run a separate ddev-webserver as a service. We usually think of a ddev "service" as something like redis or memcache or solr, but it can really be an API server of any type, and can use the ddev-webserver image (or any other webserver image you want to use).
For example, add this docker-compose.api.yaml to your project's .ddev folder (updated for ddev v1.1.1):
version: '3.6'
services:
  myapi:
    image: drud/ddev-webserver:v1.1.0
    restart: "no"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    volumes:
      - "../myapi_docroot/:/var/www/html:cached"
      - ".:/mnt/ddev_config:ro"
  web:
    links:
      - myapi:$DDEV_HOSTNAME
and put a dummy index.html in your project's ./myapi_docroot.
After ddev start you can ddev ssh -s myapi and do whatever you want there (and myapi_docroot is mounted at /var/www/html). If you ddev ssh into the web container you can curl http://myapi and you'll see the contents of your myapi_docroot/index.html. Your myapi container can access the 'db' container, or you can run another db container, or ...
Note that this mounts a subdirectory of the main project as /var/www/html, but it can actually mount anything you want. For example,
volumes:
  - "../../fancyapiproject/:/var/www/html:cached"

How can I upload the docker image into GitHub?

I wanted to use the Kibana docker image.
Below is the docker-compose that I am using.
kibana:
  image: kibana
  links:
    - "elasticsearch"
  ports:
    - "9201:9201"
  networks:
    - cloud
This image runs on 5601 even though I am specifying 9201.
I learned about changing the port for Kibana here.
If I do so, every time I run the container using docker-compose up it will pull the latest image.
For that reason, I want to put this docker image into VSTS/Git so that I can modify the content there and use it.
I don't want just a docker image; I want its configuration, which I can change as per my requirements.
Any help will be appreciated.
Thanks in advance.
The docker image is a base from which the container is created. When you run docker-compose up, docker-compose will use the Docker engine to start all the services that you have specified in the docker-compose.yml file.
If the containers do not exist, it will create new ones and if they do exist, it will start your running containers.
The YML file (docker-compose.yml) can be uploaded to whichever repository you wish, but if you wish to upload an image, you will have to use a docker registry.
This is probably not really what you wish to do. You should rather store all your configurations in a mounted directory which is actually on the computer running the docker containers instead of in the image or container itself.
You could also create a docker volume to store the configuration in; it will persist as long as you do not remove it.
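As a rough sketch of the mounted-directory approach, you could bind-mount a local Kibana config into the container; the local path and the config location inside the image are assumptions based on the usual Kibana image layout, not something from the question:
kibana:
  image: kibana
  volumes:
    # ./kibana/kibana.yml is an assumed local path for your editable config
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  ports:
    - "9201:5601"
  networks:
    - cloud
That way the configuration lives in your repository next to docker-compose.yml, while the image itself stays unmodified.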
Now, when it comes to the port issue, you are binding the container's 9201 to your 9201; if the service in the container is not running on 9201 you will not be able to reach it through that port.
Port mapping is done with host:container, so if you want to bind your computer's 9201 to the container's 5601 you write:
ports:
  - "9201:5601"
And it should map 5601 to 9201.

How do I run a docker-compose container in interactive mode such that other containers see it?

The scenario: I set a breakpoint in code, which is mounted (as a volume) into a container, created by docker-compose. The code is an odoo module, so it's part of the odoo container.
There is also a webapp container, which has a link to odoo in order to use the API.
The odoo container doesn't expose the API port because the host doesn't need it; the webapp container can, of course, see it.
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
  webapp:
    links:
      - odoo
Of course, the purpose of the breakpoint is to allow me to control the application, so I need a TTY attached. I can do this with docker-compose run --rm odoo. However, when I do this, it creates a new container, so the webapp never actually hits it. Plus, it doesn't tell me what the new container is called, so I have to figure that out manually.
I can use docker exec to run another odoo in the odoo container but then I have to run it on a new port, and thus change the webapp config to use this new instance.
Is there a way of achieving what I want, i.e. to run the odoo container in interactive mode, such that the webapp container can see it, and without reconfiguring the webapp container?
The answer is to use tty: true in the docker-compose file, and docker attach to actually get that process attached to your terminal.
Try this and see if it works
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
    tty: true
    stdin_open: true
  webapp:
    links:
      - odoo
I have also added stdin_open in case you need it; if you don't, just remove it.
Edit-1
Also, if you need to attach to the running container, you need to use docker attach, as docker-compose doesn't have that functionality:
docker attach <containername>
