I have two ddev projects that need to interact with each other. When I ran into some issues, I checked the resolved IP for the connection.
I did this by sshing into project1 and pinging project2 (ping project2.ddev.local).
The domain resolves to 127.0.0.1, so every request I send to this domain stays in the current container and is not routed to the other project.
Steps to reproduce:
Start two separate ddev projects and ssh into one of them. Try to ping the other project by using its ddev domain.
Is there a way for two (or more) projects to interact with each other?
Edit 2019-01-08: It's actually easy to do this with just the Docker name of the container; no extra docker-compose config is required. For a db container that's ddev-<projectname>-db, so you can access the db container of a project named "d8composer" by using the hostname ddev-d8composer-db, for example:

mysql -udb -pdb -h ddev-d8composer-db db
Here's another technique that actually does have two projects communicating with each other.
Let's say that you have two projects named project1 and project2, and you want project2 to have access to the db container from project1.
Add a .ddev/docker-compose.extradb.yaml to project2's .ddev folder with this content:
version: '3.6'
services:
  web:
    external_links:
      - ddev-project1-db:proj1-db
And now project1's database container is accessible from the web container of project2; for example, you can run mysql -h proj1-db from within the project2 web container, as in the sketch below.
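A minimal connection sketch, assuming ddev's standard db/db credentials (the same ones used in the edit above):

mysql -udb -pdb -h proj1-db db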
Note that this is often a bad idea; it's best not to have two dev projects depend on each other, and better to figure out development environments that are as simple as possible. If you just need an extra database, you might want to try "How can I create and load a second database in ddev?". If you just need an additional web container as an API server or the like, the other answer is better.
A simple example of extra_hosts: I needed to use an HTTPS URL in a Drupal module's UI (entity_share) to cURL another ddev site.
On foo I add a .ddev/docker-compose.hosts.yaml:
version: '3.6'
services:
  web:
    extra_hosts:
      - bar.ddev.site:172.18.0.6
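Note that the container IP can change between restarts, so it may need to be looked up again. One way to find it (assuming the target project's web container is named ddev-bar-web) is:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ddev-bar-web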
I tried this and it worked quite nicely. The basic idea is to run a separate ddev-webserver as a service. We usually think of a ddev "service" as something like redis, memcache, or solr, but it can really be an API server of any type, and can use the ddev-webserver image (or any other webserver image you want to use).
For example, add this docker-compose.api.yaml to your project's .ddev folder (updated for ddev v1.1.1):
version: '3.6'
services:
  myapi:
    image: drud/ddev-webserver:v1.1.0
    restart: "no"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    volumes:
      - "../myapi_docroot/:/var/www/html:cached"
      - ".:/mnt/ddev_config:ro"
  web:
    links:
      - myapi:$DDEV_HOSTNAME
and put a dummy index.html in your project's ./myapi_docroot.
After ddev start you can ddev ssh -s myapi and do whatever you want there (myapi_docroot is mounted at /var/www/html). If you ddev ssh into the web container, you can curl http://myapi and you'll see the contents of your myapi_docroot/index.html. Your myapi container can access the 'db' container, or you can run another db container, or ...
Note that this mounts a subdirectory of the main project as /var/www/html, but it can actually mount anything you want. For example,
volumes:
  - "../../fancyapiproject/:/var/www/html:cached"
I have already checked some similar questions. What I don't understand is: if I change my docker-compose.yml and add a profile to it, should I leave the Dockerfile without a profile?
For example, my docker-compose file:
backend:
  container_name: backend
  image: backend
  build: ./backend
  restart: always
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 15
  ports:
    - '8080:8080'
  environment:
    - MYSQL_ROOT_PASSWORD=DbPass3008
    - MYSQL_PASSWORD=DbPass3008
    - MYSQL_USER=DbUser
    - MYSQL_DATABASE=db
  depends_on:
    - mysql
And I will add:

environment:
  - SPRING_PROFILES_ACTIVE=test
As far as I understand, I need three different compose files and have to run them with the -f parameter for the different environments, like:
docker-compose -f docker-compose-local/test/prod up -d
But my question is that my Dockerfile already specifies a profile:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "-Dspring.profiles.active=TEST", "backend.jar"]
So how should I change this Dockerfile? Even if I create 3-4 different compose files, they all use the same Dockerfile. Should I create different Dockerfiles too (which seems ridiculous), or what is the correct way?
There's no need to add a java -Dspring.profiles.active=... command-line option; Spring will recognize the SPRING_PROFILES_ACTIVE environment variable at runtime on its own. That means all of your environments can use the same image (which is generally a good practice).
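With that change, the ENTRYPOINT from the question's Dockerfile can simply drop the hard-coded profile (a minimal sketch of the adjusted file):

FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "backend.jar"]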
Compose can also expand host environment variables in some contexts, so you may be able to use a single Compose file with environment-variable references:
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=${ENVIRONMENT:-dev}

and then select the profile at startup:

ENVIRONMENT=test docker-compose up -d
I tend to discourage putting environment-specific settings in a src/main/resources/*.yml file, since it means you need to recompile the application jar file whenever you deploy to a new environment. Another possibility is to set most Spring properties as environment variables, and then use multiple Compose files to include environment-specific settings. The one downside here is that you need multiple docker-compose -f options and you need to repeat them on every docker-compose invocation.
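A sketch of that multi-file layout (file names are illustrative): keep a base docker-compose.yml and add a small per-environment override such as docker-compose.test.yml:

version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=test

Then run both files together, letting Compose merge them:

docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d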
I have a weird problem: sometimes a Docker container cannot see a .jar file, while most of the time it has no problem with it.
Before I show you the Docker image, a little bit of background. Normally I build a jar archive before running my container, a pretty simple container to run a Spring Boot application. However, at some seemingly random point in the daily routine it does not boot up, with the container reporting "Unable to access jarfile".
I thought it must be some weird permission issue, so I took snapshots of my "target" directory when it was working and when it stopped working via ls -alR target, and later compared those snapshots with git diff. It does not show any difference. I am still pretty convinced it must be related to file permissions, locking, or something of that sort, but I do not know where to start.
I am on macOS 12.0.1, btw. Any ideas appreciated.
The Dockerfile:
FROM openjdk:8-oraclelinux8
RUN mkdir /app
WORKDIR /app
# app.war is expected to appear via the ./target bind mount defined in docker-compose.yml
CMD ["java", "-jar", "app.war"]
And docker-compose.yml
version: "3.9"
services:
app:
build: .
depends_on:
- sql1
volumes:
- ./target:/app
ports:
- "8080:8080"
links:
- "sql1:sqlserver"
...
I'm not sure if this helps, but I don't see your Dockerfile as robust enough to produce consistent results regardless of the state of your local workspace. If I may ask: are you building your war file manually and then creating your Docker container?
Please try to follow this approach if it fits your needs:
- Make sure you delete jar/war files before building the container.
- Use a multistage Dockerfile with a "build" stage for your Spring Boot app, where you generate the jar/war file from a builder image (ant, gradle, maven), and then a second stage where the jar/war file gets copied over to its final location and the application gets executed. This way you ensure consistency and that the file will be there at all times:
This is an example from my Spring Boot templates that I use very often. It's quite generic (it handles the renaming of the jar file without having to worry about how each pom.xml is configured) and I guess it could be adapted to a variety of scenarios:
FROM maven:3.8.6-openjdk-18 as builder
WORKDIR /usr/app/
COPY . /usr/app
RUN mvn package -Dmaven.test.skip
RUN JAR_FILE="target/*.jar"; cp ${JAR_FILE} /app.jar
FROM openjdk:18
WORKDIR /usr/app
COPY --from=builder /app.jar /usr/app
EXPOSE 8080
CMD ["java","-jar","app.jar"]
And the docker compose file:
services:
  app:
    build: .
    depends_on:
      - sql1
    ports:
      - 8080:8080
    networks:
      - spring-boot-api-network
    volumes:
      - ./target:/app
...
NOTE: I would also remove the "links" option, as it is a legacy feature you should avoid using; use networks instead.
You can try this network implementation, added at the bottom of your compose file; just make sure you don't forget to add the networks: key to the sql1 service as well (see the sketch after the network definition).
networks:
  spring-boot-api-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 182.16.0.0/24
          gateway: 182.16.0.1
    name: spring-boot-api-network
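For completeness, attaching the sql1 service (the service name from the compose file above) to the same network would look roughly like this:

services:
  sql1:
    networks:
      - spring-boot-api-network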
I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file, and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, I can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:

networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the server container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (by default on port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
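When the two services live in separate compose files, as in your setup, one sketch of the same idea is a pre-created network that both files reference (the network name here is illustrative). Create it once:

docker network create my_network

and then point the default network of each compose file at it:

networks:
  default:
    external:
      name: my_network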
I have some containers on the same network, but they are in separate docker-compose.yml files.
service_1 is an API container.
service_2 is a Laravel app using Laradock.
I do this inside both my docker-compose.yml files:
networks:
  default:
    external:
      name: my_network
I check with docker network inspect my_network that both my containers are connected.
Now, inside service_2 in my Laravel .env file, I want to reference service_1 on my_network and resolve its IP/host dynamically, so I don't have to change the IP every time.
This doesn't work, however: when I try to make a call from my Laravel app, it doesn't swap in the address and tries to make the request to service_1:443.
Hopefully someone can explain!
Hello, I'm starting out with Docker and Docker Compose, and I have the following problem:
I'm working in a Spring microservices architecture where I have one configuration service, one discovery service, one gateway service, and multiple resource services.
To run these services, I build jar files, which I place in separate folders per service along with their config files (application.yml and bootstrap.yml):
e.g.:

config-service/
  config-service.jar
  application.yml
discovery-service/
  discovery-service.jar
  bootstrap.yml
gateway-service/
  gateway-service.jar
  bootstrap.yml
crm-service/
  crm-service.jar
  bootstrap.yml
This works so far on my server.
Now I want to deploy my services to different environments as Docker images (created with mvn spring-boot:build-image and buildpacks) using Docker Compose, where the configuration files vary depending on the environment. How can I deploy a service as a container using an existing image but with a different configuration file?
Thank you in advance!
There are a few possibilities for handling configuration in a containerized environment.
One option is that Spring Boot allows you to set each application property through an environment variable. For example, let's say you have a spring.datasource.url property; in that case you could also define that property by setting a SPRING_DATASOURCE_URL environment variable:
version: '3.8'
services:
  my-spring-boot-app:
    image: my-image:0.0.1
    environment:
      - SPRING_DATASOURCE_URL=jdbc:my-database-url
Alternatively, you could use volumes to put an external file at a specific location within the container:
version: '3.8'
services:
  my-spring-boot-app:
    image: my-image:0.0.1
    volumes:
      - ./my-app/bootstrap.yml:/etc/my-app/bootstrap.yml
In this example I'm mounting bootstrap.yml from a relative folder on my host machine to /etc/my-app within the container. If you mount these files into a location the application reads its configuration from, you can override the configuration.
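As a sketch of how to point Spring Boot at that mounted folder (assuming a regular application.yml; Spring Cloud's bootstrap.yml has its own lookup rules), the standard spring.config.additional-location property can be set as an environment variable:

version: '3.8'
services:
  my-spring-boot-app:
    image: my-image:0.0.1
    environment:
      # Spring Boot appends this folder to its config file search path
      - SPRING_CONFIG_ADDITIONAL_LOCATION=/etc/my-app/
    volumes:
      - ./my-app/application.yml:/etc/my-app/application.yml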