How to choose the correct profile for different environments with Docker and Docker Compose? (Spring)

I have already checked some similar questions.
What I do not understand is: if I change my docker-compose.yml and add a profile to it, should I leave the Dockerfile without a profile?
For example, my docker-compose file:
backend:
  container_name: backend
  image: backend
  build: ./backend
  restart: always
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 15
  ports:
    - '8080:8080'
  environment:
    - MYSQL_ROOT_PASSWORD=DbPass3008
    - MYSQL_PASSWORD=DbPass3008
    - MYSQL_USER=DbUser
    - MYSQL_DATABASE=db
  depends_on:
    - mysql
And I will add:
  environment:
    - SPRING_PROFILES_ACTIVE=test
As far as I understand, I need to create 3 different compose files and run them with the -f parameter for each environment, like:
docker-compose -f docker-compose-<local|test|prod>.yml up -d
But my question is: my Dockerfile already specifies a profile:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "-Dspring.profiles.active=TEST", "backend.jar"]
So how should I change this Dockerfile? Even if I create 3-4 different compose files, they all use the same Dockerfile. Should I create different Dockerfiles too (which seems ridiculous)? What is the correct way?

There's no need to add a java -Dspring.profiles.active=... command-line option; Spring will recognize the runtime SPRING_PROFILES_ACTIVE environment variable on its own. That means all of your environments can use the same image (which is generally a good practice).
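For example, the Dockerfile from the question works unchanged in every environment once the hard-coded profile is removed:

FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
# no -Dspring.profiles.active flag; the profile comes from SPRING_PROFILES_ACTIVE at run time
ENTRYPOINT ["java","-jar","backend.jar"]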
Compose can also expand host environment variables in some contexts, so you may be able to use a single Compose file with environment-variable references:
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=${ENVIRONMENT:-dev}

ENVIRONMENT=test docker-compose up -d
I tend to discourage putting environment-specific settings in a src/main/resources/*.yml file, since it means you need to recompile the application jar file whenever you deploy to a new environment. Another possibility is to set most Spring properties as environment variables, and then use multiple Compose files to include environment-specific settings. The one downside here is that you need multiple docker-compose -f options and you need to repeat them on every docker-compose invocation.
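As a sketch of the multiple-Compose-files approach (the file names here are my own assumptions, not from the question), the base file keeps the shared settings and a small per-environment override adds only the profile:

# docker-compose.yml (shared settings)
services:
  backend:
    build: ./backend
    ports:
      - '8080:8080'

# docker-compose.prod.yml (hypothetical override file)
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=prod

Both files then have to be listed on every invocation:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d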

Related

Docker sometimes cannot see jar file

I have a weird problem: sometimes a Docker container cannot see a .jar file, while most of the time it has no problem with it.
Before I show you the Docker image, a little bit of background. Normally I build a jar archive before running my container, a pretty simple container to run a Spring Boot application. However, at some seemingly random point in the daily routine it does not boot up, with the container reporting "Unable to access jarfile".
I thought it must be some weird permission issue, so I took snapshots of my "target" directory when it was working and when it stopped working via ls -alR target, and later compared those snapshots with git diff. It shows no difference. I am still pretty convinced it must be related to file permissions, locking, or something of that sort, but I do not know where to start.
I am on macOS 12.0.1, by the way. Any ideas appreciated.
The Dockerfile:
FROM openjdk:8-oraclelinux8
RUN mkdir /app
WORKDIR /app
CMD "java" "-jar" "app.war"
And docker-compose.yml:
version: "3.9"
services:
  app:
    build: .
    depends_on:
      - sql1
    volumes:
      - ./target:/app
    ports:
      - "8080:8080"
    links:
      - "sql1:sqlserver"
...
I'm not sure if this helps, but I don't see your Dockerfile as robust enough to produce consistent results regardless of the state of your localhost workspace. If I may ask: are you building your war file manually and then creating your Docker container?
Please try this approach if it fits your needs:
Make sure you delete the jar/war files before building the container.
Use a multi-stage Dockerfile with a "build" stage for your Spring Boot app, where you generate the jar/war file from a builder image (ant, gradle, maven), and a second stage where the jar/war file is copied to its final location and the application is executed. This way you ensure consistency and that the file will be there at all times.
This is an example from my Spring Boot templates that I use very often. It's quite generic (it handles the renaming of the jar file without worrying about how each pom.xml is configured), and I guess it could be implemented in a variety of scenarios:
FROM maven:3.8.6-openjdk-18 as builder
WORKDIR /usr/app/
COPY . /usr/app
# build the artifact inside the image, skipping tests
RUN mvn package -Dmaven.test.skip
# copy whatever jar was produced to a fixed, predictable name
RUN JAR_FILE="target/*.jar"; cp ${JAR_FILE} /app.jar

FROM openjdk:18
WORKDIR /usr/app
COPY --from=builder /app.jar /usr/app
EXPOSE 8080
CMD ["java","-jar","app.jar"]
docker-compose.yml:
services:
  app:
    build: .
    depends_on:
      - sql1
    ports:
      - 8080:8080
    networks:
      - spring-boot-api-network
    volumes:
      - ./target:/app
...
NOTE: I would also remove the "links" option, as it is a legacy feature you should avoid; use networks instead.
You can try this network definition, added at the bottom of your compose file; just make sure you don't forget to add a networks: section to the sql1 service as well:
networks:
  spring-boot-api-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 182.16.0.1/24
          gateway: 182.16.0.1
    name: spring-boot-api-network
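For completeness, attaching the database service to the same network might look like this (a hypothetical sketch; the sql1 definition isn't shown in the question):

sql1:
  # ...existing sql1 configuration...
  networks:
    - spring-boot-api-network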

How do I have my jar re-deployed and put into docker image every time I run compose?

So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion combining them all together.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each of them via the IntelliJ IDEA green "run" button works fine; now I'd like to automate things and run via Docker.
I have Dockerfiles that look the same in both cases (in both modules it's the same, only the JAR name differs):
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
ENTRYPOINT ["java","-jar","/application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar"]
CMD /wait && /*.jar
I also have docker-compose:
version: '2.1'
services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2
networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
run maven package for each module and receive files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders
docker build -t springio/app1 .
docker-compose up --build
And it works, but I feel I'm doing some extra steps.
How can I set up the project so that I ONLY have to run docker-compose (after each time I change things in the code)?
Again, I know it's quite a simple thing, but I've kind of lost the logic.
Thanks!
P.S.
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait":
it's important that the apps start one after another. I tried different solutions; unfortunately, they don't really work as well as I would like. But I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked: here's the answer, you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does is basically: it first builds the jar file, copies it into the package stage, and eventually runs it.
That allows you to run your app in Docker by running only docker-compose.
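With the build stage baked into each module's Dockerfile, a single command rebuilds the images and starts both applications (a minimal usage sketch, assuming the compose services' build: contexts point at these Dockerfiles):

docker-compose up --build -d

The --build flag makes Compose rebuild the images on every invocation; thanks to Docker's layer caching, Maven only re-runs when the copied sources have actually changed.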

Weird behaviour passing build-args to Dockerfile through docker-compose

I'm facing a strange problem (or better: two different, weird problems) trying to pass build-args to my Dockerfile through docker-compose up.
My files - initial setup
Dockerfile:
ARG NODE_VERSION
FROM node:${NODE_VERSION}
ARG NPM_REGISTRY_TOKEN
RUN echo "=====> token ${NPM_REGISTRY_TOKEN}"
... ... ...
docker-compose.yml:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
With this initial setup in place, I get the following behaviour (on Linux Mint 20, docker-compose version 1.26.2, build eefe0d31):
running docker build --build-arg NPM_REGISTRY_TOKEN=xyz123 produces the output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running docker-compose build --build-arg NPM_REGISTRY_TOKEN=xyz123 myservice produces the output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running NPM_REGISTRY_TOKEN=xyz123 docker-compose up myservice produces the output =====> token : the NPM_REGISTRY_TOKEN environment variable should flow to the Dockerfile thanks to the bare - NPM_REGISTRY_TOKEN entry (according to https://docs.docker.com/compose/compose-file/#args: "You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running"), but it seems not to be available during the build
My files - reloaded
Simply changing my docker-compose.yml file to
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
      dockerfile: ../Dockerfile
seems to solve the problem: swapping the args and dockerfile entries in the yml file also makes it possible to pass environment variables to the Dockerfile as build args through docker-compose up. Problem solved. Or not?
Changing OS, getting new problem
So, the developers on my team use a bunch of different operating systems: Linux, macOS, and Windows, too.
Running the same commands on the same version (1.26.2) of docker-compose on Windows 10 Professional 1909, we get the same problem we faced initially, both with the initial version of the docker-compose.yml file and with the version that works on Linux.
We tried passing the env var on the command line, setting it in the command prompt, setting it as a system variable through the GUI... we tried launching docker-compose up from git-bash, too, but we're not able to get the variable's value in the Dockerfile.
I googled a bit around but haven't found any reference to known bugs or limitations of the Windows version of docker-compose.
Does anyone have any idea what the problem might be? Thank you very much in advance!
So, finally, after some trial and error on different OSs and with different configurations, I ended up with an explanation of my problem, and therefore with a viable workaround that allowed me to reach a satisfactory configuration for my docker-compose.yml file.
Short answer: it wasn't a matter of OSs, nor of env var passing, nor of the order of the context / dockerfile sections; it was a clash between different services in my compose file.
In more detail: my docker-compose.yml file also contained an additional service, whose job was to initialize the database the application was pointing to:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
      - db_initializer
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run start:dev'
  persistence:
    # Setting up the DBMS here
  db_initializer:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate'
So, the problem was that I was configuring two services based on the same self-built image, launching them with different commands (npm run db:migrate for the db_initializer service, npm run start:dev for the application service). Apparently Compose took the build configuration provided for the first service it built (db_initializer, because myservice depends on it) and used it for both services, ignoring the (different) args section I was providing for the second container. So I was able to solve the problem (this time for real!) simply by merging the service declarations, including all the args I needed:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate && npm run start:dev'
  persistence:
    # Setting up the DBMS here
So, after a bunch of months without collecting answers, I think it's time to share my experience, hoping it can help someone encountering this weird behaviour.
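To double-check, the experiment from above can be re-run against the merged file (a hedged usage note, reusing the command and token from the original tests):

NPM_REGISTRY_TOKEN=xyz123 docker-compose up --build myservice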

How to connect two docker containers, one containing hazelcast in memory data grid, and one containing war file

I have two Docker containers: one with a Hazelcast Java application (the core of the web application, a jar package) and one with the REST service for the web application (a war package). I'm using docker-compose to build up the whole project in Docker, which looks like this:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
I also have a Dockerfile for each container:
escomled_datagrid:
FROM openjdk:8-jdk-alpine as build
WORKDIR /EscomledML
COPY ./. ./
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
CMD ["sh","/EscomledML/escomled_data_grid.sh","start"]
EXPOSE 8085
tomcat:
FROM tomcat:8.5-alpine
COPY ./sample.war /usr/local/tomcat/webapps/
COPY ./escomled-rest.war /usr/local/tomcat/webapps/
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
RUN sh -c 'touch /usr/local/tomcat/webapps/sample.war'
RUN sh -c 'touch /usr/local/tomcat/webapps/escomled-rest.war'
EXPOSE 8080
The first container uses an sh script at runtime.
This way everything works fine; the containers start and stay active.
The only problem is that they don't see each other. The Hazelcast server starts and waits for a "member" to connect; the war file (Hazelcast member) also starts, but they don't "see" each other and won't connect. I put the "links" and "depends_on" tags in the docker-compose file, but that doesn't help.
The code for the project works fine when I start it locally: first I start the data grid server as a Java application, then I start the Tomcat containing the REST service, and the connection is established in no time.
So my question is: how do I link these two containers so they can see each other and work together?
Try putting the containers in the same network by specifying a network bridge:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
networks:
- networknamename
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
networks:
- networknamename
networks:
networknamename:
driver: bridge
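Once both services are attached to the same network, they can resolve each other by service name. A quick connectivity check (a hypothetical example, assuming a shell with ping is available in the tomcat image):

docker-compose exec tomcat ping escomled_datagrid

From the war's point of view, the Hazelcast server is then reachable at the hostname escomled_datagrid instead of localhost.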

Docker-compose: get pulled & deployed images

Context:
I am currently trying to create a Jenkins job that builds periodically and updates the images referenced in my docker-compose file. I managed to get a basic version of this working by labeling the services in my docker-compose.yml. For example:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  container_name: 'gitlab'
  labels:
    update: 'notify'
  ...
letsencrypt:
  image: 'jrcs/letsencrypt-nginx-proxy-companion'
  container_name: 'letsencrypt-companion'
  labels:
    update: 'auto'
  ...
"notify" means it should pull new Docker images periodically and notify me that an image is ready to be updated; "auto" means it is allowed to deploy the new image automatically.
Problem:
I want Jenkins to automatically notify me when new images are pulled / ready to be deployed. The problem, however, is that I have to interpret the output of docker-compose pull and docker-compose up -d to know which images were actually new and deployed. I need a solution that works in a Jenkins pipeline (declarative or scripted).
Try watching this video: https://www.youtube.com/watch?v=ZL3hMP9BdmQ; I think that's what you are looking for.
https://github.com/v2tec/watchtower
"If you mount the config file as described below, be sure to also prepend the url for the registry when starting up your watched image (you can omit the https://). Here is a complete docker-compose.yml file that starts up a docker container from a private repo at dockerhub and monitors it with watchtower. Note the command argument changing the interval to 30s rather than the default 5 minutes."
version: "3"
services:
cavo:
image: index.docker.io/<org>/<image>:<tag>
ports:
- "443:3443"
- "80:3080"
watchtower:
image: v2tec/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/.docker/config.json:/config.json
command: --interval 30
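If you'd rather detect updates directly in the pipeline instead of delegating to watchtower, one option is to compare image digests around the pull. This is only a sketch; the image and service names are taken from the compose file earlier in the question, and the shell step would live inside your Jenkins pipeline:

# record the digest, pull, and compare; a change means a new image is ready
before=$(docker image inspect --format '{{index .RepoDigests 0}}' gitlab/gitlab-ce:latest)
docker-compose pull gitlab
after=$(docker image inspect --format '{{index .RepoDigests 0}}' gitlab/gitlab-ce:latest)
if [ "$before" != "$after" ]; then
  echo "gitlab: new image pulled and ready to deploy"
  docker-compose up -d gitlab
fi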
