Pass multiple system parameters through docker-compose --> Dockerfile --> Spring Boot application housed in a Docker container - spring-boot

Below is the content of my docker-compose.yml file:
eureka-server:
  image: controlsplm/eureka-server
  environment:
    HOST_IP: X.X.X.X
    ACTIVE_PROFILE=docker-development-cloud
  ports:
    - "8761:8761"
  restart: always
And below is the content of my Dockerfile:
FROM java:8
VOLUME /tmp
ADD eureka-server-0.1.0-SNAPSHOT.jar app.jar
EXPOSE 8761
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Denvironment=$HOST_IP","-Dspring.profiles.active=$ACTIVE_PROFILE","-jar","/app.jar"]
But when I run the container using Compose, HOST_IP is picked up but ACTIVE_PROFILE is not. Am I missing anything here? Kindly help...

Indeed, if you have Spring properties
my.spring.property.one=green
my.spring.property.two=blue
you can include them as follows in the docker-compose.yml:
environment:
  - my_spring_property_one=green
  - my_spring_property_two=blue
This worked for me with:
Spring Boot v1.5.10.RELEASE
Spring v4.3.14.RELEASE
docker-compose file version '3.6'
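The underscore form works because of Spring Boot's relaxed binding: environment variable names are matched case-insensitively, with underscores treated as separators, so an entry like the following reaches the application as my.spring.property.one. A minimal sketch, reusing the property names from above:

```yaml
environment:
  # Relaxed binding maps MY_SPRING_PROPERTY_ONE (or my_spring_property_one)
  # to the Spring property my.spring.property.one
  - MY_SPRING_PROPERTY_ONE=green
```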

I see two problems:
the format of the ACTIVE_PROFILE entry is wrong; it should be ACTIVE_PROFILE: docker-development-cloud, as mentioned by @andreas-jägle in the comments
The entrypoint uses JSON array notation, which means the command is exec'ed directly without a shell. The shell (usually bash) is what replaces the variables with their values, so you need to run in a shell to use those variables. You can either use the string form of ENTRYPOINT, or use:
ENTRYPOINT ["bash", "-c", "java -Dspring.profiles.active=$ACTIVE_PROFILE ..."]
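To see why the shell matters, compare what the program actually receives in each case. A minimal sketch, runnable outside Docker:

```shell
export ACTIVE_PROFILE=docker-development-cloud

# Exec-form ENTRYPOINT: no shell is involved, so java would literally receive
# the string "-Dspring.profiles.active=$ACTIVE_PROFILE". Simulated here:
printf '%s\n' '-Dspring.profiles.active=$ACTIVE_PROFILE'

# Shell form (or bash -c): the shell substitutes the value first.
sh -c 'echo "-Dspring.profiles.active=$ACTIVE_PROFILE"'
```

The first line prints the dollar sign verbatim; the second prints the expanded profile name, which is what the JVM needs to see.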

I resolved the issue by adding below lines to my docker-compose.yml file:
eureka-server:
  image: controlsplm/eureka-server:latest
  environment:
    HOST_IP: X.X.X.X
    SPRING_PROFILES_ACTIVE: docker-development-cloud
  ports:
    - "8761:8761"
  restart: always
and below lines in dockerfile:
FROM java:8
VOLUME /tmp
ADD eureka-server-0.1.0-SNAPSHOT.jar app.jar
EXPOSE 8761
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Denvironment=$HOST_IP","-jar","/app.jar"]
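Note that this exec-form ENTRYPOINT still passes $HOST_IP through unexpanded; SPRING_PROFILES_ACTIVE works because Spring Boot reads it straight from the environment. A sketch that also applies the shell-form advice from the earlier answer, so $HOST_IP is substituted at start-up:

```dockerfile
FROM java:8
VOLUME /tmp
ADD eureka-server-0.1.0-SNAPSHOT.jar app.jar
EXPOSE 8761
# Shell form: /bin/sh expands $HOST_IP before launching java.
# SPRING_PROFILES_ACTIVE needs no -D flag; Spring Boot picks it up itself.
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -Denvironment=$HOST_IP -jar /app.jar
```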


Mounting Volume in DockerFile on Windows

I am working on a Spring Boot project with Docker. I tried to mount a volume so I could access files generated by the Spring Boot application from my local directory. The data is generated in the Docker container, but I cannot find it in the local directory.
I have read many topics but none seems to be helpful.
I am still new to Docker and would appreciate any suggestions.
I have tried to mount the volume directly in the Dockerfile, as there is a docker-compose file to run the service alongside others. Below is what I have in my Dockerfile and docker-compose file.
Dockerfile
FROM iron/java:1.8
EXPOSE 8080
ENV USER_NAME myprofile
ENV APP_HOME /home/$USER_NAME/app
#Test Script>>>>>>>>>>>>>>>>>>>>>>
#Modifiable
ENV SQL_SCRIPT $APP_HOME/SCRIPTS_TO_RUN
ENV SQL_OUTPUT_FILE $SQL_SCRIPT/data
ENV NO_OF_USERS 3
ENV RANGE_OF_SKILLS "1-4"
ENV HOST_PATH C:"/Users/user1/IdeaProjects/path/logs"
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
RUN adduser -S $USER_NAME
RUN mkdir $APP_HOME
RUN mkdir $SQL_SCRIPT
RUN chown $USER_NAME $SQL_SCRIPT
VOLUME $HOST_PATH: $SQL_SCRIPT
ADD myprofile-*.jar $APP_HOME/myprofile.jar
RUN chown $USER_NAME $APP_HOME/myprofile.jar
USER $USER_NAME
WORKDIR $APP_HOME
RUN sh -c 'touch myprofile.jar'
ENTRYPOINT ["sh", "-c","java -Djava.security.egd=file:/dev/./urandom -jar myprofile.jar -o $SQL_OUTPUT_FILE -n $NO_OF_USERS -r $RANGE_OF_SKILLS"]
Docker-compose
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./logs/:/app
The problem here is that you are mounting the same folder ./logs twice. The docker-compose volume mount syntax is - <your-host-path>:<your-container-path>. Also, it's better to use relative paths when you are building the application. So change the docker-compose file as follows (assuming you want to see the files in ./target relative to the Dockerfile):
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./target/:/app

Make a request to a spring api running in a docker container from windows host

So, I searched around for an answer on this matter, but people either don't address the issue or say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (a Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the api locally making the following request works perfectly:
http://localhost:8080/api/task/
This will list multiples task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
api:
build: .
depends_on:
- mysql
environment:
- SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
ports:
- "8080:80"
mysql:
image: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=root
- MYSQL_PASSWORD=root
- MYSQL_DATABASE=test
If I do docker-compose up, this works without issue.
The problem is, if I try to call the same endpoint as before from localhost I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (obtained from docker inspect), but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (as the screenshot shows), so the container side of the mapping should be 8080.
It should be like below:
ports:
  - "8080:8080"
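Compose port mappings are read as "HOST:CONTAINER". For reference, the corrected api service would look like this sketch (service name taken from the question's compose file):

```yaml
api:
  build: .
  ports:
    - "8080:8080"  # host port 8080 -> container port 8080, where Spring Boot listens
```

The original "8080:80" mapping forwarded host traffic to container port 80, where nothing was listening, hence the empty response.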

Docker compose can not start service network not found after restart docker

I'm using Docker for Windows (version 18.03.0-ce-win59 (16762)) on Windows 10 Pro. All the containers run fine after running the command docker-compose up -d. The problem is when I restart the Docker service. Once restarted, all the containers are stopped, and when I run the command docker-compose start the following error is shown:
Error response from daemon: network ccccccccccccc not found
I don't know what's happening. When I run the containers using docker run with the --restart=always option, everything works as expected and no error is shown on restart.
This is the docker-compose file:
version: '3'
services:
  service_1:
    image: image1
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_2:
    image: image2
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_3:
    image: image3
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
The dockerfiles are like this:
FROM microsoft/dotnet-framework:3.5
ARG ENTRY
ENV my_env=$ENTRY
WORKDIR C:\\foo2
ENTRYPOINT C:/foo2/app.exe %my_env%
The network has changed. I ran the docker network prune command and hit the same problem. Recreating the containers fixes it: Docker sets up the network again for the new containers.
#remove all containers
docker rm $(docker ps -qa)
#or
docker system prune
There might be some old container instances which were not removed. Check the instances with
docker container ls -a
You might get output like this if you have some instances which were not removed
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS                   PORTS   NAMES
8b4678e6666b   b4a75a01d539   "/bin/sh -c 'eval `s…"   6 weeks ago   Exited (1) 6 weeks ago           zealous_allen
ee862a3418f2   1eaaf48e9b42   "/bin/sh -c 'eval `s…"   6 weeks ago   Exited (1) 6 weeks ago           jolly_torvalds
Remove the containers by the container id
docker container rm 8b4678e6666b
docker container rm ee862a3418f2
Now start your container with docker-compose file
This worked for me. Hope it helps!
I found a possible solution by editing the docker-compose.yml file as follows:
version: '3'
services:
  cm04:
    image: tnc530_cm04
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530/bin/x86/Release:C:/adontec
  cm06:
    image: tnc620_cm06
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
  cm08:
    image: tnc620_cm08
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
networks:
  test:
    external:
      name: nat
As you can see, I created a network called test linked to the external network nat. Now, when I restart the Docker service, the containers start with no errors.
Alternatively, you can just open your Docker app and manually delete the containers. Then run docker-compose up in your terminal. Now it should be working. Go to port 9000 or 9001, or whichever port you are using, and see if MinIO is actually running.

Docker compose, link file outside container

I'm working with docker-compose for a laravel project and nginx.
This is my docker-compose.yml :
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the backend container, and if I run the command docker-compose up -d everything is fine: all containers are created and my project is running on port 81.
My problem is that the Laravel project in my backend container has a .env file with the database login, password, and other settings.
How can I edit this file after docker-compose up? Editing it directly in the container is not a good idea; is there a way to link a file outside a container with docker-compose?
Thanks
One approach to this is to use the env_file directive in docker-compose.yml; there you can point to a file of key-value pairs that will be exported into the container. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
Then you can configure your application to use these env values.
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).
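The env_file entries are plain KEY=value lines. As a quick way to see what compose will inject, the same file can be sourced in a shell (a sketch with made-up Laravel-style keys):

```shell
# Write a sample .env file (hypothetical keys, mirroring Laravel's DB settings)
cat > .env <<'EOF'
DB_USERNAME=laravel
DB_PASSWORD=secret
EOF

# Export every pair, roughly what env_file does for the container's environment
set -a
. ./.env
set +a

echo "$DB_USERNAME"   # laravel
```

Because the values live outside the image, you can change them per environment without rebuilding; only a container recreate is needed.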

Docker in Docker docker-compose daemon not running on host. Windows 10

I am running the Docker-in-Docker image (dind) on Windows 10. PowerShell is run as admin. I have a manager and a worker container up through a docker-compose.yml file.
The compose yaml file is as such:
version: '2'
services:
  manager:
    image: docker:latest
    ports:
      - "2375"
      - "8080"
    privileged: true
    tty: true
  worker:
    image: docker:latest
    ports:
      - "8080"
    privileged: true
    tty: true
I don't know what tty: true even does, but it's the only way to get the containers to stay up for some reason.
I try to init the manager with:
docker-compose exec manager docker swarm init --listen-addr 0.0.0.0:2377
I also tried with port 0.0.0.0:2375, which is what is exposed in the compose YAML.
When I run the command I get this:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Here is the result of a docker-compose ps
Name                  Command                   State   Ports
swarmtest_manager_1   docker-entrypoint.sh sh   Up      0.0.0.0:32782->2375/tcp, 0.0.0.0:32781->8080/tcp
swarmtest_worker_1    docker-entrypoint.sh sh   Up      0.0.0.0:32780->8080/tcp
Running and testing services in the dind environment would be ideal. I still don't fully like Compose, as I am trying to learn how to use it better; creating a docker-machine and using Docker swarm mode seems much easier, although I'm truthfully not sure of the limitations of Compose used in this manner.
