Can `bootBuildImage` create writable volumes? - spring-boot

Given a Spring Boot app that writes files to /var/lib/app/files.
I create a Docker image with the Gradle task:
./gradlew bootBuildImage --imageName=app:latest
Then, I want to use it in docker-compose:
version: '3.5'
services:
  app:
    image: app:latest
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc
volumes:
  app-storage:
This will fail because the folder is created during docker-compose up and is owned by root, so the app has no write access to it.
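An illustrative way to confirm the ownership mismatch from inside a running container (assumes the container is up, e.g. via the root workaround below):
docker-compose exec app ls -ldn /var/lib/app/files
# typically shows root (uid 0) as the owner, while the app itself runs as uid 1000 (see the chown further down)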
The quick fix is to run the image as root by specifying user: root:
version: '3.5'
services:
  app:
    image: app:latest
    user: root # <------------ required
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc
volumes:
  app-storage:
This works fine, but I do not want to run it as root, so I wonder how to achieve that. Normally I could create a Dockerfile that creates the desired folder with the correct ownership and write permissions. But as far as I know, buildpacks do not use a custom Dockerfile, and hence bootBuildImage would not use it - correct? How can we create writable volumes then?

By inspecting the image I found that the buildpack uses /cnb/lifecycle/launcher to launch the application. Hence I was able to customize the docker command and fix the owner of the specific folder before launch:
version: '3.5'
services:
  app:
    image: app:latest
    # enable the app to write to the storage folder (docker will create it as root by default)
    user: root
    command: "/bin/sh -c 'chown 1000:1000 /var/lib/app/files && /cnb/lifecycle/launcher'"
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc
volumes:
  app-storage:
Still, this is not very nice, because it is not straightforward (and hence my future self will need to spend time understanding it again), and it is also very limited in its extensibility.
Update 30.10.2020 - Spring Boot 2.3
We ended up creating another Dockerfile/layer so that we do not need to hassle with this in the docker-compose file:
# The base_image should hold a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV APP_STORAGE_LOCAL_FOLDER_PATH /var/lib/app/files
USER root
RUN mkdir -p ${APP_STORAGE_LOCAL_FOLDER_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${APP_STORAGE_LOCAL_FOLDER_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
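For reference, a hedged sketch of how this wrapper image can be built on top of the buildpack image (the app-with-storage tag is just illustrative):
# Build the buildpack image first, then layer the folder fix on top of it
./gradlew bootBuildImage --imageName=app:latest
docker build --build-arg base_image=app:latest -t app-with-storage:latest .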
Update 25.11.2020 - Spring Boot 2.4
Note that the above Dockerfile will result in this error:
ERROR: failed to launch: determine start command: when there is no default process a command is required
The reason is that the default entrypoint used by the Paketo builder changed. Changing the entrypoint from /cnb/lifecycle/launcher to the new one fixes it:
ENTRYPOINT /cnb/process/web
See also this question: ERROR: failed to launch: determine start command: when there is no default process a command is required

Related

Docker sometimes cannot see jar file

I have a weird problem: sometimes a Docker container cannot see a .jar file, while most of the time it has no problem with it.
Before I show you the Docker image, a little bit of background. Normally I build a jar archive before running my container, a pretty simple container to run a Spring Boot application. However, at some seemingly random point in the daily routine it does not boot up, with the container reporting "Unable to access jarfile".
I thought it must be some weird permission issue, so I took a snapshot of my "target" directory both when it was working and when it stopped working via ls -alR target, and later compared those snapshots with git diff. It does not show any difference. I am still pretty convinced it must be related to file permissions, locking or something of that sort, but I do not know where to start.
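For anyone who wants to reproduce that comparison, here is a rough sketch of the snapshot-and-diff approach described above (file names are illustrative; a plain diff works the same as git diff here):
# While the container still boots fine:
ls -alR target > /tmp/target-ok.txt
# After it starts failing with "Unable to access jarfile":
ls -alR target > /tmp/target-broken.txt
diff /tmp/target-ok.txt /tmp/target-broken.txt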
I am on macOS 12.0.1, btw. Any ideas appreciated.
The Dockerfile
FROM openjdk:8-oraclelinux8
RUN mkdir /app
WORKDIR /app
CMD "java" "-jar" "app.war"
And docker-compose.yml
version: "3.9"
services:
app:
build: .
depends_on:
- sql1
volumes:
- ./target:/app
ports:
- "8080:8080"
links:
- "sql1:sqlserver"
...
I'm not sure if this helps, but I don't see your Dockerfile as robust enough to produce consistent results regardless of the state of your localhost workspace. May I ask, are you building your war file manually and then creating your Docker container?
Please try to follow this approach if it fits your needs:
Make sure you delete the jar/war files before building the container.
Have a multistage Dockerfile with a "build" phase for your Spring Boot app, where you generate the jar/war file from a builder image (Ant, Gradle, Maven), and then a second stage where the jar/war file gets copied over to its final location and the application gets executed. This way you ensure consistency and that the file will be there at all times.
This is an example from my Spring Boot templates that I use very often; it's quite generic (it handles the renaming of the jar file without having to worry about how each pom.xml is configured individually) and I guess it could be used in a variety of scenarios:
FROM maven:3.8.6-openjdk-18 as builder
WORKDIR /usr/app/
COPY . /usr/app
RUN mvn package -Dmaven.test.skip
RUN JAR_FILE="target/*.jar"; cp ${JAR_FILE} /app.jar
FROM openjdk:18
WORKDIR /usr/app
COPY --from=builder /app.jar /usr/app
EXPOSE 8080
CMD ["java","-jar","app.jar"]
docker compose:
services:
  app:
    build: .
    depends_on:
      - sql1
    ports:
      - 8080:8080
    networks:
      - spring-boot-api-network
    volumes:
      - ./target:/app
...
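As a hedged usage sketch: with the multistage Dockerfile above, the war/jar is produced inside the image build itself, so the jar the container runs no longer depends on the state of the host's target directory:
# Rebuild the app image from source and recreate the containers
docker-compose build --no-cache app
docker-compose up -d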
NOTE: I would also remove the "links" option, as it is a legacy feature you should avoid using; use networks instead.
You can try this network implementation added at the bottom of your compose file; just make sure you don't forget to add the networks: key to the sql1 service as well (see the sketch after the networks block below):
networks:
  spring-boot-api-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 182.16.0.1/24
          gateway: 182.16.0.1
    name: spring-boot-api-network
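And, as mentioned, a sketch of attaching the sql1 service to the same network (the rest of the sql1 definition is elided, since it isn't shown in the question):
  sql1:
    # ...image, environment, etc.
    networks:
      - spring-boot-api-network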

How do I have my jar re-deployed and put into docker image every time I run compose?

So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion combining them altogether.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each of them via the IntelliJ IDEA green "run" button works fine; now I'd like to automate things and run via Docker.
I have Dockerfiles that look the same in both cases
(in both modules it's the same, only the JAR name is different):
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
ENTRYPOINT ["java","-jar","/application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar"]
CMD /wait && /*.jar
I also have docker-compose:
version: '2.1'
services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2
networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
I run maven package for each module and receive files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders.
"docker build -t springio/app1 ."
"docker-compose up --build"
And it works, but I feel I'm doing some extra steps.
How can I set up the project so that I ONLY have to run docker-compose?
(after each time I change things in the code)
Again, I know it's quite a simple thing, but I kinda lost the logic.
Thanks!
P.S.
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait":
It's important that the apps start one after another; I tried different solutions, but unfortunately they don't work as well as I would like. I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked, here's the answer: you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does is basically first build the jar file, then copy it into the package stage, where it eventually runs.
That allows you to run your app in Docker by running only docker-compose.
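Concretely, once each module has a multi-stage Dockerfile like this and the compose file points its build contexts at the modules (as it already does above), a single command is enough after every code change (a sketch under that assumption):
# No separate mvn package or docker build step needed anymore
docker-compose up --build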

Docker does not create a new container when using docker-compose build

I've set up two Windows containers for ASP.NET and MSSQL Server. On the first docker-compose build everything works as expected. Then, after I've made some changes to the custom Dockerfile and run docker-compose build again, it uses the old container again, not applying any changes.
I assumed that when I did a build it created a new container. Am I misunderstanding how Docker works?
This is the docker-compose.yml
version: '3'
services:
  db:
    image: microsoft/mssql-server-windows-developer
    environment:
      sa_password: "Password1234!"
      ACCEPT_EULA: "Y"
    ports:
      - "8003:1433"
    build:
      context: .
      dockerfile: mssql.dockerfile
  web:
    build:
      context: .
      dockerfile: web.dockerfile
    image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
    #volumes:
    #  - .:C:/inetpub/wwwroot
    ports:
      - "8080:80"
      - "8081:431"
This is the mssql.dockerfile
# escape=`
FROM microsoft/mssql-server-windows-developer
#set shell
SHELL ["powershell.exe", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#make temp folder
RUN mkdir C:\temp
#copy script to temp folder
COPY DownloadDatabase.ps1 C:\temp
COPY RestoreDatabase.ps1 C:\temp
#run script to retrieve production database
WORKDIR C:\temp
RUN .\DownloadDatabase.ps1 -sourcefile <url> -destinationfile <target>
CMD .\RestoreDatabase.ps1
It is very easy to tell if the image has been re-used because the mkdir C:\temp errors out saying the directory already exists.
EDIT: I've already tried all the options on docker-compose build: --no-cache, --force-rm.
docker-compose build
Only builds images but does not start containers.
That's why your changes in the Dockerfile are not applied. You have rebuilt the image but not the container, which is why the previously launched container is still based on the older version of the image.
docker-compose up
From the Docker documentation:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
In order to make sure that both your image and your container are rebuilt, you have to add these flags:
docker-compose up --force-recreate --build
That way your containers are based on the correct image version.
Explanation of the flags from the Docker documentation:
--build Build images before starting containers.
--force-recreate Recreate containers even if their configuration
and image haven't changed.
If you want to do this for a specific service, just add the service name at the end of the command line:
docker-compose up --force-recreate --build serviceName
Another useful flag, if you want clean output, is the -d flag:
-d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with --abort-on-container-exit.
It turns out I simply had to do docker-compose pull before docker-compose build to refresh the images! Now it builds a fresh image every time!
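Putting the pieces together, one possible full refresh cycle looks like this (a sketch; drop the flags you don't need):
docker-compose pull                    # re-pull the images named under image: in the compose file
docker-compose build --no-cache        # rebuild the images without reusing cached layers
docker-compose up --force-recreate -d  # recreate the containers from the freshly built images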

Routes not updating inside my Laravel Container

I've got this docker-compose:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: laravel-docker
    ports:
      - 8080:80
    volumes:
      - ./:/var/www
    links:
      - mysql
      - redis
    environment:
      DB_HOST: mysql
      DB_DATABASE: laravel_docker
      DB_USERNAME: app
      DB_PASSWORD: password
      REDIS_HOST: redis
      SESSION_DRIVER: redis
      CACHE_DRIVER: redis
  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    environment:
      MYSQL_DATABASE: laravel_docker
      MYSQL_USER: app
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
  redis:
    image: redis:4.0-alpine
    ports:
      - 16379:6379
and this Dockerfile:
FROM php:7.1.8-apache
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
WORKDIR /srv/app
RUN docker-php-ext-install mbstring pdo pdo_mysql \
&& chown -R www-data:www-data /srv/app
RUN a2enmod rewrite
which is my configuration to run a Laravel container with MySQL and Redis. Everything works perfectly, but I'm encountering a problem when I try to add (or update) a route: it doesn't appear until I stop all containers and restart them with the --build flag.
Is there a way to add and update routes without restarting my containers?
SSH into the app container and, from the project directory, run this command:
php artisan route:clear
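If you'd rather not open a shell first, the same can be done in one line through compose (a sketch; the service name app comes from the compose file above):
docker-compose exec app php artisan route:clear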
Based on the Dockerfile, your app lives at /srv/app, yet in the yml file you list /var/www as the mount target. Change that to /srv/app.
Explanation:
Building the Dockerfile results in an immutable image. The software inside the image was configured to serve your application from /srv/app. Since COPY . /srv/app added your app to the image at the right location, it could be served from there just fine, but that command adds it when the image is built, and from then on it is an immutable part of the image, so the changes you make on the host are not visible inside. What you want to do is bind mount your project directory to /srv/app, which will obscure (temporarily "replace") the contents of that directory with the ones on your host, and that is what that yml line does. (Btw, the fact that mounts obscure the existing directory is not Docker-specific.)
https://docs.docker.com/storage/bind-mounts/#mounting-into-a-non-empty-directory-on-the-container
The reason why we often both COPY and bind mount our project directories is that this practice allows us to use the same Dockerfile for both development (without frequent image rebuilds) and production.
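A minimal sketch of the corrected mount, assuming nothing else in the compose file changes:
    volumes:
      - ./:/srv/app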
I'd be curious to know if your change is properly propagating to your volume. It could be a permissions issue inside the container. What happens if you connect to the container and "cat" the contents of the routes file? Does it match the file outside the container? What OS are you running docker on? How locked down is the OS's file system? Are there any constraints that would make volumes work funky? Also, what file system sync process are you using? Are you just using the default?

Docker, mount volumes as readonly

I am working with Docker, and I want to mount a dynamic folder that changes a lot (so I would not have to make a Docker image for each execution, which would be too costly), but I want that folder to be read-only. Changing the folder owner to someone else works. However, chown requires root access, which I would prefer not to expose to an application.
When I use the -v flag to mount, it uses whatever username I give. I created a non-root user inside the Docker image; however, all the files in the volume that were owned by the user that ran docker change into the user I give from the command line, so I cannot make read-only files and folders. How can I prevent this?
I also added mustafa ALL=(docker) NOPASSWD: /usr/bin/docker, so I could change to another user via terminal, but still, the files have permissions for my user.
You can specify that a volume should be read-only by appending :ro to the -v switch:
docker run -v volume-name:/path/in/container:ro my/image
Note that the folder is then read-only in the container and read-write on the host.
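The same :ro suffix also works when bind mounting a host directory instead of a named volume (paths are illustrative):
docker run -v /path/on/host:/path/in/container:ro my/image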
2018 Edit
According to the Use volumes documentation, there is now another way to mount volumes by using the --mount switch. Here is how to utilize that with read-only:
$ docker run --mount source=volume-name,destination=/path/in/container,readonly my/image
docker-compose
Here is an example of how to specify a read-only container in docker-compose:
version: "3"
services:
redis:
image: redis:alpine
read_only: true
docker-compose
Here is a proper way to specify a read-only volume in docker-compose:
Long syntax
version: "3.2" # Use version 3.2 or above
services:
my_service:
image: my:image
volumes:
- type: volume
source: volume-name
target: /path/in/container
read_only: true
volumes:
volume-name:
https://docs.docker.com/compose/compose-file/compose-file-v3/#long-syntax-3
Short syntax
Add :ro to the volume mount definition:
version: "3.0" # Use version 3.0 or above
services:
my_service:
image: my:image
volumes:
- /path/on/host:/path/inside/container:ro
https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3
