docker-compose up gives 'failed to read dockerfile: error from sender' - spring-boot

Tried to simply dockerize my MongoDB and Spring Boot application. Had a lot of struggles, thought I almost had it running, and then the terminal hits me with this error:
Building user
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 121B 0.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: walk \\?\C:\Users\ZRC\Documents\GitHub\s6-kwetter-backend\user\Dockerfile: The system cannot
find the path specified.
ERROR: Service 'user' failed to build : Build failed
Dockerfile (which is in a sub-directory; user-module):
FROM openjdk:11
EXPOSE 8081
ADD target/user-module-docker.jar user-docker.jar
CMD ["java", "-jar", "user-docker.jar"]
docker-compose.yml (which is in the main directory that has multiple modules/microservices):
version: '3.8'
services:
  user:
    build: ./user/Dockerfile
    restart: unless-stopped
    container_name: user-ms
    ports:
      - 8081:8080
  mongodb:
    image: mongo
    restart: always
    container_name: mongodb
    ports:
      - 27017:27017
It says that the specified path cannot be found, but it literally exists, so where could I have gone wrong?

The error is telling you that the Dockerfile was not found, because the path doesn't exist. That's because Compose treats the value of build: as a directory (the build context), so it is trying to enter ./user/Dockerfile as a folder and look for a Dockerfile inside it.
The system cannot find the path specified.
This happens because of a mistake in the Compose build syntax. There are two ways it can be used.
1. The simple form:
This is using ./user/ as the context, expecting a Dockerfile to be in that directory.
user:
  build: ./user
2. The complex form:
user:
  build:
    context: ./
    dockerfile: ./user/Dockerfile
This lets you separate the context from the location of the Dockerfile. In this example, the current folder is used as the context, and the Dockerfile is taken from ./user/Dockerfile. It is also useful when your Dockerfile has a different name, e.g. Dockerfile.dev.
Note that this is just an example; I don't know whether it makes sense for your project. You need to know which context is the correct one.
What do I mean by context?
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
As example:
docker build --file /path/to/Dockerfile /path/to/context
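Applied to the compose file from your question, a minimal sketch of the simple form could look like this (assuming the jar has already been built into ./user/target, since your Dockerfile ADDs target/user-module-docker.jar relative to the context):
services:
  user:
    # ./user is the build context; the Dockerfile is expected at ./user/Dockerfile
    build: ./user
    restart: unless-stopped
    container_name: user-ms
    ports:
      - 8081:8080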

Related

Docker sometimes cannot see jar file

I have a weird problem: sometimes a Docker container cannot see a .jar file, while most of the time it has no problem with it.
Before I show you the Docker image, a little bit of background. Normally I build a jar archive before running my container, a pretty simple container to run a Spring Boot application. However, at some seemingly random point in the daily routine it does not boot up, with the container reporting "Unable to access jarfile".
I thought it must be some weird permission stuff, so I took a snapshot of my "target" directory when it was working and when it stopped working via ls -alR target, and later compared those snapshots with git diff. It does not show any difference. I am still pretty convinced it must be related to file permissions, locking or something of that sort, but I do not know where to start.
I am on macOS 12.0.1, by the way. Any ideas appreciated.
The Dockerfile:
FROM openjdk:8-oraclelinux8
RUN mkdir /app
WORKDIR /app
CMD "java" "-jar" "app.war"
And docker-compose.yml
version: "3.9"
services:
app:
build: .
depends_on:
- sql1
volumes:
- ./target:/app
ports:
- "8080:8080"
links:
- "sql1:sqlserver"
...
I'm not sure if this helps, but I don't see your Dockerfile as robust enough to produce consistent results regardless of the state of your local workspace. May I ask, are you building your war file manually and then creating your Docker container?
Please try to follow this approach if it fits your needs:
Make sure you delete jar/war files before building the container.
Have a multi-stage Dockerfile with a "build" stage for your Spring Boot app, where you generate the jar/war file from a builder image (Ant, Gradle, Maven), and then a second stage where the jar/war file gets copied over to its final location and the application gets executed. This way you ensure consistency and that the file will be there at all times.
This is an example from my Spring Boot templates that I use very often. It's quite generic (it handles the renaming of the jar file without having to worry about how each pom.xml is configured) and I guess it could be used in a variety of scenarios:
FROM maven:3.8.6-openjdk-18 as builder
WORKDIR /usr/app/
COPY . /usr/app
RUN mvn package -Dmaven.test.skip
RUN JAR_FILE="target/*.jar"; cp ${JAR_FILE} /app.jar
FROM openjdk:18
WORKDIR /usr/app
COPY --from=builder /app.jar /usr/app
EXPOSE 8080
CMD ["java","-jar","app.jar"]
docker-compose.yml:
services:
  app:
    build: .
    depends_on:
      - sql1
    ports:
      - 8080:8080
    networks:
      - spring-boot-api-network
    volumes:
      - ./target:/app
...
NOTE: I would also remove the "links" option, as it is a legacy feature you should avoid; use networks instead.
You can add this network definition at the bottom of your compose file; just make sure you don't forget to add a networks: entry to the sql1 service as well (see the sketch after the block below):
networks:
  spring-boot-api-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 182.16.0.1/24
          gateway: 182.16.0.1
    name: spring-boot-api-network
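For completeness, a minimal sketch of what attaching your sql1 service to that network might look like; the image name here is only a placeholder, since the sql1 definition is truncated in your compose file:
  sql1:
    image: your-sql1-image   # placeholder, keep whatever sql1 already uses
    networks:
      - spring-boot-api-network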

How do I have my jar re-deployed and put into docker image every time I run compose?

So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion combining them altogether.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each of them via IntelliJ IDEA's green "run" button works fine; now I'd like to automate things and run via Docker.
I have Dockerfiles that look the same in both cases:
(in both modules it's the same, only the JAR name is different)
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
ENTRYPOINT ["java","-jar","/application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar"]
CMD /wait && /*.jar
I also have docker-compose:
version: '2.1'
services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2
networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
I run maven package for each module and receive files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders.
"docker build -t springio/app1 ."
"docker-compose up --build"
And it works, but I feel I'm doing some extra steps.
How can I do the project so that I ONLY have to run docker compose?
(after each time I change things in the code)
Again, I know it's quite a simple thing, but I kinda lost the logic.
Thanks!
P.S
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait"
It's important that the apps start one after another; I tried different solutions, but unfortunately none work as well as I would like. I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked, here's the answer: you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does is basically first build the jar file, copy it into the package stage, and eventually run it.
That allows you to run your app in Docker by running only docker-compose.
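With a multi-stage Dockerfile like this in each module, one command both (re)builds the images and starts the containers; a minimal sketch, assuming the compose file from the question stays otherwise unchanged:
docker-compose up --build
The --build flag makes Compose rebuild the images from the Dockerfiles, so the separate mvn package and docker build steps are no longer needed.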

can `bootBuildImage` create writeable volumes?

Given a Spring Boot app that writes files to /var/lib/app/files.
I create a Docker image with the Gradle task:
./gradlew bootBuildImage --imageName=app:latest
Then, I want to use it in docker-compose:
version: '3.5'
services:
  app:
    image: app:latest
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This will fail because the folder is created during docker-compose up and is owned by root, so the app has no write access to the folder.
The quick fix is to run the image as root by specifying user: root:
version: '3.5'
services:
  app:
    image: app:latest
    user: root # <------------ required
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This works fine, but I do not want to run it as root. I wonder how to achieve that. I could normally create a Dockerfile that creates the desired folder with the correct ownership and write permissions. But as far as I know, buildpacks do not use a custom Dockerfile, and hence bootBuildImage would not use it - correct? How can we create writable volumes then?
By inspecting the image I found that the buildpack uses /cnb/lifecycle/launcher to launch the application. Hence I was able to customize the docker command and fix the owner of the specific folder before launch:
version: '3.5'
services:
  app:
    image: app:latest
    # enable the app to write to the storage folder (docker will create it as root by default)
    user: root
    command: "/bin/sh -c 'chown 1000:1000 /var/lib/app/files && /cnb/lifecycle/launcher'"
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
Still, this is not very nice, because it is not straightforward (and hence my future self will need to spend time understanding it again), and it is also very limited in its extensibility.
Update 30.10.2020 - Spring Boot 2.3
We ended up creating another Dockerfile/layer so that we do not need to hassle with this in the docker-compose file:
# The base_image should hold a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV APP_STORAGE_LOCAL_FOLDER_PATH /var/lib/app/files
USER root
RUN mkdir -p ${APP_STORAGE_LOCAL_FOLDER_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${APP_STORAGE_LOCAL_FOLDER_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
Update 25.11.2020 - Spring Boot 2.4
Note that the above Dockerfile will result in this error:
ERROR: failed to launch: determine start command: when there is no default process a command is required
The reason is that the default entrypoint used by the Paketo builder changed. Changing the entrypoint from /cnb/lifecycle/launcher to the new one fixes it:
ENTRYPOINT /cnb/process/web
See also this question: ERROR: failed to launch: determine start command: when there is no default process a command is required
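For reference, building that extra layer means passing the base image name as a build arg; a minimal sketch (the tag app-volumes:latest is just an illustrative name):
./gradlew bootBuildImage --imageName=app:latest
docker build --build-arg base_image=app:latest -t app-volumes:latest .
The compose file can then point image: at the derived tag and drop the user: root and command: overrides.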

Weird behaviour passing build-args to Dockerfile through docker-compose

I'm facing a strange problem (or better: two different, weird problems) trying to pass build-args to my Dockerfile through docker-compose up.
My files - initial setup
Dockerfile:
ARG NODE_VERSION
FROM node:${NODE_VERSION}
ARG NPM_REGISTRY_TOKEN
RUN echo "=====> token ${NPM_REGISTRY_TOKEN}"
... ... ...
docker-compose.yml:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
With this initial setup in place, I have the following behaviour (on Linux Mint 20, docker-compose version 1.26.2, build eefe0d31):
running docker build --build-arg NPM_REGISTRY_TOKEN=xyz123 produces in output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running docker-compose build --build-arg NPM_REGISTRY_TOKEN=xyz123 myservice produces in output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running NPM_REGISTRY_TOKEN=xyz123 docker-compose up myservice produces in output =====> token : the NPM_REGISTRY_TOKEN env var should flow to the Dockerfile due to - NPM_REGISTRY_TOKEN (according to https://docs.docker.com/compose/compose-file/#args: You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running), but it seems not to be available during the build
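As an aside, one OS-independent way to supply such a value to Compose is variable substitution from a .env file next to docker-compose.yml; this is only a sketch, not something used in the files above:
# .env, read automatically by docker-compose for variable substitution
NPM_REGISTRY_TOKEN=xyz123
and in docker-compose.yml:
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN=${NPM_REGISTRY_TOKEN}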
My files - reloaded
Simply changing my docker-compose.yml file to
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
      dockerfile: ../Dockerfile
seems to solve the problem: switching the args and dockerfile entries in the yml file unlocks the ability to pass environment variables to the Dockerfile as build args through docker-compose up, too. Problem solved. Or not?
Changing OS, getting new problem
So, developers in my team use a bunch of different operating systems: Linux, macOS, and Windows, too.
Running the same commands on the same version (1.26.2) of docker-compose on Windows 10 Professional 1909, we get the same problem we faced initially, both with the initial version of the docker-compose.yml file and with the version that works on Linux.
We tried passing env vars from the command line, setting them in the command prompt, setting them as system variables through the GUI... we tried launching docker-compose up from git-bash, too, but we're not able to get the variable value in the Dockerfile.
I googled a bit around, but I've not found any reference to known bugs or limitations of the Windows version of docker-compose.
Anyone have any idea what the problem might be? Thank you very much in advance!
So, finally, after some trial and error on different OSs and with different configurations, I ended up with an explanation of my problem, and therefore with a viable workaround, which allowed me to reach a satisfactory configuration for my docker-compose.yml file.
Short answer: it wasn't a matter of OS, nor of env var passing, nor of the order of the context / dockerfile sections; it was a clash between different services in my compose file.
In more detail: my docker-compose.yml file also contained an additional service, whose job was to initialize the database the application was pointing to:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
      - db_initializer
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run start:dev'
  persistence:
    # Setting up the DBMS here
  db_initializer:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate'
So, the problem was that I was configuring two services based on the same self-built image, launching them with different commands (npm run db:migrate for the db_initializer service, npm run start:dev for the application service). Apparently Compose took the build configuration provided for the first service it built (db_initializer, because myservice depends on it) and used that configuration for both services, ignoring the (different) args section I was providing for the second container. So I was able to solve the problem (this time really!) simply by merging the service declarations, including all the args I needed:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate && npm run start:dev'
  persistence:
    # Setting up the DBMS here
So, after a bunch of months without collecting answers, I think it's time to share my experience, hoping it can help someone encountering this weird behaviour.

Docker - Problem with java netty_tcnative

I am trying to dockerize 4 services, and I have a problem with one of them. In particular, this service is implemented as a Spring Boot service and uses the Google Vision API. When building the images and starting the containers, everything works fine until it gets to the part where the Google Vision API code is used. I then get the following runtime errors when running the containers:
netty-tcnative unavailable (this may be normal)
java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]
at io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:104) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:526) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:93) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:244) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:385) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:435) [grpc-core-1.18.0.jar!/:1.18.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:223) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:164) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:156) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157) [gax-1.42.0.jar!/:1.42.0]
at com.google.cloud.vision.v1.stub.GrpcImageAnnotatorStub.create(GrpcImageAnnotatorStub.java:84) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.stub.ImageAnnotatorStubSettings.createStub(ImageAnnotatorStubSettings.java:120) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.ImageAnnotatorClient.<init>(ImageAnnotatorClient.java:136) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:117) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:108) [google-cloud-vision-1.66.0.jar!/:na]
Complete log file of the error can be found in this link:
Complete Log File.
Here are my docker-compose.yml file and the Dockerfile of the service causing the problem:
Dockerfile
FROM maven:3.6.0-jdk-8-alpine
WORKDIR /app/back
COPY src src
COPY pom.xml .
RUN mvn clean package
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache curl
WORKDIR /app/back
COPY --from=0 /app/back/target/imagescanner*.jar ./imagescanner.jar
COPY --from=0 /app/back/target/classes/API-Key.json .
ENV GOOGLE_APPLICATION_CREDENTIALS ./API-Key.json
EXPOSE 8088
ENTRYPOINT ["java", "-jar", "./imagescanner.jar"]
docker-compose.yml
version: '3'
services:
  front:
    container_name: demoLab_front
    build: ./front
    image: demolab/front:latest
    expose:
      - "3000"
    ports:
      - "8087:3000"
    restart: always
  back:
    container_name: demoLab_backGCV
    build: ./backGCV
    image: demolab/backgcv:latest
    depends_on:
      - lab
    ports:
      - "8088:8088"
    restart: always
  lab:
    container_name: demoLab_labGCV
    build: ./lab
    image: demolab/labgcv:latest
    expose:
      - "8089"
    ports:
      - "8089:8089"
    restart: always
  sift:
    container_name: demoLab_labSIFT
    build: ./detect-label-service
    image: demolab/labsift:latest
    expose:
      - "5000"
    ports:
      - "5000:5000"
    restart: always
EDIT
After some googling I found out that gRPC Java examples do not work on Alpine Linux, since the required libnetty-tcnative-boringssl-static depends on glibc. Alpine uses musl libc, and application startup will fail with a message similar to mine.
I found this project that tries to build the right images, but it seems broken for a lot of people (the build didn't work in my case).
Problem solved by replacing this line of the Dockerfile:
FROM openjdk:8-jdk-alpine
with this line:
FROM koosiedemoer/netty-tcnative-alpine
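If pulling a third-party image is not desirable, an alternative worth considering (assuming the Alpine base is not a hard requirement) is to switch the runtime stage to a glibc-based image, since the root cause described above is Alpine's musl libc:
# Debian-based image that ships glibc, so the statically linked netty-tcnative library can load
FROM openjdk:8-jdk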
The problem: Suppressed: java.lang.UnsatisfiedLinkError: no netty_tcnative in java.library.path
on an Alpine container.
There is a simple workaround:
apk add libressl
apk add openssl
ln -s /lib/ld-musl-x86_64.so.1 /lib/libcrypt.so.1
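If you prefer to bake the workaround into the image instead of running those commands by hand, they could go into the runtime stage as RUN steps; an untested sketch, assuming the openjdk:8-jdk-alpine base from the question:
FROM openjdk:8-jdk-alpine
# the workaround commands from above, run at image build time
RUN apk add --no-cache libressl openssl
RUN ln -s /lib/ld-musl-x86_64.so.1 /lib/libcrypt.so.1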
