How to create containerized Spring Boot App on Raspberry Pi 4? - spring-boot

What do I want to do?
I have a containerized Spring Boot app that runs on amd64 systems, and I want to run it with Docker on a Raspberry Pi 4. How can I fix my problem?
What OS is running on the Raspberry Pi?
I've installed Ubuntu Server 20.04.2 LTS for arm64 architectures via Raspberry Pi Imager v1.6.1 on the Raspberry Pi 4.
What steps have I taken to fix the problem? (The problem persists.)
The default version does not work on my Raspberry Pi, so I've tried a couple of things, listed below.
I changed the FROM line from openjdk:15-jdk-slim to arm64v8/openjdk:17 in my Dockerfile, as you can see:
FROM arm64v8/openjdk:17
COPY . /projects/red-dir
WORKDIR /projects/red-dir
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} /projects/red-dir/my_red_app.jar
ENTRYPOINT ["java","-jar","/projects/red-dir/my_red_app.jar"]
Afterwards, I built and pushed my app using Maven and Docker:
$ ./mvnw clean package -Dmaven.test.skip=true && java -jar target/my_red_app.jar
$ docker build -t user/my_red_app:v1.0.0 .
Then, since my app depends on MongoDB, I changed that image from mongo to arm64v8/mongo in docker-compose.yml, as you can see below:
version: "3"
services:
mongodb:
container_name: mongodb
image: arm64v8/mongo
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: 12345
networks:
- shared-net
colour_app:
container_name: my_red_app
image: user/my_red_app
restart: always
ports:
- 7070:7070
depends_on:
- mongodb
networks:
- shared-net
networks:
shared-net:
driver: bridge
Things that work and things that don't work
MongoDB is running; I can reach it via MongoDB Compass, but my Spring Boot app doesn't work. When I run docker-compose up, I get output like this:
...
...
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
Let me know what I missed. Thank you for reading.

I've solved my problem and have tried to create a simple guide for anyone who runs into this problem.
BUILD FOR OTHER OS (multi-architecture / buildx command)
Turn on the "experimental" feature in Docker's settings.
For Linux OS:
sudo nano /etc/docker/daemon.json
Add the following content to it:
{
"experimental": true
}
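After saving the file, Docker usually needs to be restarted for the change to take effect; on a systemd-based system (such as the Ubuntu Server used here) that is typically:
sudo systemctl restart docker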
For Windows 10 / macOS:
Open Docker Desktop application.
Go to Settings.
Select Docker Engine Tab on the left-side.
Find "experimental": false.
Change it to "experimental": true.
Restart Docker.
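Depending on your Docker version, you may also need to create and select a buildx builder instance before running the build commands below; a minimal sketch (the builder name multiarch-builder is just an example):
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap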
Build the multi-architecture image. (Important: the base image you use must support the target OS/architecture you want to build for.)
Building for a SINGLE platform:
You can use either the --load or the --push flag.
--load means that Docker saves the image to the local disk:
docker buildx build --load --platform linux/arm64 -t <dockerhub_username>/<repository_name>:<tag_name> .
--push means that Docker doesn't save the image locally but pushes it to the registry (Docker Hub):
docker buildx build --push --platform linux/arm64 -t <dockerhub_username>/<repository_name>:<tag_name> .
--load and --push cannot be used together.
Building for MULTIPLE platforms:
When building for multiple platforms you can only use the --push flag.
--push means that Docker doesn't save the image locally but pushes it to the registry (Docker Hub):
docker buildx build --push --platform linux/amd64,linux/arm64,linux/ppc64le -t <dockerhub_username>/<repository_name>:<tag_name> .
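To verify that the pushed tag really contains all of the requested architectures, you can inspect its manifest with buildx (same placeholder image name as above):
docker buildx imagetools inspect <dockerhub_username>/<repository_name>:<tag_name>
On the Raspberry Pi, a plain docker pull of that tag will then select the arm64 variant automatically.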

Related

Create docker image with named/host volume for spring boot application

I have a Spring Boot application which I am trying to dockerize for the first time. I am using Docker version 20.10.1 and my host PC is Ubuntu 20.04.
For this Spring Boot application, I have a data directory, which has data created while the application is running. I want to access this data from the host operating system. That is why I am using a volume.
When I try to mount my container to a named volume or to a host volume, it always creates an anonymous volume regardless of the command I type.
Here is my docker file.
FROM openjdk:15
COPY target/lib/* /usr/src/app/lib/
COPY target/core-api-7.3.6.jar /usr/src/app/lib/core-api-7.3.6.jar
COPY config/application.properties /usr/src/app/config/application.properties
COPY data/poscms/config/* /usr/src/app/data/poscms/config/
WORKDIR /usr/src/app
ENTRYPOINT ["java", "-jar", "lib/core-api-7.3.6.jar"]
VOLUME /usr/src/app/data
/usr/src/app/data is the directory where the core-app.jar application will create its runtime data; I need to access this data from my host PC.
The following is the command for building the image:
docker build -t core-app:5.0 .
Then I create the container using the following command:
docker run -it -d -p 7071:7071 core-app:5.0 -v /home/bob/data/:/usr/src/app/data
When I check the volumes by running the following command:
docker volume ls
I can see an anonymous volume being created by this container,
and my host path, which is /home/kapila/data/, is empty; the container data is not written to the host path.
I experience the same behaviour with a named volume as well.
I created a named volume and ran the container with the following commands:
docker volume create tmp
docker run -it -d -p 7071:7071 core-app:5.0 -v tmp:/usr/src/app/data
and still Docker creates an anonymous volume and the data is not written to the tmp volume.
My host PC is an Ubuntu PC. Could someone point out what I am doing wrong here?
I do something like this:
In your project root, have these files pertaining to Docker, as required:
1. Dockerfile 2. docker-compose.yml 3. docker-env-preview.env
Dockerfile content
FROM openjdk:8-jdk-alpine
ARG jarfilepath
RUN mkdir /src
WORKDIR /src
VOLUME /src/tomcat
ADD $jarfilepath yourprojectname.jar
docker-compose.yml content
version: '3'
services:
  project-name:
    container_name: project-name-service
    build:
      context: .
      args:
        jarfilepath: ./target/project-0.0.1.jar
    env_file:
      - docker-env-preview.env
    ports:
      - "8831:8831"
      - '5005:5005'
    networks:
      - project-name_subnet
    command: java -jar -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 projectname.jar --spring.profiles.active=preview
networks:
  project-name_subnet:
    external: true
docker-env-preview.env
This file will contain the values of your environment variables. The application.properties can read this file to fetch the values, like buildserver.ip=${BUILD_SERVER_DOMAIN}. Basically, you define what you need, like the example below.
GARBABE_SERVER_DOMAIN=h-db-preview
GARBABE_SERVER_PORT=5422
GARBABE_DB=projectdb
GARBABE_USER=user
GARBABE_PASSWORD=pwd
JPA_DDL_AUTO=validate
JPA_DIALECT=org.hibernate.dialect.PostgreSQLDialect
JPA_SHOW_SQL=false
JPA_SE_SQL_COMMENTS=true
JPA_FORMAT_SQL=false
JPA_NON_CONTEXTUAL_CREATION=true
APP_NAME=project-name-service
BUILD_SERVER_METHOD=http
BUILD_SERVER_DOMAIN=7.8.9.4
Commands to execute:
mvn clean package (if you use Maven)
docker-compose up -d --build (then execute docker ps to check the details of the running container)
To view the logs: sudo docker logs <project-name-service> -f
To get into the container console: docker exec -it <project-name-service> bash
I was able to fix the issue, and the only change I made to make it work was to change the base image from
FROM openjdk:15
to
FROM adoptopenjdk/openjdk15:ubi
and now named and host volume mounts are working as expected. I am not sure what is wrong with the official openjdk:15 image.
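As a quick sanity check after a change like this, you can list the mounts Docker actually attached to the container (the container name below is just an example):
docker inspect -f '{{ json .Mounts }}' core-app-container
A bind or named volume shows up there with its source path or volume name, while an anonymous volume shows a random hash as its name.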

Docker does not create a new container when using docker-compose build

I've set up two Windows containers, one for ASP.NET and one for MSSQL Server. On the first docker-compose build everything works as expected. Then, after I make some changes to the custom dockerfile and run docker-compose build again, it uses the old container again, not applying any changes.
I assumed that when I did a build it created a new container. Am I misunderstanding how Docker works?
This is the docker-compose.yml
version: '3'
services:
  db:
    image: microsoft/mssql-server-windows-developer
    environment:
      sa_password: "Password1234!"
      ACCEPT_EULA: "Y"
    ports:
      - "8003:1433"
    build:
      context: .
      dockerfile: mssql.dockerfile
  web:
    build:
      context: .
      dockerfile: web.dockerfile
    image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
    #volumes:
    #  - .:C:/inetpub/wwwroot
    ports:
      - "8080:80"
      - "8081:431"
This is the mssql.dockerfile
# escape=`
FROM microsoft/mssql-server-windows-developer
#set shell
SHELL ["powershell.exe", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#make temp folder
RUN mkdir C:\temp
#copy script to temp folder
COPY DownloadDatabase.ps1 C:\temp
COPY RestoreDatabase.ps1 C:\temp
#run script to retrieve production database
WORKDIR C:\temp
RUN .\DownloadDatabase.ps1 -sourcefile <url> -destinationfile <target>
CMD .\RestoreDatabase.ps1
It is very easy to tell whether the image has been re-used, because the mkdir C:\temp step errors out saying the directory already exists.
EDIT: I've already tried all the options for docker-compose build: no-cache, force-rm.
docker-compose build
Only builds images but does not start containers.
That's why your changes in the dockerfile are not applied. You have rebuilt the image but not the container. That's why the previously launched container is based on the older version of the image.
docker-compose up
From Docker documentation :
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
In order to make sure that both your image and your container are rebuilt, you have to add these flags:
docker-compose up --force-recreate --build
That way your containers are based on the correct image version.
Explanation on flags from Docker documentation :
--build Build images before starting containers.
--force-recreate Recreate containers even if their configuration
and image haven't changed.
If you want to do this for a specific service just add the service name at the end of command line :
docker-compose up --force-recreate --build serviceName
Another useful flag, if you want a clean output, is the -d flag:
-d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with --abort-on-container-exit.
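If the image layers themselves are being cached too aggressively, one combination that rebuilds everything from scratch (a sketch using only the flags discussed above plus --no-cache) is:
docker-compose build --no-cache
docker-compose up --force-recreate --build -d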
It turns out I simply had to run docker-compose pull before docker-compose build to refresh the images! Now it builds a fresh image every time!

Docker compose yml "volume" not working on Ubuntu OS

I am running a simple Hello World container with docker-compose.
There should be a mounted folder (with my files) at /root/sharedFolder, but this folder is empty.
I am running Docker on Ubuntu (on top of my Windows server), and this works on a normal Ubuntu machine.
Any ideas?
docker-compose.yaml
version: '3'
services:
  web:
    build: .
    volumes:
      - ".:/root/sharedFolder"
Dockerfile:
#FROM - Image to start building on.
FROM ubuntu:14.04
#MAINTAINER - Identifies the maintainer of the dockerfile.
MAINTAINER ian.miell#gmail.com
#RUN - Runs a command in the container
RUN echo "Hello world" > /root/hello_world.txt
#CMD - Identifies the command that should be used by default when running the image as a container.
CMD ["sleep", "400"]
Instead of doing the echo in RUN, do it in CMD or ENTRYPOINT. RUN happens during the image build phase, whereas CMD happens when your container is up and running with the volumes mounted.
RUN is done during image build step
CMD specifies the command executed by default when you run a built
image.
You can also achieve the same with ENTRYPOINT
Google docker RUN vs CMD for more details.
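A minimal sketch of that suggestion, reusing the Dockerfile from the question (writing into the shared folder is just an example path):
#FROM - Image to start building on.
FROM ubuntu:14.04
#CMD - Write the file at container start, after the volume is mounted, then keep the container alive.
CMD ["sh", "-c", "echo 'Hello world' > /root/sharedFolder/hello_world.txt && sleep 400"]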

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand, because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command to execute my command? (The command is not a simple ls -al, but that's fine for the demo.)
When you run the containers separately with docker run, they are not linked on the same docker network so the mongo container is not accessible from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a defined docker network for both containers to be linked by; this is more complex, but is the recommended architecture
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
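A minimal sketch of the user-defined network approach, reusing the image names from the question (the network name mgmt-net is just an example):
docker network create mgmt-net
docker run -d --name mgmt-mongo --network mgmt-net gitlab-lab:5005/dfc/mongo:latest
docker run --rm --network mgmt-net -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
With both containers on the same network, the hostname mgmt-mongo resolves to the Mongo container, so MONGO_URL works just as it does under docker-compose.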

How to mount windows folder using docker compose volumes?

I am trying to set up docker container using docker-compose.
My docker-compose.yml file looks as follow:
php-fpm:
  build: php-fpm
  container_name: php-fpm
  volumes:
    - ../project:/var/www/dev
When I enter the container like this:
docker exec -it php-fpm bash
and display the contents with the ls command, the /var/www/dev directory is empty.
Does anyone know the solution for this?
$ docker -v
Docker version 1.12.0, build 8eab29e
$ docker-compose -v
docker-compose version 1.8.0, build d988a55
I have Windows 10 and docker is installed via Docker ToolBox 1.12.0
#edit
The mounted directory is also empty under a Linux environment.
I fixed it by going to Local Security Policy > Network List Manager Policies, double-clicking Unidentified Networks, changing the location type to Private, and restarting Docker. (Source)
