Our application landscape consists of Spring Boot apps running in Docker containers managed by Kubernetes.
In Spring Boot, we use the property "spring.config.location" to specify the external location of the property files. The java command is as follows:
java -jar myproject.jar --spring.config.location={file://some file path}
Now, instead of using the local file path, can I create a Kubernetes persistent volume and give that path in the above command?
What Kubernetes volume type should I use to get the same semantics as file://{file path}?
We were able to successfully read the application.properties file from a Kubernetes Persistent Volume using the "spring.config.location" command-line argument. The steps we followed were:
Created a persistent volume on Kubernetes (the PVC uses the ReadWriteMany access mode, since multiple microservices share the same properties file).
Mounted the volume into the Pod of each microservice (by changing the Pod spec); the mounted file system is accessible at the path '/shared/folder/properties'. A minimal manifest sketch is shown after these steps.
Added the "spring.config.location" parameter to the "java -jar" command, e.g.:
java -Dspring.profiles.active=$spring_profile -Xmx4096m -XX:+HeapDumpOnOutOfMemoryError -jar myJar.jar --spring.config.location=file:/shared/folder/properties
How can my Spring Boot application running inside a container access Docker secrets mounted as a volume inside that same container?
Commonly suggested methods I DO NOT want to use:
Echo the secret into an environment variable - this is insecure.
Pass the secret in as a command line argument - this is messy with multiple secrets and hurts the local dev experience
Manually read the secret from a file by implementing my own property loader - Spring must have a native way to do this
Spring Boot 2.4 introduced this feature as part of "volume mounted config trees", using the spring.config.import property.
To read Docker secrets mounted as a volume from the default location, set:
spring.config.import=configtree:/run/secrets/
The property name will be derived from the secret filename and the value from the file contents.
This can then be accessed like any other Spring property.
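For example (a minimal sketch; the secret name db-password and the datasource mapping are illustrative): if Docker mounts a secret as the file /run/secrets/db-password, the config tree import exposes a property named db-password whose value is the file's contents, and application.yml can reference it like any other property:

# application.yml -- minimal sketch; "db-password" is an illustrative secret name
spring:
  config:
    import: "configtree:/run/secrets/"
  datasource:
    password: "${db-password}"    # resolved from the contents of /run/secrets/db-password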
In my Spring app, I externalized the Flyway migrations:
spring.flyway.locations=filesystem:/var/migration
When using docker run I specify
docker run --name papp -p 9000:2000 -v "C:...\db\migration":/var/migration
How can I specify this in Cloud Run >> Variables and Secrets?
What values do I need to supply in place of -v "C:...\db\migration":/var/migration?
(I created a bucket and uploaded the files to Cloud Storage, assuming the files should go there.)
The form you have there is being described as:
Mount each secret as a volume, which makes the secret available to the container as files.
If you cannot explain how /var/migration would be a "secret", this is probably the wrong approach. You likely cannot mount that directory as a secret; with plain Docker you would just mount it with docker run -v.
But there are secrets for Flyway, which you could mount, e.g., as a flyway.properties file:
flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schema1,schema2
And, generally, all the other configuration parameters as well:
https://flywaydb.org/documentation/configuration/parameters/
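On the Cloud Run side, a Secret Manager secret can be mounted as a file through the service's volume configuration. A rough sketch of the Knative-style service YAML, assuming a secret named flyway-properties that holds the contents above (names, paths and the image are illustrative, and the exact fields should be double-checked against the Cloud Run documentation):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: papp                                  # illustrative service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/papp       # illustrative image
          volumeMounts:
            - name: flyway-config
              mountPath: /config              # flyway.properties appears under /config
      volumes:
        - name: flyway-config
          secret:
            secretName: flyway-properties     # Secret Manager secret holding the file contents
            items:
              - key: latest                   # secret version to mount
                path: flyway.properties       # filename inside /config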
I'm using Netflix Conductor over the REST API. I'm able to create a workflow and run it, but I would like to know how to use the workflowStatusListener feature.
I'm running Conductor on my localhost with Docker, and I saw that the server is a simple jar, possibly a Spring Boot app. So how do I pass my own jar with my listener or simple tasks in this scenario?
I found out how to deploy it using the Docker image.
I copied the /app folder from my Docker container, changed the startup.sh script, and mounted my local folder in its place.
I copied my jar into /app/libs and adjusted the startup command:
java -cp libs/*.jar com.netflix.conductor.bootstrap.Main $config_file $log4j_file
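Putting it together, the run configuration ends up looking roughly like this (a sketch only; the image tag, the host folder name and the port are assumptions, not taken from my actual setup):

# docker-compose.yml -- sketch; image tag, folder name and port are assumptions
version: "3"
services:
  conductor-server:
    image: netflix/conductor:server     # stock server image (assumed tag)
    ports:
      - "8080:8080"
    volumes:
      - ./conductor-app:/app            # local copy of /app, with the modified startup.sh
                                        # and the custom listener jar under /app/libs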
I have a Spring Boot application that uses Log4j2 to write logs. When I deploy this application in a Docker container, I want the logs to be written to a file at a specified location outside the container. How can I do this? I tried providing the path of the log folder via an environment variable on startup, but it was of no use; no logs are being written.
You have to mount a directory from the host file system inside the container. A simple web search will yield many results on how to do this, such as How to mount a host directory in a Docker container. A minimal sketch follows.
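For example (a sketch only, assuming Log4j2 is configured to write into /logs inside the container; the image name and the LOG_PATH variable are illustrative and only matter if your log4j2 configuration actually references them):

# docker-compose.yml -- sketch; image name and LOG_PATH are assumptions
version: "3"
services:
  app:
    image: my-spring-app:latest     # your application image
    volumes:
      - ./app-logs:/logs            # host directory : directory Log4j2 writes to
    environment:
      - LOG_PATH=/logs              # only useful if log4j2.xml resolves ${env:LOG_PATH}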
I deployed a Spring Cloud application in Docker using a Dockerfile. The application needs to access picture files. In my local development environment I can access the picture files, but when the application runs in Docker it fails: the Spring Cloud application cannot find the files on the host computer. What do I need to do to fix this?
I have tried copying the picture files from the host into the Docker volume path, but that does not work.
The picture file paths in my application yml file are:
originImgPath: /tmp/tcps/klanalyze/originImg
captchaImgPath: /tmp/tcps/klanalyze/captchaImg
The pictures are saved on the host at these paths:
/tmp/tcps/klanalyze/originImg
/tmp/tcps/klanalyze/captchaImg
My Dockerfile is:
FROM jdk-8u191:20190321
MAINTAINER beigai_liyang
VOLUME /tmp
ADD target/klanalyze-gateway-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
EXPOSE 8888
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]
My code is like this.
//read yml setting
@Autowired
private GatewayProperty gatewayProperty;
public void loadPicture(){
……
//load file
File file = new File(gatewayProperty.getOriginImgPath());
……
}
My Docker version is 17.12.1-ce.
My Spring Cloud version is Finchley.SR1.
My Spring Boot version is 2.0.3.RELEASE.
My host computer runs CentOS 7.
It seems you need to use -v to mount your host directory onto the /tmp volume:
docker run -v /tmp:/tmp image_name
Concerns:
"Copy the host computer picture file to the Docker volume path" does not quite make sense to me.
Please paste your docker run command here, so we can see how you run the container.