How to share a large set of files between Kubernetes pods - spring-boot

I have a large set of read-only configuration files (around 4k) which are used by the microservice to process some XML files and are supposed to be read via Apache Commons Configuration.
These files are of the following types:
Properties files
XML
DTD
XFO
XSLT
Five of these files need some environment variables substituted into their content, such as a third-party software location, or service URLs that differ based on the environment the files are deployed in.
Now, I need to make these files available to 4 microservices at run time.
I'm using the fabric8.io Maven Docker plugin with a Dockerfile for image generation.
Kubernetes, Helm, a Jenkinsfile, and Argo CD handle CI/CD for the Spring Boot microservices.
The two challenges that I'm facing are how to substitute the variables inside these static files, and how to make the files available to each pod.
I have three solutions in mind, but I would like to know the best/most 12-factor-friendly approach to this problem.
Solution 1: Deploy the files as a separate pod and allow other pods to access a volume mount that it provides.
Solution 2: Add the files to the microservice image during the Docker image build.
Solution 3: Add the files as a sidecar container in each microservice pod.

You could upload these files to a Kubernetes ConfigMap.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config-haproxy
data:
  haproxy.cfg: "complete file contents"
A ConfigMap can contain an entire file, and you can mount that file into a pod directory:
volumeMounts:
  - mountPath: /usr/local/etc/haproxy
    name: config
volumes:
  - name: config
    configMap:
      name: env-config-haproxy
      items:
        - key: haproxy.cfg
          path: haproxy.cfg
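One caveat for a tree this size: a single ConfigMap is capped at roughly 1 MiB, so around 4k files will likely need to be split across several ConfigMaps or moved to a volume. As a minimal sketch, assuming the files sit in a local config/ directory (a hypothetical path), you can bulk-load a directory into one ConfigMap:

kubectl create configmap app-config --from-file=config/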

There is another solution: mount the files as a volume (backed by cloud storage) and substitute values on access within the service; that's the solution I'd go with by default.
Solutions 1 and especially 3 add a lot of complexity. Solution 2 may be a good choice too; however, to choose the best option you really need to answer another question: how do the config files and env substitutions change with respect to the application container versions?
I.e., when you change the files, should all of the services get new versions?
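If you prefer to render the substitutions once at pod startup rather than on access, a minimal sketch is an initContainer that runs gettext's envsubst over the templates into a shared emptyDir. The image, paths, variable names, and ConfigMap name below are all assumptions:

initContainers:
  - name: render-config
    image: alpine:3.19            # assumption: any image with a shell; envsubst is installed below (needs network)
    command: ["/bin/sh", "-c"]
    args:
      - |
        apk add --no-cache gettext
        for f in /templates/*; do
          # list the variables explicitly so other ${...} tokens in the files survive
          envsubst '${THIRD_PARTY_HOME} ${SERVICE_URL}' < "$f" > /config/"$(basename "$f")"
        done
    env:
      - name: THIRD_PARTY_HOME    # hypothetical variables referenced in the templates
        value: /opt/third-party
      - name: SERVICE_URL
        value: https://service.dev.example.com
    volumeMounts:
      - name: config-templates    # the raw files, e.g. a ConfigMap or cloud-storage volume
        mountPath: /templates
      - name: rendered-config
        mountPath: /config
containers:
  - name: app
    image: my-registry/my-service:latest   # hypothetical
    volumeMounts:
      - name: rendered-config
        mountPath: /app/config
volumes:
  - name: config-templates
    configMap:
      name: app-config            # hypothetical ConfigMap holding the template files
  - name: rendered-config
    emptyDir: {}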

Related

Spring Config Server - How do I access files inside folders per environment?

I have a folder structure like this in my GitLab:
test-project/
  DEV/application-testconfig.properties
  SYS/application-testconfig.properties
  PROD/application-testconfig.properties
  appication-commonconfig.properties
The contents of these testconfig.properties files differ per environment.
How do I access the files based on the environment in my Spring Config Server?
For example, I would like to use URLs like the following for prod and dev:
https://my.configserver.com/my-project/prod/testconfig/master
https://my.configserver.com/my-project/dev/testconfig/master
I should also be able to access appication-commonconfig.properties.
The problem is that I am migrating around 30 config files per environment, and the filenames in one environment are the same as in the others. I can't rename all the files to have "-dev" or "-prod" in them.
I've tried different paths, but no luck.
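One hedged option, assuming Spring Cloud Config Server's git backend: search-paths accepts a {profile} placeholder, so each profile's folder is searched in addition to the repository root (which keeps appication-commonconfig.properties reachable). The repository URI below is a placeholder:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://gitlab.com/my-group/test-project   # hypothetical repository URI
          search-paths: '{profile}'                       # resolved per request, e.g. DEV, SYS, PROD

Note the placeholder resolves to the profile exactly as requested, so clients would need to ask for profile DEV rather than dev unless the folders are renamed to lower case.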

Why does docker-compose create the source directory in one syntax but not the other?

Consider the following YAML in my docker-compose.yml file that sets up volume mounting (using version 3.7), with the short-form syntax as specified in the docs:
volumes:
  - ./logging:/var/log/cron
This maps the relative path logging on my host machine to the /var/log/cron folder inside the container. When I run docker-compose up, if the logging folder doesn't exist on my host machine, Docker creates it. All good there.
Now, if I change the above to long-form syntax:
volumes:
  - type: bind
    source: ./logging
    target: /var/log/cron
Now when I run docker-compose up, it DOES NOT create the logging folder if it doesn't exist on my host machine. I get:
Cannot create container for service app: b'Mount denied:\nThe source path "C:/Users/riptusk331/logging"\ndoesn\'t exist and is not known to Docker'
Does anyone know why the short-form syntax creates the host path if it doesn't exist, but the long form does not and gives an error?
I'm using Docker Desktop for Windows.
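The short syntax auto-creates missing host paths, while the long syntax historically refused to. On newer Compose releases that implement the Compose Specification, the long form can opt back in via bind.create_host_path (this assumes your Compose version supports that key):

volumes:
  - type: bind
    source: ./logging
    target: /var/log/cron
    bind:
      create_host_path: true   # asks Compose to create ./logging when it is missing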

Docker container with two jar files, run on demand, not as entry point

In my use case I would like to have two jar files in the container. In a typical Docker image, I see an entry point that starts the jar file. In my case, I won't know which program should be started until the container is being used in the K8s services. In my example, I have one jar file that applies the DDLs, and the second jar file is my application. I want K8s to deploy my DDL application first, and upon completion, deploy my Spring Boot application (from a different jar in the same container) next. Therefore I cannot give an entry point for my container; rather, I need to run the specific jar file using a command and arguments from my YAML file. In all the examples I have come across, I see an entry point being used to start the Java process.
The difference from the post referred to here is that I want the container to have two jar files, and when I load the container through K8s, I want to decide which program to run from the command prompt. One option I am exploring is a parameterized shell script, so I can pass the jar name as a parameter and the shell will run java -jar. I will update here once I find something.
Solution update
Add the two jars in the Dockerfile and have a shell script that takes a parameter. Use the sample below to invoke the right jar file from the K8s YAML file:
spec:
  containers:
    - image: URL
      imagePullPolicy: Always
      name: image-name
      command: ["/bin/sh"]
      args: ["-c", "/home/md/javaCommand.sh jarName.jar"]
      ports:
        - containerPort: 8080
          name: http
A Docker image doesn't have to run a java jar when starting; it just has to run something.
You can simply make this something a bash script that makes these decisions and starts the jar you like, as sketched below.
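A minimal sketch of such a wrapper, matching the javaCommand.sh referenced above (the /home/md path and jar location are assumptions):

#!/bin/sh
# javaCommand.sh - hypothetical wrapper; runs whichever jar is named by the first argument
# usage: javaCommand.sh jarName.jar
exec java -jar "/home/md/$1"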
Try adding the prerequisites in init containers when deploying to Kubernetes, and place your application in the regular container; that makes the DDL container initialize first, and the application container runs afterwards.
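A hedged sketch of that split, assuming both jars are baked into the same image at the paths shown (image name and jar paths are hypothetical):

spec:
  initContainers:
    - name: apply-ddl
      image: my-registry/my-app:latest            # hypothetical image containing both jars
      command: ["java", "-jar", "/home/md/ddl.jar"]
  containers:
    - name: application
      image: my-registry/my-app:latest
      command: ["java", "-jar", "/home/md/application.jar"]
      ports:
        - containerPort: 8080
          name: http

Kubernetes only starts the regular container once the init container exits successfully, which gives the DDL-then-application ordering.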

Symlink Secret in Kubernetes

I'm trying to use the Google Sheets and Gmail APIs, and I'd like to access the credentials file as a K8s Secret (which seems to be mounted as a symlink).
However, the Google oauth2 Python client specifically says that credential files cannot be symbolic links.
Is there a workaround for this?
Is there a workaround for this?
There are at least two that I can think of off-hand: environment variables, or an initialization mechanism through which the symlinks are copied into regular files.
Hopefully the first one is straightforward, using env: valueFrom: secretKeyRef: etc.
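A minimal sketch of that first approach; the variable, Secret, and key names are assumptions:

env:
  - name: GOOGLE_CREDENTIALS        # hypothetical variable your code reads instead of a file
    valueFrom:
      secretKeyRef:
        name: google-oauth          # hypothetical Secret name
        key: credentials.json       # hypothetical key inside the Secret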
As for the second approach, I lumped them together as an "initialization mechanism" because it will depend on your preference among the 3 ways I can immediately think of to pull off this trick.
Using an initContainer: and a Pod-scoped volume: emptyDir: would enable you to copy the secret into a volume shared among your containers, and that directory is cleaned up by Kubernetes when your Pod is destroyed.
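A sketch of that initContainer variant (the Secret name and paths are assumptions; cp -L follows the symlinks so plain files land in the shared volume):

initContainers:
  - name: desymlink-secret
    image: busybox:1.36
    command: ["sh", "-c", "cp -L /secret/* /creds/"]
    volumeMounts:
      - name: google-secret
        mountPath: /secret
      - name: creds
        mountPath: /creds
volumes:
  - name: google-secret
    secret:
      secretName: google-oauth      # hypothetical Secret name
  - name: creds
    emptyDir: {}

Your application container then mounts creds at whatever path the client library expects.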
Using an explicit command: to run some shell before launching your actual application:
command:
  - bash
  - -ec
  - |
    cp /path/to/my/secret/* ./my-secret-directory/
    ./bin/launch-my-actual-server
Or, finally (and I would guess you have already considered this), have the application actually read in the contents and then write them back to a file of your choice

docker-compose.vs.release.yml Volume binding on VS 2017

This is a Docker Compose file for a .NET Core web application project.
I am trying to understand what these lines mean.
What does ~/clrdbg:/clrdbg:ro mean?
When I create files, they are stored in the root of my project folder as well. Aren't they supposed to be stored in container volumes?
How do I map volumes properly and delete the contents of these volumes?
version: '2'
services:
  is.mvcclient:
    build:
      args:
        source: ${DOCKER_BUILD_SOURCE}
    volumes:
      - ~/clrdbg:/clrdbg:ro
    entrypoint: tail -f /dev/null
    labels:
      - "com.microsoft.visualstudio.targetoperatingsystem=linux"
~/clrdbg:/clrdbg:ro basically means that the local folder ~/clrdbg will be available in the container under /clrdbg, and local changes will also be reflected in the container without the need to rebuild the image. ro means it is read-only, so the container can't change the files in that folder.
Your volume is mounted to a host folder (in this case, I assume your project's root). As mentioned in the previous point, in that case changes in the local filesystem are reflected in the container.
First you have to get your project into the container, so I guess you can COPY/ADD it on image build. After that, you have to do something along the lines of:
services:
  is.mvcclient:
    volumes:
      - data-volume:/clrdbg

volumes:
  data-volume:
By doing that, all changes to the files in the container will be reflected only in those files, not the local ones. Of course, that goes both ways: changes to local files won't be reflected in the container files.
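As for deleting the contents of these volumes: with named volumes, the following commands remove them (the <project> prefix is Compose's default project name, so your actual volume name is an assumption here):

docker-compose down -v                    # removes the containers plus their named volumes
docker volume rm <project>_data-volume    # or remove a single volume by name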
