Is it possible to do parameter substitution in k8s ConfigMaps?

I have a ConfigMap that loads properties files for my Spring Boot application. The ConfigMap is mounted as a volume, and my Spring Boot app reads from that volume.
My typical property files are:
application-dev1.yml:
integrations-queue-name=integration-dev1
search-queue-name=searchindex-dev1
application-dev2.yml:
integrations-queue-name=integration-dev2
search-queue-name=searchindex-dev2
application-dev3.yml:
integrations-queue-name=integration-dev3
search-queue-name=searchindex-dev3
My goal is to have one properties file:
application-env.yml:
integrations-queue-name=integration-{env}
search-queue-name=searchindex-{env}
I want to substitute {env} with the profile that is active for my service.
Is it possible to do parameter substitution in ConfigMaps from my Spring Boot application running in the pod? I am looking for something similar to the maven-resources-plugin, but done at run time.

If it's just those two properties, then you will likely get more mileage out of the SPRING_APPLICATION_JSON environment variable, which should supersede anything in the ConfigMap:
containers:
- name: my-spring-app
  image: whatever
  env:
  - name: ENV_NAME
    value: dev2
  - name: SPRING_APPLICATION_JSON
    value: |
      {"integrations-queue-name": "integration-$(ENV_NAME)",
       "search-queue-name": "searchindex-$(ENV_NAME)"}
materializes as:
$ kubectl exec my-spring-pod -- printenv
ENV_NAME=dev2
SPRING_APPLICATION_JSON={"integrations-queue-name": "integration-dev2",
"search-queue-name": "searchindex-dev2"}


openshift set environment variable from a file

I have a mounted volume with a file urls.txt containing a database source URL, like:
databasesource: mysql://xxxx
My Spring Boot application runs as a container in an OpenShift pod, and in the application I need to set SPRING_DATASOURCE_URL from the value in the file above. Here is what I want to achieve in my template file:
env:
- name: SPRING_DATASOURCE_URL
  valueFrom:
    mount:
      name: my-volume
      key: databasesource
volumeMounts:
- name: my-volume
  mountPath: /someDir
I know valueFrom works with a configMap or secret, but I want to achieve this via a volumeMount.
If you can use the format below in urls.txt:
databasesource=mysql://xxxx
then as part of your container start you can run:
source /someDir/urls.txt
which loads the keys and values as shell variables. Export them (for example by running set -a before sourcing) so they become environment variables visible to the application you start next.
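A minimal sketch of wiring that into the pod spec; the image name and jar path are placeholders:

containers:
- name: my-spring-app
  image: my-registry/my-spring-app:latest   # hypothetical image
  command: ["/bin/sh", "-c"]
  # set -a auto-exports everything sourced from the mounted file
  args:
  - set -a && . /someDir/urls.txt && set +a && exec java -jar /app.jar
  volumeMounts:
  - name: my-volume
    mountPath: /someDir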
The problem was ultimately solved by a Spring Boot config import feature (available since Spring Boot 2.4): https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config.files.importing-extensionless
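A sketch of what that import looks like, assuming the file is mounted without an extension; the bracketed [.properties] hint tells Spring Boot how to parse it:

spring:
  config:
    # hypothetical path; adjust to wherever the volume is mounted
    import: "file:/someDir/urls[.properties]"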

How to inject deployment yaml env variable in springboot application yaml

I am trying to read an environment variable declared in the deployment YAML of Kubernetes into my Spring Boot application.yaml.
Below is a sample from deployment.yaml:
spec:
  containers:
  - env:
    - name: SECRET_IN
      value: dev
Below is a sample from application.yaml:
innovation:
  in: ${SECRET_IN:demo}
But on localhost, when I try to print innovation.in (the @Configuration is created correctly), I never get "dev" in the output; it always prints "demo". It appears the link between the deployment and application YAML is not happening. Could someone please help?
You can store the whole application.yaml config file in a ConfigMap or Secret and inject it with the deployment only.
For example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yaml: |-
    pool:
      size:
        core: 1
        max: 16
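A sketch of mounting that ConfigMap so Spring Boot picks it up; the mount path and container details are assumptions, and SPRING_CONFIG_ADDITIONAL_LOCATION maps to spring.config.additional-location via relaxed binding:

containers:
- name: demo-app                                 # hypothetical name
  image: my-registry/demo-app:latest             # hypothetical image
  env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION
    value: file:/etc/config/
  volumeMounts:
  - name: app-config
    mountPath: /etc/config
volumes:
- name: app-config
  configMap:
    name: demo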
If your application.properties is something like:
spring.datasource.url=jdbc:mysql://localhost:3306/dbname
spring.datasource.username=user
spring.datasource.password=password
you can replace the hardcoded host with a placeholder:
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/dbname
Deployment.yaml will then be something like:
spec:
  containers:
  - name: demowebapp
    image: registry.gitlab.com/unicorn/unicornapp:1.0
    ports:
    - containerPort: 8080
    imagePullPolicy: Always
    env:
    - name: MYSQL_HOST
      value: mysql-prod
You can store more configuration in the ConfigMap and Secret as requirements grow.
Read more at: https://pushbuildtestdeploy.com/spring-boot-application.properties-in-kubernetes/
I think you did everything right; I have a similar working setup, although without a default 'demo'.
A couple of clarifications from the Spring Boot standpoint that might help.
application.yml can indeed contain placeholders that are resolved from environment variables.
Make sure this application.yml is not "changed" (rewritten, filtered by Maven, and so on) during compilation of the Spring Boot artifact.
Most importantly: Spring Boot knows nothing about the k8s setup. If the environment variable exists, Spring Boot will pick it up. So the same can be checked locally: define the environment variable on your local machine and run the Spring Boot application.
Chances are that when the application runs (under its user/group) the environment variables are not accessible. Check by printing the environment variables (or this specific one) right before starting the Spring Boot application, or do it in Java in the main method:
Map<String, String> env = System.getenv();
env.entrySet().forEach(System.out::println);

How to inject secret from Google Secret Manager into Kubernetes Pod as environment variable with Spring Boot?

For the life of Bryan, how do I do this?
Terraform is used to create an SQL Server instance in GCP.
Root password and user passwords are randomly generated, then put into the Google Secret Manager.
The DB's IP is exposed via a private DNS zone.
How can I now get the username and password to access the DB into my K8s cluster? Running a Spring Boot app here.
This was one option I thought of:
In my deployment I add an initContainer:
- name: secrets
  image: gcr.io/google.com/cloudsdktool/cloud-sdk
  args:
  - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> super_secret.env
Okay, what now? How do I get it into my application container from here?
There are also options like bitnami/sealed-secrets, which I don't like since the setup already uses Terraform and keeps the secrets in GCP. With sealed-secrets I could skip the Secret Manager entirely; the same goes for Vault, IMO.
On top of the other answers and the suggestions in the comments, I would like to suggest two tools that you might find interesting.
The first one is secrets-init:
secrets-init is a minimalistic init system designed to run as PID 1 inside container environments, and it's integrated with multiple secrets manager services, e.g. Google Secret Manager.
The second one is kube-secrets-init:
kube-secrets-init is a Kubernetes mutating admission webhook that mutates any K8s Pod that uses specially prefixed environment variables, directly or from a Kubernetes Secret or ConfigMap.
It also supports integration with Google Secret Manager:
Users can put a Google secret name (prefixed with gcp:secretmanager:) as an environment variable value. secrets-init will resolve any environment value, using the specified name, to the referenced secret value.
Here's a good article about how it works.
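A sketch of what such a prefixed variable looks like in a pod spec, assuming the kube-secrets-init webhook is installed; the project and secret names are hypothetical:

env:
- name: DB_PASSWORD
  # rewritten to the secret's value by the webhook-injected secrets-init
  value: gcp:secretmanager:projects/my-project/secrets/db-password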
How do I get it into my application container from here?
You could use a volume to store the secret and mount that same volume in both the init container and the main container, so the init container can share the secret with the main container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: secrets
    image: gcr.io/google.com/cloudsdktool/cloud-sdk
    command: ["/bin/sh", "-c"]
    # write the secret into the shared volume so the main container can read it
    args:
    - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> /data/super_secret.env
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
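The main container still has to load that file itself; a sketch of doing so in its entrypoint (the jar path is a placeholder):

containers:
- name: my-app
  image: my-app:latest
  command: ["/bin/sh", "-c"]
  # export the variables written by the init container, then start the app
  args:
  - set -a && . /data/super_secret.env && set +a && exec java -jar /app.jar
  volumeMounts:
  - name: config-data
    mountPath: /data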
You can use spring-cloud-gcp-starter-secretmanager to load secrets from the Spring application itself.
Documentation: https://cloud.spring.io/spring-cloud-gcp/reference/html/#secret-manager
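With that starter on the classpath, property values can reference secrets directly; a sketch, with a hypothetical secret id:

spring:
  datasource:
    # resolved from Google Secret Manager by the starter's property source
    password: ${sm://db-password}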
Use a volume of emptyDir with medium: Memory to guarantee that the secret is never persisted to disk.
...
volumes:
- name: scratch
  emptyDir:
    medium: Memory
    sizeLimit: "1Gi"
...
If one has control over the image, it's possible to change the entry point and use berglas.
Dockerfile:
# or whatever base image you need
FROM adoptopenjdk/openjdk8:jdk8u242-b08-ubuntu
# Install berglas, see https://github.com/GoogleCloudPlatform/berglas
RUN mkdir -p /usr/local/bin/
ADD https://storage.googleapis.com/berglas/main/linux_amd64/berglas /usr/local/bin/berglas
RUN chmod +x /usr/local/bin/berglas
ENTRYPOINT ["/usr/local/bin/berglas", "exec", "--"]
Now we build the container and test it:
docker build -t image-with-berglas-and-your-app .
docker run \
-v /host/path/to/credentials_dir:/root/credentials \
--env GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/your-service-account-that-can-access-the-secret.json \
--env SECRET_TO_RESOLVE=sm://your-google-project/your-secret \
-ti image-with-berglas-and-your-app env
This should print the environment variables with the sm:// substituted by the actual secret value.
In K8s we run it with Workload Identity, so the K8s service account on behalf of which the pod is scheduled needs to be bound to a Google service account that has the right to access the secret.
In the end your pod description would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: your-app
spec:
  containers:
  - name: your-app
    image: image-with-berglas-and-your-app
    command: [start-sql-server]
    env:
    - name: AXIOMA_PASSWORD
      value: sm://your-google-project/your-secret

config-map kubernetes multiple environments

I am trying to deploy a Spring Boot application using configuration data from a Kubernetes cluster. I have a simple RestController that prints a message by reading from the Kubernetes config:
private String message = "Message not coming from Kubernetes config map";

@RequestMapping(value = "/echo", method = GET)
public String printKubeConfig() {
    return message;
}
I specified the name of the config map in my application.yml:
spring:
  application:
    name: echo-configmap
echo-configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-configmap
data:
  application.properties: |-
    message=Hello from dev Kubernetes Configmap
  application_qa.properties: |-
    message=Hello from qa Kubernetes Configmap
I have several environments like qa, int, test, etc.
What's the best way to specify environment-specific properties in the config map? And how do I access them in the Spring Boot application?
For example: if the application is deployed in qa, my service should return the message "Hello from qa Kubernetes Configmap".
We also plan to read these configuration files from Git in the future. How should that use case be handled?
Let me try to provide an answer which I think gives you what you need, without using any tools beyond what you'll have installed on most boxes. Maybe try this first, and if you find the approach becomes difficult to manage and scale, move on to something more sophisticated.
Step 1: Version control configmaps per environment
Create a folder like k8s/configmaps or something, and create one configmap per environment:
k8s/configmaps/properties.dev.yaml
k8s/configmaps/properties.qa.yaml
k8s/configmaps/properties.sit.yaml
k8s/configmaps/properties.uat.yaml
Each configmap should contain your environment specific settings.
Step 2: Have a namespace per environment
Create a k8s namespace per environment, such as:
application-dev
application-qa
application-sit
application-uat
Step 3: Create the configmap per environment
A little bash will help here:
#!/usr/bin/env bash
# apply-configmaps.sh
namespace="application-${ENVIRONMENT}"
for configmap in ./k8s/configmaps/*.${ENVIRONMENT}.yaml; do
  echo "Processing ConfigMap $configmap"
  kubectl apply -n "${namespace}" -f "$configmap"
done
Now all you need to do to create or update configmaps for any environment is:
ENVIRONMENT=dev ./apply-configmaps.sh
Step 4: Finish the job with CI/CD
Now you can create a CI/CD pipeline: if your configmap source changes, just run the command shown above.
Summary
Based on primitive commands and no special tools you can:
Version control config
Manage config per environment
Update or create config when the config code changes
Easily apply the same approach in a CI/CD pipeline if needed
I would strongly recommend you follow this basic 'first principles' approach before jumping into more sophisticated tools to solve the same problems. In many cases you can do it yourself without much effort, learn the key concepts, and save the more sophisticated tooling for later if you really need it.
Hope that helps!
This sounds like a good use case for Helm. You could deploy your application as a Helm Chart, which would basically allow you to generate your Kubernetes resources (like ConfigMaps, Deployments, and whatever else you need) from templates.
You can use the documentation on Helm Charts to get started with Helm. After having created a Chart with helm create, you will get a templates/ directory, in which you might place the following YAML template for your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
  labels:
    app: {{ .Chart.Name | trunc 63 | trimSuffix "-" }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  application.properties: |-
    message={{ .Values.properties.message }}
You can add a second YAML template for your Deployment object (actually, helm create will already create a sensible default deployment). Simply add your ConfigMap as a volume there; note that the container gets a volumeMounts entry, while volumes lives on the pod spec:
containers:
- name: {{ .Chart.Name }}
  # [...]
  volumeMounts:
  - name: property-volume
    mountPath: /etc/your-app/properties
volumes:
- name: property-volume
  configMap:
    name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
Each Helm chart has a values.yaml file, in which you can define default values that are then used to fill in your templates. This default file might look like this (remember that the ConfigMap template above contained a {{ .Values.properties.message }} expression):
replicaCount: 1
image:
  repository: your-docker-image
  tag: your-docker-tag
properties:
  message: Hello!
Next, use this Helm chart and the helm install command to deploy your application as many times as you want with different configurations. You can supply different YAML files in which you override specific values from your values.yaml file, or override individual values using --set:
$ helm install --name dev --set image.tag=latest --set replicaCount=1 path/to/chart
$ helm install --name prod --set image.tag=stable --set replicaCount=3 --set properties.message="Hello from prod" path/to/chart
As to your second question: Of course you should put your Helm Chart into version control. You can then use the helm upgrade command to apply changes to an already deployed application.
For non-secret data, I would use this tool in each of your Git projects: https://github.com/kubernetes-sigs/kustomize
In the metadata you can label your resources per environment and filter your pods on those labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mycomponent
    env: dev
    tier: backend
  name: mycomponent
  namespace: myapplication
kubectl get pods -n myapplication -l env=dev,tier=backend,app=mycomponent
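Kustomize itself then layers environment-specific config over a shared base; a minimal sketch of a dev overlay, assuming a conventional base/ and overlays/dev/ layout and a hypothetical ConfigMap name:

# overlays/dev/kustomization.yaml
resources:
- ../../base
namespace: myapplication
configMapGenerator:
- name: mycomponent-properties   # hypothetical name
  literals:
  - message=Hello from dev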
Typically you don't want applications running in qa to interfere with applications running in production. Using Kubernetes, you can get this kind of isolation by using different namespaces for different environments.
That way, objects in the prod namespace are separate from objects in the qa namespace. Another, more expensive approach would be to use different k8s clusters for different environments.
Having this setup, you would deploy your application in the namespace for the specific environment where you want to deploy to, creating the Deployment object on that namespace.
This Deployment would make use of a ConfigMap object containing your Spring Boot properties. Let's call this ConfigMap echo-properties for example.
That way, every namespace would have a unique copy of the echo-properties ConfigMap. Each containing the specific configuration for the environment where it belongs.
The Deployment object consumes the ConfigMap properties by either using environment variables or reading files. The important bit here is that if you change the echo-properties ConfigMap data, your running application won't see the new values by default; Kubernetes doesn't restart or signal Pods when a ConfigMap changes. You can't compare ConfigMaps to Spring Cloud Config, which is a dynamic configuration solution.
An approach that would get you a similar behaviour (but not quite the same) would be using the fabric8 ConfigMap Controller on your cluster. This controller is a process that runs on your cluster, and it would restart your application whenever the ConfigMap changes, so that it reads the new configuration values.
If you don't want to restart your application whenever a configuration changes, you should probably stick to Spring Cloud Config for values that will potentially change, and use ConfigMaps for other properties that won't change, like application name or port.
Your use case sounds very much like you should take a look at spring-cloud-config: https://cloud.spring.io/spring-cloud-config/
The config server is an infrastructure component that serves configuration, which can live in a Git repository.
A config client application connects to the config server at startup and loads the configuration applicable to the current profiles.
You could have different branches for different environments, or use profiles per environment. In your Kubernetes deployment manifest you can set the profile via the SPRING_PROFILES_ACTIVE environment variable, as sketched below.
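A minimal sketch of that deployment fragment; the container name and image are placeholders:

containers:
- name: echo-service                       # hypothetical name
  image: my-registry/echo-service:latest   # hypothetical image
  env:
  - name: SPRING_PROFILES_ACTIVE
    value: qa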

Pass environment variable through service in Kubernetes?

Is there a way to pass environment variables through the services in Kubernetes?
I tried passing it in to my service yaml like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: kafka
  name: kafka
spec:
  ports:
  - port: 9092
  selector:
    name: kafka
  env:
  - name: BROKER_ID
    value: "1"
The service YAML is accepted by kubectl, and the service is created.
I've confirmed the service is connected to my container by running env | grep KAFKA: the number of variables increases greatly when my service is up, as expected.
However, I would like to pass in custom environment variables that have to differ depending on which instance of the container they are in.
Is this possible?
Kubernetes is designed with Services decoupled from Pods. You cannot inject a Secret or an env var into a running Pod; what you want is to configure the Pod itself to use the env var or Secret.
This is the best way I've found so far (reading required): https://github.com/kubernetes/kubernetes/issues/4710
Roughly: create a secret in a file that is mounted, and source it before you execute your script.
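A sketch of where a per-instance variable like BROKER_ID actually belongs: on the Pod spec rather than the Service; the image is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: kafka-1
  labels:
    name: kafka          # matched by the Service's selector
spec:
  containers:
  - name: kafka
    image: my-registry/kafka:latest   # hypothetical image
    ports:
    - containerPort: 9092
    env:
    - name: BROKER_ID
      value: "1"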
