OpenShift: ConfigMap not picked up by the application - Spring Boot

I have a Spring Boot application deployed in OpenShift whose application.properties contains
greeting.constant = HelloWorld.SpringProp
I have also defined fabric8/configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleappconfig
data:
  greeting.constant: Hellowrold.Poc.ConfigMap.Test
and fabric8/deployment.yml:
spec:
  template:
    spec:
      containers:
        - name: sampleappcontainer
          env:
            - name: greeting.constant
              valueFrom:
                configMapKeyRef:
                  name: sampleappconfig
                  key: greeting.constant
          envFrom:
            - configMapRef:
                name: sampleappconfig
          resources:
            requests:
              cpu: "0.2"
              # memory: 256Mi
            limits:
              cpu: "1.0"
              # memory: 256Mi
On deploying the application with fabric8, the ConfigMap is created in OpenShift and I also see "greeting.constant" in the "Environment" tab of the application in the OpenShift web console.
The issue is that I would expect the application to pick up the value given in the ConfigMap instead of the one in Spring's application.properties, since environment variables take precedence. But the running application logs "HelloWorld.SpringProp" instead of "Hellowrold.Poc.ConfigMap.Test".
How do I make my application read the properties from the ConfigMap?

ConfigMap changes are only reflected in the container automatically if the ConfigMap is mounted as a file and the application can detect changes to the file and re-read it.
If the ConfigMap is used to populate environment variables, it is necessary to trigger a new deployment for the environment variables to be updated. There is no way to live-update the values of the environment variables that the application sees by changing the ConfigMap.
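For reference, a minimal sketch of the file-mount approach; restructuring the ConfigMap to carry a whole application.properties entry and the /deployments working directory are assumptions, not taken from the question:
# Hypothetical restructured ConfigMap: the data key is a properties file
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleappconfig
data:
  application.properties: |
    greeting.constant=Hellowrold.Poc.ConfigMap.Test
---
# In the Deployment's pod template: mount it where Spring Boot looks for
# external config (file:./config/ relative to the working directory)
spec:
  containers:
    - name: sampleappcontainer
      volumeMounts:
        - name: app-config
          mountPath: /deployments/config   # assumed working directory
  volumes:
    - name: app-config
      configMap:
        name: sampleappconfig
Alternatively, the mount path can be passed explicitly with --spring.config.additional-location so it does not depend on the working directory.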

Related

Spring Cloud Kubernetes is not loading secret keys with pattern like xx.yy

I am trying to learn about Spring Cloud Kubernetes for loading secrets, and what I have observed is that if a property key has a dotted (yml-like) structure, it doesn't get loaded into the app.
Example:
kind: Secret
metadata:
  name: activemq-secrets
  labels:
    broker: activemq
type: Opaque
data:
  amqusername: bXl1c2VyCg==
  amq.password: MWYyZDFlMmU2N2Rm
K8s manifest:
template:
  spec:
    volumes:
      - name: secretvolume
        secret:
          secretName: activemq-secrets
    containers:
      - volumeMounts:
          - name: secretvolume
            readOnly: true
            mountPath: /etc/secrets/
JVM args:
-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets/
-Dspring.cloud.kubernetes.secrets.enabled=true
Loading the value with @Value("${amqusername}") works.
But when I try to read the property with @Value("${amq.password}") I get an error that the placeholder cannot be found. I have tried printing all Spring configs and it doesn't show up. How can I fix this?
Try changing the variable name in the secret to amq_password
Update:
If you use environment variables rather than system properties, most operating systems disallow period-separated key names, but you can use underscores instead (e.g. SPRING_CONFIG_NAME instead of spring.config.name).
https://docs.spring.io/spring-boot/docs/1.5.6.RELEASE/reference/html/boot-features-external-config.html
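A minimal sketch of that rename, assuming the rest of the Secret stays as posted (the base64 values are unchanged):
kind: Secret
metadata:
  name: activemq-secrets
  labels:
    broker: activemq
type: Opaque
data:
  amqusername: bXl1c2VyCg==
  amq_password: MWYyZDFlMmU2N2Rm   # underscore instead of a period
The injection point would then presumably need to reference ${amq_password} as well, since the mounted file name becomes the property name.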

How to populate an application.properties value from a Kubernetes Secret mounted as a file

I am working with Spring Boot and Kubernetes, and I have a really simple application that connects to a Postgres database. I want to get the datasource values from a ConfigMap and the password from a Secret mounted as a file.
ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config
data:
  application.properties: |
    server.forward-headers-strategy=framework
    spring.datasource.url=jdbc:postgresql://test/customer
    spring.datasource.username=postgres
Secret file:
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
data:
  spring.datasource.password: cG9zdGdyZXM=
Deployment file:
spec:
  containers:
    - name: customerc
      image: localhost:8080/customer
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8282
      volumeMounts:
        - mountPath: /workspace/config/default
          name: config-volume
        - mountPath: /workspace/secret/default
          name: secret-volume
  volumes:
    - name: config-volume
      configMap:
        name: customer-config
    - name: secret-volume
      secret:
        secretName: secret-demo
        items:
          - key: spring.datasource.password
            path: password
If I move the spring.datasource.password property from the Secret to the ConfigMap it works fine, and if I populate its value as an environment variable it also works fine.
But since neither is a secure way to do this, can someone tell me what's wrong with mounting the Secret as a file?
Spring Boot 2.4 added support for importing a config tree. This support can be used to consume configuration from a volume mounted by Kubernetes.
As an example, let’s imagine that Kubernetes has mounted the following volume:
etc/
  config/
    myapp/
      username
      password
The contents of the username file would be a config value, and the contents of password would be a secret.
To import these properties, you can add the following to your application.properties file:
spring.config.import=optional:configtree:/etc/config/
This will result in the properties myapp.username and myapp.password being set. Their values will be the contents of /etc/config/myapp/username and /etc/config/myapp/password respectively.
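Applied to the manifests in the question, a minimal sketch might look like the following; dropping the items: rename (so the mounted file keeps its dotted name) and adding the import line to the existing application.properties are both assumptions, not part of the original setup:
# ConfigMap: add the config tree import to the existing application.properties
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config
data:
  application.properties: |
    server.forward-headers-strategy=framework
    spring.datasource.url=jdbc:postgresql://test/customer
    spring.datasource.username=postgres
    spring.config.import=optional:configtree:/workspace/secret/default/
---
# Secret volume: mount the key without renaming it, so the file
# /workspace/secret/default/spring.datasource.password maps to the
# property spring.datasource.password
volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
Note that this requires Spring Boot 2.4 or later, as mentioned above.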
By default, consuming secrets through the API is not enabled for security reasons. Spring Cloud Kubernetes requires access to the Kubernetes API in order to retrieve the list of addresses of the pods running for a single service. The simplest way to do that when using Minikube is to create a default ClusterRoleBinding with the cluster-admin privilege.
Example of how to create one:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
You need to give the Secret a type in the manifest file. Hope it will work.
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
type: Opaque
data:
  spring.datasource.password: cG9zdGdyZXM=

Spring Boot - read container environment variables in properties file

I use:
Spring Boot
Microservices (containerized)
Docker
Kubernetes
My case is as follows:
I have to generate link:
https://dev-myapp.com OR https://qa-myapp.com
depending on the environment in which my service is running (DEV, QA)
I have one Spring profile, BUT under this profile my app can run in Kubernetes in two types of environment: DEV or QA. I want to generate the proper link by reading it from my properties file:
@Value("${email.body}")
private String emailBody;
application.yaml:
email:
  body: Click on the following URL: ${ENVIRONMENT_URL:}/edge/invitation?code={0}&email={1}
DevOps (Kubernetes):
Manifest in the workloads folder (DEV branch; the same for the QA branch but this time with https://qa-myapp.com):
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
...
...
containers:
  env:
    - name: ENVIRONMENT_URL
      value: https://dev-myapp.com
So is it possible to read that value from the Kubernetes container in my Spring properties file? I want to get the email.body property depending on the container my service is running in.
Yes, this is possible; I have corrected the syntax of the YAML:
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          command: ["/bin/sh", "-c", "env | grep ENVIRONMENT_URL"]
          env:
            - name: ENVIRONMENT_URL
              value: https://myapp.com   # indentation corrected
          ports:
            - containerPort: 80

How to properly configure the environment in a Kubernetes cluster?

I have a Spring Boot application with two profiles, dev and prod. My Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the active profile as part of the entrypoint command line.
This is the containers section of my Kubernetes deployment where I use this image:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    ports:
      - containerPort: 8080
        name: myapp
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
It works but has a major flaw: how can I switch to the production environment without rebuilding the image?
The best would be to remove that ENTRYPOINT from my Dockerfile and provide this configuration in my Kubernetes YAML so that I could always use the same image... is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's Actuator. It would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java","-Dspring.profiles.active=prod","-cp","app:app/lib/*","com.my.Application"]
    ports:
      - containerPort: 8080
        name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
PROFILE="${APPLICATION_CONTEXT:-dev}"
exec java "-Dspring.profiles.active=$PROFILE" -cp app:app/lib/* com.my.Application
Then, you could simply pass that environment variable into your pod:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    env:
      - name: APPLICATION_CONTEXT
        value: prod
    ports:
      - containerPort: 8080
        name: myapp
Rather than putting spring.profiles.active in the Dockerfile's ENTRYPOINT, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as application.properties for your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of the Kubernetes deployment, mount the ConfigMap inside the container, where it will act as application.properties:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
    ports:
      - containerPort: 8080
        name: myapp
    volumeMounts:
      - name: myapp-application-config
        mountPath: "/config"
        readOnly: true
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
volumes:
  - name: myapp-application-config
    configMap:
      name: myapp-config
      items:
        - key: application-dev.properties
          path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties that we created in the ConfigMap.
So, making use of ConfigMaps and application.properties, one can override any configuration of the application without rebuilding the image.
If you want to add a new config or update the value of an existing one, just make the appropriate changes in the ConfigMap and kubectl apply it, then scale your application pod down and up to bring the new config into action.
Hope this helps.
There are many ways to set Spring configuration values. With some rules, you can use ordinary environment variables to specify individual property values. You might see if you can use this instead of having a separate Spring profile control.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting
containers:
  - name: myapp
    env:
      - name: CACHE_REDIS_URL
        value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is using Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
You can then select the desired profile by passing an environment variable in your Kubernetes deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
        - name: container-name
          image: your-image
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "development"

Unable to set proxy in SonarQube running in OpenShift (OKD)

I'm running the sonarqube-openshift-docker build of sonarqube. I need to set the proxy Sonar uses so it can get to the Marketplace and pull down a Java profile.
I've tried setting a deployment config env name/value pair:
JAVA_TOOLS_OPTIONS = "-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort="
I've also tried setting HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTPS_PROXY_HOST, HTTPS_PROXY_PORT name/value pairs.
All of these make it through to the environment on the container side, but Sonar isn't using those.
Changing the sonar.properties file in the container doesn't work since it's not persistent and gets stomped on with a restart.
I also tried adding it here, but that didn't work.
template:
  metadata:
    annotations:
      openshift.io/container.sonarqube.image.entrypoint: '["./bin/run.sh -Dhttp.proxyHost=<myProxy:port>"]'
I am guessing I need to pass it in somewhere in the YAML file, but I can't figure out where.
AFAIK you have to provide host and port in separate properties:
http.proxyHost=
http.proxyPort=
Take a look at the sonar.properties file here.
Running SonarQube on OpenShift, I use a template that installs a ConfigMap setting the HTTP proxy configuration:
apiVersion: v1
kind: Template
metadata:
  name: sonarqube-template
objects:
  [...]
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${APPLICATION_NAME}-conf
    data:
      sonar.properties: |-
        http.nonProxyHosts=${PROXY_EXCLUDE}
        http.proxyHost=${PROXY_HOST}
        http.proxyPort=${PROXY_PORT}
        https.proxyHost=${PROXY_HOST}
        https.proxyPort=${PROXY_PORT}
      wrapper.conf: |-
        wrapper.java.command=java
        wrapper.java.additional.1=-Dsonar.wrapped=true
        wrapper.java.additional.2=-Djava.awt.headless=true
  [...]
  - apiVersion: v1
    kind: DeploymentConfig
    [...]
          volumeMounts:
            - mountPath: /opt/sonarqube/conf
              name: ${APPLICATION_NAME}-conf
    [....]
          volumes:
            - configMap:
                defaultMode: 420
                name: ${APPLICATION_NAME}-conf
    [...]
parameters:
  - name: APPLICATION_NAME
    value: sonarqube
  - name: PROXY_HOST
    value: proxy.example.com
  - name: PROXY_PORT
    value: "3128"
  - name: PROXY_EXCLUDE
    value: "*.internal.domain.example.com"
