Spring Boot + Google Kubernetes + Google Cloud SQL not working - spring-boot

I am trying to deploy a Spring Boot application to Google Kubernetes (Google Container Engine).
I have performed all the steps given in the link below:
https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#0
When I try to perform step 9 and open http://<EXTERNAL-IP>:8080 in a browser, it is not reachable.
Yes, I did get an external IP address, and I am able to ping it.
Let me know if any other information is required.
The logs show that the application is not able to connect to the database.
Error:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.

I hope you have created a cluster in Google Container Engine.
Follow the first five steps given in this link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Change the database configuration in your application:
hostname: 127.0.0.1
port: 3306 (or your MySQL port)
username: proxyuser (should be the same as in step 3 of the link)
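For example, with Spring Boot the matching datasource settings in application.yml would look roughly like this (a sketch only; the database name and password are placeholders for your own values):
spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/<your-database>
    username: proxyuser
    password: <your-password>
Then build the jar: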
mvn package -Dmaven.test.skip=true
Create a file named "Dockerfile" with the content below:
FROM openjdk:8
COPY target/SpringBootWithDB-0.0.1-SNAPSHOT.jar /app.jar
EXPOSE 8080/tcp
ENTRYPOINT ["java", "-jar", "/app.jar"]
docker build -t gcr.io/<project ID>/springbootdb-java:v1 .
docker run -ti --rm -p 8080:8080 gcr.io/<project ID>/springbootdb-java:v1
gcloud docker -- push gcr.io/<project ID>/springbootdb-java:v1
Follow the sixth step given in the link and create the YAML file, then apply it:
kubectl create -f cloudsql_deployment.yaml
Run kubectl get deployment and copy the name of the deployment, then expose it:
kubectl expose deployment <deployment-name> --type=LoadBalancer
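Once the deployment is exposed, you can watch for the external IP of the service with a standard kubectl command:
kubectl get service --watch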
My YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: conversationally
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: conversationally
    spec:
      containers:
      - image: gcr.io/<project ID>/springbootdb-java:v1
        name: web
        env:
        - name: DB_HOST
          # Connect to the SQL proxy over the local network on a fixed port.
          # Change the [PORT] to the port number used by your database
          # (e.g. 3306).
          value: 127.0.0.1:3306
        # These secrets are required to start the pod.
        # [START cloudsql_secrets]
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        # [END cloudsql_secrets]
        ports:
        - containerPort: 8080
          name: conv-cluster
      # Change [INSTANCE_CONNECTION_NAME] here to include your GCP
      # project, the region of your Cloud SQL instance and the name
      # of your Cloud SQL instance. The format is
      # $PROJECT:$REGION:$INSTANCE
      # Insert the port number used by your database.
      # [START proxy_container]
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<instance name>=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
      # [END volumes]
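After applying this deployment, it is worth checking that both containers start and that the proxy container connects to your instance; these are standard kubectl commands, with the pod name taken from the first one:
kubectl get pods
kubectl logs <pod-name> -c cloudsql-proxy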
===========

Related

How to properly use environment variables from k8s inside application.properties in springboot

I'm trying to set some environment variables in a k8s deployment and use them within the application.properties in my Spring Boot application, but it looks like I'm doing something wrong, because Spring is not reading those variables, although when checking the env vars on the pod, all of them are set correctly.
The error log from the container:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port...
Any help will be appreciated.
application.properties:
spring.datasource.url=jdbc:postgresql://${DB_URL}:${DB_PORT}/${DB_NAME}
spring.datasource.username=${DB_USER_NAME}
spring.datasource.password=${DB_PASSWORD}
Dockerfile:
FROM openjdk:11-jre-slim-buster
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: .../api
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
        env:
        - name: DB_URL
          value: "posgres"
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: dbName
        - name: DB_USER_NAME
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: dbUserName
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: dbPassword
The Dockerfile was wrong.
Everything works fine after changing the Dockerfile to this:
FROM maven:3.6.3-openjdk-11-slim as builder
WORKDIR /app
COPY pom.xml .
COPY src/ /app/src/
RUN mvn install -DskipTests=true
FROM adoptopenjdk/openjdk11:jre-11.0.8_10-alpine
COPY --from=builder /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
You are missing the env: section in your deployment.yaml; see here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
If you have kubectl installed, you can check how env vars must be declared by running an explain, as follows: kubectl explain deployment.spec.template.spec.containers
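You can also confirm that the variables actually reach the container before blaming Spring, for example (the pod name is a placeholder):
kubectl exec <pod-name> -- printenv DB_URL DB_NAME DB_USER_NAME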

Kubernetes application run error when using env from ConfigMaps

I have an application written in Go which reads environment variables from a config.toml file.
The config.toml file contains the key values as:
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
In my application I am reading all the variables from the .toml file as:
// Represents database and server credentials
type Config struct {
    Server      string
    Database    string
    NRFAddrPort string
}

var NRFAddrPort string

// Read and parse the configuration file
func (c *Config) Read() {
    if _, err := toml.DecodeFile("config.toml", &c); err != nil {
        log.Print("Cannot parse .toml configuration file ")
    }
    NRFAddrPort = c.NRFAddrPort
}
I would like to deploy my application in my Kubernetes cluster (3 VMs: a master and 2 worker nodes). After building a Docker image and pushing it to Docker Hub, when I deploy my application using ConfigMaps to pass the variables, the application runs for a few seconds and then gives an error.
It seems the application cannot read the env variables from the ConfigMap. Below are my ConfigMap and the deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nrf-config
  namespace: default
data:
  config-toml: |
    Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
    Database="nrfdb"
    NRFAddrPort = ":9090"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
      - name: nrf-instance
        image: grego/appapi:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /home/ubuntu/appapi
      volumes:
      - name: config-volume
        configMap:
          name: nrf-config
Also, one thing I do not understand is the mountPath in volumeMounts. Do I need to copy the config.toml to this mountPath?
When I hard-code these variables in my application and deploy the Docker image in Kubernetes, it runs without error.
My problem now is how to pass these environment variables to my application using a Kubernetes ConfigMap, or any other method, so it can run in my Kubernetes cluster instead of hard-coding them in my application. Any help?
Also attached is my Dockerfile content:
# Dockerfile References: https://docs.docker.com/engine/reference/builder/
# Start from the latest golang base image
FROM golang:latest as builder
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the Working Directory inside the container
COPY . .
# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
######## Start a new stage from scratch #######
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .
# Expose port 9090 to the outside world
EXPOSE 9090
# Command to run the executable
CMD ["./main"]
Is there any problem with its content?
I also tried passing the values as env variables, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
      - name: nrf-instance
        image: grego/appapi:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
        env:
        - name: Server
          valueFrom:
            configMapKeyRef:
              name: nrf-config
              key: config-toml
        - name: Database
          valueFrom:
            configMapKeyRef:
              name: nrf-config
              key: config-toml
        - name: NRFAddrPort
          valueFrom:
            configMapKeyRef:
              name: nrf-config
              key: config-toml
You cannot pass those values as separate environment variables as-is, because they are read as one text blob instead of separate key:value pairs. The current ConfigMap looks like this:
Data
====
config.toml:
----
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
To pass them as environment variables you have to modify the ConfigMap so those values are read as key: value pairs:
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-configmap
data:
  Server: mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
  Database: nrfdb
  NRFAddrPort: :9090
This way those values will be separated and can be passed as env variables:
Data
====
Database:
----
nrfdb
NRFAddrPort:
----
:9090
Server:
----
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
When you pass it to the pod:
[...]
spec:
  containers:
  - name: nrf-instance
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 9090
    envFrom:
    - configMapRef:
        name: example-configmap
You can see that it was passed correctly, for example by executing the env command inside the pod:
kubectl exec -it env-6fb4b557d7-zw84w -- env
NRFAddrPort=:9090
Server=mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
Database=nrfdb
The values are read as separate env variables, for example the Server value:
kubectl exec -it env-6fb4b557d7-zw84w -- printenv Server
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
What you currently have will create a file in the mountpoint for each key in your config map. Your code is looking for "config.toml" but the key is "config-toml" so it isn't finding it.
If you want to keep the key as-is, you can control which keys are written where (within the mount) like this:
volumes:
- name: config-volume
  configMap:
    name: nrf-config
    items:
    - key: config-toml
      path: config.toml
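You can then verify the remapped file from inside the pod, e.g. (the pod name is a placeholder; the path matches the mountPath from the question):
kubectl exec <pod-name> -- cat /home/ubuntu/appapi/config.toml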

Debugging uWSGI in kubernetes

I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine; however, after deploying to Kubernetes it seems there is no communication between the two. The end result is that I get a 502 Gateway Error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather what tools I can use to debug this scenario. Is there a test client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
    # Trick to avoid nginx aborting at startup (set server in variable)
    set $upstream_server ${APP_SERVER};
    include uwsgi_params;
    uwsgi_pass $upstream_server;
    uwsgi_read_timeout 300;
    uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: nginx
        image: <custom image url>
        imagePullPolicy: Always
        env:
        - name: APP_SERVER
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: APP_SERVER
        - name: FK_SERVER_NAME
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: SERVER_NAME
        ports:
        - containerPort: 80
        - containerPort: 10443
        - containerPort: 10090
        resources:
          requests:
            cpu: 1m
            memory: 200Mi
        volumeMounts:
        - mountPath: /etc/letsencrypt
          name: my-storage
          subPath: nginx
        - mountPath: /dev/shm
          name: dshm
      restartPolicy: Always
      volumes:
      - name: my-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-nginx
      - name: dshm
        emptyDir:
          medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: "nginx-port-80"
    port: 80
    targetPort: 80
    protocol: TCP
  - name: "nginx-port-443"
    port: 443
    targetPort: 10443
    protocol: TCP
  - name: "nginx-port-10090"
    port: 10090
    targetPort: 10090
    protocol: TCP
  selector:
    service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: my-app
        image: <custom image url>
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        resources:
          requests:
            cpu: 1m
            memory: 100Mi
        volumeMounts:
        - name: merchbot-storage
          mountPath: /app/data
          subPath: my-app
        - name: dshm
          mountPath: /dev/shm
        - name: local-config
          mountPath: /app/secrets/local_config.json
          subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
      - name: merchbot-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-app
      - name: dshm
        emptyDir:
          medium: Memory
      - name: local-config
        secret:
          secretName: my-app-local-config
Here is the kubernetes service.yaml for the Flask app:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
  - name: "my-app-port-5000"
    port: 5000
    targetPort: 5000
  selector:
    service: my-app
Debugging in Kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the Kubernetes world.
A Pod in Kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each other's services on localhost. From there, a Pod to anything else will have a network connection involved (even if the endpoint is node-local). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available, so you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes, which also have access.
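For a throwaway debug pod, something like this works (busybox is just one convenient image; use any image that has the tools you need):
kubectl run debug --rm -ti --image=busybox --restart=Never -- sh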
Where you have Python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
The first step is to make sure the container itself is responding. In this case you are likely to have python/pip available, so you can use uwsgi_curl:
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next, include the Kubernetes networking. Start with IPs and finish with names.
You are less likely to have python here, or even nc, but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am basically running a gRPC server on my localhost. I would like to use this endpoint in a Spring Boot app running on Kubernetes, using kubectl, on my Mac. If I set the following config in application.yml and run it in Kubernetes, it doesn't work. The same config works if I run it in the IDE.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I see some people suggesting port-forward, but that is the other way around (it works when I want to reach a port that is already in Kubernetes from localhost, just like reaching the Tomcat server running in Kubernetes from a browser on localhost).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
      # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
      # This container clones the desired git repo to the EmptyDir volume.
      - name: git-config
        image: alpine/git # Any image with git will do
        args:
        - clone
        - --single-branch
        - --
        - https://github.com/username/fakeconfig
        - /repo # Put it in the volume
        securityContext:
          runAsUser: 1 # Any non-root user will do. Match to the workload.
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /repo
          name: git-config
      containers:
      - name: testspringconfigvol-cont
        image: username/testspring
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /usr/local/lib/config/
          name: git-config
      volumes:
      - name: git-config
        emptyDir: {}
What I need, in simple terms:
I have servers listening on ports on my localhost (localhost:6565, localhost:6566), and I need to access these ports somehow from within Kubernetes. What should I set in the application.yml config? Will it be the same localhost:6565 and localhost:6566, or <how-to-get-this-ip>:6565 and <how-to-get-this-ip>:6566?
We can get the VM host IP when using minikube with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on Mac. If using Kubernetes on Docker for Mac, it's host.docker.internal.
By using these addresses, I managed to connect to the services running on the host machine from Kubernetes.
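So, assuming Kubernetes on Docker for Mac, the application.yml from the question would become something like this (a sketch; only the host part changes):
grpc:
  client:
    local-server:
      address: static://host.docker.internal:6565
      negotiationType: PLAINTEXT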
1) Inside your application.properties, define:
server.port=8000
2) Create a Dockerfile:
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan#gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
3) Make sure the Docker image runs fine locally (the image name is whatever you tagged it as):
docker run --rm -p 8080:8080 <image-name>
4) Forward a local port to the pod, as described in:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Use the following command to find the pod name:
kubectl get pods
then
kubectl port-forward <pod-name> 8080:8080
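With the port-forward running, you can verify the app responds locally:
curl http://localhost:8080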
Useful links:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud

How to properly configure the environment in kubernetes cluster?

I have a Spring Boot application with two profiles, dev and prod, and my Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the entrypoint as a command line argument.
This is the containers section of my kubernetes deployment where I use this image:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  ports:
  - containerPort: 8080
    name: myapp
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
It works, but it has a major flaw: how can I now switch to the production profile without rebuilding the image?
The best solution would be to remove that ENTRYPOINT in my Dockerfile and give this configuration in my Kubernetes yml, so that I could always use the same image... is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's actuator; it would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-Dspring.profiles.active=prod","-cp","app:app/lib/*","com.my.Application"]
  ports:
  - containerPort: 8080
    name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
PROFILE="${APPLICATION_CONTEXT:-dev}"
# Quote the classpath so the shell passes the wildcard to java literally.
exec java "-Dspring.profiles.active=$PROFILE" -cp "app:app/lib/*" com.my.Application
Then, you could simply pass that environment variable into your pod:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  env:
  - name: APPLICATION_CONTEXT
    value: prod
  ports:
  - containerPort: 8080
    name: myapp
Rather than putting spring.profiles.active in the Dockerfile's entrypoint, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as application.properties for your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of your Kubernetes deployment, mount the ConfigMap inside the container so that it acts as application.properties:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
  ports:
  - containerPort: 8080
    name: myapp
  volumeMounts:
  - name: myapp-application-config
    mountPath: "/config"
    readOnly: true
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
volumes:
- name: myapp-application-config
  configMap:
    name: myapp-config
    items:
    - key: application-dev.properties
      path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties that we created in the ConfigMap.
So, making use of ConfigMaps and application.properties, one can override any configuration of the application without rebuilding the image.
If you want to add a new config or update the value of an existing config, just make the appropriate changes in the ConfigMap and kubectl apply it. Then scale your application pod down and up to bring the new config into action.
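Assuming the deployment is named myapp, a one-line alternative to scaling down and up is the standard rollout restart:
kubectl rollout restart deployment/myapp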
Hope this helps.
There are many, many ways to set Spring configuration values. With some rules, you can use ordinary environment variables to specify individual property values (Spring's relaxed binding turns an environment variable like CACHE_REDIS_URL into the property cache.redis.url). You might see if you can use this instead of having a separate Spring profile control.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm, where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting:
containers:
- name: myapp
  env:
  - name: CACHE_REDIS_URL
    value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is using Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
You can then select the desired profile by passing an environment variable in your Kubernetes deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
      - name: container-name
        image: your-image
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "development"
