Spring Boot - read container environment variables in properties file

I use:
Spring Boot
Microservices (containerized)
Docker
Kubernetes
My case is as follows: I have to generate a link, either
https://dev-myapp.com OR https://qa-myapp.com,
depending on the environment in which my service is running (DEV or QA). I have one Spring profile, BUT under this profile my app can run in Kubernetes in two types of environment: DEV or QA. I want to generate the proper link by reading it from my properties file:
@Value("${email.body}")
private String emailBody;
application.yaml:
email:
  body: Click on the following URL: ${ENVIRONMENT_URL:}/edge/invitation?code={0}&email={1}
DevOps (Kubernetes):
Manifest in the workloads folder (DEV branch; the same for the QA branch, but this time with https://qa-myapp.com):
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
...
...
containers:
  env:
    - name: ENVIRONMENT_URL
      value: https://dev-myapp.com
So is it possible to read that value from the Kubernetes container in my Spring properties file? I want the email.body property to depend on the container my service is running in.

Yes, this is possible. I have corrected the syntax of the YAML:
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          command: ["/bin/sh", "-c", "env | grep ENVIRONMENT_URL"]
          env:
            - name: ENVIRONMENT_URL
              value: https://myapp.com # indentation changed
          ports:
            - containerPort: 80
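
For completeness, a minimal sketch of how the resolved property could be consumed on the Java side (the class and method names here are illustrative, not from the question). Spring substitutes ${ENVIRONMENT_URL} from the container's environment when it loads application.yaml, so the bean only ever sees the final URL:

import java.text.MessageFormat;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class InvitationLinkService {

    // Already resolved against the container environment, e.g.
    // "Click on the following URL: https://dev-myapp.com/edge/invitation?code={0}&email={1}"
    @Value("${email.body}")
    private String emailBody;

    // {0} and {1} in the property value are MessageFormat placeholders.
    public String buildInvitationBody(String code, String email) {
        return MessageFormat.format(emailBody, code, email);
    }
}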

Related

.NET Core microservice with HTTPS on AWS EKS

We have launched a .NET microservice in a container and published it on an EKS cluster.
It's working fine over HTTP.
We followed this link to deploy the .NET microservice as a container:
https://dotnet.microsoft.com/en-us/learn/aspnet/microservice-tutorial/docker-file
We used the deploy.yaml below:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mymicroservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymicroservice
  template:
    metadata:
      labels:
        app: mymicroservice
    spec:
      containers:
        - name: mymicroservice
          image: [YOUR DOCKER ID]/mymicroservice:latest
          ports:
            - containerPort: 80
          env:
            - name: ASPNETCORE_URLS
              value: http://*:80
---
apiVersion: v1
kind: Service
metadata:
  name: mymicroservice
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: mymicroservice
This exposed our microservice behind a classic load balancer, and it works fine over HTTP,
but we are facing challenges with HTTPS. How can this be achieved? If we need to use the NGINX Ingress Controller, how can we tune that YAML to fit our deployment.yaml?

How to read a Spring Boot configuration file in a Kubernetes deployment

I'm new to Kubernetes and am having a hard time getting the deployment to read application.properties. I have attached our ConfigMap as a mounted volume under the /config path.
This is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: 34343434.dkr.ecr.asia-2.amazonaws.com/myapp:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: application-properties
              mountPath: /config
      volumes:
        - name: application-properties
          configMap:
            name: application-properties
I created the ConfigMap with kubectl from a file on my local computer:
kubectl create configmap application-properties --from-file=/users/me/application.properties
Now the issue is that the application.properties file I am providing through the ConfigMap is not getting picked up. Can you help me with this?
Based on the discussion, the issue was in the ConfigMap: instead of a structured property file, the contents were rendered as a single string.
kubectl get configmap application-properties -o yaml
shows the contents, but all on one line, separated by \n.
Converting the file to YAML (application.yml) did the trick.

Kubernetes Deployment object throws 405 errors

I am trying to make a Kubernetes test cluster with Minikube on Windows 10. I use my Spring Boot image, which contains Tomcat middleware and Thymeleaf. First I write the Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: app-boot
  labels:
    deploy: boot-app
spec:
  containers:
    - name: boot-app
      image: app:latest # This image is generated by the local Docker machine and works successfully. It contains Tomcat and Thymeleaf.
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
      args: ["-t", "-i"]
---
apiVersion: v1
kind: Service
metadata:
  name: app-boot-svc
spec:
  selector:
    deploy: boot-app
  ports:
    - port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: boot.aaa.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-boot-svc
                port:
                  number: 8080
The above Kubernetes manifests work successfully without errors. Then I change the Pod object to a Deployment, like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: boot-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      deploy: boot-app
  template:
    metadata:
      labels:
        deploy: boot-app
    spec:
      containers:
        - name: boot-app
          image: app:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          args: ["-t", "-i"]
But the Ingress hostname throws these errors:
[nio-8080-exec-4] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.HttpRequestMethodNotSupportedException: Request method 'POST' not supported]
The Spring Boot code that triggers this issue contains only an HTTP GET method:
@GetMapping("/list")
public void list(@ModelAttribute("pageVO") PageVO vo, Model model) {
    ....
    ....
However, when I use my Spring Boot Pod object, the Ingress host throws no errors. When using the Deployment object, the web browser shows the following error:
There was an unexpected error (type=Method Not Allowed, status=405)
Is there an option to configure the default web method of a Kubernetes Pod, Service or Ingress? If there is, I want to know how to set the default web method of the Ingress host.
Update
I set the replica count of the Deployment object to 1, and then no errors are thrown by Minikube. Below is the Service code:
apiVersion: v1
kind: Service
metadata:
  name: app-boot-svc
spec:
  selector:
    deploy: boot-app
  ports:
    - port: 8080
I am afraid my Service object code might contain some errors. Any idea?

How to properly configure the environment in a Kubernetes cluster?

I have a Spring Boot application with two profiles, dev and prod. My Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the entrypoint as a command line argument.
This is the containers section of my Kubernetes deployment where I use this image:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    ports:
      - containerPort: 8080
        name: myapp
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
It works, but it has a major flaw: how can I now switch to the production environment without rebuilding the image?
The best option would be to remove that ENTRYPOINT from my Dockerfile and provide this configuration in my Kubernetes YAML, so that I could always use the same image... is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's Actuator. It would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java", "-Dspring.profiles.active=prod", "-cp", "app:app/lib/*", "com.my.Application"]
    ports:
      - containerPort: 8080
        name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
PROFILE="${APPLICATION_CONTEXT:-dev}"
exec java "-Dspring.profiles.active=$PROFILE" -cp app:app/lib/* com.my.Application
Then, you could simply pass that environment variable into your pod:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    env:
      - name: APPLICATION_CONTEXT
        value: prod
    ports:
      - containerPort: 8080
        name: myapp
Rather than putting spring.profiles.active into the Dockerfile's entrypoint, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as application.properties for your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of your Kubernetes deployment, mount the ConfigMap inside the container so it acts as application.properties:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java", "-cp", "app:app/lib/*", "com.my.Application", "--spring.config.additional-location=/config/application-dev.properties"]
    ports:
      - containerPort: 8080
        name: myapp
    volumeMounts:
      - name: myapp-application-config
        mountPath: "/config"
        readOnly: true
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
volumes:
  - name: myapp-application-config
    configMap:
      name: myapp-config
      items:
        - key: application-dev.properties
          path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties file we created in the ConfigMap.
By making use of ConfigMaps and application.properties, you can override any configuration of your application without rebuilding the image.
If you want to add a new config or update the value of an existing one, just make the appropriate changes in the ConfigMap and kubectl apply it. Then scale your application pod down and up to bring the new config into action.
Hope this helps.
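
As a quick sanity check, a small runner like the sketch below (the class name is illustrative, not from the answer) can log which profiles Spring actually activated, confirming that the ConfigMap-supplied spring.profiles.active value was picked up:

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

// Logs the resolved profiles once at startup.
@Component
public class ActiveProfileLogger implements ApplicationRunner {

    private final Environment environment;

    public ActiveProfileLogger(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void run(ApplicationArguments args) {
        System.out.println("Active profiles: "
                + String.join(", ", environment.getActiveProfiles()));
    }
}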
There are many, many ways to set Spring configuration values. Thanks to Spring Boot's relaxed binding rules, you can use ordinary environment variables to specify individual property values. You might see if you can use this instead of having a separate Spring profile control.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting
containers:
  - name: myapp
    env:
      - name: CACHE_REDIS_URL
        value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is using Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
You can then select the desired profile by passing an environment variable in your Kubernetes deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
        - name: container-name
          image: your-image
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "development"

How to expose deployment as a service in 2.1-ee?

I created a Service and used NodePort etc., but couldn't access the service.
I created a web-service.yaml file with the following content and used kubectl to create the Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: webserver
and the webserver.yaml file with the following Deployment details:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
In your Deployment the label is run=webserver, but in your Service the label is app=webserver. The Service uses app=webserver as a selector, through which it selects pods that have the label app set to webserver. In this case none of the pods has the app label, so the Deployment is not successfully exposed as a service. The label names and values in the Deployment and the Service selector must match.
