I have a Spring Boot application with two profiles, dev and prod. My Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the entrypoint as shown above.
This is the containers section of my Kubernetes deployment where I use this image:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    ports:
      - containerPort: 8080
        name: myapp
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
It works but has a major flaw: how can I now switch to the production environment without rebuilding the image?
The best option would be to remove that ENTRYPOINT from my Dockerfile and provide this configuration in my Kubernetes YAML, so that I could always use the same image... Is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's actuator; it would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java","-Dspring.profiles.active=prod","-cp","app:app/lib/*","com.my.Application"]
    ports:
      - containerPort: 8080
        name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
# Fall back to the "dev" profile when APPLICATION_CONTEXT is not set
PROFILE="${APPLICATION_CONTEXT:-dev}"
# Quote the classpath so the shell does not glob it; java expands the * itself
exec java "-Dspring.profiles.active=$PROFILE" -cp 'app:app/lib/*' com.my.Application
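For this to work, the script has to be baked into the image and used as the entrypoint. A minimal sketch, assuming the script is saved as entrypoint.sh next to the Dockerfile (the file name is an assumption):

# Hypothetical wiring for the entrypoint script above
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]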
Then, you could simply pass that environment variable into your pod:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    env:
      - name: APPLICATION_CONTEXT
        value: prod
    ports:
      - containerPort: 8080
        name: myapp
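As an aside (standard Spring Boot behavior, not part of the original answer): Spring Boot also honors the SPRING_PROFILES_ACTIVE environment variable out of the box, so as long as the image's entrypoint doesn't hard-code -Dspring.profiles.active, you can select the profile without any wrapper script:

env:
  - name: SPRING_PROFILES_ACTIVE
    value: prod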
Rather than putting spring.profiles.active in the Dockerfile's ENTRYPOINT, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as application.properties for your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of your Kubernetes deployment, mount the ConfigMap inside the container so that it acts as application.properties:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    imagePullPolicy: "Always"
    command: ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
    ports:
      - containerPort: 8080
        name: myapp
    volumeMounts:
      - name: myapp-application-config
        mountPath: "/config"
        readOnly: true
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
      timeoutSeconds: 3
      periodSeconds: 20
      failureThreshold: 3
volumes:
  - name: myapp-application-config
    configMap:
      name: myapp-config
      items:
        - key: application-dev.properties
          path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties that we created in the ConfigMap.
So, making use of ConfigMaps and application.properties, one can override any configuration of the application without rebuilding the image.
If you want to add a new config or update the value of an existing one, just make the appropriate changes in the ConfigMap and kubectl apply it. Then restart your application pods (for example, scale down and back up) to bring the new config into action.
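For example (a sketch; the manifest and deployment names here are assumptions):

# Apply the updated ConfigMap, then restart the pods so they mount the new content
kubectl apply -f myapp-config.yaml
kubectl rollout restart deployment/myapp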
Hope this helps.
There are many ways to set Spring configuration values. Thanks to Spring Boot's relaxed binding rules, you can use ordinary environment variables to specify individual property values. You might see if you can use this instead of a separate Spring profile per environment.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting an environment variable whose name is the property path uppercased, with dots replaced by underscores:
containers:
  - name: myapp
    env:
      - name: CACHE_REDIS_URL
        value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is using Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
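Note (an assumption based on the linked documentation, not shown in the original answer): for the ConfigMap PropertySource to be loaded, the app needs the spring-cloud-starter-kubernetes-config dependency on its classpath, and by default the ConfigMap is looked up by the application's name, e.g. in bootstrap.yml:

spring:
  application:
    name: demo          # matches the ConfigMap's metadata.name above
  cloud:
    kubernetes:
      config:
        namespace: default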
You can then select the desired profile by passing an environment variable in your Kubernetes deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
        - name: container-name
          image: your-image
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "development"
I'm new to Kubernetes and having a hard time getting my deployment to read application.properties. I have attached our ConfigMap as a mounted volume under the /config path.
This is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: 34343434.dkr.ecr.asia-2.amazonaws.com/myapp:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: application-properties
              mountPath: /config
      volumes:
        - name: application-properties
          configMap:
            name: application-properties
I created the ConfigMap using a kubectl command, from a file located on my local computer:
kubectl create configmap application-properties --from-file=/users/me/application.properties
Now the issue is that the application.properties file which I am providing via the ConfigMap is not getting picked up. Can you help me with this?
Based on the discussion, the issue was the ConfigMap: instead of a property file, its content was rendered as a single string.
kubectl get configmap application-properties -o yaml
shows the contents, but all in one-line format, separated by \n.
Converting the file to YAML (application.yml) did the trick.
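For illustration (a sketch, not the asker's actual data): once the value is a proper block scalar, the ConfigMap renders like this, and Spring can consume it after it is mounted as a file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: application-properties
data:
  application.yml: |
    server:
      port: 8080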
I'm trying to set some environment variables in a k8s deployment and use them within the application.properties of my Spring Boot application, but it looks like I'm doing something wrong, because Spring is not reading those variables, although when checking the env vars on the pod, all of them are set correctly.
The error log from the container:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port...
Any help will be appreciated.
application.properties:
spring.datasource.url=jdbc:postgresql://${DB_URL}:${DB_PORT}/${DB_NAME}
spring.datasource.username=${DB_USER_NAME}
spring.datasource.password=${DB_PASSWORD}
Dockerfile:
FROM openjdk:11-jre-slim-buster
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: .../api
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
          env:
            - name: DB_URL
              value: "posgres"
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: dbName
            - name: DB_USER_NAME
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: dbUserName
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: dbPassword
The Dockerfile was wrong.
Everything works fine after changing it to this:
FROM maven:3.6.3-openjdk-11-slim as builder
WORKDIR /app
COPY pom.xml .
COPY src/ /app/src/
RUN mvn install -DskipTests=true
FROM adoptopenjdk/openjdk11:jre-11.0.8_10-alpine
COPY --from=builder /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
You're missing the env: section in your deployment.yaml; see here: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
If you have kubectl installed, you can check how env vars must be declared by running an explain, as follows: kubectl explain deployment.spec.template.spec.containers
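For example (usage sketches; substitute your own pod name):

# Drill into the documentation for the env field specifically
kubectl explain deployment.spec.template.spec.containers.env

# Verify inside a running pod that the variables actually arrive
kubectl exec -it <pod-name> -- printenv DB_URL DB_NAME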
I use:
Spring Boot
Microservices (containerized)
Docker
Kubernetes
My case is as follows:
I have to generate a link:
https://dev-myapp.com OR https://qa-myapp.com
depending on the environment in which my service is running (DEV, QA)
I have one Spring profile, BUT under this profile my app can run in Kubernetes on two types of environment: DEV or QA. I want to generate the proper link by reading it from my properties file:
@Value("${email.body}")
private String emailBody;
application.yaml:
email:
  body: 'Click on the following URL: ${ENVIRONMENT_URL:}/edge/invitation?code={0}&email={1}'
DevOps (Kubernetes):
Manifest in the workloads folder (DEV branch; the same for the QA branch, but this time with https://qa-myapp.com):
apiVersion: v1
kind: Service
...
---
apiVersion: apps/v1
kind: Deployment
...
containers:
env:
  - name: ENVIRONMENT_URL
    value: https://dev-myapp.com
So, is it possible to read that value from the Kubernetes container in my Spring properties file? I want to get the email.body property depending on the container my service is running on.
Yes, this is possible; I have corrected the syntax of the YAML (the command here only prints the variable, to demonstrate that it is set):
apiVersion: v1
kind: Service
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          command: ["/bin/sh", "-c", "env | grep ENVIRONMENT_URL"]
          env:
            - name: ENVIRONMENT_URL
              value: https://myapp.com   # indentation corrected
          ports:
            - containerPort: 80
I have an application written in Go which reads configuration values from a config.toml file.
The config.toml file contains key values such as:
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
In my application I am reading all the variables from the .toml file as follows:
// Config represents database and server credentials
// (assumes "log" and github.com/BurntSushi/toml are imported).
type Config struct {
	Server      string
	Database    string
	NRFAddrPort string
}

var NRFAddrPort string

// Read parses the configuration file into c.
func (c *Config) Read() {
	// c is already a *Config, so pass it directly rather than &c
	if _, err := toml.DecodeFile("config.toml", c); err != nil {
		log.Print("Cannot parse .toml configuration file ")
	}
	NRFAddrPort = c.NRFAddrPort
}
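Presumably this is wired up roughly like the following (a hypothetical sketch; the question does not show main):

func main() {
	var c Config
	c.Read() // reads config.toml from the process's working directory
	log.Printf("server=%s database=%s addr=%s", c.Server, c.Database, c.NRFAddrPort)
}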
I would like to deploy my application in my Kubernetes cluster (3 VMs: a master and 2 worker nodes). After building a Docker image and pushing it to Docker Hub, when I deploy my application using a ConfigMap to supply the variables, it runs for a few seconds and then gives an error.
It seems the application cannot read the values from the ConfigMap. Below are my ConfigMap and the deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nrf-config
  namespace: default
data:
  config-toml: |
    Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
    Database="nrfdb"
    NRFAddrPort = ":9090"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
        - name: nrf-instance
          image: grego/appapi:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: config-volume
              mountPath: /home/ubuntu/appapi
      volumes:
        - name: config-volume
          configMap:
            name: nrf-config
Also, one thing I do not understand is the mountPath in volumeMounts. Do I need to copy config.toml to this mountPath?
When I hard-code these values in my application and deploy the Docker image in Kubernetes, it runs without error.
My problem now is how to pass these configuration values to my application using a Kubernetes ConfigMap (or any other method), so it can run in my Kubernetes cluster instead of having them hard-coded. Any help is appreciated.
Also attached is my Dockerfile content
# Dockerfile References: https://docs.docker.com/engine/reference/builder/
# Start from the latest golang base image
FROM golang:latest as builder
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the Working Directory inside the container
COPY . .
# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
######## Start a new stage from scratch #######
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .
# Expose port 9090 to the outside world
EXPOSE 9090
# Command to run the executable
CMD ["./main"]
Is there any problem with its content?
I also tried passing the values as environment variables, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
        - name: nrf-instance
          image: grego/appapi:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
          env:
            - name: Server
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
            - name: Database
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
            - name: NRFAddrPort
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
You cannot pass those values as separate environment variables as-is, because they are read as one text blob instead of separate key/value pairs. The current ConfigMap looks like this:
Data
====
config.toml:
----
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
To pass them as environment variables, you have to modify the ConfigMap so those values are separate key: value pairs:
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-configmap
data:
  Server: mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
  Database: nrfdb
  NRFAddrPort: ":9090"
This way those values will be separated and can be passed as env variables:
Data
====
Database:
----
nrfdb
NRFAddrPort:
----
:9090
Server:
----
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
When you pass it to the pod:
[...]
spec:
  containers:
    - name: nrf-instance
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 9090
      envFrom:
        - configMapRef:
            name: example-configmap
You can see that it was passed correctly, for example by executing the env command inside the pod:
kubectl exec -it env-6fb4b557d7-zw84w -- env
NRFAddrPort=:9090
Server=mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
Database=nrfdb
The values are read as separate env variables, for example the Server value:
kubectl exec -it env-6fb4b557d7-zw84w -- printenv Server
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
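Note that the Go code in the question reads config.toml rather than environment variables, so the application side would also need a change along these lines (a hypothetical sketch, assuming "os" is imported):

// ReadFromEnv is a hypothetical replacement for the toml-based Read().
func (c *Config) ReadFromEnv() {
	c.Server = os.Getenv("Server")
	c.Database = os.Getenv("Database")
	c.NRFAddrPort = os.Getenv("NRFAddrPort")
	NRFAddrPort = c.NRFAddrPort
}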
What you currently have will create a file in the mount point for each key in your ConfigMap. Your code is looking for "config.toml", but the key is "config-toml", so it isn't finding it.
If you want to keep the key as-is, you can control which keys are written where (within the mount) like this:
volumes:
  - name: config-volume
    configMap:
      name: nrf-config
      items:
        - key: config-toml
          path: config.toml
I'm getting started with Kubernetes and Docker and am facing an issue.
I deployed a Spring Boot app on minikube after converting it to a Docker image (using minikube's Docker daemon). The app is online and receiving requests, as you can see in the screenshots below, but it doesn't reply as expected.
For example, when I run the app normally (on my computer, as usual) everything works well and I can reach all the HTML pages, but once deployed inside minikube it doesn't reply correctly (the only working part is serving Spring's favicon).
YAMLs used to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esse-deployment-1
  labels:
    app: esse
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esse-1
  template:
    metadata:
      labels:
        app: esse-1
    spec:
      containers:
        - image: mysql:5.7
          name: esse-datasource
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: esse_password
        - image: esse_application
          name: esse-app-1
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
      volumes:
        - name: esse-1-mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-persistent-storage-claim
---
apiVersion: v1
kind: Service
metadata:
  name: esse-service-1
spec:
  selector:
    app: esse-1
  ports:
    - protocol: TCP
      port: 8080
  type: NodePort
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-persistent-storage
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/docker/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-persistent-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
Dockerfile used to build the image:
FROM openjdk:8
ADD ESSE_Application.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
I can see you have .yml files defining the Deployment and the Service, but I see no Ingress. A .yml file of kind: Ingress is needed in order to tell Kubernetes that there should be an external URL pointing to your service; a minimal Ingress resource example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80
**Please don't take this code literally; I'm just trying to draft some example code here, and it may not perfectly match your naming/data :)
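As an aside (an assumption about the minikube setup, not from the original answer): an Ingress only works in minikube once the ingress controller addon is enabled, and since the Service above is of type NodePort, it can also be reached directly:

# Enable the ingress controller in minikube (required for Ingress resources)
minikube addons enable ingress

# Or open the NodePort service directly, without an Ingress
minikube service esse-service-1 --url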
Finally solved the problem.
Everything was working fine when launching the app from the Eclipse IDE, but when packaging a .jar file, the .jsp files under /webapp/WEB-INF/jsps were not included in the .jar. Even including them through the <resources> tag in pom.xml didn't solve the problem, since jar packaging isn't suitable for .jsp files.
I fixed the problem by adding <packaging>war</packaging> to pom.xml to change the packaging method, as .jsp files are comfortable within a .war file.
Thanks to @Marc for the help.