My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imagename
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
      - name: imagename
        image: imagename:1.1
        imagePullPolicy: Never
        env:
        - name: MYSQL_USER
          value: root
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: imagename
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001
I have built the Docker image using the command below:
docker build -t dockerimage:1.1 .
and I run the Docker image like this:
docker run -p 8080:8080 --network=host dockerimage:1.1
When I deploy this image in the Kubernetes environment, I get this error:
ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
I have also done port forwarding:
Forwarding from 127.0.0.1:13306 -> 3306
Any suggestions on what is wrong with the above configuration?
You need to expose your database through its own Service so the application can reach it by DNS name. A headless ClusterIP Service (clusterIP: None) like the one below resolves the name mysql-service directly to the database pod:
MySQL Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
    tier: mysql
  clusterIP: None
MySQL PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: my-db-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
MySQL Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql   # must match the Service selector above
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Now, for your Spring application to access the database, you need the following:
Spring Boot Deployment:
apiVersion: apps/v1 # API version
kind: Deployment # Type of Kubernetes resource
metadata:
  name: order-app-server # Name of the Kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  replicas: 1 # No. of replicas/pods to run in this deployment
  selector:
    matchLabels: # The deployment applies to any pods matching the specified labels
      app: order-app-server
  template: # Template for creating the pods in this deployment
    metadata:
      labels: # Labels that will be applied to each Pod in this deployment
        app: order-app-server
    spec: # Spec for the containers that will be run in the Pods
      imagePullSecrets:
      - name: testXxxxxsecret
      containers:
      - name: order-app-server
        image: XXXXXX/order:latest
        ports:
        - containerPort: 8080 # The port that the container exposes
        env: # Environment variables supplied to the Pod
        - name: MYSQL_ROOT_USERNAME # Name of the environment variable
          valueFrom: # Get the value of the environment variable from Kubernetes secrets
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_USERNAME
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_ROOT_URL
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_URL
Create your Secret (all values must be base64-encoded, e.g. echo -n 'value' | base64):
apiVersion: v1
kind: Secret
data:
  MYSQL_ROOT_USERNAME: <BASE64-ENCODED-DB-USERNAME>
  MYSQL_ROOT_PASSWORD: <BASE64-ENCODED-DB-PASSWORD>
  MYSQL_ROOT_URL: <BASE64-ENCODED-DB-URL>
metadata:
  name: mysql-secret
Spring Boot Service:
apiVersion: v1 # API version
kind: Service # Type of the Kubernetes resource
metadata:
  name: order-app-server-service # Name of the Kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  type: LoadBalancer # Exposes the Service externally through the cloud provider's load balancer
  selector:
    app: order-app-server # The service exposes Pods with the label `app=order-app-server`
  ports: # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
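To have the Spring Boot application actually consume these variables, the datasource settings can reference them from the application configuration. A minimal application.yml sketch, assuming MYSQL_ROOT_URL decodes to a full JDBC URL such as jdbc:mysql://mysql-service:3306/your-db (the property keys are standard Spring Boot; the exact URL is whatever you encoded into the secret):
spring:
  datasource:
    url: ${MYSQL_ROOT_URL}           # injected from mysql-secret via the Deployment's env section
    username: ${MYSQL_ROOT_USERNAME}
    password: ${MYSQL_ROOT_PASSWORD}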
I have been trying to create a StatefulSet of a service registry (Eureka server) in a Spring Boot application. I am doing this because I want to attach a predefined name to the service-registry pod, so that it can communicate with all the Eureka clients even after it restarts. Even though I have been able to create the services (headless and NodePort) with this configuration, it doesn't create the pod/deployment or the PersistentVolumeClaim itself. Please check the deployment YAML below and suggest changes.
# Define a 'Persistent Volume Claim'(PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: rtb
  name: service-registry-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: eureka
spec:
  accessModes:
    - ReadWriteOnce #This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi #This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: rtb
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: rtb
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
        volumeMounts: # Mounting volume obtained from Persistent Volume Claim
        - name: service-registry-persistent-storage
          mountPath: /var/lib/eureka #This is the path in the container on which the mounting will take place.
      volumes:
      - name: service-registry-persistent-storage # Obtaining 'volume' from PVC
        persistentVolumeClaim:
          claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761
Below is the application.yml file:
server:
  port: 8761
eureka:
  instance:
    hostname: "${HOSTNAME}.eureka"
  client:
    register-with-eureka: false
    fetch-registry: false
    serviceUrl:
      defaultZone: ${EUREKA_SERVER_ADDRESS}
This is how the Eureka client apps refer to the Eureka server:
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0
I am new to Kubernetes, so please suggest the changes.
Configuration after adding the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: rtb
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /run/desktop/mnt/host/c/Users/User/Documents/kubernetesbkp
---
# Define a 'Persistent Volume Claim'(PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: rtb
  name: service-registry-pv-claim # name of PVC essential for identifying the storage data
  labels:
    app: eureka
spec:
  accessModes:
    - ReadWriteOnce #This specifies the mode of the claim that we are trying to create.
  resources:
    requests:
      storage: 1Gi #This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: rtb
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: rtb
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
        volumeMounts: # Mounting volume obtained from Persistent Volume Claim
        - name: service-registry-persistent-storage
          mountPath: /var/lib/eureka #This is the path in the container on which the mounting will take place.
      volumes:
      - name: service-registry-persistent-storage # Obtaining 'volume' from PVC
        persistentVolumeClaim:
          claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761
If you are running Kubernetes locally, you first have to create a PersistentVolume. Only then can the PersistentVolumeClaim retrieve storage from the PV you created; otherwise your PVC will stay in a Pending state, because without a PV the claim does not know where to get its volume from.
So try creating the PersistentVolume like this:
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/ # Path where you want to allocate your PV locally
Now you can create the PVC and StatefulSet as per your requirements.
NOTE: Make sure the PV storage is always greater than or equal to the claim.
If you are using Docker Desktop's Kubernetes, the hostPath will be different from the one mentioned above; refer to this SO answer for it.
For more detailed information, refer to these links: link1 link2
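One more thing worth checking when a claim stays Pending: the PV and PVC must agree on access modes and, if set, storageClassName, and the PV's capacity must cover the request. A minimal matching pair, as a sketch (the manual class name is only an illustrative choice):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual   # must match the claim below
  capacity:
    storage: 2Gi             # must be >= the claim's request
  accessModes:
    - ReadWriteOnce          # must include the claim's access mode
  hostPath:
    path: /tmp/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: service-registry-pv-claim
spec:
  storageClassName: manual   # binds only to PVs of the same class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi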
This is the deployment configuration that worked for me, with one caveat: I applied the configurations one by one, because it doesn't work with a single apply and the StatefulSet isn't created in that case. I am presenting it as a single configuration file here for convenience.
It would be helpful if someone could point out why it doesn't work with a single apply command. Thanks!
apiVersion: v1
kind: ConfigMap
metadata:
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761
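Note that this working configuration gives up the persistent storage from the original attempt. If the registry later needs storage that survives restarts, the idiomatic StatefulSet approach is a volumeClaimTemplates section, which provisions one PVC per replica. A sketch, assuming the same image and a 1Gi request (the data volume name is illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        volumeMounts:
        - name: data
          mountPath: /var/lib/eureka
  volumeClaimTemplates:   # creates one PVC per replica, e.g. data-eureka-0
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi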
I have deployed an Elastic APM server into Kubernetes and was trying to expose it through the NGINX ingress controller. The following is my configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: apm-server-config
  labels:
    k8s-app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
    setup.kibana:
      enabled: "true"
      host: "kibana:5601"
    output.elasticsearch:
      hosts: ["elastic:9200"]
---
#Deployment Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: apm-server
    env: msprod
    state: common
  name: apm-server
  namespace: elastic
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: apm-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - image: docker.elastic.co/apm/apm-server:7.12.1
        imagePullPolicy: Always
        env:
        - name: output.elasticsearch.hosts
          value: "http://elastic:9200"
        name: apm-server
        ports:
        - name: liveness-port
          containerPort: 8200
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
        resources:
          limits:
            cpu: 250m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 250Mi
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config
      nodeSelector:
        env: prod
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
#Service Configuration
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apm-server
  name: apm-server
  namespace: elastic
spec:
  ports:
  - port: 8200
    targetPort: 8200
    name: http
    nodePort: 31000
  selector:
    app: apm-server
  sessionAffinity: None
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm
        backend:
          serviceName: apm-server
          servicePort: 8200
The pod is running and I am able to hit the APM server using kubectl port-forward.
But when I access the APM server at https://my.domain.com/apm, I get a "page not found" error in the browser and the following error in the APM pod:
{"log.level":"error","#timestamp":"2021-10-21T06:22:00.198Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":60},"message":"404 page not found","url.original":"/apm","http.request.method":"GET","user_agent.original":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36","source.address":"10.148.7.7","http.request.body.bytes":0,"http.request.id":"9294124a-5356-4b2c-ba8e-c0a589b23571","event.duration":110881,"http.response.status_code":404,"error.message":"404 page not found","ecs.version":"1.6.0"}
The error occurs because there is no context path configured in APM. I have gone through the APM documentation and couldn't find a way to configure a context path for the APM server. Please help.
Posting this as an answer, out of the comments.
The initial ingress rule passes the same path /apm to the APM service, which is confirmed by the error in the APM pod's logs: "message":"404 page not found","url.original":"/apm"
To fix it, the NGINX ingress has a rewrite annotation. The way it works is described in the link, with an example.
The final ingress.yaml should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # adding captured group
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm(/|$)(.*) # so that the captured group works correctly
        backend:
          serviceName: apm-server
          servicePort: 8200
What happens here is that requests sent to my.domain.com/apm go to the service on the / path.
The captured group preserves correct subpaths: for instance, if a request goes to my.domain.com/apm/something, the ingress will translate it to /something, which is what gets passed to the service.
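To make the translation concrete, these are the mappings produced by path: /apm(/|$)(.*) together with rewrite-target: /$2:
my.domain.com/apm -> /
my.domain.com/apm/ -> /
my.domain.com/apm/something -> /something
my.domain.com/apm/a/b -> /a/b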
I am currently working on a Kubernetes deployment. My application is running in a Kubernetes cluster while my DB is running on a different VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dcalln
spec:
  selector:
    matchLabels:
      app: dcalln
  replicas: 1
  template:
    metadata:
      labels:
        app: dcalln
    spec:
      containers:
      - name: dcalln
        image: "xxx.io/registry:1.0.88-ad3c142-2108190744"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dcalln
  name: dcalln
  namespace: testnamespace
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 1512
  externalIPs:
  - XXX.XXX.XXX.XXX
XXX.XXX.XXX.XXX is my Oracle DB server. It's not part of the Kubernetes cluster, but I see the DB connection is not happening. Is there anything I am missing? How do I change my deployment specification to correctly point to the DB?
I'm new to the Kubernetes and Docker world :)
I'm trying to deploy our application with Docker on Kubernetes, but I can't connect to the external MySQL database.
My steps:
1. Install Kubernetes with kubeadm on our new server.
2. Create a Docker image from our application with mvn spring-boot:build-image.
3. Create a deployment and service YAML to use the image.
Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - image: demo/demo-app:0.1.05-SNAPSHOT
        imagePullPolicy: IfNotPresent
        name: demo-app-service
        env:
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://mysqldatabase/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false
        ports:
        - containerPort: 4000
        volumeMounts:
        - name: uploads
          mountPath: /uploads
        - name: ssl-dir
          mountPath: /ssl
      volumes:
      - name: ssl-dir
        hostPath:
          path: /var/www/dev.hu/backend/ssl
      - name: uploads
        hostPath:
          path: /var/www/dev.hu/backend/uploads
      restartPolicy: Always
Service YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  ports:
  - port: 4000
    name: spring
    protocol: TCP
    targetPort: 4000
  selector:
    app: demo-app
  sessionAffinity: None
  type: LoadBalancer
4. Create an Endpoints object and Service YAML to communicate with the outside world:
kind: Endpoints
apiVersion: v1
metadata:
  name: mysqldatabase
subsets:
  - addresses:
      - ip: 10.10.0.42
    ports:
      - port: 3306
---
kind: Service
apiVersion: v1
metadata:
  name: mysqldatabase
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
But it's not working; when I look at the logs I see Spring can't connect to the database.
Caused by: java.net.UnknownHostException: mysqldatabase
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
Thanks for any help.
Hold on, you don't normally create Endpoints yourself: Endpoints are registered by Kubernetes when a Service has matching pods. Right now, you have deployed your application and exposed it via a Service.
If you want to connect to your MySQL database via a Service, it needs to be deployed on Kubernetes as well. If it is not hosted on Kubernetes, you will need the hostname or IP address of the database and must adapt your SPRING_DATASOURCE_URL accordingly!
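A minimal sketch of that adjustment in the Deployment's env section, reusing the database address and name from the question (adjust the parameters to your setup):
env:
- name: SPRING_DATASOURCE_URL
  # points straight at the external database VM instead of an in-cluster service name
  value: jdbc:mysql://10.10.0.42:3306/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false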
I have a Spring Boot app which I want to deploy on Kubernetes (I'm using minikube) with a custom context path taken from the environment variables.
I've compiled an app.war file and exported an environment variable in Linux as follows:
export SERVER_SERVLET_CONTEXT_PATH=/app
And then started my app on my machine as follows:
java -jar app.war --server.servlet.context-path=$(printenv CONTEXT_PATH)
and it works as expected; I can access my app through the browser using the URL localhost:8080/app/.
I want to achieve the same thing on minikube, so I prepared these config files:
Dockerfile:
FROM openjdk:8
ADD app.war app.war
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.war", "--server.servlet.context-path=$(printenv CONTEXT_PATH)"]
deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esse-deployment-1
  labels:
    app: esse-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esse-1
  template:
    metadata:
      labels:
        app: esse-1
    spec:
      containers:
      - image: mysql:5.7
        name: esse-datasource
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
      - image: esse-application
        name: esse-app
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: server.servlet.context-path
          value: /esse-1
      volumes:
      - name: esse-1-mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-persistent-storage-claim
---
apiVersion: v1
kind: Service
metadata:
  name: esse-service-1
  labels:
    app: esse-1
spec:
  selector:
    app: esse-1
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort
However, the Java container inside the pod fails to start, and here's the exception thrown by Spring:
Initialization of bean failed; nested exception is java.lang.IllegalArgumentException: ContextPath must start with '/' and not end with '/'
Make use of ConfigMaps.
The ConfigMap will hold the application.properties of your Spring Boot application.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: esse-config
data:
  application-dev.properties: |
    spring.application.name=my-esse-service
    server.port=8080
    server.servlet.context-path=/esse-1
NOTE: server.servlet.context-path=/esse-1 will override the context path of your Spring Boot application.
Now refer to this ConfigMap in your deployment YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esse-deployment-1
  labels:
    app: esse-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esse-1
  template:
    metadata:
      labels:
        app: esse-1
    spec:
      containers:
      - image: mysql:5.7
        name: esse-datasource
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
      - image: esse-application
        name: esse-app
        imagePullPolicy: Never
        command: [ "java", "-jar", "app.war", "--spring.config.additional-location=/config/application-dev.properties" ]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: esse-application-config
          mountPath: "/config"
          readOnly: true
      volumes:
      - name: esse-application-config
        configMap:
          name: esse-config
          items:
          - key: application-dev.properties
            path: application-dev.properties
      - name: esse-1-mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-persistent-storage-claim
NOTE: Here we are mounting the ConfigMap inside your Spring Boot application container at the /config folder, and --spring.config.additional-location=/config/application-dev.properties points Spring at that config file.
In the future, if you want to add a new config or update the value of an existing one, just make the change in the ConfigMap and kubectl apply it. Then, to pick up the new config, scale the deployment down and up again (or use kubectl rollout restart).
Hope this helps.
Finally, I found a solution.
I configured my application to start up with a context path taken from the environment variables by adding this line to my application.properties:
server.servlet.context-path=${ESSE_APPLICATION_CONTEXT}
The rest remains as it was; I'm supplying the value of the ESSE_APPLICATION_CONTEXT variable through the config:
env:
  - name: ESSE_APPLICATION_CONTEXT
    value: /esse-1
And then I start the application without the --server.servlet.context-path parameter, simply:
java -jar app.war
NOTE: as pointed out by @mchawre's answer, it's also possible to make use of a ConfigMap, as documented in the Kubernetes docs.
Looks like what you want is the SERVER_SERVLET_CONTEXT_PATH variable defined in your container spec. Spring Boot's relaxed binding maps that environment variable directly to the server.servlet.context-path property, so no extra flags are needed:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esse-deployment-1
  labels:
    app: esse-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esse-1
  template:
    metadata:
      labels:
        app: esse-1
    spec:
      containers:
      - image: mysql:5.7
        name: esse-datasource
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
      - image: esse-application
        name: esse-app
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: SERVER_SERVLET_CONTEXT_PATH # <== HERE
          value: /esse-1
      volumes:
      - name: esse-1-mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-persistent-storage-claim
Note that in your Pod spec you are using /esse-1, while in your local setup you have /app.
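For reference, the three forms that appear in this thread all set the same property through Spring Boot's relaxed binding:
SERVER_SERVLET_CONTEXT_PATH=/esse-1 (environment variable, as in the Deployment above)
java -jar app.war --server.servlet.context-path=/esse-1 (command-line argument, as in the question)
server.servlet.context-path=/esse-1 (application.properties entry, as in the earlier answer)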