Kubernetes Spring application in Docker: connecting to an external service - spring-boot

I'm new to the Kubernetes and Docker world :)
I'm trying to deploy our application with Docker on Kubernetes, but I can't connect to the external MySQL database.
My steps:
1. Install Kubernetes with kubeadm on our new server.
2. Create a Docker image of our application with mvn spring-boot:build-image.
3. Create a Deployment and Service YAML to use the image.
Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - image: demo/demo-app:0.1.05-SNAPSHOT
        imagePullPolicy: IfNotPresent
        name: demo-app-service
        env:
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://mysqldatabase/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false
        ports:
        - containerPort: 4000
        volumeMounts:
        - name: uploads
          mountPath: /uploads
        - name: ssl-dir
          mountPath: /ssl
      volumes:
      - name: ssl-dir
        hostPath:
          path: /var/www/dev.hu/backend/ssl
      - name: uploads
        hostPath:
          path: /var/www/dev.hu/backend/uploads
      restartPolicy: Always
Service YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  ports:
  - port: 4000
    name: spring
    protocol: TCP
    targetPort: 4000
  selector:
    app: demo-app
  sessionAffinity: None
  type: LoadBalancer
4. Create an Endpoints and Service YAML to reach the outside database:
kind: Endpoints
apiVersion: v1
metadata:
  name: mysqldatabase
subsets:
- addresses:
  - ip: 10.10.0.42
  ports:
  - port: 3306
---
kind: Service
apiVersion: v1
metadata:
  name: mysqldatabase
spec:
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
But it's not working; when I check the logs I see Spring can't connect to the database:
Caused by: java.net.UnknownHostException: mysqldatabase
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
Thanks for any help!

Hold on. Normally you don't create Endpoints yourself; they are registered by Kubernetes when a Service has matching Pods. Right now you have deployed your application and exposed it via a Service.
If you want to connect to your MySQL database via a Service, it needs to be deployed on Kubernetes as well. If it is not hosted on Kubernetes, you will need the hostname or the IP address of the database and must adapt your SPRING_DATASOURCE_URL accordingly!
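For example, if the database lives outside the cluster at 10.10.0.42 (the address from the question's Endpoints), the container env could point at it directly; a sketch reusing the question's own JDBC parameters:
env:
- name: SPRING_DATASOURCE_URL
  value: jdbc:mysql://10.10.0.42:3306/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false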

Related

How to create the Deployment/StatefulSet of a service registry (Eureka server) in Kubernetes?

I have been trying to create a StatefulSet of a service registry (Eureka server) in a Spring Boot application. The reason I am doing this is that I want to attach a predefined name to the service-registry Pod so that it is able to communicate with all the Eureka clients even after it restarts. Even though I have been able to create the Services (headless and NodePort) with the configuration, it doesn't create the Pod/Deployment or the PersistentVolumeClaim itself. Please check the deployment YAML below and suggest changes.
# Define a 'Persistent Volume Claim' (PVC) for storage, dynamically provisioned by the cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: rtb
  name: service-registry-pv-claim # name of PVC, essential for identifying the storage data
  labels:
    app: eureka
spec:
  accessModes:
    - ReadWriteOnce # the access mode of the claim we are trying to create
  resources:
    requests:
      storage: 1Gi # tells Kubernetes the amount of space we are trying to claim
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: rtb
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: rtb
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
        volumeMounts: # mounting the volume obtained from the Persistent Volume Claim
        - name: service-registry-persistent-storage
          mountPath: /var/lib/eureka # path in the container where the mount takes place
      volumes:
      - name: service-registry-persistent-storage # obtaining the volume from the PVC
        persistentVolumeClaim:
          claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: rtb
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761
and below is the application.yml file
server:
  port: 8761
eureka:
  instance:
    hostname: "${HOSTNAME}.eureka"
  client:
    register-with-eureka: false
    fetch-registry: false
    serviceUrl:
      defaultZone: ${EUREKA_SERVER_ADDRESS}
This is how the Eureka client apps refer to the Eureka server:
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0
I am new to Kubernetes, so please suggest the changes.
Configuration after adding the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: rtb
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath: # note: the key is hostPath, not hostpath
    path: /run/desktop/mnt/host/c/Users/User/Documents/kubernetesbkp
---
(The PersistentVolumeClaim, ConfigMap, headless Service, StatefulSet, and NodePort Service definitions are unchanged from the configuration above.)
If you are using Kubernetes locally, then first you have to create a PersistentVolume. Only then can the PersistentVolumeClaim retrieve storage from the PV you created; otherwise your PVC will stay in a Pending state, because without a PV the PersistentVolumeClaim does not know where to pick up the volume from.
So try creating the PersistentVolume like this:
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath: # the key is hostPath, not hostpath
    path: /tmp/ # path where you want to allocate your PV locally
Now you can create the PVC and StatefulSet as per your requirements.
NOTE: Make sure the PV storage is always greater than or equal to the claim.
If you are using Docker Desktop Kubernetes, the hostPath will be different from the one mentioned above; refer to this SO answer for it.
For more detailed information, refer to these links: link1 link2
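One more thing that often bites here (an assumption on my part, since the PVC above doesn't set it): a manually created PV only binds to a PVC when their storageClassName values match (or both are empty, with no default StorageClass in the cluster). Pinning both to the same class is a safe pattern; a sketch, where "manual" is an arbitrary class name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual # hypothetical class name; must match the PVC below
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: service-registry-pv-claim
spec:
  storageClassName: manual # same class, so this claim binds to my-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi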
This is the deployment configuration which worked for me, with one caveat: I applied the configurations one by one; it doesn't work on a single apply, and the StatefulSet isn't created then. I am presenting it as a single configuration file here for convenience.
It would be helpful if someone could point out why it doesn't work with a single apply command. Thanks!
apiVersion: v1
kind: ConfigMap
metadata:
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: my-image
        imagePullPolicy: Always
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: eureka-cm
              key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761

Kubernetes pointing to Oracle DB in a separate VM

I am currently working on a Kubernetes deployment. My application is running in a Kubernetes cluster while my DB is running in a different VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dcalln
spec:
  selector:
    matchLabels:
      app: dcalln
  replicas: 1
  template:
    metadata:
      labels:
        app: dcalln
    spec:
      containers:
      - name: dcalln
        image: "xxx.io/registry:1.0.88-ad3c142-2108190744"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dcalln
  name: dcalln
  namespace: testnamespace
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 1512
  externalIPs:
  - XXX.XXX.XXX.XXX
XXX.XXX.XXX.XXX is my Oracle DB server. It's not part of the Kubernetes cluster, but I see the DB connection is not happening. Is there anything I am missing? How do I change my deployment specification to correctly point to the DB?
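(Note: externalIPs tells kube-proxy to accept traffic arriving for that IP on the Service port; it does not make the Service forward traffic out to an external host. The usual pattern for reaching a database outside the cluster is a selector-less Service plus a manually created Endpoints object, as in the first question above. A sketch, assuming the Oracle listener is on the default port 1521 and with hypothetical names:)
kind: Service
apiVersion: v1
metadata:
  name: oracle-db # hypothetical name; this becomes the hostname the app uses
  namespace: testnamespace
spec:
  ports:
  - port: 1521
    targetPort: 1521
---
kind: Endpoints
apiVersion: v1
metadata:
  name: oracle-db # must match the Service name
  namespace: testnamespace
subsets:
- addresses:
  - ip: XXX.XXX.XXX.XXX # the Oracle VM's address
  ports:
  - port: 1521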

How to run a Spring Boot MySQL application Docker image on Kubernetes?

My Dockerfile looks like:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my YAML file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imagename
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
      - name: imagename
        image: imagename:1.1
        imagePullPolicy: Never
        env:
        - name: MYSQL_USER
          value: root
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: imagename
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
I have built the Docker image using the command below:
docker build -t dockerimage:1.1 .
and I run the Docker image like this:
docker run -p 8080:8080 --network=host dockerimage:1.1
When I deploy this image in the Kubernetes environment I get this error:
ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
Also I have done port forwarding:
Forwarding from 127.0.0.1:13306 -> 3306
Any suggestions on what is wrong with the above configuration?
You need to add a Service in front of your database, like this (here a headless ClusterIP Service):
MySQL Service:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
    tier: mysql
  clusterIP: None
MySQL PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: my-db-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql # must match the mysql-service selector above
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql # was mysql-deployment, which the Service selector would not match
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Now, on the Spring application side, what you need in order to access the database is:
Spring Boot Deployment:
apiVersion: apps/v1 # API version
kind: Deployment # Type of Kubernetes resource
metadata:
  name: order-app-server # Name of the Kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  replicas: 1 # Number of replicas/pods to run in this deployment
  selector:
    matchLabels: # The deployment applies to any pods matching the specified labels
      app: order-app-server
  template: # Template for creating the pods in this deployment
    metadata:
      labels: # Labels that will be applied to each Pod in this deployment
        app: order-app-server
    spec: # Spec for the containers that will be run in the Pods
      imagePullSecrets:
      - name: testXxxxxsecret
      containers:
      - name: order-app-server
        image: XXXXXX/order:latest
        ports:
        - containerPort: 8080 # The port that the container exposes
        env: # Environment variables supplied to the Pod
        - name: MYSQL_ROOT_USERNAME # Name of the environment variable
          valueFrom: # Get the value of the environment variable from Kubernetes secrets
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_USERNAME
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_ROOT_URL
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_URL # was MYSQL_ROOT_PASSWORD, which would have injected the password as the URL
Create your Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
data:
  MYSQL_ROOT_URL: <BASE64-ENCODED-DB-NAME>
  MYSQL_ROOT_USERNAME: <BASE64-ENCODED-DB-USERNAME>
  MYSQL_ROOT_PASSWORD: <BASE64-ENCODED-DB-PASSWORD>
Spring Boot Service:
apiVersion: v1 # API version
kind: Service # Type of the Kubernetes resource
metadata:
  name: order-app-server-service # Name of the Kubernetes resource
  labels: # Labels that will be applied to this resource
    app: order-app-server
spec:
  type: LoadBalancer # The service will be exposed by opening a port on each node and proxying it
  selector:
    app: order-app-server # The service exposes Pods with label `app=order-app-server`
  ports: # Forward incoming connections on port 8080 to the target port 8080
  - name: http
    port: 8080
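With the headless mysql-service in place, the Spring application can address MySQL by the Service DNS name. A sketch of the datasource setting, as yet another environment variable on the order-app-server container (the variable name and <db-name> are placeholders; the deployment above instead injects MYSQL_ROOT_URL from the Secret):
env:
- name: SPRING_DATASOURCE_URL # hypothetical; adapt to however your app builds its JDBC URL
  value: jdbc:mysql://mysql-service:3306/<db-name>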

Deploy Laravel in Kubernetes

I'm trying to deploy a Laravel application on Kubernetes on the Google Cloud Platform.
I followed a couple of tutorials and was successful trying them locally on a Docker VM.
https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way
https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/
But when I tried to deploy on Kubernetes, using an Ingress to assign a domain name to the application, I keep getting the 502 Bad Gateway page.
I'm using an nginx ingress controller with image k8s.gcr.io/nginx-ingress-controller:0.8.3 and my Ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - domainname.com
    secretName: sslcertificate
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service
          servicePort: 80
        path: /
This is my application's Service:
apiVersion: v1
kind: Service
metadata:
  name: service
  labels:
    name: demo
    version: v1
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: demo
  type: NodePort
This is my ingress controller:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats at the URL /nginx-status
        # (this is optional)
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
And here is my Laravel application Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-rc
  labels:
    name: demo
    version: v1
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: demo
        version: v1
    spec:
      containers:
      - image: gcr.io/projectname/laravelapp:vx
        name: app-pod
        ports:
        - containerPort: 8080
I tried to add the domain entry to the hosts file, but with no luck!
Are there specific configurations I have to add to the configmap.yaml file for the nginx ingress controller?
In short, to be able to reach your application via an external domain name (singapore.smartlabplatform.com), you need to create a DNS A record for the GCP L4 Load Balancer's external IP address (in other words, the EXTERNAL-IP of your default nginx-ingress-controller Service), here seen as pending:
==> v1/Service
NAME                            TYPE          CLUSTER-IP    EXTERNAL-IP
nginx-ingress-controller        LoadBalancer  10.7.248.226  <pending>
nginx-ingress-default-backend   ClusterIP     10.7.245.75   <none>
How to do this is explained on the GKE tutorials page here.
In the current state of your environment, you can only reach your application in two ways:
From outside, via the Load Balancer's EXTERNAL-IP.
From inside the Kubernetes cluster, using the laravel-kubernetes-demo Service DNS name:
$ curl laravel-kubernetes-demo.default.svc.cluster.local
<title>Laravel Kubernetes Demo :: LearnK8s</title>
If you want all that magic, like the automatic creation of DNS records, to happen along with the appearance of host: domain.com in your Ingress resource spec, you should use external-dns (which makes Kubernetes resources discoverable via public DNS servers); here is the tutorial on how to set it up specifically for GKE.
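For instance, once external-dns is running, annotating the controller's LoadBalancer Service is enough for it to publish a record. A sketch (the hostname is from this answer; the Service fields are abbreviated assumptions based on the controller config above):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    external-dns.alpha.kubernetes.io/hostname: singapore.smartlabplatform.com # record to create
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    k8s-app: nginx-ingress-lb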

No service dependencies found in Jaeger UI

I am new to Jaeger and I am facing issues finding the services list in the Jaeger UI.
Below are the .yaml configurations I prepared to run Jaeger with my Spring Boot app on Kubernetes, using Minikube locally.
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml --namespace=kube-system
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml --namespace=kube-system
Created a Deployment for my Spring Boot app and the Jaeger agent to run in the same Pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tax-app-deployment
spec:
  template:
    metadata:
      labels:
        app: tax-app
        version: latest
    spec:
      containers:
      - image: tax-app
        name: tax-app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - image: jaegertracing/jaeger-agent
        imagePullPolicy: IfNotPresent
        name: jaeger-agent
        ports:
        - containerPort: 5775
          protocol: UDP
        - containerPort: 5778
        - containerPort: 6831
          protocol: UDP
        - containerPort: 6832
          protocol: UDP
        command:
        - "/go/bin/agent-linux"
        - "--collector.host-port=jaeger-collector.jaeger-infra.svc:14267"
And the Spring Boot app Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: tax
  labels:
    app: tax-app
    jaeger-infra: tax-service
spec:
  ports:
  - name: tax-port
    port: 8080
    protocol: TCP
    targetPort: 8080
  clusterIP: None
  selector:
    jaeger-infra: jaeger-tax
I am getting
No service dependencies found
Service graph data must be generated in Jaeger. Currently this is possible via a Spark job, available here: https://github.com/jaegertracing/spark-dependencies
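The spark-dependencies job ships as a Docker image, so one option is to run it nightly as a CronJob next to the collector. A sketch, assuming the production-elasticsearch setup from the question and the image's STORAGE / ES_NODES environment variables (the schedule and the elasticsearch hostname are placeholders):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: jaeger-spark-dependencies
  namespace: kube-system
spec:
  schedule: "55 23 * * *" # once a day, shortly before midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: jaeger-spark-dependencies
            image: jaegertracing/spark-dependencies
            env:
            - name: STORAGE
              value: elasticsearch
            - name: ES_NODES
              value: http://elasticsearch:9200 # hypothetical; point at your ES service
          restartPolicy: OnFailure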
