Connecting Spring Boot to a Postgres StatefulSet in Kubernetes

I'm new to Kubernetes and I'm learning about StatefulSets. For stateful applications, where the identity of Pods matters, we use StatefulSets instead of plain Deployments so that each Pod can have its own persistent volume. Writes need to go to the master Pod, while reads can go to the slaves. So pointing at the ClusterIP Service attached to the StatefulSet won't guarantee correct replication; instead we need a headless Service that points to the master.
My questions are the following:
How do I edit application.properties in a Spring Boot project so that it uses the slaves for read operations (the normal ClusterIP Service) and the master for write/read operations (the headless Service)?
In case that is unnecessary and the headless Service does this work for us, how exactly does it work, given that it points to the master?
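One sketch of how this could be wired (the StatefulSet name, headless Service name, and database name below are assumptions): the PostgreSQL JDBC driver accepts a comma-separated host list together with a targetServerType parameter, so a write data source can insist on the primary while a read data source prefers replicas. In a StatefulSet, each Pod gets a stable DNS name of the form <pod-name>.<headless-service>, which is what makes the per-host list possible:

```properties
# Writes: pgJDBC picks the current primary from the host list
# ("postgres" StatefulSet and "postgres-headless" Service are assumed names)
spring.datasource.url=jdbc:postgresql://postgres-0.postgres-headless:5432,postgres-1.postgres-headless:5432/mydb?targetServerType=primary

# Reads: a second data source that prefers replicas but falls back to the primary
app.read-datasource.url=jdbc:postgresql://postgres-0.postgres-headless:5432,postgres-1.postgres-headless:5432/mydb?targetServerType=preferSecondary
```

Note that Spring Boot only auto-configures the spring.datasource.* data source; routing reads to the second one still needs application-level code (for example an AbstractRoutingDataSource or a manually defined second DataSource bean).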

Related

Google Kubernetes Engine Spring Boot App Can't Connect To Database Within Same Network

I have a Spring Boot app deployed to GKE in the us-central1 region. There is a Postgres database that runs on a Compute Engine VM instance. Both are part of the 'default' VPC network. I can ping the database by its hostname from within one of the GKE Pods. However, when the Spring Boot app launches and attempts to connect to the database using that same hostname, as configured in the properties file below, I get a connection timeout and the app fails to start:
spring.datasource.url=jdbc:postgresql://database01:5432/primary
We have similar connections to this database from other VM instances that work fine. There is also a similar setup with Kafka, and the app is likewise unable to resolve the broker hostnames. What am I missing?
If your database runs outside the GKE cluster, you need to create an "ExternalName" Service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  externalName: usermgmtdb.csw6ylddqss8.us-east-1.rds.amazonaws.com
In this case you need to know the "externalName" of your external PostgreSQL service, i.e. its external DNS name.
Otherwise, if you deploy PostgreSQL inside your Kubernetes cluster, you need to create a PersistentVolumeClaim, a StorageClass, a PostgreSQL Deployment, and a PostgreSQL ClusterIP Service.
See the manifest examples here: https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/tree/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-03-UserManagement-MicroService-with-MySQLDB/kube-manifests
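As a minimal sketch of the in-cluster variant (all names, the image tag, and the storage size are placeholders; a real setup also needs a StorageClass and a Secret for the password):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: changeme   # use a Secret in practice
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```

With this in place, the Spring Boot app would reach the database at jdbc:postgresql://postgresql:5432/<db> via the ClusterIP Service's DNS name.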

Hazelcast Management Center on Kubernetes

How can we run the Hazelcast management center on Kubernetes?
What I did so far: I deployed the hazelcast/management-center Docker image on our Kubernetes cluster, using a Deployment that points to that image. However, as seen in the Management Center Pod's log, it has started on localhost:8080 by default. The Hazelcast server is running as a Spring Boot application, and other applications are able to connect to it as Hazelcast clients.
The question is: how can we run/connect the Management Center to our Hazelcast cluster running in the same namespace in Kubernetes (with 2 members in the cluster)?
I suggest using the official Helm chart; it includes Management Center by default.
If that doesn't work for you, I would first check whether there is a k8s Service defined for the Deployment and, if so, whether there is a k8s Ingress exposing that Service.
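If you prefer wiring it manually instead of using the Helm chart, a sketch of a Deployment plus Service for Management Center might look like this (all names are illustrative; ClusterIP plus an Ingress works equally well as the LoadBalancer type). Once the UI is reachable, you point it at the members, e.g. via the Hazelcast Service's DNS name such as hazelcast.<namespace>.svc.cluster.local:5701:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hazelcast-mc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hazelcast-mc
  template:
    metadata:
      labels:
        app: hazelcast-mc
    spec:
      containers:
        - name: management-center
          image: hazelcast/management-center
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hazelcast-mc
spec:
  type: LoadBalancer   # or ClusterIP behind an Ingress
  selector:
    app: hazelcast-mc
  ports:
    - port: 8080
      targetPort: 8080
```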

How to deploy Jaeger on Kubernetes GKE

I have added these fields in the application.yml of my microservices, and the dependency in pom.xml. Jaeger running on my local machine is able to identify the services as well:
opentracing.jaeger.udp-sender.host=localhost
opentracing.jaeger.udp-sender.port=6831
I have deployed all my microservices on Kubernetes. Please help me deploy Jaeger on Kubernetes as well.
UPDATE:
I have reached this step: I have a load balancer IP for jaeger-query. But on which host and port should my microservices send their traces?
You can use the Jaeger Operator to deploy Jaeger on Kubernetes. The Jaeger Operator is an implementation of a Kubernetes Operator. Operators are pieces of software that ease the operational complexity of running another piece of software; more technically, they are a method of packaging, deploying, and managing a Kubernetes application.
Follow this link for the steps to deploy Jaeger on Kubernetes:
https://www.upnxtblog.com/index.php/2018/07/16/kubernetes-tutorial-distributed-tracing-with-jaeger/
Make the following changes in application.properties, where <load_balancer_ip> is the load balancer IP of your Jaeger service:
opentracing.jaeger.udp-sender.host=<load_balancer_ip>
opentracing.jaeger.http-sender.url=http://<jaeger-collector-service-name>:<port>/api/traces

How to configure a Redis Kubernetes deployment so a slave Redis Pod takes over when the master is down?

I have followed a tutorial to deploy a Redis master and slave deployment.
Both the slave and the master have their own Services. I have a Spring Boot app that has the master host in its configuration and saves/reads data through it.
So when I terminate the redis-master Pod, the Spring Boot app goes down, as it doesn't know that it should connect to the slave. How do I solve that?
I was thinking about creating a common Service for both master and slave, but that way the Spring Boot app would at some point try to save data to a slave Pod instead of the master.
Use a StatefulSet for the Redis deployment in HA, and run Sentinel as a sidecar container to manage failover.
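Once Sentinel is running, the Spring Boot side should stop hard-coding the master host and instead ask Sentinel for the current master, using Spring Boot's standard Redis Sentinel properties (the master name and Sentinel addresses below are assumptions based on a typical Redis StatefulSet):

```properties
# Name of the monitored master as configured in sentinel.conf (assumed "mymaster")
spring.redis.sentinel.master=mymaster
# Sentinel endpoints, one per Pod of an assumed "redis" StatefulSet
spring.redis.sentinel.nodes=redis-0.redis-headless:26379,redis-1.redis-headless:26379,redis-2.redis-headless:26379
```

With this in place, the Redis client discovers the current master through Sentinel, so a failover no longer requires a configuration change or restart of the app.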

Is there a way to register custom metadata for a service in K8S Service Discovery?

I have my service (a Spring Boot Java application) running in a K8s cluster with 3 replicas (Pods). My use case requires me to deploy application contexts dynamically.
And I need to know which context is deployed on which of the 3 Pods through service discovery. Is there a way to register custom metadata for a service in K8s service discovery, like we do in Eureka with eureka.instance.metadata-map?
In terms of Kubernetes, we have Deployments and Services.
A Deployment is a description of the desired state for a ReplicaSet, which creates Pods. A Pod consists of one or more containers.
Service is an abstraction which defines a logical set of Pods and a policy by which to access them.
In Eureka, you can set the configuration of instances dynamically and reconfigure them on the fly, which does not match Kubernetes' design.
In Kubernetes, when you use 1 Deployment with 3 replicas, all 3 Pods should be identical. There is no feature for exporting per-Pod metadata that would split the Pods behind a Service into different groups, because a ReplicaSet, which contains Pods with the same labels, is itself the group.
Therefore, the better idea is to use 3 different Deployments with 1 replica each, all of them with the same configuration. Or use some of Spring Boot's own features, like its service discovery, if you want to reload the application context on the fly.
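The "3 Deployments with 1 replica each" idea can be sketched like this (all names are illustrative): each Deployment carries a distinct context label next to a shared app label, the Service selects only on the shared label, and the per-Deployment label becomes the queryable "metadata":

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-ctx-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
      context: ctx-a
  template:
    metadata:
      labels:
        app: myservice   # shared label: selected by the Service
        context: ctx-a   # per-Deployment "metadata"
    spec:
      containers:
        - name: myservice
          image: myservice:latest
---
# ...repeat for ctx-b and ctx-c with their own context labels...
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myservice   # matches the Pods of all three Deployments
  ports:
    - port: 8080
```

You can then find out which Pod runs which context with a label query such as kubectl get pods -l context=ctx-a, or the equivalent label selector through the Kubernetes API.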
