Do Something after a health check call on Kubernetes Pod - spring

I have a Spring Boot microservice running on K8s. I have configured liveness and readiness probes in my YAML file.
For my specific case I want to save a record in the database with a timestamp whenever the probes are executed by Kubernetes. How can I do this in Spring? Is there some kind of listener that is notified of probe calls and can take action based on that?
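One option is a custom Spring Boot Actuator HealthIndicator: its health() method runs every time the health endpoint (or a health group that includes the indicator) is evaluated, so you can persist a timestamp there. A minimal sketch, assuming a hypothetical ProbeAudit entity and ProbeAuditRepository (Spring Data) exist in your project; note that responses can be cached via management.endpoint.health.cache.time-to-live, and that the dedicated liveness/readiness groups only call indicators you explicitly add to them via management.endpoint.health.group.<name>.include:

```java
import java.time.Instant;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ProbeAuditHealthIndicator implements HealthIndicator {

    // ProbeAuditRepository / ProbeAudit are hypothetical names for your own
    // Spring Data repository and entity holding the timestamp column.
    private final ProbeAuditRepository repository;

    public ProbeAuditHealthIndicator(ProbeAuditRepository repository) {
        this.repository = repository;
    }

    @Override
    public Health health() {
        // Runs whenever /actuator/health (or a group containing this indicator)
        // is evaluated, i.e. on each probe call unless the endpoint caches it.
        repository.save(new ProbeAudit(Instant.now()));
        return Health.up().build();
    }
}
```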

Related

How to implement an Axon Server connection check in a readiness probe

I need to implement readiness and liveness probes for every microservice in my project.
In my readiness probe I need to check whether our Spring Boot application is connected to Axon Server.
How can I manually check whether Axon Server is connected?
I was thinking it would be enough to add the axon-spring-boot-starter dependency and the check would be attached to /actuator/health, but that didn't happen.
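If the starter does not contribute its own health indicator in your version, you can write one and include it in the readiness group (e.g. management.endpoint.health.group.readiness.include=readinessState,axon). A rough sketch, assuming the auto-configured AxonServerConnectionManager bean exposes an isConnected(context) method; verify the exact connection API for your Axon Framework version:

```java
import org.axonframework.axonserver.connector.AxonServerConnectionManager;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component("axon")
public class AxonServerHealthIndicator implements HealthIndicator {

    private final AxonServerConnectionManager connectionManager;

    public AxonServerHealthIndicator(AxonServerConnectionManager connectionManager) {
        this.connectionManager = connectionManager;
    }

    @Override
    public Health health() {
        // "default" is the default Axon Server context name; adjust as needed.
        // isConnected(...) is an assumption here -- check the API of your Axon version.
        return connectionManager.isConnected("default")
                ? Health.up().build()
                : Health.down().build();
    }
}
```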

How to pause and resume RMQ consumer in Java

RMQConsumer in my code is a Spring bean which is initialised during Spring container initialisation.
It starts listening to RMQ messages on the queue as soon as that bean is ready, irrespective of whether the whole Spring container is ready or not.
I want to start consuming messages only when the Kubernetes pod status is 1/1. During a rolling deployment, old pods are not terminated until the new pods reach 1/1 status, which almost doubles the pod count and spikes the consumption rate, which is not desired in my case for several reasons.
What I tried
I removed the channel.basicConsume call of the RMQ API, which was executed in a bean's @PostConstruct, and moved that code into a REST controller method under the endpoint /postStart, which I configured in the Kubernetes deployment file under the postStart lifecycle hook. This doesn't seem to work: the hook is called before the Spring application even prints its first log line (verified with timestamps).
Check whether a readinessProbe will help in this use case.
Please refer to the article below, which may help.
Promoting Liveness and Readiness as core Spring Boot concepts
https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot
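For the "start consuming only once the app is up" part, Spring AMQP lets you declare the listener with autoStartup=false and start its container later, for example on ApplicationReadyEvent (which is also when Spring Boot 2.3+ switches the readiness state to accepting traffic). A rough sketch with a hypothetical queue name; note there is still a short window before Kubernetes actually observes the readiness probe and marks the pod 1/1:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class OrdersConsumer {

    private final RabbitListenerEndpointRegistry registry;

    public OrdersConsumer(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // "orders" is a hypothetical queue name; autoStartup = "false" keeps the
    // container from consuming while the context is still initialising.
    @RabbitListener(id = "ordersListener", queues = "orders", autoStartup = "false")
    public void onMessage(String payload) {
        // handle the message
    }

    // Fired once the application context is fully started (on Boot 2.3+ this is
    // also when the readiness state becomes ACCEPTING_TRAFFIC).
    @EventListener(ApplicationReadyEvent.class)
    public void startConsuming() {
        registry.getListenerContainer("ordersListener").start();
    }
}
```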

Liveness Probe and Readiness Probe for Spring Batch

I have a Spring Batch application which is to be deployed in Kubernetes. It doesn't include spring-boot-starter-web since it only runs cron jobs. Is there any way to expose Spring Boot Actuator health endpoints without adding the starter-web dependency?
Batch applications are ephemeral by nature. It does not make sense to expose an actuator endpoint for the duration of the job (would the job still be running when you query the endpoint?). This has been discussed here: https://github.com/spring-projects/spring-boot/issues/21024.
Same here. A readiness probe is typically used to check whether a service is ready to accept requests, and I'm not sure it makes sense to have one for batch jobs. A liveness probe, however, could make sense (to see if a job is "live", in which case one needs to define what "live" means), but I've never seen such a probe implemented in practice, as there are other means to report whether a job is running (such as live metrics with Micrometer, for instance; see the sketch below).
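To illustrate the Micrometer alternative, a hypothetical gauge that reports whether a job is currently running could look roughly like this (the listener still has to be registered on the job, e.g. via the JobBuilder, and spring-boot-starter-actuator with a meter registry is assumed):

```java
import java.util.concurrent.atomic.AtomicInteger;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.stereotype.Component;

@Component
public class RunningJobGauge implements JobExecutionListener {

    private final AtomicInteger running = new AtomicInteger(0);

    public RunningJobGauge(MeterRegistry registry) {
        // Exposes a "batch.jobs.running" gauge: > 0 while a job is executing.
        Gauge.builder("batch.jobs.running", running, AtomicInteger::get)
             .register(registry);
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        running.incrementAndGet();
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        running.decrementAndGet();
    }
}
```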

How to restart a Kubernetes pod when RabbitMQ connectivity issues appear in the logs

I have a Spring Boot 2 standalone application (not a REST service) which connects to RabbitMQ and processes messages. The application is deployed in Kubernetes. It works great, but when RabbitMQ stays down a little longer I see heartbeat exceptions (60 s) in the logs, and eventually the connection is dropped even if RabbitMQ comes back up after some time:
Automatic retry connection to broker by spring-rabbitmq
https://www.rabbitmq.com/heartbeats.html
I tried to manage the above issue by increasing the number of retries (https://stackoverflow.com/questions/45385119/how-configure-timeouts-retries-or-max-attempts-in-differents-queues-with-spring),
but after the retries are exhausted the issue still occurs.
How can I restart (delete and recreate) the pod from Kubernetes if I see the above issue in the logs?
The easiest way is to use Actuator, which exposes an /actuator/health endpoint. (Note that recent versions also add /actuator/health/liveness and /actuator/health/readiness.)
You can point the livenessProbe of your Kubernetes deployment at that endpoint (see the sketch after the doc links); the pod will then be restarted automatically when necessary. You can also tune when your app reports itself as down, if needed.
See the docs:
Kubernetes liveness probe
Spring actuator health
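For illustration, pointing the probes at the Actuator endpoints in the Deployment could look roughly like this (a sketch; port, paths and timings are placeholders, and on Boot 2.3+ the probe endpoints are enabled automatically when running on Kubernetes, or via management.endpoint.health.probes.enabled=true elsewhere):

```yaml
# Sketch of a container spec fragment; adjust port, paths and timings.
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
```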

Spring Boot actuator health check in OpenShift/Kubernetes

We have a situation where we have an abundance of Spring Boot applications running in containers (on OpenShift) that access centralized infrastructure (external to the pod) such as databases, queues, etc.
If a piece of central infrastructure is down, the health check returns "unhealthy" (rightfully so). The problem is that the liveness check sees this and restarts the pod (the readiness check then sees it's down too, so the pod never becomes ready). This is fine when only a few applications are affected, but if many (potentially hundreds) of applications use that infrastructure, it forces restarts on all of them (crash loop).
I understand that central infrastructure being down is a bad thing. It "should" never happen. But... if it does (Murphy's law), it throws containers into a frenzy. Just seems like we're either doing something wrong, or we should reconfigure something.
A couple questions:
If you are forced to use centralized infrastructure from a Spring Boot app running in a container on OpenShift/Kubernetes, should all actuator checks for the backend still be enabled? (Bouncing the container really won't fix the backend being down anyway.)
Should the /actuator/health endpoint be used for both the liveness probe and the readiness probe?
What common settings do folks use for the readiness/liveness probes in a Spring Boot app (timeouts, intervals, etc.)?
Using Actuator checks for liveness/readiness is the de facto way to check for a healthy app in a Spring Boot pod. Your application, once up, should ideally not go down or become unhealthy just because a central piece such as the DB or the queueing service goes down. Ideally you should add some sort of resiliency that either connects to an alternate DR site or waits a certain period for the central service to come back up so the app can reconnect. This is more a technical failure on the backend side causing a functional failure of your application after it was started up cleanly.
Yes, both liveness and readiness probes are required, as they serve different purposes. Read this.
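Concretely, since Spring Boot 2.3 you can back the two probes with different health groups, so that an external dependency such as the database only fails readiness (stops traffic) and never liveness (no restart), which avoids the restart storm described in the question. A sketch in application.yml; indicator names such as db depend on what is auto-configured in your application:

```yaml
management:
  endpoint:
    health:
      probes:
        enabled: true                  # exposes /actuator/health/liveness and /readiness
      group:
        liveness:
          include: livenessState       # keep external dependencies out of liveness
        readiness:
          include: readinessState,db   # DB down -> not ready, but no restart
```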
In one of my previous projects, the setting used for readiness was around 30 seconds and for liveness around 90, but to be honest this is completely dependent on your application. If your app takes 1 minute to start, that is what your readiness delay should be configured at, and your liveness should factor in the same along with any time required for failing over your backend services.
Hope this helps.
