Can readiness/liveness checks be made active/inactive by configuration - Quarkus

I am playing around with Quarkus, trying to create an ingestion service that sends data to Kafka or to another REST endpoint. I have added the "quarkus-smallrye-reactive-messaging-kafka" and "quarkus-reactive-messaging-http" dependencies to the project. I want only one particular pipeline active at a time, i.e. http->kafka or http->http, but I should be able to change that with a configuration update followed by a restart. I could achieve this by adding the two dependencies and the configuration shown below:
## REST service configuration
## --------------------------
mp.messaging.outgoing.messages.connector=smallrye-http
mp.messaging.outgoing.messages.method=POST
mp.messaging.outgoing.messages.url=http://localhost:9009/messages
## Kafka Ingestion configuration
## ----------------------------
#mp.messaging.outgoing.messages.connector=smallrye-kafka
#kafka.bootstrap.servers=host.docker.internal:9092
#mp.messaging.outgoing.messages.topic=messages
#mp.messaging.outgoing.messages.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
Now the problem is that even though I have the Kafka connector commented out in my application.properties, the health check for Kafka still runs and reports that Kafka is down. I expected the Kafka health check not to run, since Kafka is not configured. Is this possible now, and if not, does it make sense to consider it as a feature request?
Regards,

The health check for Kafka is disabled by default.
But the health check for reactive messaging is enabled by default; you can disable it via mp.messaging.outgoing.messages.health-enabled=false.
Note that, for your use case, you can also declare two different channels and disable the one you don't use, instead of commenting out the configuration.
Disabling a channel is done simply via mp.messaging.outgoing.messages.enabled=false, as sketched below.
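For example, a minimal sketch of that two-channel approach; the channel names messages-http and messages-kafka are hypothetical and would need to match the channels your application code declares:

## REST channel (active)
mp.messaging.outgoing.messages-http.connector=smallrye-http
mp.messaging.outgoing.messages-http.method=POST
mp.messaging.outgoing.messages-http.url=http://localhost:9009/messages
mp.messaging.outgoing.messages-http.enabled=true
## Kafka channel (declared but disabled; flip the enabled flags and restart to switch)
mp.messaging.outgoing.messages-kafka.connector=smallrye-kafka
mp.messaging.outgoing.messages-kafka.topic=messages
mp.messaging.outgoing.messages-kafka.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
mp.messaging.outgoing.messages-kafka.enabled=false
kafka.bootstrap.servers=host.docker.internal:9092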

Related

How to change the default health endpoint used by spring boot admin

I'd like to add a new on-demand health check endpoint to my service by implementing an indicator. The problem was that the on-demand check would be exposed by default under `/actuator/health`, so I split the default health endpoint into two health groups, `/actuator/health/default` and `/actuator/health/on-demand`, as I didn't find any way to remove the on-demand indicator directly from `/actuator/health`.
Now a new issue has emerged: by default, Spring Boot Admin hits /actuator/health to get the corresponding info. I was wondering whether it's possible to make it hit /actuator/health/default instead?
BTW, I only have the admin client, without any discovery service.
haha, this config is the answer: `spring.boot.admin.client.instance.health-url`
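A minimal sketch of how the two pieces could fit together; the group names come from the question, while the indicator names, host, and port are assumptions:

# Health groups split out of /actuator/health (indicator names are placeholders)
management.endpoint.health.group.default.include=db,diskSpace
management.endpoint.health.group.on-demand.include=onDemand
# Point Spring Boot Admin at the default group instead of /actuator/health
spring.boot.admin.client.instance.health-url=http://localhost:8080/actuator/health/default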

HealthCheck in MassTransit says "Not ready: not started"

Using AddMassTransitHostedService, a health check configuration is added, but it always reports unhealthy, even after I configured the endpoints as in the example. My project is a WebApi where I don't have my consumers in a separate startup.
You're using AddBus, which has been deprecated in favor of UsingRabbitMq in v7.
When using AddBus, you have to configure the health check manually (it is done automatically when using the new v7 syntax; see the sketch after this answer). The previous-syntax documentation shows how to configure it; in short:
cfg.UseHealthCheck(context);
Must be added so that the health checks are reported to the hosted service.
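For reference, a minimal sketch of the v7 syntax the answer refers to, under which the health check is wired up automatically; MyConsumer is a placeholder for your own consumer class:

using MassTransit;
using Microsoft.Extensions.DependencyInjection;

services.AddMassTransit(x =>
{
    x.AddConsumer<MyConsumer>();   // placeholder consumer
    // New v7 syntax: no manual cfg.UseHealthCheck(context) needed
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.ConfigureEndpoints(context);
    });
});
services.AddMassTransitHostedService();   // reports bus health to the host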

ActiveMQ Artemis - Spring Boot Throttling

Set up - ActiveMQ Artemis 2.14.0 and Spring Boot.
Problem statement: I want to throttle consumption, i.e. limit the rate at which messages are read from ActiveMQ.
This can be achieved by configuring consumerMaxRate at startup, and that works fine. However, I want to change this parameter on the fly to increase or decrease the rate of consumption without stopping my application. I have tried re-initializing the beans and setting up the ActiveMQConnectionFactory instances again, but somehow the connection keeps the initial value.
Any suggestion would be helpful.
I have tried searching the documentation, but it only mentions the parameter, with no examples.
The consumerMaxRate cannot be changed while the connection to the broker is active. You'd need to close the connection, set a new consumerMaxRate, and then create a connection with the new configuration, roughly as sketched below.
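A rough sketch of that close-and-reconnect approach with the Artemis JMS client; the broker URL is an assumption, and in a Spring Boot app you would also have to stop and restart any listener containers built on the old factory:

import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerRateSwitcher {

    // Hypothetical helper: consumerMaxRate is fixed for the life of a
    // connection, so changing it means closing it and reconnecting.
    public static Connection reconnectWithRate(Connection current, int newRate) throws Exception {
        if (current != null) {
            current.close();
        }
        // consumerMaxRate can be passed as a URL parameter on the factory
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616?consumerMaxRate=" + newRate);
        Connection fresh = factory.createConnection();
        fresh.start();
        return fresh;
    }
}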

Spring boot actuator health check in Openshift/Kubernetes

We have an abundance of Spring Boot applications running in containers (on OpenShift) that access centralized infrastructure (external to the pod) such as databases, queues, etc.
If a piece of central infrastructure is down, the health check returns "unhealthy" (rightfully so). The problem is that the liveness check sees this and restarts the pod (the readiness check then sees it's down too, so won't mark the app as ready). This is fine when only a few applications are affected, but if many (potentially hundreds) of applications use this infrastructure, it forces restarts on all of them (a crash loop).
I understand that central infrastructure being down is a bad thing. It "should" never happen. But... if it does (Murphy's law), it throws containers into a frenzy. Just seems like we're either doing something wrong, or we should reconfigure something.
A couple of questions:
If you are forced to use centralized infrastructure from a Spring Boot app running in a container on OpenShift/Kubernetes, should all actuator checks for the backend still be enabled? (Bouncing the container won't fix the backend being down anyway.)
Should the /actuator/health endpoint be used for both the liveness probe and the readiness probe?
What common settings do folks use for the readiness/liveness probes in a Spring Boot app (timeouts/interval/etc.)?
Using actuator checks for liveness/readiness is the de facto way to check for a healthy app in a Spring Boot pod. Your application, once up, should ideally not go down or become unhealthy just because a central piece, such as the DB or queueing service, goes down. Ideally you should add some sort of resiliency that either connects to an alternate DR site or waits a certain time period for the central service to come back up so the app can reconnect. This is more of a technical failure on the backend side causing a functional failure of your application after it was started up cleanly.
Yes, both liveness and readiness probes are required, as they serve different purposes.
In one of my previous projects, the setting used for readiness was around 30 seconds and liveness around 90, but to be honest this is completely dependent on your application: if your app takes 1 minute to start, that is what your readiness time should be configured at, and your liveness should factor in the same, along with any time required for failing over your backend services. A sketch of typical probe settings follows.
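To make that concrete, a minimal probe sketch using the 30s/90s figures above; the port, path, and period/threshold values are assumptions to adapt per application:

readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 90
  periodSeconds: 10
  failureThreshold: 3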
Hope this helps.

Is it possible for Spring-XD to listen to more than one JMS broker at a time?

I've managed to get Spring XD working for a scenario where I have data coming in from one JMS broker.
I am potentially facing a scenario where data ingestion could happen from different sources, thereby needing me to connect to different brokers.
Based on my current understanding, I'm not quite sure how to do this, as there exists a JMS config file which allows you to set up only one broker.
Is there a workaround to this?
At the moment, you would have to create a separate jms-[provider]-infrastructure-context.xml for each broker (in modules/common); say, call the provider activemq2.
Then use --provider=activemq2 in the module definition; a rough sketch follows.
(I recently used this technique to test the sonicmq and hornetq providers.)
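A rough sketch of what such a modules/common/jms-activemq2-infrastructure-context.xml could contain; the connectionFactory bean id mirrors the stock ActiveMQ context, and the broker URL is an assumption:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Connection factory for the second broker (URL is an assumption) -->
    <bean id="connectionFactory"
          class="org.apache.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://second-broker:61616"/>
    </bean>

</beans>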
