How does the Spring Boot Loggers Actuator behave in a clustered environment?

I have a query related to the Spring Boot Actuator. Through the Actuator I can change the log level dynamically.
How does this work in a clustered environment?
If I make a REST (POST) call to change the log level, on which node will it be applied?
Or will it be applied to all the nodes?
If it gets applied to all the nodes in the cluster, how can I restrict it to only a particular node?
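For context, the call in question is a POST to the Actuator loggers endpoint of whichever instance the URL resolves to. A rough sketch of such a call from Java (host, port, and logger name are placeholders; assumes the loggers endpoint is exposed over HTTP):

import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

public class LogLevelChanger {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);

        // Placeholder address: the request only reaches the instance this URL points to,
        // so going through a load balancer means you cannot be sure which node handles it.
        String url = "http://node-1.example.com:8080/actuator/loggers/com.example.service";

        // Standard loggers endpoint payload: {"configuredLevel": "DEBUG"}
        rest.postForEntity(url, new HttpEntity<>(Map.of("configuredLevel", "DEBUG"), headers), Void.class);
    }
}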

You should use an external configuration server (Spring Cloud Config) and Spring Cloud Bus to propagate configuration changes to all the servers in your cluster.
Place your log configuration on the configuration server; on each change, a message will be sent through a message broker (such as RabbitMQ) to all the servers listening for configuration changes.
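Each instance that receives the bus message sees the change as an EnvironmentChangeEvent, and spring-cloud-context should rebind any logging.level.* keys automatically, so no extra code is required. Purely as an illustration of what arrives on every node, a hedged sketch of a listener for those events:

import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Illustrative only: logs which logging.level.* keys changed after a refresh was
// propagated over the bus. Spring Cloud already rebinds these keys by itself.
@Component
public class LogLevelChangeListener implements ApplicationListener<EnvironmentChangeEvent> {

    @Override
    public void onApplicationEvent(EnvironmentChangeEvent event) {
        event.getKeys().stream()
             .filter(key -> key.startsWith("logging.level."))
             .forEach(key -> System.out.println("Log level key changed on this node: " + key));
    }
}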

Related

Adding prefix to RabbitMQ queues and exchanges in Spring Cloud dataflow backend

I would like to utilize my own RabbitMQ instance as the middleware broker for Spring Cloud Data Flow.
The problem is that we have a prefix and suffix policy on exchange and queue creation that has to be in place.
Is it possible to force Spring Cloud Data Flow to add this prefix and suffix?
Example:
RABBITMQ_QUEUE_PREFIX="TEAM1"
RABBITMQ_QUEUE_SUFFIX="IN"
RABBITMQ_EXCHANGE_PREFIX="TEAM1"
RABBITMQ_EXCHANGE_SUFFIX="OUT"
To result in queues and exchanges:
TEAM1.queuename.IN
TEAM1.exchangename.OUT
You can configure the prefix at the application level (e.g. in application.properties) or set it as a deployer property in SCDF. See https://docs.spring.io/spring-cloud-stream-binder-rabbit/docs/current/reference/html/spring-cloud-stream-binder-rabbit.html#_rabbitmq_consumer_properties
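For the prefix part, something along these lines should work; the binding names (input/output) and the trailing dot are placeholders, and the exact keys should be checked against the binder documentation linked above:

# In the application itself (application.properties):
spring.cloud.stream.rabbit.bindings.input.consumer.prefix=TEAM1.
spring.cloud.stream.rabbit.bindings.output.producer.prefix=TEAM1.

# Or passed as application properties when deploying the stream in SCDF, e.g.:
# app.*.spring.cloud.stream.rabbit.bindings.input.consumer.prefix=TEAM1.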

How to get Kafka brokers in a cluster using Spring Boot Kafka?

I have a Spring Boot (2.3.3) service using spring-kafka to currently access a dedicated Kafka/Zookeeper configuration. I have been using the application.properties setting spring.kafka.bootstrap-servers=localhost:9092 to access my dev/test Apache Kafka service.
However, in production we have a cluster of Kafka brokers (on many servers) configured in Zookeeper, and I have been asked to modify my service to query Zookeeper for the list of brokers and use that list instead of the bootstrap-servers configuration. The reason: our DevOps folks have been known to reconfigure servers/nodes and Kafka brokers.
Basically, I have been asked to make my service agnostic to where the Apache Kafka brokers are running. All my service needs to know is how to get the list of brokers (bootstrap server info including host and port) from Zookeeper.
Is there a way in spring-boot and spring-kafka to retrieve from Zookeeper the broker list and use that broker (aka bootstrap server) list in my service?
Spring delegates to the kafka-clients for all connections; for a long time now, the kafka-clients no longer connect to Zookeeper, only to the brokers themselves.
There is no built-in support in Spring for querying Zookeeper to determine the broker list.
Furthermore, in a future Kafka version, Zookeeper is going away altogether; see KIP-500.
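As a practical middle ground, you can list several brokers as bootstrap servers: the client only needs to reach one of them and then discovers the rest of the cluster from broker metadata. A sketch with placeholder host names:

spring.kafka.bootstrap-servers=kafka-1.example.com:9092,kafka-2.example.com:9092,kafka-3.example.com:9092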

Liveness/Readiness set of health indicators for Spring Boot service running on top of Kafka Streams

How should health indicators be properly configured for a Spring Boot service running on top of Kafka Streams with a DB connection? We use Spring Cloud Stream with the Kafka Streams binding, Spring Data JPA, and Kubernetes as the container orchestrator. We have, let's say, 3 service replicas and 9 partitions for each topic. A typical service joins messages from two topics, persists data in a database, and publishes data back to another Kafka topic.
After switching to Spring Boot 2.3.1 and changing K8s liveness/readiness endpoints to the new ones:
/actuator/health/liveness
/actuator/health/readiness
we discovered that by default they do not have any health indicators included.
According to the documentation:
Actuator configures the "liveness" and "readiness" probes as Health Groups; this means that all the Health Groups features are available for them. (...) By default, Spring Boot does not add other Health Indicators to these groups.
I believe that this is the right approach, but I have not tested it:
management.endpoint.health.group.readiness.include: readinessState,db,binders
management.endpoint.health.group.liveness.include: livenessState,ping,diskSpace
We try to cover the following use cases:
rolling update: no available consumption slot (idle instance) when a new replica is added
stream has died (runtime exception has been thrown)
DB is not available during container start-up / while the service is running
broker is not available
I have found a similar question; however, I believe the current one is specifically related to Kafka services, which are different in nature from REST services.
Update:
In Spring Boot 2.3.1 the binders health indicator checks whether streams are in the RUNNING or REBALANCING state for Kafka 2.5 (previously only RUNNING), so I guess the rolling-update case with an idle instance is handled by its logic.
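If a check beyond the built-in indicators is needed, a custom indicator can be added to one of the groups by its bean name; a minimal hedged sketch (the streamProcessing name and the way the flag gets flipped are hypothetical):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Registered under the id "streamProcessing"; include it via e.g.
// management.endpoint.health.group.readiness.include: readinessState,db,binders,streamProcessing
@Component("streamProcessing")
public class StreamProcessingHealthIndicator implements HealthIndicator {

    // Flip this flag from wherever a stream failure is detected,
    // e.g. an uncaught-exception handler on the stream threads.
    private volatile boolean streamAlive = true;

    public void streamDied() {
        this.streamAlive = false;
    }

    @Override
    public Health health() {
        return streamAlive
                ? Health.up().build()
                : Health.down().withDetail("reason", "stream thread died").build();
    }
}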

Spring Boot ZooKeeper client

I want to use ZooKeeper in order to synchronize my distributed services via ZooKeeper ephemeral nodes.
The idea is the following: on startup, every node in the topology will create a ZooKeeper session and ephemeral nodes. On node restart or failure, these nodes will disappear.
I'm going to implement it using Spring Boot. Right now I'm unsure which project and Maven dependency to use in order to get ZooKeeper client auto-configuration, be able to create a ZooKeeper session on application startup, create ZooKeeper ephemeral nodes from this client, and use ZooKeeper transactions.
Right now I'm looking at Spring Cloud Zookeeper, but I'm not sure whether it is the right one for this purpose. Could you please point me to the right Spring Boot ZooKeeper project and show a small example of how to achieve what I have described above?
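Spring Cloud Zookeeper is built on Apache Curator, and Curator on its own is probably the simplest way to do exactly this; a minimal sketch with placeholder connection string and paths:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class EphemeralNodeRegistration {

    public static void main(String[] args) throws Exception {
        // Placeholder connection string.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-1.example.com:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        client.blockUntilConnected();

        // The ephemeral node disappears automatically when this session ends
        // (process restart, crash, or a network partition longer than the session timeout).
        client.create()
              .creatingParentsIfNeeded()
              .withMode(CreateMode.EPHEMERAL)
              .forPath("/services/my-service/node-1", "payload".getBytes());
    }
}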

Archaius Configuration Server setup

We are exploring Archaius for our microservices. We want to set up a Configuration Server as a microservice that stores the configuration files of the other microservices.
We have other microservices (Spring Boot based), say
a) Producer
b) Consumer
deployed in different environments/VMs. We also deploy all these microservices as a cluster (i.e., run multiple instances of Producer and Consumer) to support high availability.
Please let us know how to make dynamically changed values in the Configuration Server available to the other microservices (the multiple Producer and Consumer instances).
Thanks
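On the client side, Archaius exposes changed values through its dynamic property types once a configuration source (for example a polling source backed by the Configuration Server) is wired in; a rough sketch with a made-up property name:

import com.netflix.config.DynamicIntProperty;
import com.netflix.config.DynamicPropertyFactory;

public class ProducerSettings {

    // Re-evaluated on every get(), so it picks up values refreshed from the configuration source.
    private final DynamicIntProperty batchSize =
            DynamicPropertyFactory.getInstance().getIntProperty("producer.batch.size", 100);

    public ProducerSettings() {
        // Optional: react immediately when the value changes.
        batchSize.addCallback(() ->
                System.out.println("producer.batch.size changed to " + batchSize.get()));
    }

    public int currentBatchSize() {
        return batchSize.get();
    }
}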
