I use Spring Cloud Config Bus (RabbitMQ) in my microservice. The only purpose of RabbitMQ in my microservice is Spring Cloud Bus... I have 2 questions below.
When I was experimenting, I found that Spring expects RabbitMQ to be up and running during application start, which is contrary to what Spring Cloud evangelises (circuit breakers...). To be fair, even service discovery is not expected to be up and running before starting an application. Is there any sensible reason behind this?
Say I start my application when RabbitMQ is up and running. For some reason, RabbitMQ then goes down... What I should lose is just my ability to work with RabbitMQ; instead, the /health endpoint responds as DOWN for my microservice, and any Eureka instance listening to heartbeats from my microservice also marks the instance as down. Are there any reasons for doing this?
To my knowledge, this is against the circuit breaker pattern that Spring Cloud has evangelised.
I personally feel that Spring Cloud Config Bus is not an important enough feature to justify marking an application as down...
Are there any alternatives to tell my Spring Boot microservice that the connection to RabbitMQ is not critical?
Thanks in advance!
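A possible workaround, sketched here on the assumption that the standard Actuator property applies to this setup, would be to disable the Rabbit health indicator so that a broker outage no longer flips the aggregate /health to DOWN:
management.health.rabbit.enabled=false
The bus functionality would of course still be unavailable while the broker is down; this only stops the outage from propagating to /health and, through it, to Eureka.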
Related
We are trying to figure out how Spring Boot behaves in a service which BOTH
a) pulls events from a rabbit queue
b) provides a UI with REST APIs
The problem is that we would like Spring Boot configured in a way that prioritizes the REST APIs over the Rabbit queue. I googled for things like "Spring Boot REST controller buffer" etc. but haven't found anything viable.
Spring Boot should have some kind of mechanism that, after processing an event (REST API call or Rabbit pull), checks whether there is anything in a REST buffer (if such a thing even exists), and only if that is empty pulls another event from the queue.
We are not even sure if Spring Boot prioritizes Rabbit over REST, but after some UAT it seems it does.
Switching to push pattern with Rabbit seems like an option, but we would like something else.
Another option was to create replica services: the same business logic in two services, one just consuming Rabbit and the other offering REST APIs for the UI, but this of course adds DevOps complexity.
The two mechanisms are completely independent; the framework provides no coordination between them. REST requests are served on the web container's thread pool, while Rabbit messages are consumed on the listener container's own threads, so there is no shared buffer for the framework to prioritize.
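If the underlying concern is resource contention, one knob worth looking at is the listener container's concurrency and prefetch, which can be capped through the standard Spring Boot properties (the values below are arbitrary assumptions to be tuned for your workload):
spring.rabbitmq.listener.simple.concurrency=1
spring.rabbitmq.listener.simple.max-concurrency=2
spring.rabbitmq.listener.simple.prefetch=1
With few listener threads and a small prefetch, message consumption stays a low-intensity background activity and leaves most capacity to the REST layer.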
I am looking for a way to control ActiveMQ connections after application startup in a cluster environment, for example if I want to disconnect some slave machine through code.
Any help around this would be really appreciated.
I don't believe Spring has any direct integration with ActiveMQ. Spring offers JMS integration which, of course, uses the generic JMS API which every JMS provider implements.
To manage ActiveMQ from a remote application you will need to use something like JMX.
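A minimal sketch of what that could look like with the standard javax.management API follows; the JMX service URL, the broker MBean name, and the stop operation are assumptions that depend on how your broker is configured (jconsole shows the exact names):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerJmxClient {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint of the slave node; adjust host and port to your setup
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://slave-host:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Assumed ActiveMQ broker MBean name; verify it in jconsole
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost");
            // Invoke a broker operation, e.g. stop(), to take that node offline
            connection.invoke(broker, "stop", new Object[0], new String[0]);
        }
    }
}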
I'm thinking of developing a simulation of RabbitMQ that can be used in unit tests where it is not possible to start up an entire RabbitMQ server or not possible to connect to one. This RabbitMQ simulation would obviously have the same API as the RabbitMQ Java client. The question now is how to plug this API of the RabbitMQ simulation into Spring Boot instead of the original one from RabbitMQ. Is there some hook in Spring Boot so that this could be done?
It's quite difficult to simulate RabbitMQ.
Some people have had some success using an embedded Apache QPID server running AMQP 0.9.1.
However, it doesn't support any RabbitMQ extensions, if you are using those.
You'd be better off using something like TestContainers.
https://www.testcontainers.org/modules/rabbitmq/
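For example, a minimal sketch of wiring the Testcontainers RabbitMQ module into a Spring Boot test (the image tag and property names are common defaults and may need adjusting for your project):
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
class RabbitIntegrationTest {

    // A throwaway broker started in Docker for the duration of the test class
    @Container
    static RabbitMQContainer rabbit = new RabbitMQContainer("rabbitmq:3-management");

    // Point Spring Boot's AMQP auto-configuration at the container
    @DynamicPropertySource
    static void rabbitProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.rabbitmq.host", rabbit::getHost);
        registry.add("spring.rabbitmq.port", rabbit::getAmqpPort);
        registry.add("spring.rabbitmq.username", rabbit::getAdminUsername);
        registry.add("spring.rabbitmq.password", rabbit::getAdminPassword);
    }

    @Test
    void contextLoads() {
        // The application under test talks to the containerized broker, not a real server
    }
}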
How should health indicators be properly configured for a Spring Boot service running on top of Kafka Streams with a DB connection? We use Spring Cloud Stream with the Kafka Streams binder, Spring Data JPA, and Kubernetes as the container orchestrator. We have, let's say, 3 service replicas and 9 partitions for each topic. A typical service joins messages from two topics, persists data in a database, and publishes data back to another Kafka topic.
After switching to Spring Boot 2.3.1 and changing K8s liveness/readiness endpoints to the new ones:
/actuator/health/liveness
/actuator/health/readiness
we discovered that by default they do not have any health indicators included.
According to documentation:
Actuator configures the "liveness" and "readiness" probes as Health Groups; this means that all the Health Groups features are available for them. (...) By default, Spring Boot does not add other Health Indicators to these groups.
I believe that this is the right approach, but I have not tested that:
management.endpoint.health.group.readiness.include: readinessState,db,binders
management.endpoint.health.group.liveness.include: livenessState,ping,diskSpace
We try to cover the following use cases:
rolling update: no available consumption slot (idle instance) when a new replica is added
stream has died (runtime exception has been thrown)
DB is not available during container start up / when service is running
broker is not available
I have found a similar question; however, I believe the current one is specifically related to Kafka services, which differ in nature from REST services.
Update:
In Spring Boot 2.3.1 the binders health indicator checks whether streams are in the RUNNING or REBALANCING state for Kafka 2.5 (previously only RUNNING), so I guess the rolling-update case with an idle instance is handled by its logic.
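For the stream-has-died case, a custom indicator along these lines could be added to the readiness group (a rough sketch: it assumes a single StreamsBuilderFactoryBean can be injected, which with the Cloud Stream binder may instead require looking the bean up by name, and it mirrors the binder's RUNNING/REBALANCING logic):
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.stereotype.Component;

@Component
public class KafkaStreamsStateHealthIndicator implements HealthIndicator {

    private final StreamsBuilderFactoryBean factoryBean;

    public KafkaStreamsStateHealthIndicator(StreamsBuilderFactoryBean factoryBean) {
        this.factoryBean = factoryBean;
    }

    @Override
    public Health health() {
        KafkaStreams streams = factoryBean.getKafkaStreams();
        if (streams == null) {
            return Health.down().withDetail("state", "NOT_STARTED").build();
        }
        KafkaStreams.State state = streams.state();
        // Treat RUNNING and REBALANCING as healthy, matching the binder indicator
        boolean healthy = state == KafkaStreams.State.RUNNING
                || state == KafkaStreams.State.REBALANCING;
        return (healthy ? Health.up() : Health.down())
                .withDetail("state", state.name())
                .build();
    }
}
The indicator would then have to be listed in management.endpoint.health.group.readiness.include under its contributor name (kafkaStreamsState, derived from the bean name).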
I've been roaming the depths of the internet, but I find myself unsatisfied by the examples I've found so far. Can someone point me to, or show me, a good starting point for integrating Zipkin tracing with JAX-RS clients and AMQP clients?
My scenario is quite simple and I'd expect this task to be trivial, to be honest. We have a microservices-based architecture and it's time we start tracing our requests to get a global perspective of our inter-service dependencies and what the requests actually look like (we do have metrics, but I want more!). The communication is done via auto-generated JAX-RS clients and we use RabbitTemplate for messaging.
I've seen Brave integrations with JAX-RS, but they are a bit simplistic. My Zipkin server is a Spring Boot mini app using stream-rabbit, so Zipkin data is sent over RabbitMQ.
Thanks in advance.
After some discussion with Marcin Grzejszczak and Adrian Cole (Zipkin and Sleuth creators/active developers), I ended up creating a Jersey filter that acts as a bridge between Sleuth and Brave. Regarding AMQP integration, I added a new @StreamListener with a condition for Zipkin-format spans (using headers). Sending Zipkin-format messages to the Sleuth exchange will then be valid, and they will be consumed by the listener. For JavaScript (zipkin-js), I ended up creating a new AMQP logger that sends Zipkin spans to a designated exchange. If someone ends up reading this and needs more detail, you're welcome to reach out to me.
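For reference, the conditional listener looks roughly like this (a sketch with a placeholder header name and the default Sink binding rather than the exact ones from my setup):
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;

@EnableBinding(Sink.class)
public class ZipkinSpanListener {

    // Only fires for messages whose headers mark them as Zipkin-format spans;
    // 'spanFormat' is a placeholder header name
    @StreamListener(target = Sink.INPUT, condition = "headers['spanFormat'] == 'zipkin'")
    public void handleZipkinSpan(Message<byte[]> message) {
        // hand the span payload over to the Zipkin collector / storage
    }
}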