RabbitMQ on Kubernetes: Unacked messages in queue - spring-boot

We are having an issue with RabbitMQ that happens when we deploy the application to production; we are not able to reproduce it in our development environment.
We have a microservices architecture with multiple Spring Boot applications deployed on Kubernetes, with an autoscaler that scales depending on usage. We notice that after some time unacked messages appear in the queue; their number increases over time, and eventually RabbitMQ seems to stop working.
Is there something we can check in order to identify the problem?
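As a first check, unacked counts in Spring AMQP are usually governed by the listener's acknowledge mode and prefetch. Below is a minimal sketch of where those are configured, assuming the services use Spring AMQP listener containers; the bean names and values are illustrative, not taken from the question.

// Hypothetical Spring AMQP configuration: unacked messages typically pile up when
// listeners prefetch messages but never acknowledge them, e.g. MANUAL ack mode with
// a missing basicAck, or listener threads that hang and never return.
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitListenerConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // AUTO: the container acks after the listener method returns, so a growing
        // unacked count points at listeners that never return (blocked or very slow).
        factory.setAcknowledgeMode(AcknowledgeMode.AUTO);
        // The prefetch count caps how many unacked messages each consumer can hold.
        factory.setPrefetchCount(10);
        return factory;
    }
}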

Related

Intermittent issue with Kafka (AWS MSK) consumer

We are facing a strange issue in only one of our environments (with the same consumer app).
Basically, we observe that a lag suddenly starts to build up on only one of the topics on the Kafka broker (it has multiple topics), with 10 consumer members under a single consumer group.
Multiple restarts, adding another pod of the consumer application, and changing default configuration properties (max poll records, session timeout) have NOT helped much so far.
Looking for any suggestions or advice on how to debug the issue (we tried enabling Apache logs, CloudWatch, etc., but so far we only saw that regular/periodic rebalancing is happening, even at a very low load of 7k messages waiting for processing).
Below are env details:
App - Spring Boot app on version 2.7.2
Platform - AWS Kafka (MSK)
Kafka Broker - 3 brokers (version 2.8.x)
Consumer Group - 1 with 15 members (partition 8, Topic 1)
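For reference, a rough sketch of where the properties mentioned above (max poll records, session timeout) and the related max.poll.interval.ms live when configured programmatically with Spring for Apache Kafka; the broker address, group id, and values below are placeholders, not taken from this environment.

// Illustrative consumer tuning: frequent rebalances are often caused by poll loops
// that exceed max.poll.interval.ms, so max.poll.records and the session/heartbeat
// settings are the usual knobs to inspect together.
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerTuning {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");      // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Fewer records per poll keeps each poll iteration short...
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        // ...and each iteration must finish within max.poll.interval.ms, otherwise
        // the consumer is kicked out of the group and a rebalance is triggered.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
        // Session/heartbeat settings govern failure detection and also cause rebalances.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45_000);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}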

Create and cleanup instance specific rabbitMQ instances

I have a set of microservices using Spring Boot REST. These microservices will be deployed in an autoscaled and load-balanced environment. One of these services is responsible for managing the system's configuration. When the other microservices start up, they obtain the configuration from this service. If and when the configuration is updated, I need to inform all currently running microservice instances to update their cached configuration.
I am considering using RabbitMQ with a fanout exchange. In this solution, each instance at startup will create its queue and bind that queue to the exchange. When there is a configuration change, the configuration service will publish an update to all queues currently bound to that exchange.
However, as service instances are deleted, I cannot figure out how I would delete the queue specific to that instance. I googled but could not find a complete working example of a solution.
Any help or advice?
The idea and solution are correct. What you are missing is that those queues, created by your consumer services, can be declared with auto-delete=true: https://www.rabbitmq.com/queues.html. As long as your service is up, the queue is there as well. When you stop your service, its consumers are stopped and unsubscribed. The moment the last consumer unsubscribes, the queue is deleted from the broker.
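A minimal sketch of that declaration with Spring AMQP, assuming a fanout exchange named config-updates (the exchange name and bean names are illustrative, not from the question):

// Each instance declares its own anonymous queue and binds it to the fanout exchange.
import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConfigUpdateListenerConfig {

    @Bean
    public FanoutExchange configUpdatesExchange() {
        return new FanoutExchange("config-updates");
    }

    // AnonymousQueue is exclusive, auto-delete and non-durable: it disappears from
    // the broker as soon as this instance's consumer/connection goes away.
    @Bean
    public Queue instanceQueue() {
        return new AnonymousQueue();
    }

    @Bean
    public Binding instanceBinding(FanoutExchange configUpdatesExchange, Queue instanceQueue) {
        return BindingBuilder.bind(instanceQueue).to(configUpdatesExchange);
    }
}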
On the other hand, I would suggest looking into the Spring Cloud Bus project, which is really aimed at tasks like this: https://spring.io/projects/spring-cloud-bus.

A Topic lost a subscription node during run time

Spring boot Version : 2.5.0
Spring Cloud Version : 2020.0.3
I used spring-cloud-stream-binder-kafka and spring-cloud-stream-binder-kafka-streams for Kafka production and consumption in the project.
In one project, I subscribed to N topics.
Two nodes were started for the service using load balancing.
During run time, it was suddenly discovered that one of the topics had no subscription nodes.
This results in messages being backlogged and lost.
I have to restart these service nodes before I can subscribe to this Topic again.
What is the cause of this, or is there any way to help find some clues?
And is there a way to check at run time so that topics that have lost subscriptions can be re-subscribed?
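If plain Spring Kafka @KafkaListener containers were used instead of the stream binder, one rough way to detect and re-subscribe stopped listeners at run time is a scheduled check against the listener registry. This is only a sketch under that assumption (it also requires @EnableScheduling), not a fix for the binder setup described above; the class name and interval are illustrative.

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ListenerWatchdog {

    private final KafkaListenerEndpointRegistry registry;

    public ListenerWatchdog(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(fixedDelay = 60_000)
    public void restartStoppedContainers() {
        for (MessageListenerContainer container : registry.getListenerContainers()) {
            // A container that is no longer running has effectively unsubscribed;
            // restarting it re-joins the consumer group for its topics.
            if (!container.isRunning()) {
                container.start();
            }
        }
    }
}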

Event Driven Architecture on Multi-instance services

We are using a microservices architecture in our project. We deploy each service to a cluster using Kubernetes. Services are developed using the Java programming language and the Spring Boot framework. Three replicas exist for each service. Services communicate with each other using only events. RabbitMQ is used as the message queue. One of the services is used to send an email; the details of the email are provided by another service with an event. When a SendingEmail event is published by a service, all three replicas of the email service consume the event and the same email is sent three times.
How can I prevent the other two replicas from sending the email?
I think it depends on how you work with RabbitMQ.
You can configure RabbitMQ with one queue for these events and make the Spring Boot applications that represent the sending servers act as "competing" consumers, as sketched below.
If you configure it like this, only one replica will get a given event, and only if it fails to process it will the message return to the queue and become available to the other consumers.
From what you've described, all of them are getting the message, so it works like pub-sub (which is also a valid way of working with RabbitMQ, it's just not a good fit in this case).
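A minimal sketch of that competing-consumers setup with Spring AMQP, assuming a shared queue named send-email (the queue and class names are illustrative, not from the question):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class SendEmailListener {

    // Every replica declares a listener on the SAME named queue. RabbitMQ then
    // delivers each message to exactly one of the competing consumers, so the
    // email is sent once rather than once per replica.
    @RabbitListener(queues = "send-email")
    public void onSendEmail(String payload) {
        // send the email; an exception here would requeue or dead-letter the message
        // depending on the container's retry configuration
    }
}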

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have my ZooKeeper and 3 Kafka brokers running locally.
I started one producer and one consumer. I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (different ports since it's a Spring Boot project), but what I found is that all of the consumers are now consuming (receiving) the messages. I expected the messages to be load-balanced so that a message is not repeated across the consumers. I don't know what the problem is.
Here is my property file:
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update the dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we use the Kafka 0.9+ consumer client, which does load balancing in the way you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect this is not what you want). I can go into more detail on how to configure this with Brixton, but I guess an upgrade would be a much easier path.
