ActiveMQ Artemis cluster does not redistribute messages to idle Spring Boot consumers

We are setting up a cluster of 3 live nodes of Artemis 2.16.0.
They discover each other and form a cluster.
<cluster-connections>
   <cluster-connection name="local-cluster">
      <connector-ref>node0-connector</connector-ref>
      <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
</cluster-connections>
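(For what it's worth: the message-load-balancing type of a cluster-connection defaults to ON_DEMAND; it can also be set explicitly with <message-load-balancing>ON_DEMAND</message-load-balancing> inside the cluster-connection element.)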
We often have a scenario where a Spring Boot application (2-3 pods/instances of the same microservice) produces a large number of messages (say 1M) to queue A.
Another Spring Boot application consumes queue A. It uses the Spring Boot Artemis autoconfiguration to connect to the cluster and runs as 3 instances so that the consumption rate is higher.
spring.artemis.broker-url=(tcp://localhost:61616,tcp://localhost:61617,tcp://localhost:61618)?ha=true
spring.artemis.pool.enabled=true
spring.artemis.pool.max-connections=1
The above autoconfiguration will often lead to a situation where two consumer pods are connected to broker B1 and one consumer pod is connected to broker B2 (no connection to broker B3).
Let's assume that the producer is much faster than the consumer pods. What we observe is that the producer starts producing and messages are delivered (ON_DEMAND) to both brokers that have active consumer connections.
Soon (because the producer is much faster than the consumers) a backlog builds up and more and more messages wait to be consumed on the two brokers. By the time the producer application has finished producing its one million messages, 300K sit in the queue on B1 and 600K in the queue on B2 (100K have already been consumed).
A bit later the messages in the B1 queue are completely consumed while plenty of messages remain on B2. This is logical, as two pods were consuming from B1 and only one from B2.
At this point we would like the two consumers (so far connected to B1) to automatically start consuming messages from B2.
What we observe instead is that neither the clients (Spring Boot consumers) nor the cluster does anything about it, and eventually the third pod has to process the remaining messages by itself (the 2 pods that worked hard consuming from B1 now sit idle).
According to the Artemis documentation, redistribution will not take place as long as at least one consumer is still attached to the queue on that broker.
This leads to wasted resources and a lower consumption rate. Is the only solution to programmatically detect such situations and reconnect?
Putting the responsibility for smarter consumption in the client (for example, reconnecting to force Artemis to redistribute) seems error prone, so I would like to ask whether you know of a way to overcome this problem, either by achieving redistribution in the Artemis cluster or through an alternative configuration in the Spring Boot (client) application.
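For completeness: redistribution itself has to be enabled per address in broker.xml, and even then it only kicks in once a queue has no local consumers. A minimal sketch (the match pattern is illustrative):

<address-settings>
   <!-- Allow messages to move to another cluster node once this node's
        queue has no consumers; 0 means redistribute immediately. -->
   <address-setting match="#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>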

Related

Avoid multiple listens to ActiveMQ topic with Spring Boot microservice instances

We have configured our ActiveMQ message broker as a Spring Boot project and there's another Spring Boot application (let's call it service-A) that has a listener configured to listen to some topics using the @JmsListener annotation. It's a Spring Cloud microservice application.
The problem:
It is possible that service-A can have multiple instances running.
If we have 2 instances running, then any message arriving on the topic gets consumed twice.
How can we avoid every instance listening to the topic?
We want to make sure that the topic is listened to only once, no matter the number of service-A instances.
Is it possible to run the microservice in a cluster mode or something similar? I also checked out ActiveMQ virtual destinations, but I'm not sure whether that solves the problem.
We have also thought of an approach where we elect a leader from among the multiple instances, but that's a last resort and we are looking for a cleaner approach.
Any useful pointers, references are welcome.
What you really want is a shared topic subscription which was added in JMS 2. Unfortunately ActiveMQ 5.x doesn't support JMS 2. However, ActiveMQ Artemis does.
ActiveMQ Artemis is the next generation broker from ActiveMQ. It supports most of the same features as ActiveMQ 5.x (including full support for OpenWire clients) as well as many other features that 5.x doesn't support (e.g. JMS 2, shared-nothing high-availability using replication, last-value queues, ring queues, metrics plugins for integration with tools like Prometheus, duplicate message detection, etc.). Furthermore, ActiveMQ Artemis is built on a high-performance, non-blocking core which means scalability is much better as well.
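For what it's worth, here is a minimal sketch of how a shared topic subscription could be wired up from Spring Boot against Artemis; the topic name notifications and subscription name service-A are illustrative assumptions, not from the question:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
public class SharedTopicConfig {

    @Bean
    public DefaultJmsListenerContainerFactory topicFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPubSubDomain(true);       // listen on a topic, not a queue
        factory.setSubscriptionShared(true); // JMS 2 shared (non-durable) subscription
        return factory;
    }
}

@Component
public class NotificationListener {

    // Every service-A instance uses the same subscription name, so the broker
    // treats them as one logical subscriber and delivers each message to only
    // one instance.
    @JmsListener(destination = "notifications",
                 containerFactory = "topicFactory",
                 subscription = "service-A")
    public void onMessage(String body) {
        // handle the message
    }
}

Because all instances attach under the same subscription name, the broker load-balances messages among them instead of copying each message to every instance.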

Even Load Distribution of JMS Queue listeners not happening

I have 2 instances of my application (on 2 different machines) configured to listen on a single IBM MQ queue, each configured with 4 concurrent consumers in the CXF bean.
<bean id="TestConfig0" class="org.apache.cxf.transport.jms.JMSConfiguration"
p:sessionTransacted="false" p:connectionFactory-ref="jmsConnectionFactory0" p:concurrentConsumers="4"
p:targetDestination="TestQueue" p:deliveryMode="1"/>
When I run my application to read data from the above queue, I see that 70% of the messages are picked up by server1 and only 30% by server2.
All JMS-related configuration is identical across both app instances.
It's a bit strange to observe this pattern.
How do I ensure that both my app instances pick up messages from the queue evenly?
There is an answer to a similar question:
MQ's default behaviour is to give messages to the MOST RECENT getter.
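If that delivery pattern is indeed the cause here, one possible mitigation is to reduce how many concurrent getters each instance holds, so that a single instance cannot monopolize delivery. A hypothetical variant of the bean above (the value 1 is illustrative, not a recommendation from the answer):

<!-- Hypothetical: fewer concurrent consumers per instance so neither
     instance keeps enough active getters to absorb most messages. -->
<bean id="TestConfig0" class="org.apache.cxf.transport.jms.JMSConfiguration"
      p:sessionTransacted="false"
      p:connectionFactory-ref="jmsConnectionFactory0"
      p:concurrentConsumers="1"
      p:targetDestination="TestQueue"
      p:deliveryMode="1"/>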

How to do replication with Spring Boot and ActiveMQ Artemis?

I am looking for a structure or solution that can support Spring Boot microservices with multiple instances, ActiveMQ Artemis and Apache Camel.
For example:
I have an ActiveMQ Artemis instance and a Spring Boot JMS consumer with instance A (on machine A) and instance B (on machine B).
Both instances (A, B) are up, but by default instance A is the "master" consumer: it alone should consume the JMS messages, and only if it goes down or throws an exception should instance B start consuming. When A is OK again, it should take the ball back.
N.B.: Instances A and B of the Spring Boot microservice are on different machines, and in my case I don't have any container runtime like Docker.
Do you have any approach to solve this issue?
I think the closest you could get to the functionality you want is the "exclusive queue" feature. Both consumers A and B can be active at the same time, but the broker will only send messages to one of them. If the consumer the broker has chosen goes away for whatever reason, the broker will choose another consumer.
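Exclusive consumption can be switched on per address in the broker's broker.xml; a minimal sketch, assuming the queue is matched by the (illustrative) pattern below:

<address-settings>
   <!-- Queues matching this address get exclusive delivery: one consumer
        at a time, with automatic failover to another consumer if it leaves. -->
   <address-setting match="orders.#">
      <default-exclusive-queue>true</default-exclusive-queue>
   </address-setting>
</address-settings>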

messages published to all consumers with same consumer-group in spring-data-stream project

I've got ZooKeeper and 3 Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it's a Spring Boot project). What I found is that all the consumers now receive every message, but I expected the messages to be load-balanced so that no message is repeated across the consumers. I don't know what the problem is.
Here is my property file
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update the dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, the Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we use the Kafka 0.9+ consumer client, which does load-balancing the way you expect (we also support static partition allocation via instanceIndex/instanceCount, but I suspect this is not what you want). I can go into more detail on how to configure this with Brixton, but I guess an upgrade would be a much easier path.
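For anyone staying on Brixton, a sketch of the per-instance properties the answer refers to (values illustrative; instanceIndex must differ on each instance):

# total number of consumer instances in the group
spring.cloud.stream.instanceCount=3
# 0 on this instance; 1 and 2 on the other two instances
spring.cloud.stream.instanceIndex=0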

How to configure OpenMQ to not store all in-flight messages in memory?

I've been load testing different JMS implementations for our notification service.
None of ActiveMQ, HornetQ, or OpenMQ behaves as expected (issues with reliability and message prioritization). So far I've had the best results with OpenMQ, except for two issues that are probably just misconfiguration (I hope), one of them with the JDBC storage.
Test scenario:
2 producers send messages with different priorities to one queue. 1 consumer consumes from the queue at a constant rate slightly lower than the rate at which the producers produce. OpenMQ runs standalone and uses PostgreSQL as its persistence storage. All messages are sent and consumed from an Apache Camel route, and everything is persistent.
Issues:
After about 50,000 messages I see warnings and errors in the OpenMQ log about low memory (default configuration with 256 MB heap size). Producing is throttled by the broker, and after some time the broker stops dispatching altogether. The broker JVM's memory usage is at its maximum.
How must I configure the broker to achieve this goal:
The broker should not depend on queue size (up to 1,000,000 messages) or on a memory limit. Performance is not an issue - only reliability.
Is that possible?
I cannot help with OpenMQ, but perhaps with Camel and ActiveMQ. What issues do you face with ActiveMQ? Can you post your Camel route, and possibly your Spring context and the ActiveMQ config?
