Spring RabbitMQ Connection and Resource Management Issues

I need to consume messages from a RabbitMQ HA cluster via HAProxy, so I switched to CacheMode.CONNECTION as recommended in the Spring AMQP documentation. Moreover, I need to consume messages from many queues, so I create a SimpleMessageListenerContainer with 4 concurrent consumers for each queue. I have a few questions:
After a few tests it looks like my approach is not efficient, because each time a new queue is added, a new SimpleMessageListenerContainer is also created with 4 threads. I could assign more queues to a given SimpleMessageListenerContainer, which looks more efficient, but maybe there is a better way?
Why, when I switched to CacheMode.CONNECTION, is a new connection created for each consumer in a SimpleMessageListenerContainer? Can I somehow set one connection for all consumers in a given SimpleMessageListenerContainer, or is that not recommended?
How do I handle this exception:
"org.springframework.amqp.rabbit.connection.AutoRecoverConnectionNotCurrentlyOpenException:
Auto recovery connection is not currently open"
I received it when one RabbitMQ node is down. Even when the node comes back up, the SimpleMessageListenerContainer cannot reconnect.
Thanks in advance for help.

The upcoming 2.0 release has a new DirectMessageListenerContainer that shares threads across containers; see the documentation here.
The 2.0.0.M4 milestone is available now; the GA release is expected in mid-July.
If you want a single connection per container, use the default cache mode and a separate connection factory for each container.
Also disable the client connection factory's automatic recovery mechanism; it is enabled by default in the 4.x client. Spring AMQP has its own recovery mechanism that generally recovers faster. Since version 1.7.1, Spring AMQP disables the client's recovery by default, unless you configure your own Rabbit ConnectionFactory.
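A minimal sketch of both suggestions together (the host and bean/method names are illustrative; the APIs are from spring-rabbit and the RabbitMQ Java client):

```java
import com.rabbitmq.client.ConnectionFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ContainerConfig {

    // One CachingConnectionFactory per container: with the default
    // CacheMode.CHANNEL, all consumers in the container share a single
    // connection, and each consumer gets its own cached channel.
    public SimpleMessageListenerContainer container(String... queues) {
        ConnectionFactory rabbit = new ConnectionFactory();
        rabbit.setHost("localhost"); // assumption: point this at HAProxy
        // Let Spring AMQP handle recovery instead of the 4.x client
        // (Spring AMQP 1.7.1+ does this automatically for factories it creates).
        rabbit.setAutomaticRecoveryEnabled(false);

        CachingConnectionFactory cf = new CachingConnectionFactory(rabbit);
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames(queues);      // several queues per container
        container.setConcurrentConsumers(4);
        return container;
    }
}
```

Assigning several queues to one container this way avoids spawning 4 new threads per queue.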

Related

Performance issues with ActiveMQ Artemis and Spring JmsTemplate

While doing some load tests with the ActiveMQ Artemis broker and my Spring Boot application I am getting into performance issues.
What I am doing is sending, e.g., 12,000 messages per second to the broker with JMeter, and the application receives them and saves them to a DB. That works fine. But when I extend my application with a filter mechanism, which forwards events back to the broker after saving them to the DB using jmsTemplate.send(destination, messageCreator), it becomes very slow.
I first used ActiveMQ 5.x, and there this mechanism worked fine. There you could configure the ActiveMQConnectionFactory with setAsyncSend(true) to tune performance. For the ActiveMQ Artemis ConnectionFactory implementation there is no such possibility. Is there another way to tune performance like in ActiveMQ 5.x?
I am using Apache ActiveMQ Artemis 2.16.0 (but also tried 2.15.0), artemis-jms-client 2.6.4, and Spring Boot 1.5.16.RELEASE.
The first thing to note is that you need to be very careful when using Spring's JmsTemplate to send messages, as it employs a well-known anti-pattern that can really kill performance: it creates a new JMS connection, session, and producer for every message it sends. I recommend you use a connection pool like this one, which is based on the ActiveMQ 5.x connection pool implementation but now supports JMS 2. For additional details about the danger of using JmsTemplate, see the ActiveMQ documentation. This is also discussed in an article from Pivotal (i.e. the "owners" of Spring).
The second point is that you can control whether persistent JMS messages are sent synchronously using the blockOnDurableSend URL property, e.g.:
tcp://localhost:61616?blockOnDurableSend=false
This will ensure that persistent JMS messages are sent asynchronously. This is discussed further in the ActiveMQ Artemis documentation.
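Putting both suggestions together might look like the sketch below (the broker URL and pool size are assumptions; the pooled factory is `org.messaginghub.pooled.jms.JmsPoolConnectionFactory` from the pooled-jms library mentioned above):

```java
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class AsyncSendConfig {

    public JmsTemplate jmsTemplate() {
        // blockOnDurableSend=false makes persistent sends asynchronous,
        // roughly the Artemis equivalent of 5.x's setAsyncSend(true).
        ActiveMQConnectionFactory artemisCf =
                new ActiveMQConnectionFactory("tcp://localhost:61616?blockOnDurableSend=false");

        // Pool connections/sessions/producers so JmsTemplate's
        // create-per-send pattern reuses them instead of opening new ones.
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(artemisCf);
        pool.setMaxConnections(8);

        return new JmsTemplate(pool);
    }
}
```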

Avoid multiple listens to ActiveMQ topic with Spring Boot microservice instances

We have configured our ActiveMQ message broker as a Spring Boot project, and there's another Spring Boot application (let's call it service-A) that has a listener configured to listen to some topics using the @JmsListener annotation. It's a Spring Cloud microservice application.
The problem:
It is possible that service-A can have multiple instances running.
If we have 2 instances running, then any message coming on the topic gets consumed twice.
How can we avoid every instance listening to the topic?
We want to make sure that the topic is listened to only once, no matter the number of service-A instances.
Is it possible to run the microservice in a cluster mode or something similar? I also checked out ActiveMQ virtual destinations but not too sure if that's the solution to the problem.
We have also thought of an approach where we can decide who's the leader node from the multiple instances, but that's the last resort and we are looking for a cleaner approach.
Any useful pointers, references are welcome.
What you really want is a shared topic subscription which was added in JMS 2. Unfortunately ActiveMQ 5.x doesn't support JMS 2. However, ActiveMQ Artemis does.
ActiveMQ Artemis is the next generation broker from ActiveMQ. It supports most of the same features as ActiveMQ 5.x (including full support for OpenWire clients) as well as many other features that 5.x doesn't support (e.g. JMS 2, shared-nothing high-availability using replication, last-value queues, ring queues, metrics plugins for integration with tools like Prometheus, duplicate message detection, etc.). Furthermore, ActiveMQ Artemis is built on a high-performance, non-blocking core which means scalability is much better as well.

Spring JMS Consumers to a TIBCO EMS Server expire on their own

We have built a Spring Boot messaging service that listens to a JMS queue hosted on a TIBCO EMS (Enterprise Messaging Service) Server. It is a fairly straightforward application that receives a JMS message, does some data manipulation and updates a database.
The issue is that, occasionally, there are no JMS consumers on the queue, and incoming messages are not processed. However, the Spring Boot app is up and running (verified by ps -ef). Restarting the app restores the consumer, but unfortunately that is not a feasible solution in production.
Other facts of interest:
We have observed this to happen when the JMS server accepts SSL traffic and is deployed as a Fault Tolerant pair (although this is not yet a conclusive observation)
There is absolutely no indication in the log (like an error) when the consumer goes down.
We are using Spring-JMS (4.1.0) and TIBCO EMS (8.3.0)
Code Snippet of instantiating a DefaultJmsListenerContainerFactory:
@Bean
public DefaultJmsListenerContainerFactory listenerJmsContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    TibjmsQueueConnectionFactory cf = new TibjmsQueueConnectionFactory("tcp://localhost:7222");
    cf.setUserName("admin");
    cf.setUserPassword("");
    factory.setConnectionFactory(cf);
    return factory;
}
The JMS Listener:
@JmsListener(destination = "queue.sample", containerFactory = "listenerJmsContainerFactory")
public void listen(TextMessage message, Session session) throws JMSException {
    System.out.println("Received Message: " + message.getJMSMessageID());
    System.out.println("Acknowledgement Mode: " + session.getAcknowledgeMode());
    // Some more application specific stuff
}
While we are trying to setup additional logging on both the Spring Boot and TIBCO side, we would like to check some points like:
Can there be a situation where a consumer that is idle for more than a certain time automatically expires?
Is this something governed by DMLC settings like idleConsumerLimit, idleTaskExecutionLimit, etc.?
Can these properties be viewed in the Spring Boot code mentioned above? In the code above, the JMS listener is created under the hood by the DefaultJmsListenerContainerFactory, so how can we access the DMLC object in order to invoke methods like getIdleConsumerLimit(), getIdleTaskExecutionLimit(), etc.?
Thanks for the inputs,
Prabal
Most likely, something in the network (router, firewall etc) is silently dropping idle connections.
While not part of the JMS spec, most vendors implement some kind of heartbeat mechanism so that the client/server exchange pings from time to time, to prevent such actions by network components and/or to detect such conditions.
Look at the TIBCO documentation to figure out how to configure heartbeats (they might call it something else).
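As for inspecting the container at runtime: one way (a sketch, assuming you give the listener an id such as "sampleListener" in the @JmsListener annotation) is to look it up through Spring's JmsListenerEndpointRegistry and cast it to the DMLC:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ContainerInspector {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    // Assumes the listener is declared with @JmsListener(id = "sampleListener", ...)
    public void printContainerSettings() {
        DefaultMessageListenerContainer dmlc =
                (DefaultMessageListenerContainer) registry.getListenerContainer("sampleListener");
        System.out.println("idleConsumerLimit: " + dmlc.getIdleConsumerLimit());
        System.out.println("idleTaskExecutionLimit: " + dmlc.getIdleTaskExecutionLimit());
        System.out.println("running: " + dmlc.isRunning());
    }
}
```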

JMS - One ConnectionFactory per Queue or one ConnectionFactory for all of the queues

I am using WebSphere with ActiveMQ and ActiveMQ's JCA adapter. In our application, there are a lot of queues for different functionalities. So can you tell me: should I create one ConnectionFactory for each queue (functionality), or only one ConnectionFactory for the whole application, shared across the queues? And why?
Thanks in advance.
It really depends on your requirements. This is not specific to ActiveMQ, but to queuing in general. You usually create separate connection factories when you have:
different host/port for different queues
different security credentials to connect
a need for different connection pools
So for example, if you want to ensure that at least n connections are available for certain queues, you can create a separate connection factory for them. With a single connection factory, in some extreme cases, when most of your application load is, let's say, on functionalityA queues, you may not have enough connections for your functionalityB queues, and that functionality may be starved.
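A sketch of the separate-pool approach using ActiveMQ 5.x's pooling library (the broker URL, pool sizes, and class names are illustrative):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PerFunctionalityPools {

    // Each functionality gets its own pool, so heavy load on A
    // cannot starve B of connections.
    public PooledConnectionFactory poolFor(String brokerUrl, int maxConnections) {
        PooledConnectionFactory pool = new PooledConnectionFactory();
        pool.setConnectionFactory(new ActiveMQConnectionFactory(brokerUrl));
        pool.setMaxConnections(maxConnections);
        return pool;
    }

    public void wire() {
        PooledConnectionFactory functionalityA = poolFor("tcp://broker:61616", 10);
        PooledConnectionFactory functionalityB = poolFor("tcp://broker:61616", 5);
        // hand functionalityA / functionalityB to the respective listeners/templates
    }
}
```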

How can I change the ActiveMQ persistence adapter without data loss?

I need to change the ActiveMQ persistence adapter from AMQ to KahaDB, but it's not acceptable to lose all undelivered messages stored via the AMQAdapter. Is there any way to automatically use the old adapter to send the undelivered messages and then switch to the KahaDB store?
The general solution is to create a new broker instance and network the old broker and the new one. Once the brokers are networked, you create a consumer on the new broker for the destinations and allow the demand to drain the messages from the old broker to the new one.
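The networked-broker drain could be sketched with an embedded BrokerService as below (host names and the data directory are assumptions; in practice you would typically configure the equivalent networkConnector in the new broker's activemq.xml):

```java
import org.apache.activemq.broker.BrokerService;

public class DrainBroker {

    // New broker using the default KahaDB store, networked to the old AMQ-store broker.
    public BrokerService newBroker() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("new-kahadb-broker");
        broker.setDataDirectory("target/new-broker-data"); // KahaDB is the default store
        broker.addConnector("tcp://localhost:61617");
        // Network to the old broker: consumer demand on the new broker
        // makes the old broker forward its stored messages here.
        broker.addNetworkConnector("static:(tcp://old-broker-host:61616)");
        return broker;
    }
}
```

Once consumers attach to the new broker's destinations, the network bridge forwards the old broker's stored messages, after which the old broker can be retired.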
Refer to this thread for more: