ActiveMQ rebalanceClusterClients not working with Spring Boot JMS

We are using the Spring JmsTemplate implementation with a CachingConnectionFactory. We have configured the connection with a failover URL:
failover:(ssl://172.16.0.11:61616,ssl://172.16.0.12:61616)?maxReconnectDelay=2000
On the transport connector in ActiveMQ we have enabled the option "rebalanceClusterClients":
<transportConnector name="openwire" uri="ssl://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" rebalanceClusterClients="true">
  <publishedAddressPolicy>
    <publishedAddressPolicy publishedHostStrategy="IPADDRESS" />
  </publishedAddressPolicy>
</transportConnector>
However, all of the clients are connecting to the first broker in the list of brokers instead of some of them being rebalanced to the second broker.
Previously we did not use the Spring JMS implementation, but instead we used the ActiveMQ libraries directly. This implementation did allow rebalancing the connected clients.
Is something in Spring preventing the rebalancing? Perhaps the CachingConnectionFactory?
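For reference, the client-side wiring looks roughly like the following minimal sketch; the bean layout, session cache size, and SSL trust-store details are assumptions, not our exact code:
import org.apache.activemq.ActiveMQSslConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsClientConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory() throws Exception {
        // ActiveMQ 5.x client factory with the failover URL from above
        ActiveMQSslConnectionFactory amqCf = new ActiveMQSslConnectionFactory(
                "failover:(ssl://172.16.0.11:61616,ssl://172.16.0.12:61616)?maxReconnectDelay=2000");
        amqCf.setTrustStore("client.ts");        // assumption: path to the trust store
        amqCf.setTrustStorePassword("changeit"); // assumption
        CachingConnectionFactory caching = new CachingConnectionFactory(amqCf);
        caching.setSessionCacheSize(10);
        return caching;
    }

    @Bean
    public JmsTemplate jmsTemplate(CachingConnectionFactory cachingConnectionFactory) {
        return new JmsTemplate(cachingConnectionFactory);
    }
}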
EDIT 2019-07-10
I've found these two (p1 and p2) posts on SO which state that CachingConnectionFactory doesn't play nicely with the failover protocol. However, I think this has been resolved since then, as we do see connections moving between brokers when a broker is shut down.
What we do not see is connections being balanced across brokers. We did see this behavior when we were still using our own custom JMS implementation. So could it be something in Spring or the JmsTemplate?

The actual problem was neither ActiveMQ nor Spring; an external firewall was preventing this from working.

Related

ActiveMQ 5 redelivery policies ignored with AMQP

I have a Spring Boot application using Qpid JMS to speak AMQP with an ActiveMQ 5.15.14 broker. Even though the redelivery plugin is configured, the redelivery policies of the broker are ignored. However, the redelivery policy of the client (Qpid) does come into play.
When the exact same code and client configuration are connected to an ActiveMQ Artemis broker, the redelivery policy of the broker kicks in, which is what I'm looking for.
Is there anything you are aware of that could explain this difference in behavior between ActiveMQ 5 and ActiveMQ Artemis? Both brokers use pretty much OOTB configuration aside from the redelivery policies, and schedulerSupport is enabled in my ActiveMQ 5 broker as well. Here's what the redelivery configuration looks like in activemq.xml:
<redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
  <redeliveryPolicyMap>
    <redeliveryPolicyMap>
      <defaultEntry>
        <redeliveryPolicy initialRedeliveryDelay="5000" maximumRedeliveries="9" redeliveryDelay="60000" />
      </defaultEntry>
    </redeliveryPolicyMap>
  </redeliveryPolicyMap>
</redeliveryPlugin>
One more thing to consider: the redelivery policies of the ActiveMQ 5 broker are applied when I use OpenWire (JMS) instead of AMQP.
The AMQP protocol head in ActiveMQ 5.x is far more primitive than the one in the Artemis broker and is likely not reacting correctly to the dispositions that are being sent back from the AMQP client. The 5.x broker can also behave differently based on the transformer setting on the transportConnector, which can be one of JMS, NATIVE, or RAW. The JMS transformer gives the most ActiveMQ-compatible behaviour, but it requires a complete internal transformation to an OpenWire message and back to AMQP when going from an AMQP sender to an AMQP receiver, which can hurt performance significantly. The NATIVE transformation attempts to preserve some insight into the redelivered state of the message but most likely won't catch every case. With RAW mode there is no insight into the message delivery count at all, so you definitely won't get any redelivery processing on the broker side.
In short, if you are looking for a fully functional AMQP broker, choose Artemis, as its AMQP support has had a lot of work put into it; if you just need something that can get messages flowing, 5.x should work, but don't expect the same quality of service.

Performance issues with ActiveMQ Artemis and Spring JmsTemplate

While doing some load tests with the ActiveMQ Artemis broker and my Spring Boot application I am getting into performance issues.
What I am doing is sending, e.g., 12,000 messages per second to the broker with JMSeter, and the application receives them and saves them to a DB. That works fine. But when I extend the application with a filter mechanism that forwards events back to the broker after saving them to the DB, using jmsTemplate.send(destination, messageCreator), it becomes very slow.
I first used ActiveMQ 5.x, and there this mechanism worked fine. There you can configure the ActiveMQConnectionFactory with setAsyncSend(true) to tune performance. The ActiveMQ Artemis ConnectionFactory implementation offers no such option. Is there another way to tune performance, as in ActiveMQ 5.x?
I am using Apache ActiveMQ Artemis 2.16.0 (but also tried 2.15.0), artemis-jms-client 2.6.4, and Spring Boot 1.5.16.RELEASE.
The first thing to note is that you need to be very careful when using Spring's JmsTemplate to send messages, as it employs a well-known anti-pattern that can really kill performance: it creates a new JMS connection, session, and producer for every message it sends. I recommend you use a connection pool like this one, which is based on the ActiveMQ 5.x connection pool implementation but now supports JMS 2. For additional details about the danger of using JmsTemplate, see the ActiveMQ documentation. This is also discussed in an article from Pivotal (i.e. the "owners" of Spring).
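A minimal sketch of that pooling using the org.messaginghub:pooled-jms library (presumably the pool referred to above, since it is based on the ActiveMQ 5.x pool and supports JMS 2); the broker URL, pool size, and bean names are assumptions:
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class PooledJmsConfig {

    @Bean
    public ConnectionFactory pooledConnectionFactory() {
        // The broker's own factory; host and port are placeholders.
        ConnectionFactory artemisCf =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // Wrap it in a pool so JmsTemplate reuses connections, sessions, and producers.
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(artemisCf);
        pool.setMaxConnections(10); // size to your workload
        return pool;
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory pooledConnectionFactory) {
        return new JmsTemplate(pooledConnectionFactory);
    }
}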
The second point is that you can control whether persistent JMS messages are sent synchronously or not using the blockOnDurableSend URL property, e.g.:
tcp://localhost:61616?blockOnDurableSend=false
This will ensure that persistent JMS messages are sent asynchronously. This is discussed further in the ActiveMQ Artemis documentation.
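Plugged into a connection factory, that could look like the following sketch (host and port are placeholders):
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AsyncSendFactory {

    // Durable (persistent) sends become asynchronous with blockOnDurableSend=false.
    public static ConnectionFactory create() {
        return new ActiveMQConnectionFactory(
                "tcp://localhost:61616?blockOnDurableSend=false");
        // I believe the Artemis factory also exposes the same option as a setter
        // (setBlockOnDurableSend(false)) if you prefer configuring it in code.
    }
}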

Avoid multiple listens to ActiveMQ topic with Spring Boot microservice instances

We have configured our ActiveMQ message broker as a Spring Boot project, and there is another Spring Boot application (let's call it service-A) that has a listener configured to listen to some topics using the @JmsListener annotation. It is a Spring Cloud microservice application.
The problem:
It is possible that service-A can have multiple instances running.
If we have 2 instances running, then any message arriving on the topic gets consumed twice.
How can we avoid every instance listening to the topic?
We want to make sure that the topic is consumed only once no matter the number of service-A instances.
Is it possible to run the microservice in a cluster mode or something similar? I also checked out ActiveMQ virtual destinations, but I'm not sure whether that's the solution to the problem.
We have also thought of an approach where we can decide who's the leader node from the multiple instances, but that's the last resort and we are looking for a cleaner approach.
Any useful pointers, references are welcome.
What you really want is a shared topic subscription which was added in JMS 2. Unfortunately ActiveMQ 5.x doesn't support JMS 2. However, ActiveMQ Artemis does.
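A rough sketch of what a shared subscription looks like with Spring's @JmsListener, assuming a JMS 2 capable ConnectionFactory (e.g. the Artemis one) is available; the destination name, subscription name, and durability choice are assumptions:
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
@EnableJms
class SharedTopicConfig {

    @Bean
    public DefaultJmsListenerContainerFactory sharedTopicFactory(ConnectionFactory cf) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setPubSubDomain(true);        // topic, not queue
        factory.setSubscriptionShared(true);  // JMS 2 shared subscription
        factory.setSubscriptionDurable(true); // optional: survive restarts
        return factory;
    }
}

@Component
class EventListener {

    // Every service-A instance uses the same subscription name, so the broker
    // delivers each message to only one of the instances.
    @JmsListener(destination = "events.topic",
                 containerFactory = "sharedTopicFactory",
                 subscription = "service-A")
    public void onEvent(String body) {
        // handle the event exactly once across all instances
    }
}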
ActiveMQ Artemis is the next generation broker from ActiveMQ. It supports most of the same features as ActiveMQ 5.x (including full support for OpenWire clients) as well as many other features that 5.x doesn't support (e.g. JMS 2, shared-nothing high-availability using replication, last-value queues, ring queues, metrics plugins for integration with tools like Prometheus, duplicate message detection, etc.). Furthermore, ActiveMQ Artemis is built on a high-performance, non-blocking core which means scalability is much better as well.

How to bridge between IBM MQSeries and ActiveMQ Artemis 7.x?

Has anyone succeeded in creating a bridge between IBM MQSeries (MQS) and ActiveMQ Artemis 7.x (AMQ 7) so that the latter can send messages to and receive messages from the former? Currently I have no problem bridging between MQS 7.5 and AMQ 6.3 by deploying a Camel route and the MQS libraries on the broker itself. However, the same approach no longer works, as each route deployment requires a broker reconfiguration and restart.
Thanks in advance for any feedback.
A few examples ship with ActiveMQ Artemis which might be helpful:
The "inter-broker-bridge" example in the examples/features/sub-modules/ directory. This example demonstrates how to deploy an instance of org.apache.activemq.artemis.jms.bridge.impl.JMSBridgeImpl to the broker using Spring in a web application.
The "camel" example in the examples/features/standard/ directory. This example demonstrates how to deploy a Camel route to the broker using Spring in a web application.
I can't speak to whether or not either of these can be updated at runtime as I've not actually attempted that. Both of these options should be able to move messages in either direction (i.e. from Artemis to MQS or from MQS to Artemis).
Another option would simply be to run Camel standalone and deploy your routes there. This would give you more flexibility as it would allow you to specifically choose the hardware where the routes run as well as how many resources the Camel JVM consumes. Running Camel routes directly on the broker, while convenient, isn't a great fit because the broker is a broker and not an application server.
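A rough sketch of that standalone-Camel approach; the IBM MQ class names (from the IBM MQ allclient JAR), hosts, channel, and queue names are assumptions to adapt to your environment:
import javax.jms.ConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class MqToArtemisBridge {

    public static void main(String[] args) throws Exception {
        // IBM MQ side (client mode); all values are placeholders.
        MQConnectionFactory mqCf = new MQConnectionFactory();
        mqCf.setHostName("mqs-host");
        mqCf.setPort(1414);
        mqCf.setQueueManager("QM1");
        mqCf.setChannel("DEV.APP.SVRCONN");
        mqCf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        // ActiveMQ Artemis side.
        ConnectionFactory artemisCf =
                new ActiveMQConnectionFactory("tcp://artemis-host:61616");

        DefaultCamelContext camel = new DefaultCamelContext();
        camel.addComponent("mqseries", JmsComponent.jmsComponentAutoAcknowledge(mqCf));
        camel.addComponent("artemis", JmsComponent.jmsComponentAutoAcknowledge(artemisCf));

        camel.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // MQS -> Artemis; add the reverse route the same way if needed.
                from("mqseries:queue:FROM.MQS").to("artemis:queue:TO.ARTEMIS");
            }
        });

        camel.start();
        Thread.sleep(Long.MAX_VALUE); // keep the standalone JVM running
    }
}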
To be clear, ActiveMQ Artemis and IBM MQSeries are not directly compatible with each other and are not expected to be. This is true for most (if not all) JMS broker implementations. The role of components like the ActiveMQ Artemis JMS bridge and of integration platforms like Camel is to solve the compatibility problem by using a common API to speak to both brokers - JMS in this case. Any broker which implements JMS can be integrated using these methods.

Spring's DefaultMessageListenerContainer to use connection factory directly from broker or JCA managed?

I have a Spring Integration application, where I have a JMS inbound channel adapter that will receive messages from a queue in a remote JMS broker. I'm looking up the connection factory directly from the broker's remote JNDI service and this is what I use to set up my inbound channel adapter. I understand that behind the scenes there is a DefaultMessageListenerContainer. According to AbstractMessageListenerContainer javadocs, found here, if "sessionTransacted" is set to "true" for the DMLC, local JMS transactions will be used for the delivery of messages from the broker.
I'm not interested in having the receipt of messages be part of externally managed transactions.
Now, if the JMS broker provides a resource adapter, there may be a JCA-managed connection factory (possibly capable of participating in JTA transactions) configured in the JBoss application server where my Spring Integration application runs, packaged as a WAR. I could use this instead of looking the factory up directly from the broker's JNDI as described above. Since I am not interested in global transactions, I don't see the value of using this JCA connection factory; moreover, I don't know whether the connection/session caching that JCA does might clash with the DMLC caching strategy. Additionally, I'm not sure whether adding a JCA layer to wrap the original connection factory will impact performance.
Which is the right approach, then: take the connection factory directly from the broker's JNDI and let the DMLC handle everything, or take a JCA-managed connection? There seems to be a not very well documented school of thought holding that JCA is safer and more robust, but if I'm not using EJBs I don't see how that can be true.
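For illustration, the "straight from the broker's JNDI" variant could be wired roughly like this; the initial context factory, provider URL, and JNDI/queue names are placeholders for whatever the broker vendor documents:
import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jndi.JndiTemplate;

public class DirectDmlcSetup {

    // Declare the returned container as a Spring bean so its lifecycle is managed.
    public static DefaultMessageListenerContainer listenerContainer() throws Exception {
        // Look the ConnectionFactory up from the broker's remote JNDI service.
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.example.YourBrokerInitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://broker-host:4447");
        ConnectionFactory cf = new JndiTemplate(env)
                .lookup("jms/RemoteConnectionFactory", ConnectionFactory.class);

        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(cf);
        dmlc.setDestinationName("inbound.queue");  // hypothetical queue name
        dmlc.setSessionTransacted(true);           // local JMS transactions, no JTA/JCA
        dmlc.setConcurrency("3-10");
        dmlc.setMessageListener((javax.jms.MessageListener) message -> {
            // hand the message off to the Spring Integration flow / business logic
        });
        return dmlc;
    }
}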
