Is there an equivalent to the ActiveMQ 5 PooledConnectionFactory for Artemis? Why is it available in one and not the other?
Spring, for example, offers a generic CachingConnectionFactory. This is great, but it extends SingleConnectionFactory and only "pools" one connection.
It would be valuable to have a similar mechanism in the Artemis client that actually pooled more than one connection.
Another thought is that maybe it's not implemented because a single connection supports concurrent sessions! I haven't tested the performance of using a new connection per session. Is the performance the same or similar?
The PooledConnectionFactory in the ActiveMQ 5.x code-base is generic and can actually be used with ActiveMQ Artemis so there was no reason to port it into the Artemis code-base. That said, the JMS connection pool implementation has been pulled out of the ActiveMQ 5.x code-base, cleaned up, modified to support JMS 2, and made available here.
I'm not clear on what you mean by "concurrent sessions." Do you mean that the connection supports concurrently creating sessions or that the sessions themselves support concurrent use? The former is supported, but the latter is not.
In terms of performance, you'd have to benchmark your specific use-case. There are too many variables to simply say one is better than the other.
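For what it's worth, a minimal sketch of wrapping the Artemis ConnectionFactory in that pooled JMS library (assuming the org.messaginghub:pooled-jms artifact; the broker URL and pool size are just examples) might look like this:

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

    public class PooledArtemisFactory {
        public static JmsPoolConnectionFactory create() {
            // The underlying Artemis JMS connection factory (example broker URL).
            ConnectionFactory artemisCf = new ActiveMQConnectionFactory("tcp://localhost:61616");

            // Wrap it in the generic JMS pool so connections, sessions, and producers are reused.
            JmsPoolConnectionFactory pooledCf = new JmsPoolConnectionFactory();
            pooledCf.setConnectionFactory(artemisCf);
            pooledCf.setMaxConnections(10); // pool more than one physical connection
            return pooledCf;
        }
    }

Call stop() on the pooled factory at application shutdown to close the pooled connections.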
While doing some load tests with the ActiveMQ Artemis broker and my Spring Boot application, I am running into performance issues.
What I am doing is sending e.g. 12,000 messages per second to the broker with JMSeter, and the application receives them and saves them to a DB. That works fine. But when I extend my application with a filter mechanism that, after saving to the DB, forwards events back to the broker using jmsTemplate.send(destination, messageCreator), it becomes very slow.
I first used ActiveMQ 5.x, and there this mechanism worked fine. There you could configure the ActiveMQConnectionFactory with setAsyncSend(true) to tune performance. For the ActiveMQ Artemis ConnectionFactory implementation there is no such option. Is there another way to tune performance like in ActiveMQ 5.x?
I am using Apache ActiveMQ Artemis 2.16.0 (but also tried 2.15.0), artemis-jms-client 2.6.4, and Spring Boot 1.5.16.RELEASE.
The first thing to note is that you need to be very careful when using Spring's JmsTemplate to send messages as it employs a well-known anti-pattern that can really kill performance. It will actually create a new JMS connection, session, and producer for every message it sends. I recommend you use a connection pool like this one which is based on the ActiveMQ 5.x connection pool implementation but now supports JMS 2. For additional details about the danger of using JmsTemplate see the ActiveMQ documentation. This is also discussed in an article from Pivotal (i.e. the "owners" of Spring).
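As a rough sketch (assuming the pooled-jms library and Spring's Java configuration; the broker URL, pool size, and bean names are illustrative, not prescriptive), wiring JmsTemplate to a pooled factory could look like this:

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.core.JmsTemplate;

    @Configuration
    public class JmsConfig {

        // Pooled factory so JmsTemplate reuses connections, sessions, and producers
        // instead of creating new ones for every send.
        @Bean(destroyMethod = "stop")
        public ConnectionFactory pooledConnectionFactory() {
            JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
            pool.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
            pool.setMaxConnections(10);
            return pool;
        }

        @Bean
        public JmsTemplate jmsTemplate(ConnectionFactory pooledConnectionFactory) {
            return new JmsTemplate(pooledConnectionFactory);
        }
    }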
The second point here is that you can control whether persistent JMS messages are sent synchronously or asynchronously using the blockOnDurableSend URL property, e.g.:
tcp://localhost:61616?blockOnDurableSend=false
This will ensure that persistent JMS messages are sent asynchronously. This is discussed further in the ActiveMQ Artemis documentation.
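Concretely, a small sketch of applying that property when constructing the Artemis connection factory (the broker URL is an example) could look like this:

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class AsyncSendFactory {
        public static ActiveMQConnectionFactory create() {
            // Persistent (durable) sends will not block waiting for the broker's acknowledgement.
            ActiveMQConnectionFactory cf =
                    new ActiveMQConnectionFactory("tcp://localhost:61616?blockOnDurableSend=false");
            // The equivalent setting applied programmatically:
            cf.setBlockOnDurableSend(false);
            return cf;
        }
    }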
Is there any difference between a CachingConnectionFactory configured with a session cache size bigger than 1 and a PoolingConnectionFactory?
I have seen both in various projects and I would like to understand the rationale behind choosing one of them.
It really depends on your use case.
The Bitronix factory pools connections and serves up a different connection for each use (and returns it to the pool).
The CachingConnectionFactory uses a single connection and caches sessions, producers, and consumers.
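As a rough illustration of the caching approach (the target factory, broker URL, and cache size below are just assumptions for the sketch):

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.connection.CachingConnectionFactory;

    public class CachingFactoryExample {
        public static ConnectionFactory create() {
            // Single underlying connection; sessions and producers are cached and reused.
            CachingConnectionFactory ccf =
                    new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
            ccf.setSessionCacheSize(10); // cache up to 10 sessions on that one connection
            return ccf;
        }
    }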
It really comes down to one question: do you need XA? If yes, then you have no choice but to go with the PoolingConnectionFactory. If you don't need XA, then don't bother with Bitronix and go with the CachingConnectionFactory.
If you use pluggable XA transaction managers like Bitronix (or Atomikos), use their pool implementations instead of Spring's because they perform additional operations like automatically enlisting resources in XA transactions.
Bitronix pools are:
bitronix.tm.resource.jdbc.PoolingDataSource for JDBC
bitronix.tm.resource.jms.PoolingConnectionFactory for JMS
It's worth taking a look at the Bitronix test cases for examples of how to set up the pools:
https://github.com/bitronix/btm/blob/master/btm/src/test/java/bitronix/tm/mock/JmsPoolTest.java
https://github.com/bitronix/btm/blob/master/btm/src/test/java/bitronix/tm/mock/JdbcPoolTest.java
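For reference, a minimal programmatic setup of the Bitronix JMS pool might look like the sketch below (the vendor XA factory class, unique name, pool sizes, and broker URL are assumptions for illustration):

    import bitronix.tm.resource.jms.PoolingConnectionFactory;

    public class BitronixJmsPool {
        public static PoolingConnectionFactory create() {
            PoolingConnectionFactory pcf = new PoolingConnectionFactory();
            // XA-capable vendor factory class; Bitronix instantiates and manages it.
            pcf.setClassName("org.apache.activemq.ActiveMQXAConnectionFactory");
            pcf.setUniqueName("activemq");   // must be unique per resource
            pcf.setMinPoolSize(1);
            pcf.setMaxPoolSize(10);          // more than one pooled, XA-enlisted connection
            pcf.getDriverProperties().setProperty("brokerURL", "tcp://localhost:61616");
            pcf.init();                      // registers the resource with the transaction manager
            return pcf;
        }
    }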
I am using WebSphere with ActiveMQ and ActiveMQ's JCA adapter. In our application, there are a lot of queues for different functionalities. So can you tell me: should I create one ConnectionFactory for each queue (functionality), or a single ConnectionFactory for the whole application, shared across the queues? And why?
Thanks in advance.
It really depends on what your requirements are. This is not specific to ActiveMQ, but to queuing in general. You would usually create separate connection factories when you:
have a different host/port for different queues
use different security credentials to connect
want to have different connection pools
So, for example, if you want to ensure that at least n connections are available for certain queues, you can create a separate connection factory for them. With a single connection factory, in some extreme cases, when most of your application load is, let's say, on the functionalityA queues, you may not have enough connections left for your functionalityB queues and that functionality may suffer starvation.
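In a WebSphere/JCA setup this sizing is normally done on the configured connection factories in the admin console, but as a plain-JMS sketch of the idea (the names, URL, pool library, and pool sizes below are purely hypothetical):

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

    public class PerFunctionalityFactories {

        // Dedicated pool for the high-volume functionalityA queues.
        public static ConnectionFactory functionalityA() {
            JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
            pool.setConnectionFactory(new ActiveMQConnectionFactory("tcp://broker:61616"));
            pool.setMaxConnections(20);
            return pool;
        }

        // Separate, smaller pool so functionalityB is never starved by functionalityA's load.
        public static ConnectionFactory functionalityB() {
            JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
            pool.setConnectionFactory(new ActiveMQConnectionFactory("tcp://broker:61616"));
            pool.setMaxConnections(5);
            return pool;
        }
    }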
We are creating a new application, which is going to use IBM MQ as the JMS provider in the short term and switch to TIBCO EMS within a year.
My question is how involved the changes would be from the application code perspective.
So far, reading the JMS documentation, my impression is that it should only require minimal changes. Does anyone have experience with this who can provide some input on the work involved in switching between JMS providers?
I've done POCs where I swapped out connection factories and used the WMQ JMS classes to send to various providers (TIBCO, ActiveMQ, etc.) to prove out the interchangeability. I've also done full swaps from one vendor's JMS to another. In theory, it should be very simple.
The biggest change will be with the connection factories. Everything JMS-specific will be the same between providers. The more tightly coupled the code is to the connection factories, the more complex it will be to change the app itself. Outside of this, you may need to change vendor-specific implementations of objects, such as MQQueue vs. Queue.
One additional thing to keep in mind is dependent on the IBM endpoints. If you are using "target queue managers" on any producers, these will need to change. WMQ has a specific URI to reach queues on specific queue managers in a cluster ("queue://target_qm/queue_name/"). Any application that uses this URI will need to make the proper changes here as well.
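To keep the swap limited to configuration, one common approach is to code only against the javax.jms interfaces and resolve the administered objects via JNDI; a hedged sketch (the JNDI names and queue are hypothetical):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class VendorNeutralSend {
        public static void main(String[] args) throws Exception {
            // Resolve administered objects from JNDI instead of constructing
            // vendor classes (MQConnectionFactory, MQQueue, ...) directly.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrdersQueue");

            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(queue).send(session.createTextMessage("hello"));
            } finally {
                connection.close();
                ctx.close();
            }
        }
    }

Switching providers then means changing the JNDI configuration (or the factory definition) rather than the sending code.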
To process a large number of messages coming to a queue, I need a guarantee that at least one JMS connection is available at any time. I am using Spring, and Spring only allows multiple sessions on a single connection. If that one and only connection fails, the application will come to a standstill until Spring reconnects to the JMS bridge.
So how can I create more than one connection to a queue in Spring, and how can I do connection pooling here?
The answer to this depends on whether you are using Spring inside a J2EE container (JBoss, etc.) or in a standalone application.
Standalone: you'll find pooling connections to be a problem. Spring's SingleConnectionFactory can be set up to renew the connection on an exception, guaranteeing that at some point a connection will come back online and start processing the queue again, but you'll still have the problem of waiting for that single connection to renew. Plus, depending on what messaging implementation you're dealing with and how it does load balancing, you may find yourself stuck with a connection to a single node in a cluster.
If you are running in a container, you can rely on the container's connection factory, which will be much more robust. JBoss Messaging in the container, for instance, will fail over seamlessly to other nodes and handles pooling under the covers. But if you're working in a container, it's usually easier to bail on JmsTemplate, which kind of sucks, and use whatever the container provides.
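For the standalone case, one hedged sketch is to put a third-party pool (pooled-jms here, purely as an example) underneath Spring's DefaultMessageListenerContainer, which retries on its own recovery interval and can run several consumers in parallel; the URL, queue name, and sizes are illustrative:

    import javax.jms.MessageListener;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ResilientListener {
        public static DefaultMessageListenerContainer create(MessageListener listener) {
            // Pool more than one physical connection underneath Spring.
            JmsPoolConnectionFactory pooled = new JmsPoolConnectionFactory();
            pooled.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
            pooled.setMaxConnections(5);

            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(pooled);
            container.setDestinationName("inbound.queue");
            container.setConcurrentConsumers(5);  // several sessions/consumers in parallel
            container.setRecoveryInterval(5000);  // keep retrying every 5s after a broker failure
            container.setMessageListener(listener);
            container.afterPropertiesSet();
            container.start();
            return container;
        }
    }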