Prevent use of CachingConnectionFactory with DefaultJmsListenerContainerFactory - spring

I am working on a brand new project in which I need listeners that will consume messages from several queues (no need for a producer for now).
Starting from scratch, I am using the latest Spring JMS version (4.1.2).
Here is an extract of my configuration file:
<bean id="cachedConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory"
      p:targetConnectionFactory-ref="jmsConnectionFactory"
      p:sessionCacheSize="3" />

<bean id="jmsListenerContainerFactory"
      class="org.springframework.jms.config.DefaultJmsListenerContainerFactory"
      p:connectionFactory-ref="cachedConnectionFactory"
      p:destinationResolver-ref="jndiDestinationResolver"
      p:concurrency="3-5"
      p:receiveTimeout="5000" />
But I think I may be wrong, since DefaultJmsListenerContainerFactory builds regular DefaultMessageListenerContainer instances, and, as stated in the docs, a CachingConnectionFactory should not be used with a message listener container...
Even though I am using the new Spring 4.1 DefaultJmsListenerContainerFactory class, the answer from that post is still valid (cacheConsumers = true can be an issue, and there is no need to cache sessions for listener containers because the sessions are long-lived), right?
Instead of the CachingConnectionFactory, should I use the SingleConnectionFactory (and not the broker's implementation directly)?
If the SingleConnectionFactory class should indeed be used, should the reconnectOnException property be set to true (as it is in the CachingConnectionFactory), or does the new setBackOff method (on DefaultJmsListenerContainerFactory) deal with the same kind of issues?
Thanks for any tips

Correct.
There's not really much benefit in using a SingleConnectionFactory unless you want to share a single connection across multiple containers; the DMLC will use a single connection from the vendor factory by default for all consumer threads (cacheLevel >= CACHE_CONNECTION), unless a TransactionManager is configured.
The container(s) will handle reconnection - even before the 'new' backOff property - backOff just adds more sophistication to the reconnection algorithm - it used to just retry every n seconds (5 by default).
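For illustration, a minimal sketch of what setting that backOff property can look like, assuming an exponential policy is wanted (the intervals below are arbitrary examples, not values from the question or the original answer):

import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.util.backoff.ExponentialBackOff;

DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();

// Retry the connection after 1 s, doubling the interval on each failed attempt,
// capped at 30 s between attempts (instead of the old fixed 5 s retry).
ExponentialBackOff backOff = new ExponentialBackOff(1000, 2.0);
backOff.setMaxInterval(30000);
factory.setBackOff(backOff);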
As stated in the answer you cited, it's ok to use a CCF as long as you disable consumer caching.
Correction: Yes, when using the SingleConnectionFactory, you do need to set reconnectOnException to true in order for the container to properly recover its connection. Otherwise, it simply hands out the stale connection.
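To make both options concrete, here is a minimal Java-config sketch (inside a @Configuration class), assuming a vendorConnectionFactory bean for the broker's own factory; the bean names are hypothetical, not from the question:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.connection.SingleConnectionFactory;

// Option 1: SingleConnectionFactory with reconnectOnException=true, so the
// container is handed a fresh connection after a broker failure.
@Bean
public SingleConnectionFactory singleConnectionFactory(ConnectionFactory vendorConnectionFactory) {
    SingleConnectionFactory scf = new SingleConnectionFactory(vendorConnectionFactory);
    scf.setReconnectOnException(true);
    return scf;
}

// Option 2: keep the CachingConnectionFactory but disable consumer caching,
// which is the part that can cause trouble with listener containers.
@Bean
public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory vendorConnectionFactory) {
    CachingConnectionFactory ccf = new CachingConnectionFactory(vendorConnectionFactory);
    ccf.setSessionCacheSize(3);
    ccf.setCacheConsumers(false);
    return ccf;
}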

Related

SourcePollingChannelAdapter: can the start of polling be delayed by an arbitrary interval after application start?

Versions:
Spring: 5.2.16.RELEASE
Spring Integration: 5.3.9.RELEASE
macOS Big Sur: 11.6
For a fuller account of my spring-integration configuration, see this question I posted yesterday.
To sum up, I have set up this channel for polling changes to a directory:
<int-file:inbound-channel-adapter id="channelIn" directory="${channel.dir}"
        auto-create-directory="false" use-watch-service="false"
        filter="channelFilter" watch-events="CREATE,MODIFY">
    <int-file:nio-locker ref="channelLocker"/>
    <int:poller fixed-delay="${channel.polling.delay}" max-messages-per-poll="${channel.polling.maxmsgs}"/>
</int-file:inbound-channel-adapter>
It works fine. However, there does not appear to be a configuration option to delay the start of polling by some arbitrary interval after application start. In my case, I don't think there is any program error (yet) in starting the polling service immediately after the Tomcat container starts my war file. But it is also true that there is quite a bit going on during application start, and my preference would be to defer the start of the polling service until some time after the bean for SourcePollingChannelAdapter is created.
Is there any way to do this in Spring?
There are (at least) a couple of options:
Instead of fixed-delay, use the trigger property to point to a PeriodicTrigger bean with an initialDelay (and fixedDelay); see the sketch after this list.
Set auto-startup="false" and start the adapter manually either directly, or using a control bus.
https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#control-bus
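A minimal sketch of the first option, assuming the poll delay is 5000 ms and a hypothetical 60-second startup delay is wanted (the bean name channelTrigger is made up):

import org.springframework.context.annotation.Bean;
import org.springframework.scheduling.support.PeriodicTrigger;

@Bean
public PeriodicTrigger channelTrigger() {
    PeriodicTrigger trigger = new PeriodicTrigger(5000); // delay between polls, in ms
    trigger.setFixedRate(false);                         // explicit: keep fixed-delay semantics (the default)
    trigger.setInitialDelay(60000);                      // wait 60 s before the first poll
    return trigger;
}

The poller would then reference the bean via its trigger attribute, e.g. <int:poller trigger="channelTrigger" max-messages-per-poll="${channel.polling.maxmsgs}"/>, in place of fixed-delay.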

DefaultJmsListenerContainer using BeanFactoryPostProcessor

I am currently trying to support a dynamic multiple-JMS-provider scenario in my application. So far I have managed to create a DefaultMessageListenerContainer using a post processor. The nice part is that DefaultMessageListenerContainer has a destinationName property where you can easily set the queue to listen on (or send to).
However, DefaultJmsListenerContainerFactory has no such method to set the queue name. I got as far as the SimpleJmsListenerEndpoint that DefaultJmsListenerContainerFactory uses to initiate the container, but I am unable to find out how to set it there. Please see below what I did so far.
beanDefinitionRegistry.registerBeanDefinition("messageListenerContainer",
        BeanDefinitionBuilder.rootBeanDefinition(DefaultJmsListenerContainerFactory.class)
                .addPropertyReference("connectionFactory", "queueConnectionFactory")
                .addPropertyReference("destinationResolver", "jndiDestinationResolver")
                .addPropertyValue("concurrency", concurrency)
                .addPropertyValue("sessionAcknowledgeMode", Session.AUTO_ACKNOWLEDGE)
                .getBeanDefinition()
);
But as you can see, I cannot set the queue endpoint for listening. How can I do that from here?
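For reference, the destination normally lives on the endpoint rather than on the factory; a minimal sketch of registering it programmatically, assuming a JmsListenerConfigurer fits the setup (the class name, endpoint id, and queue name below are hypothetical):

import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListenerConfigurer;
import org.springframework.jms.config.JmsListenerEndpointRegistrar;
import org.springframework.jms.config.SimpleJmsListenerEndpoint;

@Configuration
@EnableJms
public class DynamicJmsConfig implements JmsListenerConfigurer {

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        // Use the factory registered above by the post processor.
        registrar.setContainerFactoryBeanName("messageListenerContainer");

        SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
        endpoint.setId("dynamicEndpoint");
        endpoint.setDestination("MY.QUEUE");        // the queue name goes here
        endpoint.setMessageListener(message -> {
            // handle the incoming message
        });
        registrar.registerEndpoint(endpoint);
    }
}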

Is it possible to automatically create queues without RabbitAutoConfiguration.class? AMQP

I'm using version 2.1.0.RELEASE of Spring Boot with AMQP. Unfortunately, I need to connect to several different RabbitMQ servers. I had to exclude RabbitAutoConfiguration.class because, due to changes in the above version of Spring, it's impossible to start without one of the ConnectionFactory beans being primary. But even if I set one of them as @Primary, it obviously doesn't work, because how would amqp/spring-boot know which queue to create on which server...
So, is it possible to automatically create queues on different servers with auto-configuration disabled?
Yes, you need a RabbitAdmin for each connection factory.
By default all components will be declared on all brokers, but you can add conditions. See Conditional Declaration.
By default, all queues, exchanges, and bindings are declared by all RabbitAdmin instances (assuming they have auto-startup="true") in the application context.
@Bean
public Queue queue1() {
    Queue queue = new Queue("foo");
    queue.setAdminsThatShouldDeclare(admin1());
    return queue;
}
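To make the "one RabbitAdmin per connection factory" part concrete, a minimal sketch assuming two brokers (the bean names and hosts are hypothetical):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;

// One connection factory per broker.
@Bean
public CachingConnectionFactory connectionFactory1() {
    return new CachingConnectionFactory("broker1.example.com");
}

@Bean
public CachingConnectionFactory connectionFactory2() {
    return new CachingConnectionFactory("broker2.example.com");
}

// One RabbitAdmin per connection factory; admin1 is the one referenced by
// queue1() above, so the "foo" queue is declared only on broker1.
@Bean
public RabbitAdmin admin1() {
    return new RabbitAdmin(connectionFactory1());
}

@Bean
public RabbitAdmin admin2() {
    return new RabbitAdmin(connectionFactory2());
}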

Default taskScheduler Bean - Spring Integration 2.2.0 vs 3.0.2 with Spring 3.2.9

I have a standalone application that uses a file inbound channel adapter to poll for a file from a specified location at a certain interval.
I don't have a taskScheduler instance defined.
When running the application with both Spring Integration 2.2.0 and 3.0.2, I see that 10 threads are created with the name task-scheduler-x after a certain amount of time. I believe this is the default behavior.
When I removed the file inbound channel adapter configuration from my application and re-ran it, I noticed the following behavior:
In 3.0.2, 10 threads are created with the name task-scheduler-x.
In 2.2.0, although a taskScheduler instance is created (I can see the message about the bean creation in the logs), I don't see any threads created with the name task-scheduler-x.
Why is this behavior different between these two versions? What should I do if I don't want to create a taskScheduler instance or I don't want to create any threads for task scheduling?
Thanks for the help.
The framework now has a built-in component (header channel registry) that uses the taskScheduler.
It doesn't really use many resources, although it does have the side effect of instantiating the scheduler thread pool.
We'll look at adding an option to disable it if you don't need/use it. In the meantime, you can revert to the pre-3.0 behavior by adding this bean to your context:
<bean id="integrationHeaderChannelRegistry" class="org.springframework.integration.channel.DefaultHeaderChannelRegistry">
    <property name="autoStartup" value="false" />
</bean>
I opened a JIRA Issue for this.

XA transactions and message bus

In our new project we would like to achieve transactions that involve JPA (MySQL) and a message bus (RabbitMQ).
We started building our infrastructure with Spring Data, using MySQL and RabbitMQ (via the Spring AMQP module). Since RabbitMQ is not XA-transactional, we configured the Spring Data ChainedTransactionManager (originally from the Neo4j module) as our main transaction manager. This manager takes as arguments the JPA transaction manager and the RabbitTransactionManager.
Now, I do get the ability to annotate a service with @Transactional and use both JPA and Rabbit inside it. If I throw an exception within the service, then none of the actions actually occur.
Here are my questions:
Does this configuration really give me an atomic transaction?
I've heard that the chained transaction manager does not use a two-phase commit but a "best effort" approach; is this best effort less reliable? If so, how?
What the ChainedTransactionManager basically does is start the transactions in the configured order and commit them in reverse order. So say you have a JpaTransactionManager and a RabbitTransactionManager, and you configure it like so:
@Bean
public PlatformTransactionManager transactionManager() {
    return new ChainedTransactionManager(rabbitTransactionManager(), jpaTransactionManager());
}
Now, if the JPA commit succeeds but your commit to RabbitMQ fails, your database changes will still be persisted, as those have already been committed.
To answer your first question: it doesn't give you a real atomic transaction; everything that was committed prior to the occurrence of the exception (on commit) will remain committed.
See http://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/transaction/ChainedTransactionManager.html
