Closing Sessions in Spring Boot JMS CachingConnectionFactory

I have my JMS configuration like below (Spring Boot 1.3.8):
@Configuration
@EnableJms
public class JmsConfig {

    @Autowired
    private AppProperties properties;

    @Bean
    TopicConnectionFactory topicConnectionFactory() throws JMSException {
        return new TopicConnectionFactory(properties.getBrokerURL(), properties.getBrokerUserName(),
                properties.getBrokerPassword());
    }

    @Bean
    CachingConnectionFactory connectionFactory() throws JMSException {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(topicConnectionFactory());
        connectionFactory.setSessionCacheSize(50);
        return connectionFactory;
    }

    @Bean
    JmsTemplate jmsTemplate() throws JMSException {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory());
        jmsTemplate.setPubSubDomain(Boolean.TRUE);
        return jmsTemplate;
    }

    @Bean
    DefaultJmsListenerContainerFactory defaultContainerFactory() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        factory.setPubSubDomain(Boolean.TRUE);
        factory.setRecoveryInterval(30 * 1000L);
        return factory;
    }
}
This should work fine, but I am worried about what is written in the documentation of CachingConnectionFactory, especially these parts:
NOTE: This ConnectionFactory requires explicit closing of all Sessions obtained from its shared Connection
Note also that MessageConsumers obtained from a cached Session won't get closed until the Session will eventually be removed from the pool. This may lead to semantic side effects in some cases.
I thought the framework handled the closing of the session and connection? If it does not, how should I close them properly? Or maybe I am missing something?
Any help is appreciated :)
FYI: I use SonicMQ as the broker.

Yes, the JmsTemplate will close the session; the javadocs refer to direct use outside of the framework.
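To make the javadoc note concrete, here is a minimal sketch of such direct use (outside JmsTemplate or a listener container), using the connectionFactory bean from the question and a placeholder topic name. The key part is the finally block: with a CachingConnectionFactory, close() returns the Session to the cache rather than physically closing it, but you still have to call it yourself when you bypass the framework.

// Hypothetical direct use of the cached factory; JmsTemplate performs the
// equivalent of this try/finally internally for every send or receive.
Connection connection = connectionFactory.createConnection();
Session session = null;
try {
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    // "SampleTopic" is just a placeholder destination for this sketch.
    MessageProducer producer = session.createProducer(session.createTopic("SampleTopic"));
    producer.send(session.createTextMessage("hello"));
} finally {
    if (session != null) {
        session.close(); // returned to the cache, not physically closed
    }
    connection.close();  // the shared connection is not physically closed either
}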

Related

Spring AMQP stops sending or consuming messages

I have an application which receives a message from one queue, processes it and sends it to another queue. When it is receiving a lot of messages (20 thousand or more), Spring shows me this error when it tries to send a message to the other queue:
connection error; protocol method: #method<connection.close>(reply-code=504 reply-text=CHANNEL_ERROR - second 'channel.open' seen class-id=20 method-id=10)
So I raised the channel cache size and created two CachingConnectionFactory instances, one for the consumer and another for the producer, following this note from the Spring documentation:
When the application is configured with a single CachingConnectionFactory, as it is by default with Spring Boot auto-configuration, the application will stop working when the connection is blocked by the Broker. And when it is blocked by the Broker, any of its clients stop working. If we have producers and consumers in the same application, we may end up with a deadlock when producers are blocking the connection because there are no resources on the Broker anymore and consumers can't free them because the connection is blocked. To mitigate the problem, it is enough to have one more separate CachingConnectionFactory instance with the same options - one for producers and one for consumers. A separate CachingConnectionFactory isn't recommended for transactional producers, since they should reuse a Channel associated with the consumer transactions.
Following these recommendations, the error message disappeared, but now the application suddenly stops: it is not sending or receiving new messages and all queues are idle. It is kind of strange because the listener has a low concurrency setting. What am I missing?
Configuration:
Spring Boot: 2.0.8.RELEASE
Spring AMQP: 2.0.11.RELEASE
RabbitMQ: 3.8.8
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 5
        max-concurrency: 8
    cache:
      channel:
        size: 1000
@Bean
public ConnectionFactory consumerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}

@Bean
public ConnectionFactory producerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        @Qualifier("consumerConnectionFactory") ConnectionFactory consumerConnectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, consumerConnectionFactory);
    return factory;
}

@Bean
@Primary
public RabbitAdmin producerRabbitAdmin() {
    return new RabbitAdmin(producerConnectionFactory());
}

@Bean
public RabbitAdmin consumerRabbitAdmin() {
    return new RabbitAdmin(consumerConnectionFactory());
}

@Bean
@Primary
public RabbitTemplate producerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(producerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(consumerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}
After analyzing it, the problem turned out to be the Java heap memory limit. Besides that, I updated my configuration: I removed the ConnectionFactory beans and configured the RabbitTemplate to use the publisher connection.
So I ended up with this:
@Bean
@Primary
public RabbitTemplate producerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    rabbitTemplate.setUsePublisherConnection(true);
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, connectionFactory);
    return factory;
}
With this configuration, memory consumption was reduced and I was able to raise the consumer concurrency numbers:
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 10
        max-concurrency: 15
    cache:
      channel:
        size: 1000
I'm now looking for the right channel cache size and trying to raise the concurrency numbers even more.
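If I end up defining my own factory bean again, one option I am considering (just a sketch with illustrative values, not part of the configuration above) is CachingConnectionFactory.setChannelCheckoutTimeout: when the checkout timeout is greater than zero, the channel cache size becomes a hard upper limit on open channels and callers block for up to that many milliseconds waiting for one. With Spring Boot, the same behaviour should be reachable via the spring.rabbitmq.cache.channel.checkout-timeout property.

@Bean
public CachingConnectionFactory connectionFactory(RabbitProperties properties) {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(properties.getHost(), properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    // Illustrative values: with a checkout timeout > 0 the cache size becomes
    // an upper bound on open channels rather than just a cache size.
    connectionFactory.setChannelCacheSize(200);
    connectionFactory.setChannelCheckoutTimeout(5000); // wait up to 5s for a free channel
    return connectionFactory;
}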

Spring JMS HornetQ user is null

I am trying to connect to a remote HornetQ broker in a Spring Boot / Spring JMS application and set up a @JmsListener.
The HornetQ ConnectionFactory is fetched from the JNDI registry that the HornetQ instance hosts. Everything works fine as long as HornetQ security is turned off, but when it is turned on I get this error:
WARN o.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'jms/MI/Notification/Queue' - trying to recover. Cause: User: null doesn't have permission='CONSUME' on address jms.queue.MI/Notification/Queue
I ran a debug session and figured out that the ConnectionFactory instance being returned is a HornetQXAConnectionFactory, but its user and password fields are not set, which I believe is why the user is null. I verified that the user principal and credentials are set in the JNDI properties, but somehow they are not being passed on to the ConnectionFactory instance. Any help on how I can get this setup working would be greatly appreciated.
This is my JMS-related config:
@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public JmsListenerContainerFactory<?> jmsListenerContainerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        factory.setDestinationResolver(destinationResolver());
        return factory;
    }

    @Bean // Serialize message content to json using TextMessage
    public MessageConverter jacksonJmsMessageConverter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setTargetType(MessageType.BYTES);
        converter.setTypeIdPropertyName("_type");
        return converter;
    }

    @Value("${jms.jndi.provider.url}")
    private String jndiProviderURL;

    @Value("${jms.jndi.principal}")
    private String jndiPrincipal;

    @Value("${jms.jndi.credentials}")
    private String jndiCredential;

    @Bean
    public JndiTemplate jndiTemplate() {
        Properties env = new Properties();
        env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        env.put("java.naming.provider.url", jndiProviderURL);
        env.put("java.naming.security.principal", jndiPrincipal);
        env.put("java.naming.security.credentials", jndiCredential);
        return new JndiTemplate(env);
    }

    @Bean
    public DestinationResolver destinationResolver() {
        JndiDestinationResolver destinationResolver = new JndiDestinationResolver();
        destinationResolver.setJndiTemplate(jndiTemplate());
        return destinationResolver;
    }

    @Value("${jms.connectionfactory.jndiname}")
    private String connectionFactoryJNDIName;

    @Bean
    public JndiObjectFactoryBean connectionFactoryFactory() {
        JndiObjectFactoryBean jndiObjectFactoryBean = new JndiObjectFactoryBean();
        jndiObjectFactoryBean.setJndiTemplate(jndiTemplate());
        jndiObjectFactoryBean.setJndiName(connectionFactoryJNDIName);
        jndiObjectFactoryBean.setResourceRef(true);
        jndiObjectFactoryBean.setProxyInterface(ConnectionFactory.class);
        return jndiObjectFactoryBean;
    }

    @Bean
    public ConnectionFactory connectionFactory(JndiObjectFactoryBean connectionFactoryFactory) {
        return (ConnectionFactory) connectionFactoryFactory.getObject();
    }
}
JNDI and JMS are 100% independent, as they are completely different specifications implemented in potentially completely different ways. Therefore, the credentials you use for your JNDI lookup do not apply to your JMS resources. You need to explicitly set the username and password credentials on your JMS connection. This is easy using the JMS API directly (e.g. via javax.jms.ConnectionFactory#createConnection(String username, String password)). Since you're using Spring, you could use something like this:
@Bean
public ConnectionFactory connectionFactory(JndiObjectFactoryBean connectionFactoryFactory) {
    UserCredentialsConnectionFactoryAdapter cf = new UserCredentialsConnectionFactoryAdapter();
    cf.setTargetConnectionFactory((ConnectionFactory) connectionFactoryFactory.getObject());
    cf.setUsername("yourJmsUsername");
    cf.setPassword("yourJmsPassword");
    return cf;
}
Also, for what it's worth, the HornetQ code-base was donated to the Apache ActiveMQ project three and a half years ago, and it lives on as the Apache ActiveMQ Artemis broker. There have been 22 releases since then, with numerous new features and bug fixes. I strongly recommend you migrate if at all possible.
Wrap the connection factory in a UserCredentialsConnectionFactoryAdapter.
/**
 * An adapter for a target JMS {@link javax.jms.ConnectionFactory}, applying the
 * given user credentials to every standard {@code createConnection()} call,
 * that is, implicitly invoking {@code createConnection(username, password)}
 * on the target. All other methods simply delegate to the corresponding methods
 * of the target ConnectionFactory.
 * ...

Make OracleDataSource robust against database restarts and hiccups

So I got an advanced queue working with a ConnectionFactory:
ConnectionFactory jmsQueueConnectionFactory() throws JMSException, SQLException {
    final OracleDataSource dataSource = new OracleDataSource();
    dataSource.setUser(username);
    dataSource.setPassword(password);
    dataSource.setURL(url);
    dataSource.setImplicitCachingEnabled(true);
    dataSource.setFastConnectionFailoverEnabled(true);
    return AQjmsFactory.getConnectionFactory(dataSource);
}
This is running against a shared database which might be restarted, and sometimes the network just has a short hiccup, which results in no more messages coming from the queue.
I use a Spring MessageListener to retrieve messages and there is actually no indicator whatsoever that the queue is not running anymore. After restarting the application I then get a load of older messages that should have been processed already.
Is there a way, or a specific data source implementation, that reconnects or something?
Update: Listener Impl
@Bean
OracleAqQueueFactoryBean etlQueueFactory() throws JMSException, SQLException {
    final OracleAqQueueFactoryBean bean = new OracleAqQueueFactoryBean();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setOracleQueueUser("USER");
    bean.setOracleQueueName("QUEUE");
    return bean;
}

@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer bean = new DefaultMessageListenerContainer();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setDestination(etlQueueFactory().getObject());
    bean.setMessageListener(new MyListener());
    bean.setSessionTransacted(false);
    return bean;
}

public class MyListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        ...
    }
}
I guess you have to do it on the JMS level, not DB level.
Not sure what type of listener you are using, but a DefaultMessageListenerContainer in Spring is implemented with a consumer.receive(timeout) loop. It's more robust than using a plain listener as it will attempt to reconnect on each poll cycle (if needed).
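For example, here is a sketch of the jmsContainer bean from the update with the recovery behaviour spelled out (the timeout and interval values are illustrative, and the ExceptionListener is optional - it just makes broker outages visible in the logs):

@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer bean = new DefaultMessageListenerContainer();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setDestination(etlQueueFactory().getObject());
    bean.setMessageListener(new MyListener());
    bean.setSessionTransacted(false);
    // Illustrative values: poll with a 1s receive timeout and, if the broker or
    // network is gone, retry the connection every 30s instead of giving up.
    bean.setReceiveTimeout(1000L);
    bean.setRecoveryInterval(30 * 1000L);
    // Log connection problems as they happen so outages are visible.
    bean.setExceptionListener(ex -> LoggerFactory.getLogger(MyListener.class).warn("JMS connection problem", ex));
    return bean;
}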

Configuring multiple DefaultJmsListenerContainerFactory instances

In my app, I have two different MQ connection factory beans. For these I have two different DefaultJmsListenerContainerFactory beans, e.g. cf1 and cf2. Each DefaultJmsListenerContainerFactory bean is referenced by a separate @JmsListener. Now I want to start and stop each listener programmatically. For that I am overriding the configureMessageListeners(JmsListenerRegistrar) method, where I can set the DefaultJmsListenerContainerFactory instance. Note that only one instance can be set.
Then in my code I get the Spring instance of JmsListenerRegistry, from which I can get the list of DMLCs, which I can start and stop.
However, since I have set only one DefaultJmsListenerContainerFactory instance, my code returns only one DMLC.
The question here is: how can I pass multiple DefaultJmsListenerContainerFactory instances in the configureJmsListener() method?
Note: I do not create the DMLCs manually, I just configure the factory.
Why are you using configureMessageListeners()? That is for programmatic endpoint registration, not for influencing the configuration of @JmsListener.
Show your configuration (edit the question, don't try to post code/config in comments).
This works fine for me...
@Bean
public JmsListenerContainerFactory<DefaultMessageListenerContainer> one(
        @Qualifier("jmsConnectionFactory1") ConnectionFactory cf) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(cf);
    return factory;
}

@Bean
public JmsListenerContainerFactory<DefaultMessageListenerContainer> two(
        @Qualifier("jmsConnectionFactory2") ConnectionFactory cf) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(cf);
    return factory;
}

@JmsListener(id="fooListener", destination="foo", containerFactory="one")
public void listen1(String payload) {
    System.out.println(payload + "foo");
}

@JmsListener(id="barListener", destination="bar", containerFactory="two")
public void listen2(String payload) {
    System.out.println(payload + "bar");
}
...
@Autowired
JmsListenerEndpointRegistry registry;
...
MessageListenerContainer fooContainer = registry.getListenerContainer("fooListener");
MessageListenerContainer barContainer = registry.getListenerContainer("barListener");
You can also use registry.getListenerContainers() to get a collection.
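For instance, here is a minimal usage sketch with the listener ids from above - stopping and restarting a single container, or all of them:

// Stop and later restart a single listener by its @JmsListener id.
MessageListenerContainer fooContainer = registry.getListenerContainer("fooListener");
fooContainer.stop();
// ... later ...
fooContainer.start();

// Or operate on every registered container at once.
for (MessageListenerContainer container : registry.getListenerContainers()) {
    if (container.isRunning()) {
        container.stop();
    }
}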
I thought I explained all this in my answer to your other question.

ActiveMQ does not participate in WebLogic XA transactions

I am trying to get XA transactions involving a JDBC and a JMS data source working in a Spring webapp deployed to WebLogic.
Using a local Atomikos TransactionManager this works - I see XA debug messages in ActiveMQ, and everything stays consistent. In WebLogic, however, the database and ActiveMQ are not transactionally consistent.
I have added a foreign JMS server in WebLogic:
JNDI Initial Context Factory:
org.apache.activemq.jndi.ActiveMQInitialContextFactory
JNDI Connection URL:
tcp://localhost:61616
JNDI Properties:
connectionFactoryNames=XAConnectionFactory
To that server, I have added a ConnectionFactory (Remote JNDI Name = XAConnectionFactory). Lookups work, so far so good.
In my code, this is how I set up the Spring JTA transaction manager:
@Override
@Bean
@Profile(AppConfig.PROFILE_WEBLOGIC)
public JtaTransactionManager transactionManager()
{
    WebLogicJtaTransactionManager tx = new WebLogicJtaTransactionManager();
    tx.afterPropertiesSet();
    return tx;
}
And this is my JMS config:
@Bean
@Profile(AppConfig.PROFILE_WEBLOGIC)
public ConnectionFactory connectionFactory()
{
    Properties props = new Properties();
    props.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty(Context.INITIAL_CONTEXT_FACTORY));
    props.setProperty(Context.PROVIDER_URL, env.getProperty(Context.PROVIDER_URL));
    try
    {
        InitialContext ctx = new InitialContext(props);
        ActiveMQXAConnectionFactory connectionFactory = (ActiveMQXAConnectionFactory) ctx
                .lookup(env.getProperty("jms.connectionFactory"));
        return connectionFactory;
    }
    catch (NamingException e)
    {
        throw new RuntimeException("XAConnectionFactory lookup failed", e);
    }
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() throws JMSException
{
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setTransactionManager(txConfig.transactionManager());
    factory.setBackOff(new FixedBackOff());
    return factory;
}

@Bean(name = "jmsTemplate")
@Override
public JmsTemplate jmsTemplate() throws JMSException
{
    JmsTemplate t = new JmsTemplate();
    t.setConnectionFactory(connectionFactory());
    t.setMessageTimestampEnabled(true);
    t.setMessageIdEnabled(true);
    return t;
}
My JMS consumer is annotated with:
@Transactional
@JmsListener(destination = "test.q1")
Is there anything I am missing?
Turns out this only works via the Resource Adapter; it's not possible solely via the JNDI ConnectionFactory.
It is possible, using the undocumented ActiveMQ JNDI property "xa=true" in the foreign JMS server definition; see here:
Deployment of ActiveMQ resource adapter fails
ActiveMQInitialConnectionFactory cannot return an XAConnectionFactory
ActiveMQInitialConnectionFactory returns XA connection factory
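Assuming that second answer is right about the property name, the foreign JMS server definition from the question would gain one extra JNDI property, roughly like this (a sketch; "xa=true" is the undocumented property being referred to, not something documented by ActiveMQ):

JNDI Properties:
connectionFactoryNames=XAConnectionFactory
xa=true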
