Spring Boot ActiveMQ consumer connection pool

Does a Spring Boot ActiveMQ consumer need a connection pool to be configured? I have only one consumer in a Spring Boot application (as a microservice); the producers are in another application. I am a little confused by the following, extracted from http://activemq.apache.org/spring-support.html:
Note: while the PooledConnectionFactory does allow the creation of a collection of active consumers, it does not 'pool' consumers. Pooling makes sense for connections, sessions and producers, which can be seldom-used resources, are expensive to create and can remain idle at minimal cost. Consumers, on the other hand, are usually just created at startup and left going, handling incoming messages as they come. When a consumer is complete, it's preferred to shut it down rather than leave it idle and return it to a pool for later reuse: this is because, even if the consumer is idle, ActiveMQ will keep delivering messages to the consumer's prefetch buffer, where they'll get held up until the consumer is active again.
On the same page, I can see this: You can use the activemq-pool org.apache.activemq.pool.PooledConnectionFactory for efficient pooling of the connections and sessions for your collection of consumers, or you can use the Spring JMS org.springframework.jms.connection.CachingConnectionFactory to achieve the same effect.
I tried CachingConnectionFactory (which can take an ActiveMQConnectionFactory), but it only has a few setters such as cacheConsumers(boolean) and cacheProducers(boolean), and nothing related to pooling connections. I know that one connection can give you multiple sessions, and that each session can have multiple consumers/producers. But my question is: for consumers, how do we pool them, when the statement above says to leave them at the default? So I did this with just one method:
@Bean
public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    // You could still override some of Boot's defaults if necessary.
    factory.setConcurrency("3-10");
    return factory;
}
The "Dynamic scaling" link also suggests this, but I could not find a concrete solution. Has anyone come across this kind of situation? Please share your suggestions. Thanks for reading this post; any help is greatly appreciated.
Additional details for production: this consumer will receive ~500 messages per second. I am using Spring Boot 1.5.8.RELEASE, and ActiveMQ 5.5 is my JMS provider.

There is a package called org.apache.activemq.jms.pool in ActiveMQ which provides a pooled consumer. Below is the code for that; please check and see if it works for you. I know it's not the Spring way, but you can easily customise your poll method.
PooledConnectionFactory pooledConFactory = null;
PooledConnection pooledConnection = null;
PooledSession pooledSession = null;
PooledMessageConsumer pooledConsumer = null;
Message message = null;
try
{
    // Get the connection object from the PooledConnectionFactory
    pooledConFactory = (PooledConnectionFactory) this.jmsTemplateMap.getConnectionFactory();
    pooledConnection = (PooledConnection) pooledConFactory.createConnection();
    pooledConnection.start();
    // Create the PooledSession from the pooled connection (non-transacted, 1 = Session.AUTO_ACKNOWLEDGE)
    pooledSession = (PooledSession) pooledConnection.createSession(false, 1);
    // Create the PooledMessageConsumer from the session for the given destination (and message selector, if any)
    pooledConsumer = (PooledMessageConsumer) pooledSession
            .createConsumer(this.jmsTemplateMap.getDefaultDestination(), <messageFilter if any>);
    // Busy-poll until a message arrives
    while (true)
    {
        message = pooledConsumer.receiveNoWait();
        if (message != null)
            break;
    }
}
catch (JMSException ex)
{
    LOGGER.error("JMS Exception occurred, closing the session", ex);
}
return message;
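If you prefer to stay with the listener-container approach from the question, the usual pattern is to pool connections and sessions at the factory level and let setConcurrency("3-10") control the number of consumers. Below is a minimal sketch of that wiring; the broker URL, pool size and bean names are illustrative assumptions, not taken from the original post.
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class JmsConsumerConfig {

    // Pools connections and sessions only; consumers are created and scaled by the listener container.
    @Bean
    public ConnectionFactory pooledConnectionFactory() {
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
        pooled.setMaxConnections(10); // assumed pool size
        return pooled;
    }

    // "3-10" lets the container scale between 3 and 10 concurrent consumers on the queue.
    @Bean
    public DefaultJmsListenerContainerFactory myFactory(ConnectionFactory pooledConnectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        configurer.configure(factory, pooledConnectionFactory);
        factory.setConcurrency("3-10");
        return factory;
    }
}
With this in place, an @JmsListener method can reference containerFactory = "myFactory", and the PooledConnectionFactory only pools connections and sessions, which is consistent with the documentation quoted in the question.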

Related

Spring Batch Parallel processing with JMS

I implemented a Spring Batch project that reads from a WebLogic JMS queue (custom ItemReader, not message-driven), then passes the JMS message data to an ItemWriter (chunk = 1) where I call some APIs and write to a database.
However, I am trying to implement parallel JMS processing: reading JMS messages in parallel and passing them to the writer without waiting for the previous processes to complete.
I used a DefaultMessageListenerContainer in a previous project and it offers parallel consumption of JMS messages, but in this project I have to use the Spring Batch framework.
I tried the easiest solution (a multi-threaded step), but it didn't work: JmsException: "invalid blocking receive when another receive is in progress", which probably means my reader is stateful.
I thought about using remote partitioning, but then I would have to read all the messages and put the data into step execution contexts before calling the slave steps, which isn't really efficient when dealing with a large number of messages.
I looked a little bit into remote chunking; I understand that it passes data via queue channels, but I can't see the point of reading from JMS and putting messages in a local queue for slave workers.
How can I approach this?
My code:
@Bean
Step step1() {
    return steps.get("step1").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .listener(stepListener()).build();
}
@Bean
Job job(@Qualifier("step1") Step step1) {
    return jobs.get("job").start(step1).build();
}
JMS code:
@Override
public void initQueueConnection() throws NamingException, JMSException {
    Hashtable<String, String> properties = new Hashtable<String, String>();
    properties.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty(WebLogicConstant.JNDI_FACTORY));
    properties.put(Context.PROVIDER_URL, env.getProperty(WebLogicConstant.JMS_WEBLOGIC_URL_RECEIVE));
    InitialContext vInitialContext = new InitialContext(properties);
    QueueConnectionFactory vQueueConnectionFactory = (QueueConnectionFactory) vInitialContext
            .lookup(env.getProperty(WebLogicConstant.JMS_FACTORY_RECEIVE));
    vQueueConnection = vQueueConnectionFactory.createQueueConnection();
    vQueueConnection.start();
    vQueueSession = vQueueConnection.createQueueSession(false, 0);
    Queue vQueue = (Queue) vInitialContext.lookup(env.getProperty(WebLogicConstant.JMS_QUEUE_RECEIVE));
    consumer = vQueueSession.createConsumer(vQueue, "JMSCorrelationID IS NOT NULL");
}
@Override
public Message receiveMessages() throws NamingException, JMSException {
    return consumer.receive(20000);
}
Item reader:
@Override
public Message read() throws Exception {
    return jmsServiceReceiver.receiveMessages();
}
Thanks! I'll appreciate the help :)
There's a BatchMessageListenerContainer in the spring-batch-infrastructure-tests sub project.
https://github.com/spring-projects/spring-batch/blob/d8fc58338d3b059b67b5f777adc132d2564d7402/spring-batch-infrastructure-tests/src/main/java/org/springframework/batch/container/jms/BatchMessageListenerContainer.java
Message listener container adapted for intercepting the message reception with advice provided through configuration.
To enable batching of messages in a single transaction, use the TransactionInterceptor and the RepeatOperationsInterceptor in the advice chain (with or without a transaction manager set in the base class). Instead of receiving a single message and processing it, the container will then use a RepeatOperations to receive multiple messages in the same thread. Use with a RepeatOperations and a transaction interceptor. If the transaction interceptor uses XA then use an XA connection factory, or else the TransactionAwareConnectionFactoryProxy to synchronize the JMS session with the ongoing transaction (opening up the possibility of duplicate messages after a failure). In the latter case you will not need to provide a transaction manager in the base class - it only gets in the way and prevents the JMS session from synchronizing with the database transaction.
Perhaps you could adapt it for your use case.
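If you go that route, a minimal wiring sketch might look like the following. This assumes the container exposes the adviceChain property used in the Spring Batch samples; the destination name and the batch size of 10 are placeholders, and a MessageListener still needs to be set on the container.
import javax.jms.ConnectionFactory;

import org.aopalliance.aop.Advice;
import org.springframework.batch.container.jms.BatchMessageListenerContainer;
import org.springframework.batch.repeat.interceptor.RepeatOperationsInterceptor;
import org.springframework.batch.repeat.policy.SimpleCompletionPolicy;
import org.springframework.batch.repeat.support.RepeatTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchJmsConfig {

    // Receives up to 10 messages per repeat cycle on one listener thread and
    // processes them inside a single locally transacted JMS session.
    @Bean
    public BatchMessageListenerContainer batchContainer(ConnectionFactory connectionFactory) {
        RepeatTemplate repeatTemplate = new RepeatTemplate();
        repeatTemplate.setCompletionPolicy(new SimpleCompletionPolicy(10)); // placeholder batch size

        RepeatOperationsInterceptor batchingInterceptor = new RepeatOperationsInterceptor();
        batchingInterceptor.setRepeatOperations(repeatTemplate);

        BatchMessageListenerContainer container = new BatchMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("MY.QUEUE"); // placeholder destination
        container.setSessionTransacted(true);
        container.setAdviceChain(new Advice[] { batchingInterceptor }); // adviceChain assumed from the samples
        // container.setMessageListener(...) must still be supplied with your listener
        return container;
    }
}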
I was able to do so with a multi-threaded step:
// Jobs and steps
@Bean
Step stepDetectionIncoherencesLiq(@Autowired StepBuilderFactory steps) {
    int threadSize = Integer.parseInt(env.getProperty(PropertyConstant.THREAD_POOL_SIZE));
    return steps.get("stepDetectionIncoherencesLiq").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .readerIsTransactionalQueue()
            .faultTolerant()
            .taskExecutor(taskExecutor())
            .throttleLimit(threadSize)
            .listener(stepListener())
            .build();
}
And a JmsItemReader with a JmsTemplate instead of creating sessions and connections explicitly; the template manages the connections, so I no longer get the JMS exception (JmsException: "invalid blocking receive when another receive is in progress"):
@Bean
public JmsItemReader<Message> reader() {
    JmsItemReader<Message> itemReader = new JmsItemReader<>();
    itemReader.setItemType(Message.class);
    itemReader.setJmsTemplate(jmsTemplate());
    return itemReader;
}
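The jmsTemplate() bean referenced by the reader is not shown in the post. A minimal sketch of what it could look like, reusing the WebLogic JNDI property keys from initQueueConnection() above; the 20-second timeout mirrors consumer.receive(20000), and env/WebLogicConstant are assumed to be available in the surrounding configuration class. JmsItemReader requires a finite receive timeout and a default destination.
import java.util.Properties;

import javax.jms.ConnectionFactory;
import javax.naming.Context;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.support.destination.JndiDestinationResolver;
import org.springframework.jndi.JndiTemplate;

// Sketch only: builds a JmsTemplate against the WebLogic JNDI connection factory and queue.
@Bean
public JmsTemplate jmsTemplate() throws Exception {
    Properties jndiProps = new Properties();
    jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty(WebLogicConstant.JNDI_FACTORY));
    jndiProps.put(Context.PROVIDER_URL, env.getProperty(WebLogicConstant.JMS_WEBLOGIC_URL_RECEIVE));
    JndiTemplate jndi = new JndiTemplate(jndiProps);

    ConnectionFactory connectionFactory =
            jndi.lookup(env.getProperty(WebLogicConstant.JMS_FACTORY_RECEIVE), ConnectionFactory.class);

    JmsTemplate template = new JmsTemplate(new CachingConnectionFactory(connectionFactory));
    JndiDestinationResolver destinationResolver = new JndiDestinationResolver();
    destinationResolver.setJndiTemplate(jndi); // resolve the queue through the same JNDI environment
    template.setDestinationResolver(destinationResolver);
    template.setDefaultDestinationName(env.getProperty(WebLogicConstant.JMS_QUEUE_RECEIVE));
    template.setReceiveTimeout(20000L); // same 20-second timeout as consumer.receive(20000) above
    template.setSessionTransacted(true);
    return template;
}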

Spring DefaultJmsListenerContainerFactory dropping messages on JBoss EAP 7

I have the following Spring JMS factory configuration:
#Bean(name = "jmsListenerFactory")
public DefaultJmsListenerContainerFactory jmsListenerFactory() throws Exception {
DefaultJmsListenerContainerFactory container = new DefaultJmsListenerContainerFactory();
container.setErrorHandler(new MqErrorHandler());
container.setConnectionFactory(connectionFactory);
container.setTransactionManager(transactionManager);
container.setConcurrency("5-10");
return container;
}
Listener setup:
@JmsListener(destination = ACK_QUEUE, containerFactory = "jmsListenerFactory")
@Transactional
public void receive(String response) throws Exception {
    try {
        logger.debug(response);
        Msg ackNack = (Msg) unmarshaller.unmarshal(new StringReader(response));
        // ... and so on
        // ... should acknowledge JMS and commit DB update
    } // catch/finally omitted in the question
}
JTA Transaction Manager:
public JtaTransactionManager jbossTransactionManager() throws Exception {
    JtaTransactionManager transactionManager = new JtaTransactionManager();
    transactionManager.setTransactionManagerName("java:/TransactionManager");
    return transactionManager;
}
I have an Oracle database and a JMS connection factory resource included via JNDI, both XA compatible.
The problem is that some JMS messages are not making it to the listener. They are definitely delivered to the queue, but they just seem to "disappear". No logging or errors are reported in the logs, even at TRACE level.
Nothing else is listening to this queue and similar transactions are getting processed successfully from the same queue. I cannot reproduce it in any guaranteed way and it is completely intermittent.
Any thoughts?
The lesson to be learned here is never to assume the following:
"Nothing else is listening to this queue.."
There are ways to try and verify this:
Check how many processes currently have the queue open for INPUT:
DIS QL(QUEUENAME) IPPROCS
Display all connections that have the queue currently open with OPENOPTS that include MQOO_INPUT*; this will show both the IP address (if it is a client connection) and the application name (APPLTAG) presented by the client.
DIS CONN(*) TYPE(ALL) WHERE(OBJNAME EQ QUEUENAME)
Both of the above only show a point-in-time view; if you are trying to find an app that connects, reads messages from the queue and then disconnects, you may not see it.
As @MoragHughson mentioned in a comment, start an activity trace and look for connections to this queue, checking whether the IPs/application names connecting are what is expected; this has the advantage that you can see, over a period of time, all of the things that have connected to the queue.
Below is an example of one way to do this using the amqsevt sample from IBM MQ v9.1 or later, targeting an IBM MQ v9.0 or later queue manager (this relies on an open-source JSON processor called jq). The example below will run until there is no activity for 5 minutes (300 seconds) or until a key is pressed (you can increase the time by increasing the -w value):
#Replace the string _REPLACE_WITH_QMGR_NAME_ with your queue manager name in both places.
#Replace the string _REPLACE_WITH_QUEUE_NAME_ with your queue name.
amqsevt -m _REPLACE_WITH_QMGR_NAME_ -t '$SYS/MQ/INFO/QMGR/_REPLACE_WITH_QMGR_NAME_/ActivityTrace/ConnectionId/#' -w 300 -o json | jq -r '
.eventData
| [.channelName, .channelType, .connectionName, .applName, .remoteProduct, .remoteVersion] as $x
| ( .activityTrace[]
| select(.objectName == "_REPLACE_WITH_QUEUE_NAME_") | [.operationTime]
+ $x
+ [.operationId, .objectName, .resolvedQueueName, .reasonCode.value, .putDate, .putTime] )
| #csv
'
The issue was resolved: messages were being silently consumed by a DEV queue manager ... which was an invalid configuration.

Is our simultaneous completion of database and JMS processing smart or lucky?

We are using JMS to process messages in a Java 1.8 SE environment. The messages originate from an Oracle (12) Advanced Queue.
We would like to read a message from a JMS queue, do some work based on it, and save the result in the database. We don’t want to lose any messages, and we don’t want to duplicate processing on any message. In other words, we’d like the processing of the JMS message and the associated database activity to be a single transaction.
We’ve read various articles about how to do this (including Transaction and redelivery in JMS, JMS Message Delivery Reliability and Acknowledgement Patterns, Reliable JMS with Transactions). The consensus seems to be to use JTA/XA, but we were hoping for something simpler.
We are using Oracle’s Advanced Queueing as our JMS provider, so we decided to see whether we could use the same database connection for both JMS and database activity, so that a single commit would work for both JMS and database activity. It seems to have worked.
In the code below, we create a QueueConnection using an existing SQL Connection when we initialize the JMS queue. After processing the message, committing the JMS session also commits the database changes.
We haven’t seen this approach discussed elsewhere, so we’re wondering whether:
1. We have a reliable solution that works for Oracle Advanced Queueing,
2. We have a solution that just happens to work some of the time for this version of Oracle Advanced Queueing, or
3. We just got really, really lucky on our test cases, and this approach is fraught with peril.
Please comment on whether our approach should be reliable or whether we should use JTA/XA.
public class OracleJmsQueue {
    private DataSource dataSource;
    protected Queue queue;
    protected QueueConnection queueConnection;
    protected QueueReceiver queueReceiver;
    protected QueueSession queueSession;
    private java.sql.Connection dbConnection = null;

    protected void initQueueSession() throws JMSException, SQLException {
        // Connect to the database source of messages
        DataSource dataSource = getDataSource();
        dbConnection = dataSource.getConnection();
        dbConnection.setAutoCommit(false);
        queueConnection = AQjmsQueueConnectionFactory.createQueueConnection(dbConnection);
        queueSession = queueConnection.createQueueSession(true, Session.SESSION_TRANSACTED);
        queue = ((AQjmsSession) queueSession).getQueue(queueUser, queueName);
        queueReceiver = queueSession.createReceiver(queue);
    }

    public void run() {
        initQueueSession();
        // code omitted
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(200);
                final Message message = queueReceiver.receiveNoWait();
                if (message != null) {
                    processMessage(message); // alters DB tables
                    commitSession();
                }
            }
            // catches omitted
        }
    }

    protected void commitSession() throws JMSException {
        logger.info("Committing " + queueName + " queue session");
        queueSession.commit();
    }
} // class OracleJmsQueue
It looks like your assumptions about JMS and Oracle AQ are correct, given that processMessage uses the dbConnection class attribute.
https://docs.oracle.com/javaee/7/api/javax/jms/QueueConnection.html
So, answering your question: Yes, you have a reliable solution (assuming what I mentioned before).
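To make that condition explicit, here is a minimal sketch of a processMessage that would be covered by the same commit; the table name and the assumption that the payload is a TextMessage are illustrative only.
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;

// Sketch: the JDBC work goes through the SAME dbConnection that was handed to
// AQjmsQueueConnectionFactory.createQueueConnection(dbConnection), so the later
// queueSession.commit() (or a rollback) covers the dequeue and the insert together.
protected void processMessage(Message message) throws JMSException, SQLException {
    String payload = ((TextMessage) message).getText(); // assumes a text payload
    try (PreparedStatement stmt = dbConnection.prepareStatement(
            "INSERT INTO processed_messages (payload) VALUES (?)")) { // placeholder table
        stmt.setString(1, payload);
        stmt.executeUpdate();
    }
    // Note: no dbConnection.commit() here; commitSession() performs the single commit for both.
}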

SpringJMS - How to Disconnect a MessageListenerContainer

I want to disconnect the DefaultMessageListenerContainer for a queue. I am using dmlc.stop() and dmlc.shutdown(). At connection time, 5 consumer threads get connected to the queue. When I try to disconnect, 4 of the consumers get disconnected, but 1 consumer remains connected. (See the screenshot at the end of the thread.)
Environment
1. ActiveMQ with AMQP
2. Spring JMS with Apache Qpid
Problem
After calling the destroy and stop methods, there is still one consumer connected to the queue.
Required Solution
I want to know how to cleanly disconnect a MessageListenerContainer so that zero consumers remain connected to the queue.
Configurations and Code
@Bean
public DefaultMessageListenerContainer getMessageContainer(ConnectionFactory amqpConnectionFactory, QpidConsumer messageConsumer) {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setConcurrency("5-20");
    listenerContainer.setRecoveryInterval(jmsRecInterval);
    listenerContainer.setConnectionFactory(new CachingConnectionFactory(amqpConnectionFactory));
    listenerContainer.setMessageListener(messageConsumer);
    listenerContainer.setDestinationName(destinationName);
    return listenerContainer;
}

private void stopListenerIfRunning() {
    DefaultMessageListenerContainer dmlc = (DefaultMessageListenerContainer) ctx.getBean("messageContainer");
    if (null != dmlc) {
        if (!dmlc.isRunning()) { return; }
        dmlc.stop(new Runnable() {
            @Override
            public void run() {
                logger.debug("Closed Listener Container for Connection {}", sub.getQueueName());
                if (sub.getSubscriptionStatus() == SubscriptionStatus.DELETED
                        || sub.getSubscriptionStatus() == SubscriptionStatus.SUSPENDED_DELETE) {
                    listenerHandles.remove(sub.getQueueName());
                }
            }
        });
        dmlc.destroy();
        dmlc.shutdown();
    }
}
listenerContainer.setConnectionFactory(new CachingConnectionFactory(amqpConnectionFactory));
You need to destroy the CachingConnectionFactory.
You generally don't need a caching factory with the listener container since the sessions are long-lived; you definitely should not use one if you have variable concurrency. From the javadocs...
* <p><b>Note: Don't use Spring's {@link org.springframework.jms.connection.CachingConnectionFactory}
* in combination with dynamic scaling.</b> Ideally, don't use it with a message
* listener container at all, since it is generally preferable to let the
* listener container itself handle appropriate caching within its lifecycle.
* Also, stopping and restarting a listener container will only work with an
* independent, locally cached Connection - not with an externally cached one.
If you want the connection cached, use a SingleConnectionFactory, or call setCacheConsumers(false) on the CachingConnectionFactory.
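Putting the two suggestions together, a rough sketch might look like this; the bean and queue names are placeholders, the recovery-interval setting from the question is omitted, and QpidConsumer is the listener type from the question.
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// Keep a handle on the CachingConnectionFactory so it can be destroyed on shutdown,
// and disable consumer caching because the container uses variable concurrency ("5-20").
@Bean
public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory amqpConnectionFactory) {
    CachingConnectionFactory ccf = new CachingConnectionFactory(amqpConnectionFactory);
    ccf.setCacheConsumers(false);
    return ccf;
}

@Bean
public DefaultMessageListenerContainer messageContainer(CachingConnectionFactory cachingConnectionFactory,
        QpidConsumer messageConsumer) {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setConcurrency("5-20");
    listenerContainer.setConnectionFactory(cachingConnectionFactory);
    listenerContainer.setMessageListener(messageConsumer);
    listenerContainer.setDestinationName("my.queue"); // placeholder destination
    return listenerContainer;
}

private void stopListener(DefaultMessageListenerContainer dmlc, CachingConnectionFactory ccf) {
    dmlc.stop();
    dmlc.shutdown();
    ccf.destroy(); // drops the cached connection/sessions so no consumer stays attached to the queue
}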

Spring Boot JMS listener with ActiveMQ is very slow

I have a Spring Boot application which consumes my custom serializable messages from an ActiveMQ queue. So far it has worked; however, the consumption rate is very poor, only 1-20 msg/sec.
#JmsListener(destination = "${channel.consumer.destination}", concurrency="${channel.consumer.maxConcurrency}")
public void receive(IMessage message) {
processor.process(message);
}
The above is a snippet of my channel consumer class. It has a processor instance (injected/autowired; inside it I have an @Async service, so I assume the main thread is released as soon as a message enters the @Async method), and it uses Spring Boot's default ActiveMQ connection factory, which I configure from application properties:
# ACTIVEMQ (ActiveMQProperties)
spring.activemq.broker-url= tcp://localhost:61616?keepAlive=true
spring.activemq.in-memory=true
spring.activemq.pool.enabled=true
spring.activemq.pool.expiry-timeout=1
spring.activemq.pool.idle-timeout=30000
spring.activemq.pool.max-connections=50
A few things worth mentioning:
1. I run everything (Eclipse, ActiveMQ, MySQL) on my local laptop.
2. Before this, I also tried a custom connection factory (default AMQ, pooled, and caching) with a custom thread-pool task executor, but I still got the same result. Below is a performance snapshot which I took, updating every 1 sec.
3. I also notice in JVM Monitor that the used heap keeps growing.
I want to know:
1. Is there something wrong/missing in my setup? I can't even reach hundreds of messages per second.
2. Does an @JmsListener-annotated method execute processing asynchronously or synchronously?
3. If possible and supported, how do I use the traditional synchronous receive() with Spring Boot properly and elegantly?
Thank you
I'm just checking something similar. I have defined a DefaultJmsListenerContainerFactory in my JMSConfiguration class (Spring configuration) like this:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(CachingConnectionFactory connectionFactory) {
    // settings made based on https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
        @Override
        protected void initializeContainer(DefaultMessageListenerContainer container) {
            super.initializeContainer(container);
            container.setIdleConsumerLimit(5);
            container.setIdleTaskExecutionLimit(10);
        }
    };
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("10-50");
    factory.setCacheLevel(CACHE_CONSUMER);
    factory.setReceiveTimeout(5000L);
    factory.setDestinationResolver(new BeanFactoryDestinationResolver(beanFactory));
    return factory;
}
As you can see, I took those values from https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html. It's from 2010 but I could not find anything newer / better so far.
I have also defined Spring's CachingConnectionFactory Bean as a ConnectionFactory:
@Bean
public CachingConnectionFactory buildCachingConnectionFactory(@Value("${activemq.url}") String brokerUrl) {
    // settings based on https://bsnyderblog.blogspot.sk/2010/02/using-spring-jmstemplate-to-send-jms.html
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(activeMQConnectionFactory);
    cachingConnectionFactory.setSessionCacheSize(10);
    return cachingConnectionFactory;
}
This setting will also help JmsTemplate with sending.
So my answer to you is: set the values of your connection pool as described in the link. Also, I guess you can delete spring.activemq.in-memory=true because (based on the documentation) when you specify a custom broker URL, the "in-memory" property is ignored.
Let me know if this helped.
G.
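For completeness, a listener bound to the factory above would look roughly like this. It is a sketch, not a verified fix for the throughput issue: IMessage and the processor are the question's own types, IMessageProcessor is a placeholder name, and with the BeanFactoryDestinationResolver configured in the factory the destination must be the name of a Destination bean.
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ChannelConsumer {

    private final IMessageProcessor processor; // placeholder for the question's @Async processor

    public ChannelConsumer(IMessageProcessor processor) {
        this.processor = processor;
    }

    // Uses the "jmsListenerContainerFactory" bean defined above, so its 10-50
    // concurrency, CACHE_CONSUMER level and 5s receive timeout apply here.
    @JmsListener(destination = "incomingQueue", // resolved as a Destination bean by BeanFactoryDestinationResolver
            containerFactory = "jmsListenerContainerFactory")
    public void receive(IMessage message) {
        processor.process(message);
    }
}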
