I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. Currently it is single-threaded and cannot keep up with the rate at which the producers send messages, so the queue depth keeps increasing.
return IntegrationFlows
        .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
        .wireTap(FLTAWARE_WIRE_TAP_CHNL) // push raw FA data
        .filter(ingFilter, "filterMessageOnEvent")
        .transform(eventHandler, "parseEvent")
        .aggregate(a -> a.correlationStrategy(corStrgy, "getCorrelationKey")
                .releaseStrategy(g -> {
                    boolean eonExists = g.getMessages().stream()
                            .anyMatch(eon -> ((FlightModel) eon.getPayload()).getEstGmtOnDtm() != null);
                    if (eonExists) {
                        boolean einExists = g.getMessages().stream()
                                .anyMatch(ein -> ((FlightModel) ein.getPayload()).getEstGmtInDtm() != null);
                        if (einExists) {
                            return true;
                        }
                    }
                    return false;
                })
                .messageStore(this.messageStore))
        .channel("AggregatorEventChannel")
        .get();
Is it possible to use an ExecutorChannel to process this in a multithreaded environment and speed up the consumer?
If yes, please suggest how I can achieve it. To ensure ordering of the messages, I need to assign messages of the same type (based on the id of the message) to the same thread of the executor channel.
[UPDATED CODE]
I have created the executor channels below:
public static final MessageChannel SKW_DEFAULT_CHANNEL = MessageChannels
.executor(ASQ_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
public static final MessageChannel RPA_DEFAULT_CHANNEL = MessageChannels
.executor(ASH_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
Now, from the main message flow, I redirect to a custom router which forwards the message to an executor channel, as shown below:
@Bean
public IntegrationFlow baseEventFlow1() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL) // push raw FA data
            .filter(ingFilter, "filterMessageOnEvent")
            .route(router()) // the router factory method below is named router()
            .get();
}
public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(RPA_DEFAULT_CHANNEL);
            } else if (message.getPayload().toString().contains("\"id\":\"SKW")) {
                return Collections.singletonList(SKW_DEFAULT_CHANNEL);
            } else {
                return Collections.singletonList(new NullChannel());
            }
        }

    };
}
I will have an individual consumer flow for each corresponding executor channel.
Please correct my understanding.
[UPDATED]
@Bean
@BridgeTo("uaxDefaultChannel")
public MessageChannel ucaDefaultChannel() {
    return MessageChannels.executor(UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel ualDefaultChannel() {
    return MessageChannels.executor(UAL_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}
@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel").wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .get();
}
So will @BridgeTo on the executor channel forward the messages?
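For context, my understanding is that @BridgeTo("uaDefaultChannel") on the ualDefaultChannel() bean is roughly equivalent to declaring a bridge flow by hand, something like this sketch (the bean name is mine, not from the real code):

public IntegrationFlow ualToUaBridge() {
    // whatever arrives on ualDefaultChannel is forwarded to uaDefaultChannel,
    // still on ualDefaultChannel's single executor thread
    return IntegrationFlows.from("ualDefaultChannel")
            .channel("uaDefaultChannel")
            .get();
}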
"hence the queue depth keeps on increasing"

Since it looks like your queue lives on a JMS broker, that behavior is really OK. That's exactly what messaging systems were designed for: to decouple producer from consumer and to hold messages in a destination until they can be processed.
If you want to increase polling from JMS, you can consider the concurrency options on the JMS container:
/**
 * The concurrency to use.
 * @param concurrency the concurrency.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setConcurrency(String)
 */
public JmsDefaultListenerContainerSpec concurrency(String concurrency) {
    this.target.setConcurrency(concurrency);
    return this;
}

/**
 * The concurrent consumers number to use.
 * @param concurrentConsumers the concurrent consumers count.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setConcurrentConsumers(int)
 */
public JmsDefaultListenerContainerSpec concurrentConsumers(int concurrentConsumers) {
    this.target.setConcurrentConsumers(concurrentConsumers);
    return this;
}

/**
 * The max for concurrent consumers number to use.
 * @param maxConcurrentConsumers the max concurrent consumers count.
 * @return current {@link JmsDefaultListenerContainerSpec}.
 * @see DefaultMessageListenerContainer#setMaxConcurrentConsumers(int)
 */
public JmsDefaultListenerContainerSpec maxConcurrentConsumers(int maxConcurrentConsumers) {
    this.target.setMaxConcurrentConsumers(maxConcurrentConsumers);
    return this;
}
See more info in the docs: https://docs.spring.io/spring/docs/5.2.3.RELEASE/spring-framework-reference/integration.html#jms-receiving
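For example, a hedged sketch against the container spec from the question (the "2-5" range is only an illustration):

return IntegrationFlows
        .from(Jms.messageDrivenChannelAdapter(
                Jms.container(this.emsConnectionFactory, this.emsQueue)
                        .concurrency("2-5") // 2 to 5 concurrent consumers
                        .get()))
        // ... rest of the flow unchanged ...
        .get();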
But that won't allow you to "assign messages to a specific thread". There is simply no way to partition in JMS.
We can do that with Spring Integration, using a router keyed on your "id of the message" and a set of ExecutorChannel instances, each configured with a single-threaded Executor. Every ExecutorChannel gets its own dedicated executor with only a single thread. This way you ensure ordering for messages with the same partition key while still processing partitions in parallel. All the ExecutorChannels can have the same subscriber, or bridge to the same channel for processing.
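A minimal sketch of that arrangement (channel and bean names here are illustrative, not from your code): two single-threaded ExecutorChannels, each preserving ordering for its own partition, both bridging to one shared processing flow:

@Bean
public MessageChannel rpaChannel() {
    // one dedicated thread => ordering within the RPA partition is preserved
    return MessageChannels.executor(Executors.newSingleThreadExecutor()).get();
}

@Bean
public MessageChannel skwChannel() {
    return MessageChannels.executor(Executors.newSingleThreadExecutor()).get();
}

@Bean
public IntegrationFlow rpaFlow() {
    return IntegrationFlows.from("rpaChannel").channel("sharedProcessingChannel").get();
}

@Bean
public IntegrationFlow skwFlow() {
    return IntegrationFlows.from("skwChannel").channel("sharedProcessingChannel").get();
}

@Bean
public IntegrationFlow sharedProcessingFlow() {
    // both partitions land here; each is still processed on its own single thread
    return IntegrationFlows.from("sharedProcessingChannel")
            .handle(m -> System.out.println(m.getPayload()))
            .get();
}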
However, you need to keep in mind that when you leave the JMS listener thread, you finish the JMS transaction; if you then fail to process a message on that separate thread, you may lose the message.
Related
I'm experimenting a lot with Apache Kafka in a Spring Boot app at the moment.
My current goal is to write a REST endpoint that takes in some message payload and uses a KafkaTemplate to send the data to my local Kafka running on port 9092.
This is my producer config:
@Bean
public Map<String, Object> producerConfig() {
    // config settings for creating producers
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.bootstrapServers);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
    configProps.put(ProducerConfig.RETRIES_CONFIG, 0);
    return configProps;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    // creates a kafka producer
    return new DefaultKafkaProducerFactory<>(producerConfig());
}

@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
    // template which abstracts sending data to kafka
    return new KafkaTemplate<>(producerFactory());
}
My REST endpoint forwards to a service; the service looks like this:
@Service
public class KafkaSenderService {

    @Qualifier("kafkaTemplate")
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaSenderService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessageWithCallback(String message, String topicName) {
        // possibility to add callbacks to define what shall happen in success/error case
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {

            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Your message was delivered with following offset: "
                        + result.getRecordMetadata().offset());
            }
        });
    }
}
The thing now is: I'm expecting the onFailure() method to get called when the message could not be sent. But this does not seem to work. When I change the bootstrapServers variable in the producer config to localhost:9091 (which is the wrong port, so no connection should be possible), the producer tries to connect to the broker. It makes several connection attempts, and after 5 seconds a TimeoutException occurs. But the onFailure() method doesn't get called. Is there a way to achieve that onFailure() gets called even if the connection cannot be established?
And by the way, I set the retries count to zero, but the producer still makes a second connection attempt after the first one. This is the log output:
EDIT: it seems like the Kafka producer / KafkaTemplate goes into an infinite loop when the broker is not available. Is that really the intended behaviour?
The KafkaTemplate really does nothing fancy about connecting and publishing. Everything is delegated to the KafkaProducer. What you describe here would happen exactly the same way even if you used just the plain Kafka client.
See KafkaProducer.send() JavaDocs:
 * @throws TimeoutException If the record could not be appended to the send buffer due to memory unavailable
 *                          or missing metadata within {@code max.block.ms}.
This happens because of the blocking logic in that producer:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param nowMs The current time in ms
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 * @throws TimeoutException if metadata could not be refreshed within {@code max.block.ms}
 * @throws KafkaException for all Kafka-related exceptions, including the case where this method is called after producer close
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long nowMs, long maxWaitMs) throws InterruptedException {
Unfortunately this is not explained in the send() JavaDocs, which claim the call is fully asynchronous; apparently it is not, at least for this metadata part, which has to be available before the record is enqueued for publishing.
That's what we cannot control, and it is not reflected in the returned Future:
try {
    clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), nowMs, maxBlockTimeMs);
} catch (KafkaException e) {
    if (metadata.isClosed())
        throw new KafkaException("Producer closed while send in progress", e);
    throw e;
}
See more info in the Apache Kafka docs on how to adjust the KafkaProducer for this matter: https://kafka.apache.org/documentation/#theproducer
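Since that metadata wait happens on the calling thread, one hedged workaround (a sketch against the service above, not a confirmed fix) is to catch the synchronous exception around the send() call itself:

public void sendMessageWithCallback(String message, String topicName) {
    try {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(
                result -> logger.info("Delivered with offset: " + result.getRecordMetadata().offset()),
                ex -> logger.warn("Message could not be delivered. " + ex.getMessage()));
    } catch (KafkaException e) {
        // The metadata timeout is rethrown here, on the calling thread,
        // instead of being reported to the failure callback.
        logger.warn("Message could not be delivered. " + e.getMessage());
    }
}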
Question answered in the discussion at https://github.com/spring-projects/spring-kafka/discussions/2250# for anyone else stumbling across this thread. In short, kafkaTemplate.getProducerFactory().reset(); does the trick.
I am just curious about the consumer's state when no data is published to the queue the consumer is listening to.
@Component
public class ABC {

    @RabbitListener(queues = RabbitQueueName, containerFactory = RABBIT_LISTENER_CONTAINER_FACTORY, id = SomeId)
    public void onMessage(String message) {
        try {
            // business logic
        } catch (Exception e) {
            // some logger
        }
    }
}
Any link which explains this would be very useful.
Thanks in advance.
See ListenerContainerIdleEvent:
/**
 * An event that is emitted when a container is idle if the container
 * is configured to do so.
 *
 * @author Gary Russell
 * @since 1.6
 *
 */
@SuppressWarnings("serial")
public class ListenerContainerIdleEvent extends AmqpEvent {
Docs are here: https://docs.spring.io/spring-amqp/docs/current/reference/html/#idle-containers
It depends on the container type.
With the SimpleMessageListenerContainer (default), the amqp-client pushes new deliveries into a BlockingQueue; the listener thread polls that queue for new messages according to the receiveTimeout (default 1 second). If you turn on TRACE logging, you will see that polling.
With the DirectMessageListenerContainer, the listener is called directly by the amqp-client thread whenever a new message arrives.
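If you want a positive signal when nothing arrives, here is a hedged sketch of enabling those idle events (the 60-second interval is only an illustration):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setIdleEventInterval(60_000L); // emit an event after 60s with no messages
    return factory;
}

@EventListener
public void onIdle(ListenerContainerIdleEvent event) {
    // react to the idle container, e.g. log it or stop the container
    System.out.println("Container " + event.getListenerId() + " has been idle");
}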
I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met before the groupTimeout has expired, and an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has been reached and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post (link1), I learned that this happens because the processing takes place on another thread, the MessageGroupStoreReaper's, and not the one the SimpleMessageListenerContainer was on. Once processing moves off the SimpleMessageListenerContainer's thread, the messages will be auto-acked.
I added the configuration mentioned in the link above and see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario so as to minimize messages getting lost?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way I could let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of messages that failed to a dead letter queue. I doubt this is possible, since it seems the messages are already acked.
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO and manually ack the messages when they get processed by the service, whether via the releaseExpression being met or the groupTimeout happening. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that will mimic the same flow as when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have recommendation on the best way to handle this scenario they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using Spring Integration DSL
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}

@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own TaskScheduler to use for the MessageGroupStoreReaper:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1, to show the custom MessageGroupProcessor:
public class CustomAggregtingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example Service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic

        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated integration flow:
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
New error handler to do the nack:
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2: added a custom ReleaseStrategy and changed the aggregator.
public class CustomMeasureGroupReleaseStratgedy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    @Override
    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}
public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStratgedy() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStratgedy());
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs. Messages successfully deposited in the group before the release will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ, it rejects a message and the broker sends it to the DLX/DLQ.
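A hedged sketch of that switch, against the container bean from the question (everything else stays as it was; the acks then happen in the service and error handler, as in the updated code above):

@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL: nothing is acked until basicAck/basicNack is called explicitly
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}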
The producer of the message is not sending the message as persistent, and when I try to consume the message through a MessageListener and any runtime exception occurs, it retries a specific number of times (the default is 6 on the AMQ side) and then the message gets lost.
The reason is that, since the producer is not setting the delivery mode to Persistent, after a certain number of retry attempts no DLQ is created and the message does not move to a DLQ. Because of this, I lose the message.
My code is like this:
@Configuration
@PropertySource("classpath:application.properties")
public class ActiveMqJmsConfig {

    @Autowired
    private AbcMessageListener abcMessageListener;

    @Bean
    public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
            @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
        LOG.info("Message listener for purchases from AMQ : Starting");
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(amqConFactory);
        defaultMessageListenerContainer.setMaxConcurrentConsumers(4);
        defaultMessageListenerContainer.setDestinationName(purchaseReceivingQueueName);
        defaultMessageListenerContainer.setMessageListener(abcMessageListener);
        defaultMessageListenerContainer.setSessionTransacted(true);
        return defaultMessageListenerContainer;
    }

    @Bean
    @Qualifier(value = "AMQConnectionFactory")
    public ConnectionFactory activeMQConnectionFactory() {
        ActiveMQConnectionFactory amqConnectionFactory = new ActiveMQConnectionFactory();
        amqConnectionFactory.setBrokerURL("tcp://localhost:61616");
        amqConnectionFactory.setUserName("admin");
        amqConnectionFactory.setPassword("admin");
        return amqConnectionFactory;
    }
}
@Component
public class AbcMessageListener implements MessageListener {

    @Override
    public void onMessage(Message msg) {
        // CODE implementation
    }
}
Problem: by setting the client id at the connection level (Connection.setClientID("string")), we can subscribe as a durable subscriber even though the message is not persistent. By doing this, if the application throws a runtime exception, after a certain number of retry attempts a DLQ will be created for the queue and the message will be moved to it.
But in DefaultMessageListenerContainer, the connection is not exposed to the client; it is maintained by the class itself, as a pool, I guess.
How can I achieve a durable subscription with DefaultMessageListenerContainer?
You can set the client id on the container instead:
/**
 * Specify the JMS client ID for a shared Connection created and used
 * by this container.
 * <p>Note that client IDs need to be unique among all active Connections
 * of the underlying JMS provider. Furthermore, a client ID can only be
 * assigned if the original ConnectionFactory hasn't already assigned one.
 * @see javax.jms.Connection#setClientID
 * @see #setConnectionFactory
 */
public void setClientId(@Nullable String clientId) {
    this.clientId = clientId;
}
and
/**
 * Set the name of a durable subscription to create. This method switches
 * to pub-sub domain mode and activates subscription durability as well.
 * <p>The durable subscription name needs to be unique within this client's
 * JMS client id. Default is the class name of the specified message listener.
 * <p>Note: Only 1 concurrent consumer (which is the default of this
 * message listener container) is allowed for each durable subscription,
 * except for a shared durable subscription (which requires JMS 2.0).
 * @see #setPubSubDomain
 * @see #setSubscriptionDurable
 * @see #setSubscriptionShared
 * @see #setClientId
 * @see #setMessageListener
 */
public void setDurableSubscriptionName(@Nullable String durableSubscriptionName) {
    this.subscriptionName = durableSubscriptionName;
    this.subscriptionDurable = (durableSubscriptionName != null);
}
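Putting the two together against the container from the question, a hedged sketch (the client id, subscription name, and topic destination are illustrative; note that durable subscriptions are a pub/sub concept, so the destination here is a topic rather than a queue):

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(amqConFactory);
container.setDestinationName("purchase.topic");           // a topic, not a queue
container.setClientId("purchase-listener-client");        // must be unique on the broker
container.setDurableSubscriptionName("purchase-durable"); // switches to pub/sub + durable
container.setSessionTransacted(true);
container.setMessageListener(abcMessageListener);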
I want to disconnect the DefaultMessageListenerContainer for a queue. I am using dmlc.stop() and dmlc.shutdown(). At connection time, 5 consumer threads get connected to the queue. When I try to disconnect, 4 of the consumers get disconnected, but 1 consumer remains connected. (See the screenshot at the end of the thread.)
Environment
1. ActiveMQ with AMQP
2. SpringJMS with ApacheQpid
Problem
After calling the destroy and stop methods, there is still one consumer connected to the queue.
Required Solution
I want to know how to cleanly disconnect a MessageListenerContainer, with zero consumer connections left on the queue.
Configurations and Code
@Bean
public DefaultMessageListenerContainer getMessageContainer(ConnectionFactory amqpConnectionFactory, QpidConsumer messageConsumer) {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setConcurrency("5-20");
    listenerContainer.setRecoveryInterval(jmsRecInterval);
    listenerContainer.setConnectionFactory(new CachingConnectionFactory(amqpConnectionFactory));
    listenerContainer.setMessageListener(messageConsumer);
    listenerContainer.setDestinationName(destinationName);
    return listenerContainer;
}

private void stopListenerIfRunning() {
    DefaultMessageListenerContainer dmlc = (DefaultMessageListenerContainer) ctx.getBean("messageContainer");
    if (null != dmlc) {
        if (!dmlc.isRunning()) {
            return;
        }
        dmlc.stop(new Runnable() {

            @Override
            public void run() {
                logger.debug("Closed Listener Container for Connection {}", sub.getQueueName());
                if (sub.getSubscriptionStatus() == SubscriptionStatus.DELETED
                        || sub.getSubscriptionStatus() == SubscriptionStatus.SUSPENDED_DELETE) {
                    listenerHandles.remove(sub.getQueueName());
                }
            }
        });
        dmlc.destroy();
        dmlc.shutdown();
    }
}
listenerContainer.setConnectionFactory(new CachingConnectionFactory(amqpConnectionFactory));

You need to destroy the CachingConnectionFactory.
You generally don't need a caching factory with the listener container, since the sessions are long-lived; you definitely should not use one if you have variable concurrency. From the javadocs:
 * <p><b>Note: Don't use Spring's {@link org.springframework.jms.connection.CachingConnectionFactory}
 * in combination with dynamic scaling.</b> Ideally, don't use it with a message
 * listener container at all, since it is generally preferable to let the
 * listener container itself handle appropriate caching within its lifecycle.
 * Also, stopping and restarting a listener container will only work with an
 * independent, locally cached Connection - not with an externally cached one.
If you want the connection cached, use a SingleConnectionFactory, or call setCacheConsumers(false) on the CCF.
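A hedged sketch of the container bean rewired that way (same shape as the question's config, with the caching factory replaced):

@Bean
public DefaultMessageListenerContainer getMessageContainer(ConnectionFactory amqpConnectionFactory,
        QpidConsumer messageConsumer) {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setConcurrency("5-20");
    listenerContainer.setRecoveryInterval(jmsRecInterval);
    // Share one connection without caching sessions/consumers, so that
    // stop()/shutdown() can close all consumers cleanly.
    listenerContainer.setConnectionFactory(new SingleConnectionFactory(amqpConnectionFactory));
    listenerContainer.setMessageListener(messageConsumer);
    listenerContainer.setDestinationName(destinationName);
    return listenerContainer;
}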