Throttle @JmsListener in Spring (prevent TaskRejectedException)

I currently have a @JmsListener configured with the following thread pool:
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(5);
executor.setMaxPoolSize(10);
executor.initialize();
The queue I'm listening to has well over 100 messages, and when I start the listener it will process the first 10 messages with no problems; then I get TaskRejectedException for the remaining messages.
My intent is that @JmsListener should not pull any new messages if there are no available threads to process them. Does anyone know if this configuration is possible? I'm using Spring Boot 1.5.3.
-TIA

If you don't want to lose messages, you should not use an executor at all; the container will ack each message as soon as it is queued in the executor. If the system dies, any messages not yet processed will be lost.
Instead, use the container's concurrency settings to multi-thread your listener.
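For example, a minimal sketch (assuming a Spring Boot auto-configured ConnectionFactory; the factory bean and queue names are illustrative):

@Bean
public DefaultJmsListenerContainerFactory throttledFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // 5 to 10 concurrent consumers; each consumer receives and processes on its own thread,
    // so there is no executor queue to overflow and no TaskRejectedException.
    factory.setConcurrency("5-10");
    return factory;
}

Then reference it from the listener, e.g. @JmsListener(destination = "myQueue", containerFactory = "throttledFactory").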
If you really must use an executor and don't mind losing messages, use a caller-runs policy in the executor - setRejectedExecutionHandler().
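A hedged sketch of that, reusing the pool sizes from the question:

ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(5);
executor.setMaxPoolSize(10);
// When the pool is saturated, the submitting (container) thread runs the task itself
// instead of throwing TaskRejectedException; requires java.util.concurrent.ThreadPoolExecutor.
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
executor.initialize();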

Related

Where does spring rabbit consume a message on a new thread

I'm working on tracing RabbitMQ, and I've found that Spring Rabbit consumes a message on a new thread. I want to know where it creates that new thread.
See Choosing a Container.
The (default) SimpleMessageListenerContainer passes the messages to a dedicated thread for each consumer; the thread(s) are created when the container is start()ed. You can specify a custom TaskExecutor. By default, it uses a SimpleAsyncTaskExecutor.
The DirectMessageListenerContainer calls the listener on the amqp-client dispatcher thread.
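For illustration, a minimal sketch of plugging your own executor into a SimpleMessageListenerContainer (queue name and pool sizing are made up):

ThreadPoolTaskExecutor consumerExecutor = new ThreadPoolTaskExecutor();
// Each consumer is a long-running task, so size the core pool to at least the consumer count.
consumerExecutor.setCorePoolSize(5);
consumerExecutor.initialize();

SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("my.queue");
container.setConcurrentConsumers(5);
container.setTaskExecutor(consumerExecutor); // replaces the default SimpleAsyncTaskExecutor
container.setMessageListener((MessageListener) message -> System.out.println(new String(message.getBody())));
container.start();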

Running RabbitMQ consumer in different thread after consuming message

I need to know about the processing of consumed messages (thread flow) via Spring's SimpleMessageListenerContainer.
I have the following understanding:
1) Messages are consumed via consumer threads (you can define the consumer thread pools via task executors).
2) The same consumer thread that receives a message processes it and stays blocked until the handler method finishes.
3) Meanwhile, other consumer threads are created to consume and process the remaining messages. The interval at which those consumer threads are created is controlled by the setStartConsumerMinInterval setting.
Please let me know if this is correct.
The next part is
I want to separate the consuming of messages and the processing of messages into different threads (different pools for consuming and processing). How can I do that?
I have tried this: I marked the handler's handle-message method as @Async so it runs on a different thread. Is that a correct way, or is there a better one?
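For reference, a hedged sketch of that hand-off; the executor bean name "processingExecutor" is made up, and @EnableAsync is assumed on a configuration class:

@Service
public class MessageProcessor {

    // The consumer thread returns as soon as it has handed the payload over;
    // the actual work runs on the separate processing pool.
    @Async("processingExecutor")
    public void process(String payload) {
        // long-running processing here
    }
}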
The last part is
In my Spring Boot application I am both publishing and consuming messages, and I am using a single connection factory (CachingConnectionFactory). Should I use 2 connection factories, one for publishing and the other for consuming, and pass the respective connection factory to the publishing and consuming beans?

Integration with external systems over JMS. Clustered environment

I have an application where I created 2 message listener containers for external system A, which listen to two queues respectively.
I also have 1 message listener container running and listening to another queue of external system B. I am using Spring's DefaultMessageListenerContainer.
My application runs in a clustered environment. When defining my message listener container I injected into it my listener, which implements the javax.jms.MessageListener interface and acts as a kind of MDB.
So my questions are:
Is it normal to have an instance of a message listener container per queue?
Will my message driven pojo (MDP) execute onMessage() on each application node?
If yes, how can I avoid it? I want each message to be consumed only once, on one of the application nodes.
What is the default behavior of DefaultMessageListenerContainer: is the message acknowledged as soon as onMessage() is reached, or after onMessage() finishes? Or do I need to acknowledge it manually?
See the Spring Framework JMS documentation and the JMS specification.
Yes, it is normal - a container can only listen to one destination.
It depends on the destination type; for a topic, each instance will get a copy of the message; for a queue, multiple listeners (consumers) will compete for messages. This has nothing to do with Spring, it's the way JMS works.
See #2.
With the DMLC, the message is acknowledged immediately, before calling the listener; set sessionTransacted = true so the ack is not committed until the listener exits. With a SimpleMessageListenerContainer, the message is ack'd when the listener exits. See the Javadocs for the DMLC and SMLC (as well as the abstract classes they subclass) for the differences.
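For example, a minimal sketch of a transacted DMLC (connection factory, queue name, and listener bean are placeholders):

@Bean
public DefaultMessageListenerContainer systemAContainer(ConnectionFactory connectionFactory, MessageListener myMessageListener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("external.system.a.queue");
    container.setMessageListener(myMessageListener); // your javax.jms.MessageListener MDP
    // The ack is committed only when onMessage() returns normally; an exception rolls the message back.
    container.setSessionTransacted(true);
    return container;
}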

Spring Integration: Message Driven Channel Adapter

As per the documentation, int-jms:message-driven-channel-adapter uses a SimpleAsyncTaskExecutor.
SimpleAsyncTaskExecutor doesn't reuse threads and creates a new thread for each task. In the case of message-driven-channel-adapter, what is the definition of a task?
In the case of the message-driven channel adapter, the task is a constantly polling loop, so it is a long-lived resource that keeps its thread active. Therefore the source of the threads doesn't matter much. See Spring JMS for more information.
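To make that concrete, here is a hedged sketch of the underlying DefaultMessageListenerContainer the adapter drives (destination name and thread prefix are illustrative); each concurrent consumer is one "task", and a custom TaskExecutor can be supplied if the default SimpleAsyncTaskExecutor is not wanted:

@Bean
public DefaultMessageListenerContainer pollingContainer(ConnectionFactory connectionFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("some.queue");
    container.setMessageListener((javax.jms.MessageListener) message -> { /* handle message */ });
    // Each consumer task is a loop that keeps polling the destination and holds its thread
    // for the life of the container, so thread reuse is irrelevant here.
    container.setTaskExecutor(new SimpleAsyncTaskExecutor("jms-consumer-"));
    return container;
}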

JMSTemplate and DefaultMessageListenerContainer

I have 2 queues in Red Hat ActiveMQ; one is used for consuming and the other for both producing and consuming object messages.
Once a message is consumed from the main queue, it is pushed to the 2nd queue for further processing. However, while using JmsTemplate, messages are getting lost randomly.
I am using the same ActiveMQConnectionFactory bean for the 2 DMLC containers and for the JmsTemplate.
Let me know how to ensure that messages are not lost when using JmsTemplate.
I would double-check that nobody else checks for messages on the queues you have. If there is some sort of development environment where several instances of the application are running, they might compete for the messages. It could be another developer launching another instance of the app using the same connection string to ActiveMQ, or dev/stage environments.