Spring Integration: Message Driven Channel Adapter

As per the documentation, int-jms:message-driven-channel-adapter uses a SimpleAsyncTaskExecutor.
SimpleAsyncTaskExecutor doesn't reuse threads and creates a new thread for each task. In the case of the message-driven-channel-adapter, what is the definition of a task?

In the case of the message-driven channel adapter, the task is a constantly polling loop, so it is a long-lived resource that keeps its thread active. Therefore we don't care too much about the source of the threads. See Spring JMS for more information.
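To make the answer concrete, here is a minimal Java-config sketch of the kind of listener container that backs the adapter (a DefaultMessageListenerContainer). The queue name, concurrency, and pool sizes are illustrative assumptions, not taken from the question; the point is that each concurrent consumer is one long-running task.

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class JmsAdapterConfig {

    @Bean
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("inbound.queue"); // hypothetical queue name
        // Each concurrent consumer is one "task": a loop that keeps polling the
        // broker, so the thread it runs on stays alive for the life of the container.
        container.setConcurrentConsumers(3);
        // Optionally replace the default SimpleAsyncTaskExecutor; because the
        // consumer loops are long-lived, thread reuse matters little here.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(3);
        executor.setThreadNamePrefix("jms-consumer-");
        executor.initialize();
        container.setTaskExecutor(executor);
        return container;
    }
}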

Related

Spring Rabbit CachingConnectionFactory Thread Pool

I have around 10 different RabbitMQ queues in 10 different virtual hosts to connect to. For each queue, a separate SimpleMessageListenerContainer bean is defined in my Spring Boot application, and a separate Spring Integration flow is created using each specific SimpleMessageListenerContainer.
The concurrency for each SimpleMessageListenerContainer is set to 1-3. Each SimpleMessageListenerContainer bean uses a separate CachingConnectionFactory bean. The connection factory cache mode is set to CHANNEL.
We also have another IntegrationFlow to publish messages to an outbound queue, which uses a different connection factory. I am not setting any thread pool task executors on the connection factories, so they use the default one. While doing a load test we noticed that multiple thread pools (prefixed with pool-) are getting created, and after a certain point the application crashes, possibly due to the high number of threads.
It looks like the default thread pool executor has an effectively unbounded maximum size, which may spin up threads on demand. I tried setting a custom thread pool task executor for each connection factory, and the threads no longer grow like before, but the Java profiler shows the SimpleMessageListenerContainer threads getting BLOCKED frequently.
I want to know if there are any best practices to follow when setting custom thread pool task executors on the connection factory, such as a ratio between listener threads and connection factory threads, etc.?
I have done some debugging; ...-1 gets renamed to, for example, AMQP Connection 127.0.0.1:5672.
That thread is not from the pool, but it is created by the same thread factory.
Similarly, the scheduled executor (for heartbeats) uses the same thread factory, and gets ...-2.
Hence the pool starts at ...-3. So indeed, you have a fixed pool of 8 threads, an I/O thread, and a heartbeat thread for each factory.
With a large number of factories like that, you probably don't need so many threads; I would suggest a single pooled executor with sufficient threads to satisfy your workload. Experimentation is probably the only way to determine the number, but I would guess it's something less than 88 (11 x 8).
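A minimal sketch of that suggestion, assuming Spring AMQP's CachingConnectionFactory: one shared, bounded executor is passed to every factory instead of letting each factory's Rabbit client create its own fixed pool of 8 threads. The host, virtual host, bean names, and pool size are placeholders.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitThreadingConfig {

    // One shared, bounded pool for all connection factories.
    @Bean
    public ExecutorService rabbitExecutor() {
        return Executors.newFixedThreadPool(32); // size is a guess; tune under load
    }

    @Bean
    public CachingConnectionFactory ordersConnectionFactory(ExecutorService rabbitExecutor) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost"); // hypothetical host
        cf.setVirtualHost("vhost-1");                                            // hypothetical vhost
        cf.setExecutor(rabbitExecutor); // share the pool instead of a per-factory default
        return cf;
    }

    // ...repeat for the other virtual hosts, passing the same executor bean.
}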

JMS listener using thread pool

I'm working on a Spring project where I have an implementation of a JMS event listener to process messages from a queue.
To be precise, I'm using a SQS (AWS) queue.
All works fine.
My point is this:
I haven't configured anything about concurrency, but I would like to have more listener threads to increase the performance (speed) of message processing from the queue.
I'm thinking about configuring a thread pool (TaskExecutor) and adding the @Async annotation on my message-processing methods.
So I would have an onMessage method in my listener where, after message validation, I would call these async methods.
Is this a good practice? Will I have some issues using this approach?
I'm looking on the web and I see that it's possible to configure the concurrency value directly on the listener.
I'm very confused; there are a lot of possible ways to do this and I'm not able to understand the best approach.
Are these equivalent solutions?
Do not use @Async - simply increase the concurrency of the listener container and it will be handled for you automatically by spring-jms.
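For example, with the annotation-driven setup (@EnableJms plus a listener container factory) the concurrency can be declared directly on the listener; the container then runs the method on several consumer threads and no @Async is involved. The destination name and concurrency range below are placeholders, not from the question.

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // The container manages the threads: it scales between 3 and 10 concurrent
    // consumers, each invoking this method on its own thread.
    @JmsListener(destination = "orders.queue", concurrency = "3-10")
    public void onMessage(String payload) {
        // validate and process the message here
    }
}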

Running RabbitMQ consumer in different thread after consuming message

I need to know about the processing of consumed messages (thread flow) via Spring's SimpleMessageListenerContainer.
I have the following understanding:
1) Messages are consumed via consumer threads (you can define the consumer thread pools via task executors).
2) The same consumer thread that receives a message processes it and stays blocked until it finishes executing the handler method.
3) Meanwhile, other consumer threads are created to consume and process the other messages. The interval at which those consumer threads are created is based on the setStartConsumerMinInterval setting.
Please let me know if I am correct.
The next part is:
I want to separate the consuming of a message and the processing of a message into different threads (different pools for consuming and processing). How can we do that?
I have tried it this way: I made the handler's message-handling method @Async so it runs on different threads. Is this the correct way, or is a better way available?
The last part is:
In my Spring Boot application I am both publishing and consuming messages, and I am using a single connection factory (CachingConnectionFactory). Should I use 2 connection factories, 1 for publishing and the other for consuming, and pass the respective connection factory to the publishing and consuming beans?
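For reference, here is a rough sketch of the hand-off described in the question, assuming Spring AMQP's SimpleMessageListenerContainer: the consumer thread only receives the message and submits it to a separate processing pool. The queue name, pool sizes, and bean wiring are assumptions for illustration; note the acknowledgement caveat in the comments.

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class ConsumerHandoffConfig {

    // Pool used only for processing, separate from the container's consumer threads.
    @Bean
    public ThreadPoolTaskExecutor processingExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setThreadNamePrefix("processor-");
        return executor;
    }

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            ThreadPoolTaskExecutor processingExecutor) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("work.queue"); // hypothetical queue
        container.setConcurrentConsumers(2);
        // The consumer thread only receives and hands off; processing happens on
        // the processing pool. Caveat: with the default AUTO acknowledgement the
        // message is acked when the listener returns, i.e. before processing finishes.
        container.setMessageListener((MessageListener) message ->
                processingExecutor.execute(() -> process(message)));
        return container;
    }

    private void process(Message message) {
        // long-running work here
    }
}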

how to understand consumer and listener in spring kafka

I'm using Spring Kafka with the multi-threading feature (ConcurrentKafkaListenerContainerFactory), and I found 2 types of thread names, like this:
1. #0-1-kafka-consumer-1
2. #0-1-kafka-listener-3
So how can I understand these 2 kinds of threads? What's the relationship between them?
Thanks in advance!
The consumer thread polls the KafkaConsumer for messages and hands them over to the listener thread which invokes your listener.
This was required with early versions of the KafkaConsumer because a slow listener could cause partition rebalancing - the heartbeats had to be sent on the consumer thread.
They have now fixed this in the KafkaConsumer (heartbeats are sent in the background) so in 2.0 we will only have one thread type and the listener is invoked on the consumer thread. 2.0.0.M2 (milestone 2) is available now; the release is planned for around the end of next month.
In previous Kafka versions (< 0.10.1) it was a pain to have slow listeners: a consumer thread that didn't poll was considered dead, and therefore a rebalance happened. That's why we introduced the thread hand-off and delivered records for processing in the listener on a separate thread. So those thread prefixes are exactly about that.
In the latest version, 2.0, based on Kafka 0.10.2, we have removed that logic, because heartbeats now happen properly in the Kafka client itself. Therefore we don't need to worry about slow listeners any more; everything now runs on the consumer's thread.
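As an illustration of where those threads come from, here is a minimal Spring Kafka configuration sketch for a pre-2.0 setup; the broker address, group id, and concurrency value are placeholders. Each of the 3 concurrent containers gets its own -consumer- thread, and (before 2.0) a matching -listener- thread that runs the @KafkaListener method.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // 3 child containers: each polls the broker on its own consumer thread;
        // pre-2.0 each also hands records to a separate listener thread.
        factory.setConcurrency(3);
        return factory;
    }
}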

How to launch a long running Java EE job?

I need to fire off a long-running batch-type job, and by long we are talking about a job that can take a couple of hours. The EJB that has the logic to run this long-running job will communicate with a NoSQL store, load data, etc.
So I am using JMS MDBs to do this asynchronously. However, as each job can potentially take an hour or more (let's assume 4 hours max), I don't want the onMessage() method in the MDB to be waiting for so long. So I was thinking of firing off an asynchronous EJB within the onMessage() MDB method so that the MDB can be returned to the pool right after the call to the batch EJB runner.
Does it make sense to combine an asynchronous EJB method call within an MDB? Most samples suggest using one or the other to achieve the same thing.
If the EJB to be invoked from the MDB is not asynchronous, then the MDB will be waiting for a potentially long time.
Please advise.
I would simplify things: use @Schedule to invoke an @Asynchronous method and forget about JMS. One less thing that can go wrong.
Whilst not yet ready for prime time, JSR 352: Batch Applications looks very promising for this sort of stuff.
https://blogs.oracle.com/arungupta/entry/batch_applications_in_java_ee
It's a matter of taste I guess.
Whether you have a thread from the JMS pool running your job or an async EJB do it, the end result will be the same - a thread will be blocked from some pool.
There is nothing wrong with spawning an async bean from an MDB, since you might want to have the jobs triggered by a messaging interface but not block the MDB thread pool. Also, consider that transactions often time out by default well before an hour, so if your MDB is transactional for some reason, you might want to consider firing off that async EJB inside onMessage.
I think Petter answers most of the question. If you are only using the MDB to get asynchronous behaviour, you could just fire the @Asynchronous method ASAP.
But if you are interested in any of the other features your JMS implementation might offer in terms of reliability, persistent queues, slow-consumer policies, or priority on jobs, you should stick with MDBs.
One of the reasons behind introducing @Asynchronous in EJB 3.1 is to provide a more lightweight way to do asynchronous processing when the other JMS/MDB features are not needed.
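A rough sketch of the MDB-plus-@Asynchronous combination discussed above (two classes, each in its own file). The queue JNDI name and class names are made up for illustration; the point is that onMessage() returns immediately while the container runs the long job on its async EJB pool.

// BatchJobRunner.java
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class BatchJobRunner {

    @Asynchronous
    public void runJob(String jobPayload) {
        // hours-long batch work: parse the payload, load data into the NoSQL store, etc.
    }
}

// BatchJobMdb.java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/batchQueue"), // hypothetical JNDI name
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class BatchJobMdb implements MessageListener {

    @EJB
    private BatchJobRunner runner;

    @Override
    public void onMessage(Message message) {
        try {
            // Hand the work to the @Asynchronous EJB and return immediately, so this
            // MDB instance (and its JMS transaction) is released right away.
            runner.runJob(((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new IllegalStateException(e);
        }
    }
}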
