I need to understand how a consumed message is processed (the thread flow) by Spring's SimpleMessageListenerContainer.
I have the following understanding:
1) Messages are consumed by consumer threads (you can define the consumer thread pool via a task executor).
2) The same consumer thread that receives a message also processes it and stays blocked until the handler method finishes executing.
3) Meanwhile, additional consumer threads are created to consume and process other messages. The interval at which those consumer threads are started is governed by the setStartConsumerMinInterval setting.
Please let me know if I am correct (a rough configuration sketch covering these settings follows below).
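For reference, here is a minimal sketch of a container configured along the lines of points 1-3, assuming a RabbitMQ queue named work.queue and an existing ConnectionFactory bean (both names are illustrative):

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class ListenerConfig {

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("work.queue");            // hypothetical queue name
        container.setConcurrentConsumers(2);              // initial number of consumer threads
        container.setMaxConcurrentConsumers(5);           // upper bound when scaling up
        container.setStartConsumerMinInterval(10_000);    // wait at least 10s before starting another consumer
        container.setTaskExecutor(new SimpleAsyncTaskExecutor("amqp-consumer-")); // source of consumer threads
        // The thread that receives a message is the thread that runs this listener.
        container.setMessageListener(message ->
                System.out.println("Handled on " + Thread.currentThread().getName()));
        return container;
    }
}

With this setup the listener runs on whichever consumer thread received the message, which is the behaviour described in point 2.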
The next part:
I want to separate the consuming of messages and the processing of messages into different threads (different pools for consuming and for processing). How can we do that?
I have tried it this way: I made the handler's handleMessage method @Async so that it runs on different threads. Is this a correct way, or is a better way available?
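A minimal sketch of that @Async approach, assuming a hypothetical MessageHandler bean and a dedicated processing pool (all names are illustrative); note that the consumer thread then returns immediately, so the message is acknowledged before processing finishes, which matters if you rely on redelivery:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Component;

@Configuration
@EnableAsync
class AsyncConfig {

    @Bean("processingExecutor")
    public ThreadPoolTaskExecutor processingExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setThreadNamePrefix("processor-");
        return executor;
    }
}

@Component
class MessageHandler {

    @Async("processingExecutor") // runs on the processing pool, not on the consumer thread
    public void handleMessage(String payload) {
        // long-running business logic goes here
    }
}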
The last part:
In my Spring Boot application I am both publishing and consuming messages, and I am using a single connection factory (CachingConnectionFactory). Should I use two connection factories, one for publishing and the other for consuming, and pass the respective connection factory to the publishing and consuming beans?
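A minimal sketch of the two-connection-factory setup, assuming a local broker (bean names and the host are illustrative):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class RabbitConfig {

    @Bean
    public CachingConnectionFactory publisherConnectionFactory() {
        return new CachingConnectionFactory("localhost");
    }

    @Bean
    public CachingConnectionFactory consumerConnectionFactory() {
        return new CachingConnectionFactory("localhost");
    }

    @Bean
    public RabbitTemplate rabbitTemplate(
            @Qualifier("publisherConnectionFactory") ConnectionFactory publisherCf) {
        return new RabbitTemplate(publisherCf);          // publishing side
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            @Qualifier("consumerConnectionFactory") ConnectionFactory consumerCf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(consumerCf);        // consuming side
        return factory;
    }
}

Note that recent Spring AMQP versions can also give the RabbitTemplate its own publisher connection from a single CachingConnectionFactory (RabbitTemplate#setUsePublisherConnection), which may already be enough without two separate beans.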
Related
I have a Kafka listener writing data to a database. In case of a database timeout (JdbcException), I'd like to retry, and if the timeouts persist, stop consuming Kafka messages.
As far as I understand, Spring Kafka 2.9 has two CommonErrorHandler implementations:
DefaultErrorHandler tries to redeliver messages several times and then sends failed messages to the logs or a DLQ
CommonContainerStoppingErrorHandler stops the listener container and message consumption
I would like to chain both of them: first try to redeliver messages several times, then stop the container when delivery doesn't succeed.
How can I do that?
Use a DefaultErrorHandler with a custom recoverer that calls a CommonContainerStoppingErrorHandler after the retries are exhausted.
See this answer for an example:
How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
(It uses the older SeekToCurrentErrorHandler, but the same concept applies.)
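Adapted to DefaultErrorHandler, a sketch of that idea could look like the following; the back-off values are illustrative, and the anonymous subclass only captures the consumer and container so the recoverer can hand them to the stopping handler once retries are exhausted:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.CommonContainerStoppingErrorHandler;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
class KafkaErrorHandlingConfig {

    @Bean
    public DefaultErrorHandler errorHandler() {
        CommonContainerStoppingErrorHandler stopper = new CommonContainerStoppingErrorHandler();
        AtomicReference<Consumer<?, ?>> consumerRef = new AtomicReference<>();
        AtomicReference<MessageListenerContainer> containerRef = new AtomicReference<>();

        // Retry with a 1s back-off up to 3 times; when retries are exhausted, the recoverer
        // delegates to the stopping handler, which stops the listener container.
        return new DefaultErrorHandler((record, exception) ->
                stopper.handleRemaining(exception, Collections.singletonList(record),
                        consumerRef.get(), containerRef.get()),
                new FixedBackOff(1000L, 3)) {

            @Override
            public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
                    Consumer<?, ?> consumer, MessageListenerContainer container) {
                // Capture the consumer and container so the recoverer above can pass them on.
                consumerRef.set(consumer);
                containerRef.set(container);
                super.handleRemaining(thrownException, records, consumer, container);
            }
        };
    }
}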
In my project, I am using a Spring Kafka listener to consume messages from Kafka. My doubt is: if the consume method's code gets blocked for some reason and never returns, will this listener still be able to receive new messages and proceed further, or will it hang? In my case, it looks like the Kafka listener got blocked and stopped processing further messages; even another consumer of the same group is not receiving messages.
No; you will not get more records while a thread is blocked, unless the concurrency is > 1 and there are at least that many partitions. Even then, you will receive no more messages for the partition(s) assigned to the blocked consumer.
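For illustration, a listener where a blocked thread stalls only its own partitions, assuming a topic named orders with at least three partitions (names are illustrative):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class OrdersListener {

    // With concurrency = "3" and at least 3 partitions, three consumer threads share the partitions.
    // If one thread blocks, only the partitions assigned to that consumer stop making progress.
    @KafkaListener(topics = "orders", groupId = "orders-group", concurrency = "3")
    public void listen(String message) {
        // processing; if this never returns, no further records arrive for this consumer's partitions
    }
}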
I'm currently trying to refactor the processing of JMS messages to work in a distributed/cloud environment. To allow better retry and error handling, the messages are first stored in the database via a JPA entity and then read by the Spring Integration JPA inbound adapter. This works fine as long as just a single instance of my service is running. However, when multiple instances are running, the instances try to process the same message, even after introducing a processing state on the persisted messages.
I have already tried saving the JMS messages in a JDBC message store; however, I would then have to define a group identifier by which an instance could select a message, which is not really possible since the number of instances is dynamic and I cannot assign a group id to each instance. Another possibility could be some kind of distributed lock with a LockRegistry, but I couldn't make that work (a sketch of that idea is included below).
Do you have any hint/advice on how I could best implement the following requirements with Spring Integration:
JMS message should be persisted
Any instance can pick up the message and process it
If the processing fails, there will be a retry up to x times (the message could also be retried by another instance)
If an instance crashes or gets killed during processing, the message must not be lost
Is there maybe some spring-cloud component which could be helpful?
I'm happy about any hint as to which direction I should go.
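For what it's worth, here is a minimal sketch of the JdbcLockRegistry variant of the LockRegistry idea mentioned above, assuming the standard Spring Integration JDBC lock table (INT_LOCK) exists and that messages are identified by an id (all names are illustrative):

import java.util.concurrent.locks.Lock;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.stereotype.Component;

@Configuration
class LockConfig {

    @Bean
    public DefaultLockRepository lockRepository(DataSource dataSource) {
        return new DefaultLockRepository(dataSource);
    }

    @Bean
    public JdbcLockRegistry lockRegistry(DefaultLockRepository lockRepository) {
        return new JdbcLockRegistry(lockRepository);
    }
}

@Component
class MessageProcessor {

    private final JdbcLockRegistry lockRegistry;

    MessageProcessor(JdbcLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    public void process(String messageId) {
        Lock lock = lockRegistry.obtain(messageId);   // one lock key per persisted message
        if (lock.tryLock()) {                         // only the instance that gets the lock processes it
            try {
                // handle the persisted message here, then mark it as processed
            } finally {
                lock.unlock();
            }
        }
        // if tryLock() fails, another instance is already working on this message; skip it
    }
}

The lock is held only while one instance processes the message; other instances skip it when tryLock() fails and move on to the next persisted message.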
Using WildFly 15 and only Java EE (no Spring), I need to consume messages from a JMS queue in order and create a new job for every message using JBatch, in sequence, without job overlap.
For example:
JMS queue: --> msgC --> msgB --> msgA
JBatch:
on receiving msgC, create JobC, run JobC
wait for JobC to end; watching the JMS queue, on receiving msgB, create JobB, run JobB
wait for JobB to end; watching the JMS queue, on receiving msgA, create JobA, run JobA
Is it possible to achieve this?
Processing messages in parallel or in the right sequence is standard behaviour in JMS clients, and you can simply configure it to do the right thing. That's why you have a queue. Just ensure you have only one message-driven bean working on it, which should ensure you have one process and nothing running in parallel.
If you hand the task over to the batch API, a different set of threads will process it, and now you need to manually ensure that one job terminates before the next can start. So your message-driven bean would have to poll and wait until the batch job has finished (a rough sketch of such a bean is shown below).
Why would you do this, given that it just makes your life more complicated?
That said, I believe you could still benefit from the easy orchestration of batch steps, the restart capability, or some parallel execution, all of which you would otherwise have to cover in your message-driven bean yourself.
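A rough sketch of such a message-driven bean, assuming the WildFly/Artemis resource adapter (the maxSession activation property, queue name, and job name are illustrative); with container-managed transactions a long wait could hit the transaction timeout, hence NOT_SUPPORTED here:

import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                propertyValue = "java:/jms/queue/jobs"),
        @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1") // one consumer, no parallel delivery
})
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class JobStartingMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        JobOperator operator = BatchRuntime.getJobOperator();
        long executionId = operator.start("messageJob", new Properties()); // job XML name is illustrative

        // Block this consumer thread until the job finishes, so the next message is not taken earlier.
        JobExecution execution = operator.getJobExecution(executionId);
        while (execution.getBatchStatus() == BatchStatus.STARTING
                || execution.getBatchStatus() == BatchStatus.STARTED) {
            try {
                Thread.sleep(1000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            execution = operator.getJobExecution(executionId);
        }
    }
}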
As per the documentation, int-jms:message-driven-channel-adapter uses a SimpleAsyncTaskExecutor.
SimpleAsyncTaskExecutor doesn't reuse threads; it creates a new thread for each task. In the case of message-driven-channel-adapter, what is the definition of a task?
In the case of the message-driven channel adapter, the task is a constantly polling loop, so it is a long-living resource that keeps its thread active. Therefore, we don't care too much about the source of the threads. See the Spring JMS documentation for more information.
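The Java-config equivalent makes this visible; a sketch assuming a queue named inbound.queue and an existing ConnectionFactory bean (names are illustrative), showing where a custom executor would plug in if you wanted to replace the default SimpleAsyncTaskExecutor:

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.jms.ChannelPublishingJmsMessageListener;
import org.springframework.integration.jms.JmsMessageDrivenEndpoint;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
class JmsInboundConfig {

    @Bean
    public DirectChannel jmsInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("inbound.queue");
        container.setConcurrentConsumers(1);
        // Each concurrent consumer is one "task": a loop that keeps polling the broker for messages,
        // so the thread obtained from this executor stays busy for as long as the container runs.
        container.setTaskExecutor(new SimpleAsyncTaskExecutor("jms-consumer-"));
        return container;
    }

    @Bean
    public ChannelPublishingJmsMessageListener channelPublishingListener() {
        ChannelPublishingJmsMessageListener listener = new ChannelPublishingJmsMessageListener();
        listener.setExpectReply(false);
        listener.setRequestChannelName("jmsInputChannel");
        return listener;
    }

    @Bean
    public JmsMessageDrivenEndpoint jmsInbound(DefaultMessageListenerContainer listenerContainer,
            ChannelPublishingJmsMessageListener channelPublishingListener) {
        return new JmsMessageDrivenEndpoint(listenerContainer, channelPublishingListener);
    }
}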