What happens when there is more than one @JmsListener in an application (in terms of concurrency)? - spring

I am trying to consume messages from three IBM MQ queues using JMS,
so there are three @JmsListener methods in my Spring Boot application.
My doubt is about how they will behave: will each consumer consume from its respective queue concurrently?
If not, what is the best way to consume from the queues concurrently, as I can't afford serial execution of the application.
Thanks in advance :)

On the @JmsListener annotation you can set the concurrency behavior:
concurrency
public abstract String concurrency
The concurrency limits for the listener, if any. Overrides the value
defined by the container factory used to create the listener
container. The concurrency limits can be a "lower-upper" String — for
example, "5-10" — or a simple upper limit String — for example, "10",
in which case the lower limit will be 1.
Note that the underlying container may or may not support all
features. For instance, it may not be able to scale, in which case
only the upper limit is used.
Source: https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/annotation/JmsListener.html#concurrency--
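For example, a minimal sketch of two listeners with different concurrency settings (the queue names and the surrounding IBM MQ connection configuration are assumptions):

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class QueueListeners {

    // Up to 5 concurrent consumers on this queue (the lower limit defaults to 1)
    @JmsListener(destination = "DEV.QUEUE.1", concurrency = "5")
    public void onQueueOne(String message) {
        System.out.println(Thread.currentThread().getName() + " received: " + message);
    }

    // Between 2 and 10 concurrent consumers, scaled by the listener container
    @JmsListener(destination = "DEV.QUEUE.2", concurrency = "2-10")
    public void onQueueTwo(String message) {
        System.out.println(Thread.currentThread().getName() + " received: " + message);
    }
}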
Each listener runs in its own thread.
You can easily test this by logging the received message; the log output shows the thread it runs in. For example:
2020-06-06 11:26:54.339 INFO 23404 --- [enerContainer-1] c.e.d.ShippingService : Hello World!
The thread name shown is [enerContainer-1] (the listener container thread, truncated by the default log pattern).
Please read more about Spring and JMS in the documentation: https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#jms-receiving

Related

How many consumers can I have? (Spring Boot + RabbitMQ)

I'm using Spring Boot with RabbitMQ, and a question came up: is there a limit to the number of consumers I can create?
Where do I find this value?
Spring AMQP has no limit on the number of consumers.
But usually it will be restricted by other things. For example, if you use a SimpleMessageListenerContainer, one consumer corresponds to one thread. When the number of consumers is large, your app may not be able to create that many threads, resulting in OOM: unable to create new native thread.
// caused OOM on my machine
@RabbitListener(queues = "testq", concurrency = "10000-10000")
public void listen(String message) {
}
If you use a CachingConnectionFactory with the CacheMode set to CONNECTION, your RabbitMQ server may not be able to carry a very large number of consumers (it will probably hit the maximum number of file descriptors), and your app may not be able to connect to RabbitMQ.
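For reference, a minimal sketch of switching the cache mode (the host name and bean layout are assumptions):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory("localhost");
        // CONNECTION mode caches connections rather than channels on a single
        // connection, so a very large consumer count can translate into many
        // broker connections and exhaust file descriptors on the server.
        factory.setCacheMode(CachingConnectionFactory.CacheMode.CONNECTION);
        return factory;
    }
}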

Spring Boot @KafkaListener with blocking queue

I am new to Spring Boot and @KafkaListener. My application receives almost 200K messages per second on a topic. I want to separate the message listener from the processing of the message.
How can I use java.util.concurrent.BlockingQueue with @KafkaListener? Can I use it together with CompletableFuture?
Any sample code would help a lot.
I believe you want your consumer to implement pipelining. It's not uncommon to implement this in a scenario like yours. Why? The KafkaConsumer is limited in that decompressing/deserializing can be time consuming, even before considering the time it takes to do the processing. Since these operations are stacked behind one thread, it would be ideal to separate the polling from the processing, which is achieved through a couple of buffers.
One way to do this: your EventReceiver spins up a thread for the polling. That thread does the same thing you always do, but instead of firing off the listeners for each event, it passes the event to a receivedEvents buffer, which could be a BlockingQueue<ReceivedEvent>. So in the for loop, you pass each record to the blocking queue. Once the for loop is over, this thread uses another buffer, such as a Queue<Map<TopicPartition, OffsetAndMetadata>>, and commits the offsets that the processing thread has successfully processed.
Next, your EventReceiver spins up another thread - the processingThread. It handles pulling records from the buffer, firing the event to all the listeners for this receiver, and then updating the queue's state for the pollingThread to commit.
Why doesn't the processingThread just commit the offsets itself instead of passing them back to the pollingThread? Because the KafkaConsumer requires that the same thread that calls .poll() is the one that calls consumer.commitAsync(...); otherwise you get a concurrency exception.
This approach doesn't work with auto-commit enabled.
In terms of how to do this with Spring Kafka, I'm not completely sure. However, I do know that Spring Kafka separates the EventReceiver from the EventListener (@KafkaListener), which separates the low-level Kafka work from the business logic. In theory you'd have to tune their implementation, but I think implementing this without the Spring Kafka library would be easier.
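Outside Spring Kafka, the pattern described above can be sketched with the plain KafkaConsumer API. This is only a rough illustration, assuming enable.auto.commit=false; the topic name, buffer capacity, and the process(...) method are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PipelinedReceiver {

    private final BlockingQueue<ConsumerRecord<String, String>> records = new LinkedBlockingQueue<>(10_000);
    private final ConcurrentLinkedQueue<Map<TopicPartition, OffsetAndMetadata>> pendingCommits = new ConcurrentLinkedQueue<>();

    public void start(Properties consumerProps) {
        // Polling thread: owns the KafkaConsumer, hands records to the buffer
        // and commits whatever the processing thread has already finished.
        Thread pollingThread = new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
                consumer.subscribe(Collections.singletonList("events"));   // assumed topic
                while (!Thread.currentThread().isInterrupted()) {
                    ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : batch) {
                        records.put(record);                               // blocks if the buffer is full
                    }
                    Map<TopicPartition, OffsetAndMetadata> done;
                    while ((done = pendingCommits.poll()) != null) {
                        consumer.commitAsync(done, null);                  // commit only from the polling thread
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "pollingThread");

        // Processing thread: drains the buffer, runs the business logic,
        // then queues the processed offset for the polling thread to commit.
        Thread processingThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    ConsumerRecord<String, String> record = records.take();
                    process(record);                                        // your listener logic
                    pendingCommits.add(Collections.singletonMap(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "processingThread");

        pollingThread.start();
        processingThread.start();
    }

    private void process(ConsumerRecord<String, String> record) {
        // placeholder for the actual message handling
    }
}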

Kafka consumer max-poll-records: 1 - performance

I have a Spring Boot project with a Kafka consumer. I need to handle errors: when a certain message arrives, I have to stop the container. So I added this setting:
spring.kafka.consumer.max-poll-records: 1
Now I need to know what impact (big or not so much) this setting will have on performance compared to the default (500). If I leave the default, then kafkaListenerEndpointRegistry.getListenerContainer("myID").stop(); does not take effect until the Kafka listener has processed all the messages in the current batch, and that is no good for me because of ordering.
You have to measure that. There is a script, kafka-verifiable-producer.sh, which can help you produce a large number of messages. On the consumer side you can then measure how long it takes to consume all the messages with the default value and how long with spring.kafka.consumer.max-poll-records: 1.
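For illustration, a rough sketch of the stop-on-error setup described in the question (the listener id, topic, and error check are placeholders; it assumes max-poll-records: 1 as above, and it performs the stop on a separate thread so the consumer thread is not blocked while the container shuts down):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class OrderConsumer {

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @KafkaListener(id = "myID", topics = "orders")
    public void listen(String message) {
        if (message.contains("ERROR")) {   // placeholder error condition
            // Stop from a separate thread; stop() waits for the listener thread
            // to finish the current record, so calling it inline would block.
            new Thread(() -> kafkaListenerEndpointRegistry
                    .getListenerContainer("myID").stop()).start();
            return;
        }
        // normal processing of the single record returned by this poll
    }
}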

synchronous message listener JMS

I have one doubt from my work with JMS. As I understand it, it is possible to create a synchronous message consumer, but I would have to run it on a schedule, because there is no listener. Alternatively, to consume messages synchronously from a queue I can create an MDB and set its pool size to 1, but I don't think that is a good solution.
My aim is to consume messages synchronously as soon as they appear on the queue. From my point of view, the solutions mentioned above are not good:
1. A consumer that is launched from time to time.
2. An MDB (normally asynchronous) with the pool set to 1.
Are there any solutions for my purpose?
Not sure why you don't like MDBs... but if you want to avoid them, you could use the Spring JMS listener:
http://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/jms.html
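A rough sketch of that approach (the connection factory, queue name, and javax.jms-based API are assumptions; the container invokes the listener as soon as a message arrives, so there is no polling on your side):

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class JmsConfig {

    @Bean
    public DefaultMessageListenerContainer orderListenerContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("ORDER.QUEUE");   // assumed queue name
        container.setConcurrency("1");                 // a single consumer, preserving order
        container.setMessageListener((MessageListener) (Message message) -> {
            // called by the container as soon as a message is available
            System.out.println("Received: " + message);
        });
        return container;
    }
}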

Concurrency value in JMS topic listener in Spring

I know Spring, but I am a newbie in JMS and have just started reading about Spring JMS. From the Spring JMS documentation, I read the following:
The number of concurrent sessions/consumers to start for each listener.
Can either be a simple number indicating the maximum number (e.g. "5")
or a range indicating the lower as well as the upper limit (e.g. "3-5").
Note that a specified minimum is just a hint and might be ignored at
runtime. Default is 1; keep concurrency limited to 1 in case of a topic
listener or if queue ordering is important; consider raising it for
general queues.
I just want to understand why concurrency should be limited to 1 in the case of a topic listener. If I increase it, say to 10 instead of 1, what would happen?
Each subscriber receives a copy of each message published to a topic. It makes no sense at all to set up multiple consumers, since all your application would do is receive the same message 10 times, in different threads.
In the case of a queue, messages on the queue would be distributed among the 10 threads and hence handled concurrently. That is indeed a very common scenario - load balancing.
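To make the difference concrete, a sketch of the two setups (bean names and destinations are assumptions): a topic listener kept at concurrency 1 and a queue listener that load-balances across several threads.

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
class JmsFactories {

    @Bean
    public DefaultJmsListenerContainerFactory topicFactory(ConnectionFactory cf) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setPubSubDomain(true);   // topic semantics: every consumer gets every message
        factory.setConcurrency("1");     // more threads would only duplicate the work
        return factory;
    }

    @Bean
    public DefaultJmsListenerContainerFactory queueFactory(ConnectionFactory cf) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setConcurrency("3-10");  // queue semantics: messages are load-balanced across threads
        return factory;
    }
}

@Component
class PriceListeners {

    @JmsListener(destination = "PRICE.TOPIC", containerFactory = "topicFactory")
    public void onBroadcast(String message) { /* ... */ }

    @JmsListener(destination = "ORDER.QUEUE", containerFactory = "queueFactory")
    public void onOrder(String message) { /* ... */ }
}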
