In my POC, I am using Spring Cloud Config and Spring Stream Rabbit. I want to dynamically change the number of listeners (concurrency). Is it possible to do that? I want to do the following:
1) If there are too many messages in the queue, I want to increase the concurrency level.
2) When my downstream system is not available, I want to stop processing messages from the queue (in short, concurrency level 0).
How can I achieve this?
Thanks for the help.
The listener container running in the binder supports such changes (you can't go down to 0, but the container can be stop()ped).
However, spring-cloud-stream provides no mechanism for you to get a reference to the listener container.
You might want to consider using a @RabbitListener from Spring AMQP instead - it will give you complete control over the listener container.
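Something along these lines would give you that control (a minimal sketch; the listener id "orders", the queue name, and the assumption that the default simple container type is in use are all mine):

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Component
class OrderHandler {

    // Giving the listener an id lets you look up its container at runtime.
    @RabbitListener(id = "orders", queues = "orders.queue")
    public void handle(String payload) {
        // process the message
    }
}

@Service
class ListenerScaler {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // Raise or lower the number of concurrent consumers while running.
    public void scale(int consumers) {
        SimpleMessageListenerContainer container =
                (SimpleMessageListenerContainer) registry.getListenerContainer("orders");
        container.setConcurrentConsumers(consumers);
    }

    // "Concurrency 0": stop the container while the downstream system is unavailable.
    public void pause() {
        registry.getListenerContainer("orders").stop();
    }

    public void resume() {
        registry.getListenerContainer("orders").start();
    }
}
```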
The messages created by the producer are all being consumed as expected.
The thing is, I need to create an endpoint to retrieve the latest messages from the consumer.
Is there a way to do it?
Like an on-demand consumer?
I found this SO post, but it only covers consuming the last N records. I want to consume the latest without caring about the offsets.
Spring Kafka Consumer, rewind consumer offset to go back 'n' records
I'm working with Kotlin but if you have the answer in Java I don't mind either.
There are several ways to create listener containers dynamically; you can then start/stop them on demand. To get the records back into the controller, you'd need to use something like a blocking queue, or make the controller itself a MessageListener.
These answers show a couple of techniques for creating containers on demand:
How to dynamically create multiple consumers in Spring Kafka
Kafka Consumer in spring can I re-assign partitions programmatically?
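As a rough sketch of the blocking-queue approach (in Java rather than Kotlin; the topic, group id, and broker address are placeholders), you could create a container on demand and let your controller drain whatever has arrived:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;

public class OnDemandConsumer {

    private final BlockingQueue<String> latest = new LinkedBlockingQueue<>();
    private final KafkaMessageListenerContainer<String, String> container;

    public OnDemandConsumer() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "on-demand");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // start from the latest offset rather than replaying old records
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        ContainerProperties containerProps = new ContainerProperties("my-topic");
        containerProps.setMessageListener(
                (MessageListener<String, String>) record -> latest.offer(record.value()));

        this.container = new KafkaMessageListenerContainer<>(
                new DefaultKafkaConsumerFactory<>(props), containerProps);
    }

    public void start() {
        container.start();
    }

    public void stop() {
        container.stop();
    }

    // Called by the controller endpoint to drain whatever has arrived so far.
    public List<String> drainLatest() {
        List<String> out = new ArrayList<>();
        latest.drainTo(out);
        return out;
    }
}
```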
I want to stop/pause the queue (see https://issues.apache.org/jira/browse/AMQ-5229) so that:
- NO messages are sent to the associated consumers
- messages can still be enqueued on the queue
- the queue can still be browsed
- all the JMX counters for the queue remain available and correct.
Added: Apache ActiveMQ (Version 5.16.2)
But I don't know where to create the JmsListenerEndpointRegistry bean and how to call its start and stop methods.
Sample code will be appreciated. Thanks.
The JmsListenerEndpointRegistry is automatically configured by Spring Boot.
Simply @Autowired it into the controlling class, give the @JmsListener an id, and stop/start it using that id.
Note: this does not use the AMQ feature you referenced; it simply tells the listener container to stop/start receiving messages.
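For example (a minimal sketch; the listener id, queue name, and endpoint paths are just placeholders):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@Component
class OrderListener {

    // The id is what you use to look the container up in the registry.
    @JmsListener(id = "ordersListener", destination = "orders.queue")
    public void onMessage(String payload) {
        // process the message
    }
}

@RestController
class ListenerControlController {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    @PostMapping("/listeners/orders/stop")
    public void stop() {
        // Stops the listener container; messages keep accumulating on the queue.
        registry.getListenerContainer("ordersListener").stop();
    }

    @PostMapping("/listeners/orders/start")
    public void start() {
        registry.getListenerContainer("ordersListener").start();
    }
}
```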
I am trying to configure concurrent consumers in Spring to consume messages from RabbitMQ. In order to achieve that, I have configured consumers in two ways:
1. Annotating a method with @RabbitListener(queues = "name of queue")
2. Implementing the MessageListener interface and overriding onMessage(Message message)
In my case both ways worked fine, but I am unable to figure out the advantages/disadvantages of using @RabbitListener() to start a consumer over the other way.
Adding to that, I have configured a DirectMessageListenerContainer in my configuration and mapped it to the MessageListener implementation to achieve concurrent consumers. My question is: can we do the same mapping for a consumer implemented through @RabbitListener(), and if so, how? I couldn't find any source on how a consumer started with a @RabbitListener() annotated method can be configured with a DirectMessageListenerContainer.
Any help is appreciated.
@RabbitListener is simply a higher-level abstraction. It uses the listener container underneath.
When using Spring Boot, use the ...listener.type application property to specify which type of container you want.
The default is simple.
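If you are not relying on Boot's property, one possible sketch is to declare a DirectRabbitListenerContainerFactory yourself and point the @RabbitListener at it via containerFactory (the queue name, bean name, and concurrency value are placeholders, not a definitive setup):

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.config.DirectRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class RabbitConfig {

    // Roughly what spring.rabbitmq.listener.type=direct gives you via Boot's
    // auto-configured factory, declared here explicitly.
    @Bean
    public DirectRabbitListenerContainerFactory directFactory(ConnectionFactory connectionFactory) {
        DirectRabbitListenerContainerFactory factory = new DirectRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConsumersPerQueue(5); // concurrency for the direct container
        return factory;
    }
}

@Component
class WorkHandler {

    // The containerFactory attribute ties this listener to the direct container.
    @RabbitListener(queues = "work.queue", containerFactory = "directFactory")
    public void handle(String payload) {
        // process the message
    }
}
```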
Using Spring Boot's @RabbitListener, we are able to process AMQP messages.
Whenever a message is sent to the queue, it is immediately published to the destination exchange.
Using @RabbitListener we are able to process the message immediately.
But we need to process messages only between specific times, for example 1 AM to 6 AM.
How can we achieve that?
First of all, you can take a look at the Delayed Message Exchange feature of RabbitMQ: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
This way, on the producer side, you determine how long the message should be delayed before it is routed to the main exchange for actual consumption afterwards.
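A rough sketch of the producer side (it assumes the rabbitmq_delayed_message_exchange plugin is enabled on the broker; the exchange name, routing key, and delay value are placeholders):

```java
import org.springframework.amqp.core.Exchange;
import org.springframework.amqp.core.ExchangeBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
class DelayedExchangeConfig {

    // Declared as a delayed exchange (x-delayed-message under the covers).
    @Bean
    public Exchange delayedExchange() {
        return ExchangeBuilder.directExchange("delayed.exchange")
                .delayed()
                .build();
    }
}

@Service
class DelayedSender {

    private final RabbitTemplate rabbitTemplate;

    DelayedSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(String payload) {
        // Per-message delay in milliseconds; the message is routed onward
        // only after the delay elapses.
        rabbitTemplate.convertAndSend("delayed.exchange", "routing.key", payload, message -> {
            message.getMessageProperties().setDelay(60_000);
            return message;
        });
    }
}
```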
Another way is to take a look at Spring Integration and its Delayer component: https://docs.spring.io/spring-integration/docs/5.2.0.BUILD-SNAPSHOT/reference/html/messaging-endpoints.html#delayer
This way you consume messages from RabbitMQ, but delay them in the target application logic.
Yet another way is to start()/stop() the listener container to enable/disable consumption according to your timing requirements. This way the messages stay in RabbitMQ until you start the listener container: https://docs.spring.io/spring-amqp/docs/current/reference/html/#containerAttributes
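For that last option, a minimal sketch could look like this (it assumes a @RabbitListener with id = "nightly" and autoStartup = "false", plus @EnableScheduling on a configuration class; the cron expressions model the 1 AM to 6 AM window):

```java
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
class ListenerWindowScheduler {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // Start consuming at 1 AM server time.
    @Scheduled(cron = "0 0 1 * * *")
    public void openWindow() {
        registry.getListenerContainer("nightly").start();
    }

    // Stop consuming at 6 AM; messages simply stay in the queue until the next window.
    @Scheduled(cron = "0 0 6 * * *")
    public void closeWindow() {
        registry.getListenerContainer("nightly").stop();
    }
}
```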
I'm new to ActiveMQ. I'm using it (and Apache Camel) for batch processing that ends up communicating with web services.
My question is how does ActiveMQ control how asynchronous it really is? In other words, if it can process 20 messages at the same time but the bottleneck is the web services on the other end, how can I control that? Can I slow ActiveMQ down?
Thanks!
If you're using Apache Camel 2.4+, you can use the Throttler to control message flow to endpoints. You can change the limit dynamically as of Camel 2.8. Hope it helps.
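For example, in the Java DSL (the endpoint URIs and the rate are placeholders, not a definitive setup):

```java
import org.apache.camel.builder.RouteBuilder;

public class ThrottledRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("activemq:queue:batch.in")
            // allow at most 20 exchanges per second to reach the web service
            .throttle(20).timePeriodMillis(1000)
            .to("http://downstream-service/endpoint");
    }
}
```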
The Camel Throttler would be a good solution. It does, however, keep the messages in memory in the Camel application in the meantime, but it allows you to react precisely.
Another alternative is the throttling in-flight route policy you can configure on the Camel ActiveMQ JMS consumer. That policy can be configured with upper/lower water mark settings. The policy will then automatically suspend/resume the AMQ consumer accordingly. You can extend this logic and use your own custom metrics instead.
You can read about this policy here: http://camel.apache.org/routepolicy
This approach does not keep any messages in memory, as it suspends/resumes the AMQ consumer itself to "throttle" the flow.
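A sketch of that policy (note the class moved from org.apache.camel.impl in Camel 2.x to org.apache.camel.throttling in Camel 3.x; the thresholds and URIs are placeholders):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.ThrottlingInflightRoutePolicy; // org.apache.camel.throttling in Camel 3.x

public class PolicedRoute extends RouteBuilder {

    @Override
    public void configure() {
        ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
        policy.setMaxInflightExchanges(10);  // suspend the AMQ consumer above this high-water mark
        policy.setResumePercentOfMax(70);    // resume once in-flight drops back to 70% of max

        from("activemq:queue:batch.in")
            .routePolicy(policy)
            .to("http://downstream-service/endpoint");
    }
}
```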
You could also set a pre-fetch limit, preventing AMQ from dispatching messages on a specific transport to its consumers based on the number of message_delivered responses it gets back from the client. Here is a reference
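For example, set programmatically on the connection factory (the broker URL and prefetch value are placeholders; the same setting can also be put on the connection or destination URI):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class LowPrefetchFactory {

    public static ActiveMQConnectionFactory create() {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Dispatch at most one unacknowledged message per consumer, so slow
        // consumers (e.g. waiting on a web service) don't pile up messages.
        ActiveMQPrefetchPolicy prefetch = new ActiveMQPrefetchPolicy();
        prefetch.setQueuePrefetch(1);
        factory.setPrefetchPolicy(prefetch);

        return factory;
    }
}
```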