I'm using the masstransit/rabbitmq Docker image - RabbitMQ with the delayed-message exchange plugin.
I have added the following lines into my configuration:
RabbitMQ scheduler configuration:
x.AddDelayedMessageScheduler();
cfg.UseDelayedMessageScheduler();
Endpoint redelivery configuration:
e.UseScheduledRedelivery(...);
e.UseMessageRetry(...);
MessageRetry works as expected. I was also expecting messages to be redelivered to the queue after each interval specified in UseScheduledRedelivery, but no event fired at all; UseScheduledRedelivery does not seem to work in this configuration.
Am I missing something here?
This turned out to be a bug in MassTransit related to redelivering messages from batch consumers only. It was fixed in this commit.
I have a Kafka listener writing data to a database. In case of a database timeout (JdbcException), I'd like to retry, and if the timeouts persist, stop consuming Kafka messages.
As far as I understand, Spring Kafka 2.9 has 2 CommonErrorHandler implementations:
DefaultErrorHandler tries to redeliver messages several times and then send failed messages to logs or DLQ
CommonContainerStoppingErrorHandler stops listener container and message consumption
I would like to chain both of them: first try to redeliver messages several times, then stop container when delivery doesn't succeed.
How can I do that?
Use a DefaultErrorHandler with a custom recoverer that calls a CommonContainerStoppingErrorHandler after the retries are exhausted.
See this answer for an example:
How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
(It uses the older SeekToCurrentErrorHandler, but the same concept applies.)
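A minimal sketch of that chaining against the newer Spring Kafka 2.9 API, following the same pattern as the linked answer. The backoff values are illustrative assumptions; the subclass captures the consumer and container so the recoverer can hand them to the stopping handler once retries are exhausted:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.listener.CommonContainerStoppingErrorHandler;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public DefaultErrorHandler errorHandler() {
    CommonContainerStoppingErrorHandler stopper = new CommonContainerStoppingErrorHandler();
    // The recoverer lambda has no direct access to the consumer/container,
    // so capture them when handleRemaining is called.
    AtomicReference<Consumer<?, ?>> consumerRef = new AtomicReference<>();
    AtomicReference<MessageListenerContainer> containerRef = new AtomicReference<>();
    return new DefaultErrorHandler((record, exception) -> {
        // Runs only after the BackOff below is exhausted: stop the container
        // so the failed record's offset is never committed.
        stopper.handleRemaining(exception, Collections.singletonList(record),
                consumerRef.get(), containerRef.get());
    }, new FixedBackOff(2000L, 3)) { // 3 redeliveries, 2s apart (assumed values)

        @Override
        public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
                Consumer<?, ?> consumer, MessageListenerContainer container) {
            consumerRef.set(consumer);
            containerRef.set(container);
            super.handleRemaining(thrownException, records, consumer, container);
        }
    };
}
```

Because the container stops before the offset is committed, the failed record is redelivered when the container is started again.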
I have integrated RabbitMQ into a Spring application. In my application I am indexing into Solr using RabbitMQ.
On each of my queues I have set only one listener.
I want to stop the listener while a message is in progress. The problem is that when I stop the listener via registry.stop(), the RabbitMQ UI and logs show that the listener is stopped, but the message it was working on is still successfully indexed into Solr.
My understanding was that after killing the listener, the message would not be processed any further.
That's not correct. Stopping the listener just stops it from consuming more messages from the queue; messages currently in flight are processed gracefully. Why would you not want that? Otherwise you would lose data that had already been consumed and acknowledged on the broker.
I am using Spring Kafka with request/reply templates. I am noticing that after a while I encounter timeouts when one service calls the other. The only way this seems to resolve is when I change the topic names from request1, reply1 to request2, reply2 and redeploy both sides. What am I missing in the configuration of the listener and/or requester side? It seems that after a period of time it goes stale.
I'm reading through the docs here https://docs.spring.io/spring-kafka/docs/2.2.6.RELEASE/reference/html/#retrying-deliveries and I cannot figure out what the correct way is for implementing stateful retry with a batch listener
The docs say that a "retry adapter is not provided for batch message listeners because the framework has no knowledge of where in a batch the failure occurred".
This is not a problem for my use case as I want to just retry the whole batch.
The docs recommend that I use a RetryTemplate within the listener itself. Ok, I can do that.
The problem comes in the next section where it discusses using the stateful retry flag to make the consumer poll between retries in order to prevent the broker from dropping my consumer.
How do I configure a batch listener to do that? Is the stateful retry flag supported for batch listeners? If my retry logic is within the listener itself, wouldn't that prevent the polling? What exactly does the statefulRetry flag even do?
The latest version of Spring Kafka has a special RetryingBatchErrorHandler: https://docs.spring.io/spring-kafka/docs/2.4.6.RELEASE/reference/html/#retrying-batch-eh Thanks, Spring Kafka team!
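A sketch of wiring that handler into a batch container factory, assuming Spring Kafka 2.4.x; the backoff values and the logging recoverer are illustrative assumptions:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.RetryingBatchErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    // Redeliver the whole batch up to 3 times, 5s apart; the recoverer is
    // invoked for each record once the retries are exhausted.
    factory.setBatchErrorHandler(new RetryingBatchErrorHandler(
            new FixedBackOff(5000L, 3),
            (record, exception) -> System.err.println("Giving up on " + record + ": " + exception)));
    return factory;
}
```

The backoff-based retry pauses the consumer between attempts, so the broker does not consider it dead as long as the total retry time stays within max.poll.interval.ms.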
No; you can't add a RetryTemplate to the container factory for a batch listener.
java.lang.ClassCastException: org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter cannot be cast to org.springframework.kafka.listener.MessageListener
We will clean that up with a more meaningful error.
With the upcoming 2.3 release (release candidate currently due next Friday) you can add a BackOff to the SeekToCurrentErrorHandler which provides similar functionality to the RetryTemplate; with current releases, the redelivery will be attempted immediately.
Furthermore, another new feature recently merged provides a mechanism to retry from a specific index in a batch.
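A sketch of the 2.3-style configuration, assuming a ConcurrentKafkaListenerContainerFactory bean; the backoff values are illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> factory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // With a BackOff, redeliveries are spaced out (here: 2 retries, 1s apart)
    // instead of being attempted immediately after the seek.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2)));
    return factory;
}
```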
Using Spring Boot's @RabbitListener, we are able to process AMQP messages.
Whenever a message is sent to the queue, it is immediately published to the destination exchange.
Using @RabbitListener we are able to process the message immediately.
But we need to process messages only during a specific window, for example 1 AM to 6 AM.
How can we achieve that?
First of all, you can take a look at the Delayed Exchange feature of RabbitMQ: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
That way, on the producer side, you determine how long a message should be delayed before it is routed to the main exchange for actual consumption.
Another way is to take a look into Spring Integration and its Delayer component: https://docs.spring.io/spring-integration/docs/5.2.0.BUILD-SNAPSHOT/reference/html/messaging-endpoints.html#delayer
This way you will consume messages from the RabbitMQ, but will delay them in the target application logic.
Another way is to start()/stop() the listener container according to your timing requirements. That way messages stay in RabbitMQ until you start the listener container: https://docs.spring.io/spring-amqp/docs/current/reference/html/#containerAttributes
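The start/stop approach can be sketched with a scheduled component. The listener id "solrIndexer" and the cron expressions are assumptions; the listener itself would declare @RabbitListener(id = "solrIndexer", queues = "...", autoStartup = "false") so it does not consume before the window opens:

```java
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ListenerWindow {

    private final RabbitListenerEndpointRegistry registry;

    public ListenerWindow(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(cron = "0 0 1 * * *") // 1 AM: start consuming
    public void openWindow() {
        registry.getListenerContainer("solrIndexer").start();
    }

    @Scheduled(cron = "0 0 6 * * *") // 6 AM: stop; messages queue up in RabbitMQ
    public void closeWindow() {
        registry.getListenerContainer("solrIndexer").stop();
    }
}
```

Remember to enable scheduling with @EnableScheduling; stop() lets any in-flight message finish gracefully, as discussed above.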