I ran into a situation where I want to pause consumers in a Spring Boot application so they stop consuming messages, without losing any messages from the queue. I am using AMQP and Spring Cloud Stream.
https://github.com/spring-cloud/spring-cloud-stream/blob/master/docs/src/main/asciidoc/spring-cloud-stream.adoc#binding_visualization_control states that only the Kafka binder has this option.
Can I do something to achieve this with the RabbitMQ binder?
Pause is required for Kafka because if you stop() the container, the broker will rebalance and give the partitions to another instance. With RabbitMQ, the queue remains when the container is stop()ped so the issue doesn't arise there (as long as you specify a group).
You can't stop() the container for an anonymous consumer (no group) because such consumers use an auto-delete queue. There is no way to pause such consumers (the AMQP protocol has no such mechanism).
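If stopping (rather than pausing) the container is acceptable, the binding can be stopped and restarted at runtime. A minimal sketch, assuming a Spring Cloud Stream version that exposes BindingsEndpoint; the class and the binding name "input" are illustrative:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.stereotype.Component;

@Component
public class ConsumerPauser {

    @Autowired
    private BindingsEndpoint bindings;

    // Stop the Rabbit consumer; because a group is configured, the queue is durable
    // and messages simply accumulate until the binding is started again.
    public void pause() {
        bindings.changeState("input", BindingsEndpoint.State.STOPPED);
    }

    public void resume() {
        bindings.changeState("input", BindingsEndpoint.State.STARTED);
    }
}

The linked documentation also describes doing the same thing over HTTP through the actuator bindings endpoint.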
Versions
Spring Boot 1.5.x
Spring Boot 2.4.x
Apache Kafka 0.10.2
The Situation
We have two service instances hosted on different servers. Each instance initializes multiple Kafka consumers. All consumers are listening to the same topic and are part of the same consumer group.
We are not relying on Spring Boot/Spring Kafka to configure the ConcurrentKafkaListenerContainerFactory and its DefaultKafkaConsumerFactory. All the consumer configuration properties are set to the default Apache Kafka consumer property values except for max.poll.records, session.timeout.ms, and heartbeat.interval.ms. Acknowledgement mode is set to record.
We are using the @KafkaListener annotation, setting its containerFactory property to the bean name of the initialized ConcurrentKafkaListenerContainerFactory and setting its topics property.
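For reference, a minimal sketch of the setup described above, against a recent spring-kafka version; the bean, group, and topic names and the three overridden values are illustrative:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaConsumerSetup {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> myContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        // only these three differ from the Kafka defaults, per the description above
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(
                props, new StringDeserializer(), new StringDeserializer()));
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
        return factory;
    }

    @KafkaListener(topics = "my-topic", containerFactory = "myContainerFactory")
    public void listen(String record) {
        // process the record
    }
}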
The Problem
When a topic does not get any messages published to it for a day or two, all consumers are removed from the consumer group.
I can’t find any reason for this to happen. From my reading of both the Apache Kafka and Spring Kafka documentation, if poll is called within max.poll.interval.ms, the consumer is considered alive, and if heartbeats are continuously sent within session.timeout.ms, the consumer is considered alive. According to the documentation, poll is called continuously and heartbeats are sent at the interval set by heartbeat.interval.ms.
The Questions
Is there a setting or property Spring Boot/Spring Kafka is setting that causes a consumer that hasn’t consumed any records from an idle topic for a day or two to be removed from the consumer group?
If yes, can this be turned off and what are the downsides?
If no, is there a way to rejoin the consumer group without having to restart the service and what are the downsides?
That Kafka version is very, very old.
Older versions removed the consumer offsets after no activity for 24 hours, even if the consumer is still connected. In 2.0, this was increased to 7 days. With newer brokers (since 2.1), consumer offsets are only removed if the consumers are not actually connected for 7 days.
See https://kafka.apache.org/documentation/#upgrade_200_notable
You can increase the broker's offsets.retention.minutes with older brokers.
I have a Spring Kafka consumer application. The application acts as a pass-through that polls messages from the Kafka broker and sends them to IBM MQ. What would be the best/simplest approach to storing the offsets in case of failure?
The simplest approach is to use the default mechanism of storing the offsets in Kafka itself.
If you add a SeekToCurrentErrorHandler, the container will keep redelivering records that fail in the listener, up to 10 times by default, but it can be configured for infinite retries.
If you add stateful retry, the listener adapter can add a delay between each delivery attempt.
See Stateful Retry.
ackOnError should be set to false.
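A minimal sketch of that combination, assuming spring-kafka 2.3 or later for the BackOff-based constructor; the backoff values and factory name are illustrative, and in recent versions ackOnError already defaults to false:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class RetryConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // keep seeking back to the failed record so its offset is never committed;
        // UNLIMITED_ATTEMPTS retries forever, use a finite count if preferred
        factory.setErrorHandler(new SeekToCurrentErrorHandler(
                new FixedBackOff(1000L, FixedBackOff.UNLIMITED_ATTEMPTS)));
        // never commit the offset of a record whose listener threw an exception
        factory.getContainerProperties().setAckOnError(false);
        return factory;
    }
}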
I am running multiple instances of the same Spring Boot 2.0.4 application, for scaling purposes, that consume messages from an ActiveMQ queue using the following:
@JmsListener(destination = "myQ")
Only the first consumer receives messages; if I stop the first instance, the second instance starts receiving them. I want each instance to consume messages in a round-robin fashion, each message going to exactly one consumer, but only the first consumer ever consumes anything.
It sounds like you want a JMS Topic rather than a Queue. You should also research durable subscriptions, shared subscriptions, and durable topics before you settle on the configuration you need for your setup; a sketch follows the links below.
See:
JMS API Programming Model (Search for JMS Message Consumers)
Queues vs Topics
Durable Queues and Topics
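If a topic turns out to be the right model, a minimal sketch of switching @JmsListener to pub/sub semantics; the factory and destination names are illustrative:

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class TopicListenerConfig {

    @Bean
    public DefaultJmsListenerContainerFactory topicListenerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPubSubDomain(true); // topic semantics instead of queue semantics
        return factory;
    }

    @JmsListener(destination = "myTopic", containerFactory = "topicListenerFactory")
    public void receive(String message) {
        // with a plain (non-shared) subscription, each instance receives every message
    }
}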
I have a couple of queues and I need to do the following with ONE of them:
A producer should send a message to this queue, but ALL consumers should receive it. So, if I have 5 Spring listeners on this queue, each of them should receive the message, but not the producer. I am doing this because I have a Tomcat cluster and asynchronous RabbitMQ messages, and when I get a response from a worker I don't know how to dispatch it to the correct Tomcat node. So I decided to broadcast all worker replies to all Tomcat nodes; each cluster node listens to the same output queue. The correct Tomcat instance processes the reply, and all other copies are discarded, which is fine. How do I implement this? How do I make the consumers on the Tomcat side receive the same message at the same time?
Ok, found the solution here:
RabbitMQ / AMQP: single queue, multiple consumers for same message?
It's impossible to do with a single queue in RabbitMQ; you need to create a separate queue for each consumer.
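In Spring AMQP terms that means binding one auto-delete queue per Tomcat node to a fanout exchange. A minimal sketch, with illustrative exchange and bean names:

import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReplyBroadcastConfig {

    @Bean
    public FanoutExchange repliesExchange() {
        return new FanoutExchange("worker.replies");
    }

    @Bean
    public Queue nodeReplyQueue() {
        // unique, auto-delete queue per Tomcat node; disappears when the node shuts down
        return new AnonymousQueue();
    }

    @Bean
    public Binding replyBinding(FanoutExchange repliesExchange, Queue nodeReplyQueue) {
        return BindingBuilder.bind(nodeReplyQueue).to(repliesExchange);
    }

    @RabbitListener(queues = "#{nodeReplyQueue.name}")
    public void onReply(String reply) {
        // every node receives a copy; process it only if this node owns the original request
    }
}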
An exclusive consumer in ActiveMQ is one that is sent every message from the broker until that consumer dies or goes away, at which time the broker switches to another consumer.
What is it that defines when the switchover takes place? How do you configure this in Spring JMS/ActiveMQ?
It's not Spring JMS doing the checking; it's the JMS provider, ActiveMQ.
JMS is an API specification; an empty framework, essentially. ActiveMQ provides the implementation backing for managing connections, message brokering, load-balancing, fail-over, etc.
The ActiveMQ broker handles switching over consumers based on queue properties (you don't need to do anything special in your code):
queue = new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true");
The switch-over takes place when either the consumer disconnects gracefully or the broker determines that the consumer has disappeared (via the wireFormat.maxInactivityDuration elapsing without any messages or keep-alives being received). You don't have to configure anything if you're happy with the default value of wireFormat.maxInactivityDuration (30 seconds), but you can tweak that if you want to change how long it takes before the broker gives up on a client.
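A minimal sketch of the same thing from Spring JMS, assuming the ActiveMQ client parses the destination options embedded in the name; the timeout value is illustrative:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.JmsListener;

@Configuration
public class ExclusiveConsumerConfig {

    // the exclusive-consumer flag travels with the destination name, as in the answer above
    @JmsListener(destination = "TEST.QUEUE?consumer.exclusive=true")
    public void onMessage(String message) {
        // only the currently exclusive consumer receives messages from TEST.QUEUE
    }

    // lower wireFormat.maxInactivityDuration (milliseconds) if the broker should notice a
    // dead consumer sooner than the 30-second default
    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory(
                "tcp://localhost:61616?wireFormat.maxInactivityDuration=10000");
    }
}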