Spring AMQP: Stopping a SimpleConsumer from DirectMessageListenerContainer

I have a use case where I am dynamically registering a queue with, and removing it from, a container based on some predicate. I am using a DirectMessageListenerContainer, based on the advice in the documentation for my needs.
Those dynamic queues are temporary ones that should be deleted when they have no messages and are not in use. Right now I have a scheduler that runs periodically and deregisters a queue from the container when the predicate is true.
The problem is that even after removing the queue from the container, the consumer bound to the queue is not released/stopped, so the queue never becomes eligible for deletion (because the consumer keeps it in use).
Is there a way to release or stop a consumer without restarting the container?

When you remove a queue, its consumer(s) are canceled.
This works as expected:
@SpringBootApplication
public class So72540658Application {

    public static void main(String[] args) {
        SpringApplication.run(So72540658Application.class, args);
    }

    @Bean
    DirectMessageListenerContainer container(ConnectionFactory cf) {
        DirectMessageListenerContainer dmlc = new DirectMessageListenerContainer(cf);
        dmlc.setQueueNames("foo", "bar");
        dmlc.setMessageListener(msg -> { });
        return dmlc;
    }

    @Bean
    ApplicationRunner runner(DirectMessageListenerContainer container) {
        return args -> {
            System.out.println("Hit enter to remove bar");
            System.in.read();
            container.removeQueueNames("bar");
        };
    }

}
Hit enter; the log then shows the consumer for bar being cancelled.
Perhaps your consumer thread is stuck someplace? Try taking a thread dump. If you can't figure it out, post an MCRE (minimal, complete, reproducible example) someplace.
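If it helps, here is a minimal in-process alternative to jstack (plain JDK, no extra dependencies) that prints every live thread's stack, so you can see where a consumer thread is stuck:

// print the name, state, and stack trace of every live thread
Thread.getAllStackTraces().forEach((thread, stack) -> {
    System.out.println(thread.getName() + " (" + thread.getState() + ")");
    for (StackTraceElement frame : stack) {
        System.out.println("    at " + frame);
    }
});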

Related

How to stop consuming messages from Kafka when an error occurs and restart consuming again after some time in Spring Boot

This is the first time I am using Kafka. I have a Spring Boot application, and I am consuming messages from Kafka topics and storing them in a DB. I have a requirement to handle DB failover: if the DB is down, the message should not be committed and consumption should be suspended for some time; after that, the listener can start consuming messages again. What is the best approach to do this?
I am using spring-kafka:2.2.8.RELEASE, which internally uses Kafka 2.0.1.
Configure a ContainerStoppingErrorHandler and throw an exception from your listener.
https://docs.spring.io/spring-kafka/docs/2.2.13.RELEASE/reference/html/#container-stopping-error-handlers
You can restart the container later when you have detected that your DB is back online.
https://docs.spring.io/spring-kafka/docs/2.2.13.RELEASE/reference/html/#kafkalistener-lifecycle
EDIT
@SpringBootApplication
public class So62125817Application {

    public static void main(String[] args) {
        SpringApplication.run(So62125817Application.class, args);
    }

    @Bean
    TaskScheduler scheduler() {
        return new ThreadPoolTaskScheduler();
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so62125817").partitions(1).replicas(1).build();
    }

}

@Component
class Listener {

    private final TaskScheduler scheduler;

    private final KafkaListenerEndpointRegistry registry;

    public Listener(TaskScheduler scheduler, KafkaListenerEndpointRegistry registry,
            AbstractKafkaListenerContainerFactory<?, ?, ?> factory) {

        this.scheduler = scheduler;
        this.registry = registry;
        factory.setErrorHandler(new ContainerStoppingErrorHandler());
    }

    @KafkaListener(id = "so62125817.id", topics = "so62125817")
    public void listen(String in) {
        System.out.println(in);
        // run this code if you want to stop the container and restart it in 60 seconds
        this.scheduler.schedule(() -> {
            this.registry.getListenerContainer("so62125817.id").start();
        }, new Date(System.currentTimeMillis() + 60_000));
        throw new RuntimeException("test restart");
    }

}
There are two approaches I can think of:
First approach: Leave auto-commit enabled for consumed messages (the enable.auto.commit property; it is true by default, so you do not need to change anything). Whenever your DB operation fails, publish the message to a different topic, say one named failed_events. You can then have the same application (the one that populates the DB) run, say, once a day to consume from the failed_events topic and populate the DB again. This way you can keep track of how many times the DB write has failed. One small thing to note: what if the DB is also down during that run? You can decide what to do in this case, probably discard the message if that is acceptable, or retry a certain number of times.
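A minimal sketch of the first approach, assuming a KafkaTemplate bean; the topic name events and the saveToDb method are illustrative placeholders:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
class DbWritingListener {

    private final KafkaTemplate<String, String> kafkaTemplate;

    DbWritingListener(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @KafkaListener(topics = "events")
    public void listen(String message) {
        try {
            saveToDb(message); // hypothetical DB write
        }
        catch (RuntimeException e) {
            // park the record on a separate topic for the daily reprocessing run
            this.kafkaTemplate.send("failed_events", message);
        }
    }

    private void saveToDb(String message) {
        // placeholder for the actual persistence logic
    }

}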
Second approach: If you can determine fairly reliably how long the DB will be down, and that period is short, it is better to sleep on a DB write failure. Say the application sleeps for 10 minutes before retrying. You will not have to create a separate topic in this case.
The advantage of this approach is that you don't have to run a separate instance of the same application to fetch from a different topic. You can do everything in one application, which makes maintenance relatively easy.
The disadvantage is that if the DB is down for a very long period, say a day, you will end up losing messages.
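And a sketch of the second approach, replacing the listener method in the sketch above. One caveat: the sleep must stay below max.poll.interval.ms (5 minutes by default), or the broker will consider the consumer dead and trigger a rebalance:

@KafkaListener(topics = "events")
public void listen(String message) throws InterruptedException {
    while (true) {
        try {
            saveToDb(message); // hypothetical DB write
            return;
        }
        catch (RuntimeException e) {
            // block this consumer thread before retrying; no further records
            // are delivered to this listener while it sleeps
            Thread.sleep(60_000);
        }
    }
}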

Consumer restart when I reset Spring Boot app

I have a Kafka topic with data, called "topic01".
I want to create a consumer that, every time I start my Spring Boot 2 application, starts reading that topic from the beginning.
With the following code, anything newly added to the topic reaches me, but on first startup the listener does not read the topic from the beginning.
@KafkaListener(topics = "topic01")
public void listenTopic01(ConsumerRecord<String, MiDTO> consumerRecord) throws Exception {
    logger.info("KafkaHandler");
    logger.info(consumerRecord.value().toString());
    logger.info(consumerRecord.key().toString());
    latch.countDown();
}
application.properties:
spring.kafka.consumer.group-id=XXXXX
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
What configuration should I add so that this @KafkaListener reads the topic from the beginning every time I restart my application?
Either use a unique (random) group-id each time, or have your listener class implement ConsumerSeekAware and add
@Override
public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    consumer.seekToBeginning(partitions);
}
or
@KafkaListener(topics = "topic01",
        groupId = "#{T(java.util.UUID).randomUUID().toString()}")

Application is not stopped with missingTopicFatal and manually started listeners

I'm using Spring and Spring Kafka.
When the application starts, I load compacted topics into memory; only once those topics are fully read is the application started.
To do that, I created @KafkaListeners with autoStartup set to false and a SmartLifecycle bean in phase AbstractMessageListenerContainer.DEFAULT_PHASE - 1 that calls listener.start() on all those listeners (which read the compacted topics) and then waits for them to finish.
This works great, but if I set spring.kafka.listener.missing-topics-fatal=true with a missing topic, there is an error:
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is java.lang.IllegalStateException: Topic(s) [mytopic] is/are not present and missingTopicsFatal is true
That is the expected behavior, but the application is not shut down, and my manually started listeners keep running and logging errors:
java.lang.IllegalStateException: org.springframework.context.annotation.AnnotationConfigApplicationContext#502e1410 has not been refreshed yet
How can I make the application exit in this case?
Catch the exception and shut down the JVM:
@SpringBootApplication
public class So60036945Application {

    public static void main(String[] args) {
        try {
            SpringApplication.run(So60036945Application.class, args);
        }
        catch (ApplicationContextException ace) {
            if (ace.getCause() instanceof IllegalStateException) {
                ace.printStackTrace();
                System.exit(1);
            }
        }
    }

    @KafkaListener(id = "so60036945", topics = "so60036945a")
    public void listen(String in) {
        System.out.println(in);
    }

}
But, as I said on Gitter, it is better to auto-start the compacted-topic listeners and start the other listeners manually (the other way around from what you are doing now).
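A minimal sketch of that reversed arrangement; the topic names and listener ids here are illustrative, and the wait-until-the-compacted-topics-are-fully-read logic is elided:

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
class Listeners {

    // the compacted-topic listener starts with the context (autoStartup defaults to true)
    @KafkaListener(id = "compactedLoader", topics = "compacted.topic")
    public void load(String in) {
        // populate the in-memory cache
    }

    // the main listener waits until we start it explicitly
    @KafkaListener(id = "mainListener", topics = "main.topic", autoStartup = "false")
    public void listen(String in) {
        System.out.println(in);
    }

}

@Component
class MainListenerStarter implements ApplicationRunner {

    private final KafkaListenerEndpointRegistry registry;

    MainListenerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void run(ApplicationArguments args) {
        // runs after the context is refreshed; start the main listener
        // once the compacted topics have been loaded
        this.registry.getListenerContainer("mainListener").start();
    }

}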

MessageListener.onMessage is getting called continuously on RabbitMQ with Spring Boot

I have a MessageListener.onMessage with a thread sleep. I'm simulating the actual processing time the onMessage method will take with the above-mentioned thread sleep. However, I have noticed that onMessage is called multiple times in quick succession for the remaining messages, before they are actually processed. I see this as an inefficiency.
Actual message count in the queue: 1000
Output of the running hit counter:
onMessage<<15656
onMessage<<15657
onMessage<<15658
onMessage<<15659
onMessage<<15660
onMessage<<15661
onMessage<<15662
onMessage<<15663
Code block
@Service
class ThreadPooledMessageListener implements MessageListener {

    @Autowired
    TaskExecutor threadPoolTaskExecutor;

    AtomicInteger processedCount = new AtomicInteger();

    @Override
    public void onMessage(Message message) {
        System.out.println("onMessage<<" + processedCount.incrementAndGet());
        threadPoolTaskExecutor.execute(new MessageProcessor(message));
    }
}

class MessageProcessor implements Runnable {

    Message processingMessage;

    public MessageProcessor(Message message) {
        this.processingMessage = message;
    }

    @Override
    public void run() {
        System.out.println("================================" + Thread.currentThread().getName());
        System.out.println(processingMessage);
        try {
            Thread.sleep(1000);
        }
        catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("================================");
    }
}
What are the possible fixes for this?
As @Gary Russell pointed out, the issue was that I used a non-Spring-managed SimpleMessageListenerContainer in my code. I fixed it with a Spring-managed bean and defined the concurrency there. Works as expected.
Fixed code segment
@Bean
public SimpleMessageListenerContainer simpleMessageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueues(queue);
    container.setMessageListener(threadPooledMessageListener);
    container.setConcurrentConsumers(4);
    // start() is not strictly needed here; the container is a SmartLifecycle
    // bean, so the application context starts it automatically on refresh
    container.start();
    return container;
}
> I see this as an inefficiency.
It's not clear what you mean. Since you hand off the processing of each message to another thread, the listener exits immediately and, of course, the next message is delivered.
This also risks message loss in the event of a failure: the message is acknowledged as soon as the listener returns, before the executor thread has actually processed it.
If you are trying to achieve concurrency, it's better to set the container's concurrentConsumers property and not do your own thread management in the listener; the container will manage the consumers for you.

How to stop and restart consuming message from the RabbitMQ with #RabbitListener

I am able to stop consuming and restart consuming, but the problem is that when I restart, I can process the already-published messages; new messages that I publish afterwards are not processed.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Consumer;

@Component
public class RabbitMqueue implements Consumer {

    int count = 0;

    @RabbitListener(queues = "dataQueue")
    public void receivedData(@Payload Event msg, Channel channel,
            @Header(AmqpHeaders.CONSUMER_TAG) String tag) throws IOException, InterruptedException {

        count++;
        System.out.println("\n Message received from the dataQueue is " + msg);

        // Cancelling consuming works fine.
        if (count == 1) {
            channel.basicCancel(tag);
            System.out.println("Consumer is cancelled");
        }

        count++;
        System.out.println("\n count is " + count + "\n");
        Thread.sleep(5000);

        // Restarting the consumer. I am able to process already consumed messages,
        // but I cannot see newly published messages on the queue; a newly published
        // message moves from ready to unacked, but nothing happens on the consumer side.
        if (count == 2) {
            channel.basicConsume("dataQueue", this);
            System.out.println("Consumer is started");
        }
    }
}
You must not do this (channel.basicCancel(tag)).
The channel/consumer are managed by Spring; the only thing you should do with the consumer argument is ack or nack messages (and even that is rarely needed - it's better to let the container do the acks).
To stop/start the consumer, use the endpoint registry as described in the documentation.
Containers created for annotations are not registered with the application context. You can obtain a collection of all containers by invoking getListenerContainers() on the RabbitListenerEndpointRegistry bean. You can then iterate over this collection, for example, to stop/start all containers or invoke the Lifecycle methods on the registry itself which will invoke the operations on each container.
e.g. registry.stop() will stop all the listeners.
You can also get a reference to an individual container by its id, using getListenerContainer(String id); for example, registry.getListenerContainer("multi") for the container created by the "multi" example in the documentation.
If you are using AMQP/RabbitMQ, you can try one of these:
1) Prevent starting at startup in code:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // autoStartup = false prevents handling messages immediately;
    // you need to start each listener yourself.
    factory.setAutoStartup(false);
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    return factory;
}
2) Prevent starting at startup in app.yml/props:
spring.rabbitmq.listener.auto-startup: false
spring.rabbitmq.listener.simple.auto-startup: false
3) Start/stop individual listeners
Give your @RabbitListener an id:
@RabbitListener(queues = "myQ", id = "myQ")
...
and:
@Autowired
private RabbitListenerEndpointRegistry rabbitListenerEndpointRegistry;
...
MessageListenerContainer listener =
        rabbitListenerEndpointRegistry.getListenerContainer("myQ");
...
listener.start();
...
listener.stop();
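Putting option 3 together, a minimal self-contained sketch; the service and method names are illustrative, and the listener id myQ matches the snippet above:

import org.springframework.amqp.rabbit.listener.MessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.stereotype.Service;

@Service
public class ListenerControlService {

    private final RabbitListenerEndpointRegistry registry;

    public ListenerControlService(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause() {
        // stop delivering messages to the "myQ" listener
        container().stop();
    }

    public void resume() {
        // resume delivery; the container re-registers its consumer with the broker
        container().start();
    }

    private MessageListenerContainer container() {
        return this.registry.getListenerContainer("myQ");
    }

}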
