I am trying to consume multiple messages from a topic with manual acknowledgment, but one acknowledge call acks all of the messages at once.
@KafkaListener(
        id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}",
        concurrency = "${kafka.buyers.concurrency}"
)
public void listen(List<String> message, Acknowledgment ack) {}
In the above code I am getting 5 messages per poll because I put the following configuration in the Spring Boot property file:
kafka:
  max-poll-records: 5 # maximum number of records returned in a single call to poll()
but when I acknowledge in that listener, it acks all 5 messages at the same time.
I actually want to acknowledge each message separately (5 messages, 5 acks).
How can I do this in a Spring Boot project?
When using a batch listener, the entire batch is acked when Acknowledgment.acknowledge() is called.
I would recommend using a single record listener rather than a batch listener for this use case.
listen(String msg, Acknowledgment ack)
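A minimal sketch of that single-record variant, reusing the placeholders from the question and assuming the container factory is not configured for batch listening (process() is a hypothetical helper):

@KafkaListener(
        id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}",
        concurrency = "${kafka.buyers.concurrency}"
)
public void listen(String message, Acknowledgment ack) {
    process(message);  // per-record business logic (hypothetical)
    ack.acknowledge(); // commits the offset of this record only
}

With max-poll-records: 5 the consumer still fetches up to 5 records per poll, but the container dispatches them to this method one at a time, so each record gets its own ack.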
It's not clear why you would commit offsets for only part of the batch.
If you must use a batch listener, it can still be done, but it is rather more complicated - you would need to receive List<ConsumerRecord<?, ?>> to get the topic/partition/offset information, and also add Consumer<?, ?> consumer to the method parameters (and remove the Acknowledgment); you can then call commitSync() on the consumer however you want. But you MUST call it on the listener thread - the consumer is not thread-safe.
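A sketch of that batch variant, assuming the factory is configured as a batch listener and that process() is again a hypothetical helper; note that the per-record commitSync() happens on the listener thread:

import java.util.Collections;
import java.util.List;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

@KafkaListener(id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}")
public void listen(List<ConsumerRecord<String, String>> records, Consumer<?, ?> consumer) {
    for (ConsumerRecord<String, String> record : records) {
        process(record); // per-record business logic (hypothetical)
        // commit this record only; the committed value is the NEXT offset to consume
        consumer.commitSync(Collections.singletonMap(
                new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1)));
    }
}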
I am using Spring Boot (version 2.7.1) with the Spring Cloud Stream Kafka binder (2.8.5) for processing Kafka messages.
I have a functional-style consumer that consumes messages in batches. Right now it retries 10 times and commits the offsets for errored records.
I now want to retry a certain number of times (this works using the error handler below), then stop processing messages and fail the entire batch without auto-committing the offsets.
I have read through the documentation and understand that CommonContainerStoppingErrorHandler can be used to stop the container from consuming messages.
My handler looks like this now, and it retries exponentially.
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, Message>> errorHandler() {
    return (container, destinationName, group) -> {
        container.getContainerProperties().setAckMode(ContainerProperties.AckMode.BATCH);
        ExponentialBackOffWithMaxRetries backOffWithMaxRetries = new ExponentialBackOffWithMaxRetries(2);
        backOffWithMaxRetries.setInitialInterval(1);
        backOffWithMaxRetries.setMultiplier(2.0);
        backOffWithMaxRetries.setMaxInterval(5);
        container.setCommonErrorHandler(new DefaultErrorHandler(backOffWithMaxRetries));
    };
}
How do I chain a CommonContainerStoppingErrorHandler with the above error handler, so that the failed batch is not committed and is replayed upon restart?
With a BatchListenerFailedException thrown from the consumer, is it possible to fail the entire batch (including valid records that come before the problematic record in that batch)?
Add a custom recoverer to the error handler - see this answer for an example: How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
No; records before the failed one will have their offsets committed.
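One way to wire the stop-after-retries behaviour into the customizer from the question is to mimic what CommonContainerStoppingErrorHandler does internally: stop the container from a separate thread, then rethrow so the failed offsets are never committed. A sketch, not a canonical recipe:

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, Message>> errorHandler() {
    return (container, destinationName, group) -> {
        ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(2);
        backOff.setInitialInterval(1);
        backOff.setMultiplier(2.0);
        backOff.setMaxInterval(5);
        container.setCommonErrorHandler(new DefaultErrorHandler((record, ex) -> {
            // retries exhausted: stop() must not run on the listener thread
            // (it would wait for that thread to exit), so stop asynchronously...
            new SimpleAsyncTaskExecutor().execute(container::stop);
            // ...and rethrow so recovery "fails" and the batch offsets stay uncommitted
            throw new IllegalStateException("Stopping container; batch is replayed on restart", ex);
        }, backOff));
    };
}

On restart, the container resumes from the last committed offset, so the whole failed batch is redelivered.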
I have a JMS message endpoint like:
@Bean
public JmsMessageDrivenEndpoint fsJmsMessageDrivenEndpoint(ConnectionFactory fsConnectionFactory,
        Destination fsInboundDestination,
        MessageConverter fsMessageConverter) {
    return Jms.messageDrivenChannelAdapter(fsConnectionFactory)
            .destination(fsInboundDestination)
            .jmsMessageConverter(fsMessageConverter)
            .outputChannel("fsChannelRouter.input")
            .errorChannel("fsErrorChannel.input")
            .get();
}
So my question is: will it fetch the next message before the current message has been processed? And if it does, will it keep fetching messages from the MQ queue until they fill up all the memory? How can I avoid that?
The JmsMessageDrivenEndpoint is based on a JMS MessageListenerContainer, its threading model, and the MessageListener callback for pulled messages. As long as your MessageListener blocks, the container does not go to the next message in the queue.

When we build an integration flow starting with a JmsMessageDrivenEndpoint, the flow becomes that MessageListener callback. As long as we process the message downstream on the same thread (a DirectChannel by default between endpoints), we don't pull the next message from the JMS queue.

If you place a QueueChannel or an ExecutorChannel in between, you shift processing to a different thread. The current (JMS listener) thread gets control back and is ready to pull the next message, and in this case your concern about memory is justified. You can still use a QueueChannel with a limited size, or configure your ExecutorChannel with a limited thread pool, as sketched below.
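If you do shift threads, bound the buffer so memory cannot grow without limit. A sketch with a QueueChannel capped at an arbitrary 100 messages (a QueueChannel also needs a poller on its consuming side):

@Bean
public MessageChannel fsBoundedChannel() {
    // holds at most 100 messages; send() blocks once the buffer is full,
    // throttling the JMS listener thread instead of exhausting memory
    return new QueueChannel(100);
}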
In any case, my recommendation is not to do any thread shifting in the flow when you start from a JMS listener container. It is better to block for the next message and let the current transaction finish its job; that way you won't lose a message when something crashes.
I am using ActiveMQ for messaging, and there is a requirement that if a message is a duplicate, it should be handled by AMQ automatically.
For that I generate a unique message key and set it in the message post-processor.
The following is the code:
jmsTemplate.convertAndSend(dataQueue, event, message -> {
    LocalDateTime dt = LocalDateTime.now();
    long ms = dt.get(ChronoField.MILLI_OF_DAY) / 1000;
    String messageUniqueId = event.getResource() + event.getEntityId() + ms;
    System.out.println("messageUniqueId : " + messageUniqueId);
    message.setJMSMessageID(messageUniqueId);
    message.setJMSCorrelationID(messageUniqueId);
    return message;
});
As can be seen, the code generates a unique id and then sets it on the message in the post-processor.
Can someone help me with this? Is there any other configuration that I need to do?
A consumer can receive duplicate messages for two main reasons: a producer sent the same message more than once, or a consumer received the same message more than once.
Apache ActiveMQ Artemis includes powerful automatic duplicate message detection, filtering out messages that a producer sent more than once.
To prevent a consumer from receiving the same message more than once, an idempotent consumer must be implemented; for example, Apache Camel provides an Idempotent Consumer component that works with any JMS provider, see: http://camel.apache.org/idempotent-consumer.html
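On the producer side, Artemis keys its duplicate detection on a well-known string property, not on JMSMessageID (which the provider overwrites on send anyway). A sketch, assuming the broker is Artemis ("classic" ActiveMQ 5.x does not support this property) and reusing the event fields from the question:

jmsTemplate.convertAndSend(dataQueue, event, message -> {
    // _AMQ_DUPL_ID (Message.HDR_DUPLICATE_DETECTION_ID) is the value the broker
    // inspects; later messages carrying the same value are silently dropped
    message.setStringProperty("_AMQ_DUPL_ID", event.getResource() + event.getEntityId());
    return message;
});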
In my Spring Boot/Kafka project I have the following listener:
@KafkaListener(topics = "${kafka.topic.update}", containerFactory = "updateKafkaListenerContainerFactory")
public void onUpdateReceived(ConsumerRecord<String, Update> consumerRecord, Acknowledgment ack) {
    // do some logic
    ack.acknowledge();
}
Inside the listener I need to check some condition according to my business logic, and if it is not met, skip processing of that particular message and let Kafka know to redeliver it one more time.
The reason I need this: according to the business logic of my application, I must avoid sending more than one post per second to a particular Telegram chat. That is why I'd like to check the chatLastSent time in the Kafka listener and postpone the message sending if needed (via redelivery to this Kafka topic).
How do I properly do this? Is it enough to simply not call ack.acknowledge() this time, or is there another, more proper way to achieve it?
Use the SeekToCurrentErrorHandler.
When you throw an exception, the container will invoke the error handler which will re-seek the unprocessed messages so they will be fetched again on the next poll.
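A sketch of wiring it into the container factory named in the question, assuming spring-kafka 2.x where SeekToCurrentErrorHandler is available (it was superseded by DefaultErrorHandler in 2.8):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Update> updateKafkaListenerContainerFactory(
        ConsumerFactory<String, Update> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Update> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // on an exception from the listener, re-seek the unprocessed records
    // so they are fetched again on the next poll
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}

Then, in the listener, throw an exception when chatLastSent is too recent (instead of merely skipping ack.acknowledge()), and the record will be redelivered.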
You can use a RecordFilterStrategy.
See the doc here: https://docs.spring.io/spring-kafka/docs/2.0.5.RELEASE/reference/html/_reference.html#_filtering_messages
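Note that filtering behaves differently from redelivery. A sketch, added to the same container factory, where conditionMet() is a hypothetical predicate:

// filter() returning true discards the record; with ackDiscarded = true it is
// also acknowledged, so it will NOT be redelivered later
factory.setRecordFilterStrategy(record -> !conditionMet(record.value()));
factory.setAckDiscarded(true);

So a RecordFilterStrategy fits permanently skipping messages; for the postpone-and-retry requirement above, the error-handler approach is the better fit.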
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing a message, the execution of the consumer is aborted abruptly and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that, I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using a QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session, where once the message is processed, I am calling:
session.commit();
This sends the acknowledgement.
I am implementing Spring's org.springframework.jms.listener.SessionAwareMessageListener to receive messages and then process them.
While processing the messages, I am using Spring Batch to insert some data in the database.
In one particular case, it tried to insert data too big for the column; that threw an exception and the transaction was aborted.
I have now fixed my producer and consumer so that such data should not occur again.
But my question is: what about the 30 "in delivery" messages that are sitting on my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a transacted session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // start delivery; note that start() is on the Connection, not the Session
Message msg = consumer.receive();
session.rollback(); // this will make the messages be redelivered
If you are using a non-transacted session:
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // again, start() belongs to the Connection
// this means the message is ACKed as we receive it (auto-ACK)
Message msg = consumer.receive();
// however, the consumer may still hold a buffer of messages from the server...
// if you are not using the consumer any longer, close it
consumer.close(); // this releases the messages held in the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is documented for 2.2.5, but it has not changed in later releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
I"m covering all the possibilities I could think of since you're not being specific on how you are consuming. If you provide me more detail then I will be able to tell you more:
You can indeed read the messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans > jboss.as > messaging > default > myJmsQueue > Operations
listMessagesAsJson
[edit]
Since 2.3.0 you have a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
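If you prefer to read them programmatically rather than through jconsole, here is a rough JMX sketch; the service URL and ObjectName are assumptions mirroring the MBean path above (AS7 typically exposes JMX over the remoting-jmx protocol), so adjust them for your server:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadStuckMessages {
    public static void main(String[] args) throws Exception {
        JMXConnector connector = JMXConnectorFactory.connect(
                new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999")); // assumed URL
        MBeanServerConnection mbsc = connector.getMBeanServerConnection();
        // assumed name, mirroring MBeans > jboss.as > messaging > default > myJmsQueue
        ObjectName queue = new ObjectName(
                "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
        // null filter = all messages; returns a JSON array of message headers/properties
        String json = (String) mbsc.invoke(queue, "listMessagesAsJson",
                new Object[] { null }, new String[] { String.class.getName() });
        System.out.println(json);
        connector.close();
    }
}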