Spring Kafka manual ack, but retry happens after exception: unexpected behaviour

I have a Kafka consumer annotated with @KafkaListener. Within the consumer I need to acknowledge the Kafka message first and then execute a REST API call. While executing the REST call I get an exception, which is expected behaviour, but the consumer retries the same message even though I acknowledged it before the REST call.
@KafkaListener(topics = "${spring.kafka.topic.name}", groupId = "${consumer.topicGroupId}")
public void listenEvent(ConsumerRecord<String, Event> consumerRecord, Acknowledgment acknowledgment) throws IOException {
    acknowledgment.acknowledge();
    patchService.patch(arguments...);
}
PatchService code:
public class PatchService {

    public String patch(arguments....) {
        try {
            ResponseEntity<String> response = restTemplate.exchange(uri, HttpMethod.PATCH, request, String.class);
            return response.getBody();
        } catch (HttpClientErrorException ex) {
            log.error("Error updating API having error response {}", ex.getResponseBodyAsString());
            throw new APIException(ex.getStatusCode(), ex.getResponseBodyAsString());
        }
    }
}
Consumer configuration code:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() throws FileNotFoundException {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
With this property set to false in consumerFactory():
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
What am I doing wrong here? I don't want retries when the REST call throws an exception; I just want to log that message, which I am already doing.

Catch the exception and don't throw it.
Kafka maintains two pointers for each group/partition - the current position and the committed offset.
The default error handler resets the current position so the failed record will be redelivered, regardless of the committed offset.
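For example, a minimal sketch based on the code in the question (patchService, APIException and the argument list are from the original post):
@KafkaListener(topics = "${spring.kafka.topic.name}", groupId = "${consumer.topicGroupId}")
public void listenEvent(ConsumerRecord<String, Event> consumerRecord, Acknowledgment acknowledgment) {
    acknowledgment.acknowledge();
    try {
        patchService.patch(arguments...);
    } catch (APIException ex) {
        // log and swallow: the error handler never sees the exception,
        // so the current position is not reset and the record is not redelivered
        log.error("Patch failed for record at offset {}", consumerRecord.offset(), ex);
    }
}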

Related

How can I test that I have configured ChainedKafkaTransactionManager correctly in my spring boot service

My Spring Boot service needs to consume Kafka events off one topic, do some processing (including writing to the DB with JPA) and then produce some events on a new topic. No matter what happens, I cannot have a situation where I have published events without updating the database, and if anything goes wrong I want the next poll of the consumer to retry the event. My processing logic, including the DB update, is idempotent, so retrying it is fine.
I think I have achieved exactly-once semantics as described at https://docs.spring.io/spring-kafka/reference/html/#exactly-once by using a ChainedKafkaTransactionManager like so:
@Bean
public ChainedKafkaTransactionManager chainedTransactionManager(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
    kafka.setTransactionSynchronization(SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager(kafka, jpa);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        ChainedKafkaTransactionManager chainedTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setTransactionManager(chainedTransactionManager);
    return factory;
}
The relevant Kafka config in my application.yaml file looks like:
kafka:
  ...
  consumer:
    group-id: myGroupId
    auto-offset-reset: earliest
    properties:
      isolation.level: read_committed
  ...
  producer:
    transaction-id-prefix: ${random.uuid}
  ...
Because the commit order is critical to my application, I would like to write an integration test to prove that the commits happen in the desired order and that, if an error occurs during the commit to Kafka, the original event is consumed again. However, I am struggling to find a good way of causing a failure between the DB commit and the Kafka commit.
Any suggestions or alternative ways I could do this?
Thanks
You could use a custom ProducerFactory to return a MockProducer (provided by kafka-clients).
Set the commitTransactionException so that it is thrown when the KTM tries to commit the transaction.
EDIT
Here is an example; it doesn't use the chained TM, but that shouldn't make a difference.
@SpringBootApplication
public class So66018178Application {

    public static void main(String[] args) {
        SpringApplication.run(So66018178Application.class, args);
    }

    @KafkaListener(id = "so66018178", topics = "so66018178")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.auto-offset-reset=earliest
@SpringBootTest(classes = { So66018178Application.class, So66018178ApplicationTests.Config.class })
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class So66018178ApplicationTests {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Test
    void kafkaCommitFails(@Autowired KafkaListenerEndpointRegistry registry, @Autowired Config config)
            throws InterruptedException {

        registry.getListenerContainer("so66018178").stop();
        AtomicReference<Exception> listenerException = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ((ConcurrentMessageListenerContainer<String, String>) registry.getListenerContainer("so66018178"))
                .setAfterRollbackProcessor(new AfterRollbackProcessor<>() {

                    @Override
                    public void process(List<ConsumerRecord<String, String>> records, Consumer<String, String> consumer,
                            Exception exception, boolean recoverable) {

                        listenerException.set(exception);
                        latch.countDown();
                    }
                });
        registry.getListenerContainer("so66018178").start();

        Map<String, Object> props = KafkaTestUtils.producerProps(this.broker);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.send("so66018178", "test");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(listenerException.get()).isInstanceOf(ListenerExecutionFailedException.class)
                .hasCause(config.exception);
    }

    @Configuration
    public static class Config {

        RuntimeException exception = new RuntimeException("test");

        @Bean
        public ProducerFactory<Object, Object> pf() {
            return new ProducerFactory<>() {

                @Override
                public Producer<Object, Object> createProducer() {
                    MockProducer<Object, Object> mockProducer = new MockProducer<>();
                    mockProducer.commitTransactionException = Config.this.exception;
                    return mockProducer;
                }

                @Override
                public Producer<Object, Object> createProducer(String txIdPrefix) {
                    Producer<Object, Object> producer = createProducer();
                    producer.initTransactions();
                    return producer;
                }

                @Override
                public boolean transactionCapable() {
                    return true;
                }
            };
        }
    }
}
Do not use ChainedKafkaTransactionManager any more; it is deprecated.
According to the docs:
https://docs.spring.io/spring-kafka/reference/html/#container-transaction-manager
"The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager in the container to start the Kafka transaction and annotate the listener method with @Transactional to start the other transaction."
In my tests, where I tried to simulate an exception in the producer after the DB transaction had committed, I simply left a mandatory field empty in the Kafka event (it used an Avro schema); in the second test I deleted the topic being produced to, using the Kafka Admin. Then I wrote some asserts to verify that the Kafka listener was called again on retry.

Spring-kafka error handling with DeadLetterPublishingRecoverer

I am trying to implement error handling in Spring Boot Kafka. In my Kafka listener I throw a runtime exception as shown below:
@KafkaListener(topics = "Kafka-springboot-example", groupId = "group-employee-json")
public void consumeEmployeeJson(Employee employee) {
    logger.info("Consumed Employee JSON: " + employee);
    if (null == employee.getEmployeeId()) {
        throw new RuntimeException("failed");
        //throw new ListenerExecutionFailedException("failed");
    }
}
And I have configured error handling as shown below:
@Configuration
@EnableKafka
public class KafkaConfiguration {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory,
            KafkaTemplate<Object, Object> template) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.setErrorHandler(new SeekToCurrentErrorHandler(
                new DeadLetterPublishingRecoverer(template)));
        return factory;
    }
}
And my listener for the DLT is shown below:
@KafkaListener(topics = "Kafka-springboot-example.DLT", groupId = "group-employee-json")
public void consumeEmployeeErrorJson(Employee employee) {
    logger.info("Consumed Employee JSON from DLT topic: " + employee);
}
But my message is not getting published to the DLT topic.
Any idea what I am doing wrong?
Edited:
application.properties
server.port=8088
#kafka-producer-config
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
#Kafka consumer properties
spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=group-employee-json
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.trusted.packages=*
public ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory(
If you use a non-standard bean name for the container factory, you need to set it on the @KafkaListener in the containerFactory property.
The default bean name is kafkaListenerContainerFactory which is auto-configured by Boot. You need to either override that bean or configure the listener to point to your non-standard bean name.
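For example, pointing the listener from the question at the non-standard bean name:
@KafkaListener(topics = "Kafka-springboot-example", groupId = "group-employee-json",
        containerFactory = "containerFactory")
public void consumeEmployeeJson(Employee employee) {
    ...
}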

AWS SQS (queue) with Spring Boot - performance issues

I have a service that reads all messages from AWS SQS.
@Slf4j
@Configuration
@EnableJms
public class JmsConfig {

    private SQSConnectionFactory connectionFactory;

    public JmsConfig(
            @Value("${amazon.sqs.accessKey}") String awsAccessKey,
            @Value("${amazon.sqs.secretKey}") String awsSecretKey,
            @Value("${amazon.sqs.region}") String awsRegion,
            @Value("${amazon.sqs.endpoint}") String awsEndpoint) {
        connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.standard()
                        .withCredentials(new AWSStaticCredentialsProvider(
                                new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
                        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsEndpoint, awsRegion))
                        .build());
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory =
                new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(this.connectionFactory);
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setConcurrency("3-10");
        factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        factory.setReceiveTimeout(2000L); // ??????????
        return factory;
    }

    @Bean
    public JmsTemplate defaultJmsTemplate() {
        return new JmsTemplate(this.connectionFactory);
    }
}
I've heard about long polling, so I wonder how I could use it in my case. I also wonder how this listener works - I do not want to make unnecessary calls to AWS SQS.
My listener, which reads messages, converts them to an object and saves it to a Redis DB:
@JmsListener(destination = "${amazon.sqs.destination}")
public void receive(String requestJSON) throws JMSException {
    log.info("Received");
    try {
        Trace trace = Trace.fromJSON(requestJSON);
        traceRepository.save(trace);
        (...)
I'd like to know your opinions - what is the best approach to minimize unnecessary calls to SQS when fetching messages?
Should I, for example, use
factory.setReceiveTimeout(2000L);
Unfortunately there is very little information about this on the Internet.
Thanks,
Matthew
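For what it's worth, SQS long polling is enabled on the queue itself via its ReceiveMessageWaitTimeSeconds attribute (up to 20 seconds), so each receive call waits for messages instead of returning immediately. A sketch in the same AWS SDK v1 style as the question above; queueUrl here is a hypothetical placeholder:
AmazonSQS sqs = AmazonSQSClientBuilder.standard().build(); // or reuse the client built in JmsConfig
sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(queueUrl) // hypothetical placeholder: resolve it from your queue name
        .withAttributes(Collections.singletonMap("ReceiveMessageWaitTimeSeconds", "20")));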

Argument type error with Spring AMQP receiver

My Spring AMQP application has been logging the following exception on startup:
org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Failed to invoke target method 'receiveMessage' with argument type = [class [B], value = [{[B#660cff44}]
From my searching I understand that this is caused by a class incompatibility with the message type, but I am not able to see where it is.
The following are the relevant code segments:
@Bean
public MessageConverter jsonMessageConverter() {
    return new Jackson2JsonMessageConverter();
}

@Bean
Queue queue() {
    return new Queue(config.getAMQPResultsQueue(), false);
}

@Bean
TopicExchange exchange() {
    return new TopicExchange(config.getAMQPResultsExchange());
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with("#");
}

@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(config.getAMQPResultsQueue());
    container.setMessageListener(listenerAdapter);
    container.setMessageConverter(jsonMessageConverter());
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(Receiver receiver) {
    return new MessageListenerAdapter(receiver, "receiveMessage");
}
and
@Component
public class Receiver {

    public void receiveMessage(String message) {
        System.out.println("Received <" + message + ">");
    }
}
I have tried setting the class of message to Byte[] but the result is the same. I am sure I am missing something simple - just not sure what it is!
The Jackson2JsonMessageConverter will only perform conversion if the message has a content_type header that contains json.
Otherwise, it will return byte[].
byte[] will also not be converted to Byte[]. Set the header or use byte[].
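For example, setting the header explicitly when publishing (a sketch assuming a RabbitTemplate bean; the exchange and routing key names are placeholders):
MessageProperties props = new MessageProperties();
props.setContentType(MessageProperties.CONTENT_TYPE_JSON); // "application/json"
Message message = new Message("{\"greeting\":\"hello\"}".getBytes(), props);
rabbitTemplate.send("myExchange", "my.routing.key", message);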
I ran into this issue when I set the property in the RabbitMQ interface to content-type (understandably, since that's how the HTTP spec spells it). But RabbitMQ uses an underscore: content_type is the name of the property you have to set in the RabbitMQ interface to publish a message with the HTTP header Content-Type.

How to read pending messages from an ActiveMQ queue in Spring Boot

I'd like to read pending (not yet acknowledged) messages from an ActiveMQ queue using Spring Boot. How can I do that?
So far I can read a message the moment it is sent to the queue:
@JmsListener(destination = "LOCAL.TEST",
        containerFactory = "myJmsListenerContainerFactory")
public void receiveMessage(final Message jsonMessage) throws JMSException {
    String messageData = null;
    // jsonMessage.acknowledge(); // don't consume message (for testing)
    LOGGER.info("=== Received message {}", jsonMessage);
}
using a standard configuration for the MQ connection:
@Bean
public ActiveMQConnectionFactory getActiveMQConnectionFactory() {
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(BROKER_URL + ":" + BROKER_PORT);
    return activeMQConnectionFactory;
}
and a standard ListenerContainerFactory:
@Bean
public DefaultJmsListenerContainerFactory myJmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(getActiveMQConnectionFactory());
    factory.setConcurrency("1-1");
    return factory;
}
But this only logs a message if I manually send one using
@Autowired
private JmsTemplate jmsTemplate;

public void send(String destination, String message) {
    LOGGER.info("sending message='{}' to destination='{}'", message, destination);
    jmsTemplate.convertAndSend(destination, message);
}
with the standard template
@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(getActiveMQConnectionFactory());
    return template;
}
I cannot read messages sent earlier that are still in the queue (since I didn't .acknowledge() them)...
JMS supports "browsing" messages which appears to be the functionality you want. You should therefore change your Spring application to use a QueueBrowser instead of actually consuming the messages.
Messages won't be resent if not acknowledged. They are not returned to the queue until the session is closed or the connection lost, for example by stopping (and restarting) the listener container created by the factory.
You can access the container using the JmsListenerEndpointRegistry bean (or stop/start the entire registry which will stop/start all of its containers).
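For example (a sketch; "myListenerId" is a hypothetical id you would set on the @JmsListener):
@Autowired
private JmsListenerEndpointRegistry registry;

public void redeliverPending() {
    // stopping the container closes its session, so unacknowledged messages
    // are returned to the queue; restarting triggers redelivery
    registry.getListenerContainer("myListenerId").stop();
    registry.getListenerContainer("myListenerId").start();
}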
To read all pending messages you can do something like this (browsing with a QueueBrowser does not consume the messages):
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=1");
Connection connection = connectionFactory.createConnection("admin", "admin");
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue destination = session.createQueue("listenerQueue");
QueueBrowser browser = session.createBrowser(destination);
Enumeration<?> elems = browser.getEnumeration();
while (elems.hasMoreElements()) {
    Message message = (Message) elems.nextElement(); // browse, don't consume
    if (message instanceof TextMessage) {
        TextMessage textMessage = (TextMessage) message;
        System.out.println("Incoming Message: '" + textMessage.getText() + "'");
    }
}
browser.close();
connection.close();
Step-by-step implementation with Spring Boot and ActiveMQ. Let's write some code to make it clearer. Note that this reads all pending messages in the current session only.
Add these dependencies in pom.xml file.
<!-- Dependencies to setup JMS and active mq environment -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-broker</artifactId>
</dependency>
Add @EnableJms to your main application class (the one with your main() method).
Create the connection factory by adding these two methods to that same class:
@Bean
public JmsListenerContainerFactory<?> myFactory(
        ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    logger.info("configuring jms connection factory....");
    // anonymous class
    factory.setErrorHandler(
            new ErrorHandler() {
                @Override
                public void handleError(Throwable t) {
                    logger.error("An error has occurred in the transaction", t);
                }
            });
    // lambda equivalent (this replaces the handler set above)
    factory.setErrorHandler(t -> logger.info("An error has occurred in the transaction"));
    configurer.configure(factory, connectionFactory);
    return factory;
}

// Serialize message content to JSON using TextMessage
@Bean
public MessageConverter jacksonJmsMessageConverter() {
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("_type");
    return converter;
}
Mention the credentials in the application.properties file:
spring.activemq.user=admin
spring.activemq.password=admin
spring.activemq.broker-url=tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=1
Autowire the JmsTemplate in any Spring bean class.
@Autowired
private JmsTemplate jmsTemplate;
Now it is time to send messages to a queue.
jmsTemplate.convertAndSend("anyQueueName", "value1");
jmsTemplate.convertAndSend("anyQueueName", "value2");
...
Add a JMS listener. This method will be called automatically by JMS whenever a message is pushed to the queue.
@JmsListener(destination = "anyQueueName", containerFactory = "myFactory")
public void receiveMessage(String user) {
    System.out.println("Received <" + user + ">");
}
You can also manually browse the messages available in the queue:
import java.util.Enumeration;

import javax.jms.JMSException;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.springframework.jms.core.BrowserCallback;

public void readMessageFromQueue() {
    jmsTemplate.browse("anyQueueName", new BrowserCallback<TextMessage>() {

        @Override
        public TextMessage doInJms(Session session, QueueBrowser browser) throws JMSException {
            Enumeration<TextMessage> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                System.out.println("message found : -" + messages.nextElement().getText());
            }
            return null; // nothing to return; we only log the browsed messages
        }
    });
}
Output:
message found :- value1
message found :- value2
-Happy Coding
