Spring cloud stream kafka consumer DefaultErrorHandler not working

I need help with an error handling scenario in the Spring Cloud Stream Kafka binder. My application has a Java 8 consumer whose binding is specified in application.yaml. The consumer is written as:
@Bean
public Consumer<Message<Transaction>> doProcess() {
    return message -> {
        Transaction transaction = message.getPayload();
        if (true) {
            throw new RuntimeException("exception!! !!:)");
        }
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT,
                Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    };
}
application.yaml:
spring.application.name: appname
spring.cloud.stream:
  function.definition: doProcess
  kafka:
    default.consumer:
      startOffset: latest
      useNativeDecoding: true
    bindings:
      input.consumer.autoCommitOffset: false
  bindings:
    doProcess-in-0:
      destination: kafka.input.topic.name
      group: appGroup
      content-type: application/*+avro
      consumer:
        autoCommitOffset: false
Beans defined are:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer> listener() {
    System.out.println("DEBUG: Bean has been created.");
    return new KafkaListenerContainerCustomizer(null); // the notifier is not used by configure()
}

public class KafkaListenerContainerCustomizer implements ListenerContainerCustomizer<AbstractMessageListenerContainer> {

    private final Object notifier;

    public KafkaListenerContainerCustomizer(Object notifier) {
        this.notifier = notifier;
    }

    @Override
    public void configure(AbstractMessageListenerContainer container, String destinationName, String group) {
        KafkaGlobalErrorHandler eh = new KafkaGlobalErrorHandler(new ExponentialBackOff());
        container.setCommonErrorHandler(eh);
    }
}
public class KafkaGlobalErrorHandler extends DefaultErrorHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaGlobalErrorHandler.class);

    public KafkaGlobalErrorHandler(BackOff backOff) {
        super(backOff);
    }

    @Override
    public void handleRecord(Exception exception, ConsumerRecord<?, ?> record, Consumer<?, ?> consumer,
            MessageListenerContainer container) {
        LOGGER.error("Error occurred while processing: " + ListenerUtils.recordToString(record), exception);
        String topic = record.topic();
        long offset = record.offset();
        int partition = record.partition();
        if (exception.getClass().equals(DeserializationException.class)) {
            DeserializationException deserializationException = (DeserializationException) exception;
            LOGGER.error("Malformed message, DeserializationException on topic {}, offset {}, data {}, msg {}",
                    topic, offset, deserializationException.getData(),
                    deserializationException.getLocalizedMessage());
        }
        else {
            LOGGER.error("An exception has occurred. topic {}, offset {}, partition {}, msg {}",
                    topic, offset, partition, exception.getLocalizedMessage());
        }
        consumer.commitSync();
    }

    @Override
    public void handleBatch(Exception exception, ConsumerRecords<?, ?> records, Consumer<?, ?> consumer,
            MessageListenerContainer container, Runnable invokeListener) {
        for (ConsumerRecord<?, ?> record : records) {
            String topic = record.topic();
            long offset = record.offset();
            int partition = record.partition();
            if (exception.getClass().equals(DeserializationException.class)) {
                DeserializationException deserializationException = (DeserializationException) exception;
                LOGGER.error("Malformed message, DeserializationException on topic {}, offset {}, data {}, msg {}",
                        topic, offset, deserializationException.getData(),
                        deserializationException.getLocalizedMessage());
            }
            else {
                LOGGER.error("An exception has occurred. topic {}, offset {}, partition {}, msg {}",
                        topic, offset, partition, exception.getLocalizedMessage());
            }
            consumer.commitSync();
        }
    }

    @Override
    public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer,
            MessageListenerContainer container, boolean batchListener) {
        LOGGER.error("Error occurred while not processing records", thrownException);
    }
}
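One caveat about the handler above: with spring-kafka 2.8.x, DefaultErrorHandler reports remainingRecords() as true by default (it seeks the unprocessed records after a failure), so for a record listener the container calls handleRemaining rather than handleRecord, and a handleRecord override on a DefaultErrorHandler subclass is never invoked. A minimal sketch of hooking handleRemaining instead, under that assumption:
// Sketch: with DefaultErrorHandler's default seek-after-error behaviour, record
// listener failures arrive here; the first element is the record that failed.
@Override
public void handleRemaining(Exception thrownException, List<ConsumerRecord<?, ?>> records,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    if (!records.isEmpty()) {
        LOGGER.error("Listener failed for record: {}", records.get(0), thrownException);
        // send the email/notification here before delegating
    }
    // keep the back-off, retry, and seek semantics of DefaultErrorHandler
    super.handleRemaining(thrownException, records, consumer, container);
}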
Now, I am struggling with error handling:
We need to write a custom exception handler that can catch exceptions from both the application code and the framework, and send a notification to a user group via email in our AWS environment. We have not been able to find an error handler that catches both types of exception. We tried SeekToCurrentErrorHandler, but it did not work. Then I tried DefaultErrorHandler, as suggested in this post: Spring cloud stream kafka consumer error handling and retries issues, but it works only for some exceptions (those it catches in the handleOtherException method); if the consumer code throws a RuntimeException (as in the consumer code shown above), it is not caught by the DefaultErrorHandler.
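One thing worth checking: the function binding wraps the listener in the binder's own retry template, and with the default maxAttempts of 3 an application exception is retried by the binder and then routed to the binding's error channel, so it may never reach the container's CommonErrorHandler at all. Setting maxAttempts to 1 disables binder-level retry and lets the exception propagate to the container, where the DefaultErrorHandler can see it. A minimal sketch (note also that the autoCommitOffset above is set on a Kafka binding named input, which does not match the actual binding name doProcess-in-0):
spring:
  cloud:
    stream:
      bindings:
        doProcess-in-0:
          consumer:
            max-attempts: 1  # disable binder retry so exceptions reach the container error handler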

Related

DefaultErrorHandler is not configurable if @RetryableTopic is used for retry and the DLT handler

Spring Boot version: 2.7.6
Spring Kafka version: 2.8.11
Issue:
I was trying to handle deserialization issues in code. To do so, I created my own class by extending DefaultErrorHandler and overriding public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer, MessageListenerContainer container, boolean batchListener) {}.
Sample code as below:
public class CustomDefaultErrorHandler extends DefaultErrorHandler {

    private static Logger log = LoggerFactory.getLogger(CustomDefaultErrorHandler.class);

    @Override
    public void handleOtherException(Exception thrownException, Consumer<?, ?> consumer, MessageListenerContainer container, boolean batchListener) {
        manageException(thrownException, consumer);
    }

    private void manageException(Exception ex, Consumer<?, ?> consumer) {
        log.error("Error polling message: " + ex.getMessage());
        if (ex instanceof RecordDeserializationException) {
            RecordDeserializationException rde = (RecordDeserializationException) ex;
            consumer.seek(rde.topicPartition(), rde.offset() + 1L);
            consumer.commitSync();
        } else {
            log.error("Exception not handled");
        }
    }
}
If I use @RetryableTopic along with @KafkaListener:
@RetryableTopic(listenerContainerFactory = "kafkaListenerContainerFactory", backoff = @Backoff(delay = 8000, multiplier = 2.0),
        dltStrategy = DltStrategy.FAIL_ON_ERROR,
        traversingCauses = "true", autoCreateTopics = "true", numPartitions = "3", replicationFactor = "3",
        fixedDelayTopicStrategy = FixedDelayStrategy.MULTIPLE_TOPICS, include = {RetriableException.class, RecoverableDataAccessException.class,
        SQLTransientException.class, CallNotPermittedException.class}
)
@KafkaListener(topics = "${topic.name}", groupId = "order", containerFactory = "kafkaListenerContainerFactory", id = "OTR")
public void consumeOTRMessages(ConsumerRecord<String, PayloadsVO> payload, @Header(KafkaHeaders.RECEIVED_TOPIC) String topicName) throws JsonProcessingException {
    logger.info("Payload :{}", payload.value());
    payloadsService.savePayload(payload.value(), topicName);
}
What I saw while debugging the code is that @RetryableTopic has its own DefaultErrorHandler configuration in ListenerContainerFactoryConfigurer, and it displaces my custom handler, so the deserialization process won't stop on an issue.
Can you please suggest a way forward, since I want to use annotations for the retry process in my code?
I tried to configure my own implementation of DefaultErrorHandler by extending it and registering it on the ConcurrentKafkaListenerContainerFactory.
It's quite involved, but you should be able to override the RetryTopicComponentFactory bean and override listenerContainerFactoryConfigurer() to return your custom error handler.
That said, deserialization exceptions will go straight to the DLT anyway.
BTW, calling commitSync() here is worthless because there were no records returned by the poll().
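A rough sketch of that override, assuming spring-kafka 2.8.x where ListenerContainerFactoryConfigurer exposes setErrorHandlerCustomizer (the exact method signatures vary between patch releases, so verify against your version's sources):
// Sketch only: replace the retry-topic component factory so the infrastructure
// applies a customizer to the DefaultErrorHandler it creates for @RetryableTopic.
@Bean
public RetryTopicComponentFactory retryTopicComponentFactory() {
    return new RetryTopicComponentFactory() {

        @Override
        public ListenerContainerFactoryConfigurer listenerContainerFactoryConfigurer(
                KafkaConsumerBackoffManager backoffManager,
                DeadLetterPublishingRecovererFactory recovererFactory,
                Clock clock) {
            ListenerContainerFactoryConfigurer configurer =
                    super.listenerContainerFactoryConfigurer(backoffManager, recovererFactory, clock);
            // assumed available on this version; customize the created handler here
            configurer.setErrorHandlerCustomizer(errorHandler -> {
                // e.g. attach a notification hook before the handler's normal processing
            });
            return configurer;
        }
    };
}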

Spring Kafka producer keeps waiting for one minute in some cases when producing data and fails with an exception in some cases

I have implemented Spring's KafkaTemplate for producing events in my Spring Boot project. The code for producing an event is given below.
Producer Config:
@Bean
public Map<String, Object> producerConfigs() throws FileNotFoundException {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurity().getProtocol());
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ResourceUtils.getFile("classpath:client.truststoreks").getAbsolutePath());
    props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, StringUtils.EMPTY);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.LINGER_MS_CONFIG, "100");
    return props;
}
Producer Service Code:
public class KafkaProducerService<V> implements KafkaProducer<V> {

    @Autowired
    KafkaTemplate<String, V> kafkaTemplate;

    @Autowired
    KafkaTemplate<String, V> transactionLogKafkaTemplate;

    public KafkaProducerService(KafkaTemplate<String, V> kafkaTemplate, KafkaTemplate<String, V> transactionLogKafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
        this.transactionLogKafkaTemplate = transactionLogKafkaTemplate;
    }

    @Override
    @Retryable({KafkaException.class, TimeoutException.class})
    public void produce(String topic, String key, V value) {
        log.info("Calling producer service for producing event on topic " + topic);
        sendCallbackEvents(kafkaTemplate, topic, key, value);
    }

    private void sendCallbackEvents(KafkaTemplate<String, V> kafkaTemplate, String topic, String key, V value) {
        ProducerRecord<String, V> producerRecord = new ProducerRecord<>(topic, key, value);
        ListenableFuture<SendResult<String, V>> future = kafkaTemplate.send(producerRecord);
        future.addCallback(new ListenableFutureCallback<SendResult<String, V>>() {

            @Override
            public void onSuccess(SendResult<String, V> result) {
                log.info(String.format("Produced event to topic %s: key = %-10s value = %s", topic, key, value));
            }

            @Override
            public void onFailure(Throwable ex) {
                log.error("Producing of data on topic {} failed", topic, ex.getCause());
            }
        });
    }
}
P.S.: We are using AWS MSK as the broker for producing events.
In some cases, producing an event takes one minute and then fails with the below error in the logs:
ERROR LogAccessor - Exception thrown when sending a message with key='xx' and payload='Event(key=value)' to topic topicName:
Thanks to the retry logic in the producer service the event is eventually produced, but that one-minute delay is causing several issues.
I tried to find the reason for the delay and failure by going through the Spring Kafka dependency classes, but had no luck.
I am not able to find out why the producer is delayed and fails on the first attempt in some cases. Can anyone help me identify the reason and a solution?
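One pattern worth checking: a send() that blocks for almost exactly one minute and then fails usually points at the producer's max.block.ms, which defaults to 60000 ms; the client blocks that long waiting for metadata (or buffer space) when the broker is slow or unreachable, then fails with a TimeoutException. A sketch of tightening those timeouts in the producerConfigs() above so failures surface quickly; the values are illustrative, not recommendations:
// Sketch: tighten producer timeouts so failures surface quickly instead of
// blocking for the default max.block.ms of 60 s.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");        // fail send() after 10 s without metadata/buffer space
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");  // per-request timeout
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "30000"); // overall delivery budget (>= linger.ms + request timeout)
props.put(ProducerConfig.RETRIES_CONFIG, "3");                 // in-client retries before the callback fails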

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published on the topic, the way producer.send(record, callback) works in Spring Kafka (while keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing ProducerListener a solution, and is it possible? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have implemented the below now; it seems to work as expected.
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
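A minimal sketch of that wiring, assuming an output binding named event-out-0 and a channel bean named sendResults (both names are illustrative):
// application.yml (illustrative binding name):
//   spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel: sendResults

@Bean
public MessageChannel sendResults() {
    return new DirectChannel(); // org.springframework.integration.channel.DirectChannel
}

@ServiceActivator(inputChannel = "sendResults")
public void onSendResult(Message<?> sent) {
    // the sent message plus a KafkaHeaders.RECORD_METADATA header from the client
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    if (meta != null) {
        log.info("Record written to {}-{}@{}", meta.topic(), meta.partition(), meta.offset());
    }
}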

Why is spring-cloud-stream not populating JMSMessageID while publishing messages to Solace topics?

Summary of the problem
In my project, we are trying to use Spring Cloud Stream (SCS) to connect to Solace. Eventually we plan to move to Kafka, so using SCS should let us move over quite easily, with no code changes and only minimal configuration and dependency changes.
We had been using Solace for a while over JMS. Now, when we try to publish messages to Solace using SCS, we observe that some crucial JMS headers (JMSMessageID, JMSType, JMSPriority, JMSCorrelationID, JMSExpiration) in the message are blank.
Do we need to configure the JMS headers separately? If yes, how?
What I've already tried
I tried to set headers like this, but it just results in duplicate headers with the same name.
@Output(SendReport.TO_NMR)
public void sendMessage(String request) {
    log.info("****************** Got this Report Request: " + request);
    MessageBuilder<String> builder = MessageBuilder.withPayload(request);
    builder.setHeader("JMSType", "report-request");
    builder.setHeader("JMSMessageId", "1");
    builder.setHeader("JMSCorrelationId", "11");
    builder.setHeader("JMSMessageID", "4");
    builder.setHeader("JMSCorrelationID", "114");
    builder.setHeader("ApplicationMessageId", "111");
    builder.setHeader("ApplicationMessageID", "112");
    builder.setCorrelationId("23434");
    Message message = builder.build();
    sendReport.output().send(message);
}
The JMS headers of the message in Solace look like this:
JMSMessageID
JMSDestination TOPIC_NAME
JMSTimestamp Wed Dec 31 18:00:00 CST 1969
JMSType
JMSReplyTo
JMSCorrelationID
JMSExpiration 0
JMSPriority 0
JMSType nmr-report-request
JMSMessageId 1
JMSMessageID 4
_isJavaSerializedObject-contentType true
_isJavaSerializedObject-id true
solaceSpringCloudStreamBinderVersion 0.1.0
ApplicationMessageId 111
ApplicationMessageID 112
JMSCorrelationId 11
JMSCorrelationID 114
correlationId 23434
id [-84,-19,0,5,115,114,0,14,106,97,118,97,46,117,116,105,108,46,85,85,73,68,-68,-103,3,-9,-104,109,-123,47,2,0,2,74,0,12,108,101,97,115,116,83,105,103,66,105,116,115,74,0,11,109,111,115,116,83,105,103,66,105,116,115,120,112,13,-26,2,-51,111,-17,73,73,-18,-32,-26,-11,-46,-89,50,-37] (offset=377, length=80)
contentType [-84,-19,0,5,115,114,0,33,111,114,103,46,115,112,114,105,110,103,102,114,97,109,101,119,111,114,107,46,117,116,105,108,46,77,105,109,101,84,121,112,101,56,-76,29,-63,64,96,-36,-81,2,0,3,76,0,10,112,97,114,97,109,101,116,101,114,115,116,0,15,76,106,97,118,97,47,117,116,105,108,47,77,97,112,59,76,0,7,115,117,98,116,121,112,101,116,0,18,76,106,97,118] (offset=473, length=190)
timestamp 1555707627482
Code used to connect to Solace
Spring Boot Main Class
@SpringBootApplication
@EnableDiscoveryClient
@Slf4j
@EnableBinding({SendReport.class})
public class ReportServerApplication {

    public static void main(final String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext-server.xml");
        new SpringApplicationBuilder(ReportServerApplication.class)
                .listeners(new EnvironmentPreparedListener())
                .run(args);
    }
}
Class to connect the channel to the topic:
public interface SendReport {

    String TO_NMR = "solace-poc-outbound";

    @Output(SendReport.TO_NMR)
    MessageChannel output();
}
Message Handler:
@Slf4j
@Component
@EnableBinding({SendReport.class})
public class MessageHandler {

    private SendReport sendReport;

    public MessageHandler(SendReport sendReport) {
        this.sendReport = sendReport;
    }

    @Output(SendReport.TO_NMR)
    public void sendMessage(String request) {
        log.info("****************** Got this Report Request: " + request);
        var message = MessageBuilder.withPayload(request).build();
        sendReport.output().send(message);
    }
}
Properties used for configuration: application.yml
spring:
  cloud:
    # spring cloud stream binding
    stream:
      bindings:
        solace-poc-outbound:
          destination: TOPIC_NAME
          contentType: text/plain

solace:
  java:
    host: tcp://xyz.abc.com
    #port: xxx
    msgVpn: yyy
    clientUsername: aaa
Dependencies used:
'org.springframework.cloud:spring-cloud-stream',
'com.solace.spring.cloud:spring-cloud-starter-stream-solace:1.1.+'
Observation
Expected result: all JMS headers should get populated by SCS.
Actual result: some JMS headers are not getting populated.
See JMS Message JavaDocs:
/** Sets the message ID.
 *
 * <P>This method is for use by JMS providers only to set this field
 * when a message is sent. This message cannot be used by clients
 * to configure the message ID. This method is public
 * to allow a JMS provider to set this field when sending a message
 * whose implementation is not its own.
 *
 * @param id the ID of the message
 *
 * @exception JMSException if the JMS provider fails to set the message ID
 * due to some internal error.
 *
 * @see javax.jms.Message#getJMSMessageID()
 */
void setJMSMessageID(String id) throws JMSException;
So, this property cannot be populated from the application level.
In ActiveMQ I see code like this:
msg.setMessageId(new MessageId(producer.getProducerInfo().getProducerId(), sequenceNumber));
// Set the message id.
if (msg != message) {
    message.setJMSMessageID(msg.getMessageId().toString());
}
But still, it is not something we can control from the application level.
The priority, deliveryMode and timeToLive can be populated from the JmsSendingMessageHandler:
if (this.jmsTemplate instanceof DynamicJmsTemplate && this.jmsTemplate.isExplicitQosEnabled()) {
    Integer priority = StaticMessageHeaderAccessor.getPriority(message);
    if (priority != null) {
        DynamicJmsTemplateProperties.setPriority(priority);
    }
    if (this.deliveryModeExpression != null) {
        Integer deliveryMode =
                this.deliveryModeExpression.getValue(this.evaluationContext, message, Integer.class);
        if (deliveryMode != null) {
            DynamicJmsTemplateProperties.setDeliveryMode(deliveryMode);
        }
    }
    if (this.timeToLiveExpression != null) {
        Long timeToLive = this.timeToLiveExpression.getValue(this.evaluationContext, message, Long.class);
        if (timeToLive != null) {
            DynamicJmsTemplateProperties.setTimeToLive(timeToLive);
        }
    }
}
The JMSCorrelationID has to be populated via the JmsHeaders.CORRELATION_ID header, and the JMSType via JmsHeaders.TYPE, respectively:
public void fromHeaders(MessageHeaders headers, javax.jms.Message jmsMessage) {
    try {
        Object jmsCorrelationId = headers.get(JmsHeaders.CORRELATION_ID);
        if (jmsCorrelationId instanceof Number) {
            jmsCorrelationId = jmsCorrelationId.toString();
        }
        if (jmsCorrelationId instanceof String) {
            try {
                jmsMessage.setJMSCorrelationID((String) jmsCorrelationId);
            }
            catch (Exception e) {
                this.logger.info("failed to set JMSCorrelationID, skipping", e);
            }
        }
        Object jmsReplyTo = headers.get(JmsHeaders.REPLY_TO);
        if (jmsReplyTo instanceof Destination) {
            try {
                jmsMessage.setJMSReplyTo((Destination) jmsReplyTo);
            }
            catch (Exception e) {
                this.logger.info("failed to set JMSReplyTo, skipping", e);
            }
        }
        Object jmsType = headers.get(JmsHeaders.TYPE);
        if (jmsType instanceof String) {
            try {
                jmsMessage.setJMSType((String) jmsType);
            }
            catch (Exception e) {
                this.logger.info("failed to set JMSType, skipping", e);
            }
        }
See DefaultJmsHeaderMapper for more info.
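In other words, rather than the literal "JMSType"/"JMSCorrelationID" strings used above (which just become custom properties), set the JmsHeaders constants this mapper actually looks for. A minimal sketch, assuming the Solace binder routes outgoing messages through the standard JMS header mapper:
// Sketch: JmsHeaders.TYPE and JmsHeaders.CORRELATION_ID are the keys that
// DefaultJmsHeaderMapper maps onto setJMSType(...) / setJMSCorrelationID(...).
// JMSMessageID remains provider-assigned and cannot be set this way.
Message<String> message = MessageBuilder.withPayload(request)
        .setHeader(JmsHeaders.TYPE, "report-request")
        .setHeader(JmsHeaders.CORRELATION_ID, "11")
        .build();
sendReport.output().send(message);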

Issues getting ActiveMQ Advisory messages for MessageConsumed

I need to be able to receive a notification when an ActiveMQ client consumes an MQTT message.
activemq.xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry topic=">" advisoryForConsumed="true" />
        </policyEntries>
    </policyMap>
</destinationPolicy>
In the below code, I get MQTT messages on mytopic fine. I do not get advisory messages in processAdvisoryMessage / processAdvisoryBytesMessage.
@Component
public class MqttMessageListener {

    @JmsListener(destination = "mytopic")
    public void processMessage(BytesMessage message) {
    }

    @JmsListener(destination = "ActiveMQ.Advisory.MessageConsumed.Topic.>")
    public void processAdvisoryMessage(Message message) {
        System.out.println("processAdvisoryMessage Got a message");
    }

    @JmsListener(destination = "ActiveMQ.Advisory.MessageConsumed.Topic.>")
    public void processAdvisoryBytesMessage(BytesMessage message) {
        System.out.println("processAdvisoryBytesMessage Got a message");
    }
}
What am I doing wrong?
I have also attempted doing this with an ActiveMQ BrokerFilter:
public class AMQMessageBrokerFilter extends GenericBrokerFilter {

    @Override
    public void acknowledge(ConsumerBrokerExchange consumerExchange, MessageAck ack) throws Exception {
        super.acknowledge(consumerExchange, ack);
    }

    @Override
    public void postProcessDispatch(MessageDispatch messageDispatch) {
        Message message = messageDispatch.getMessage();
    }

    @Override
    public void messageDelivered(ConnectionContext context, MessageReference messageReference) {
        log.debug("messageDelivered called.");
        super.messageDelivered(context, messageReference);
    }

    @Override
    public void messageConsumed(ConnectionContext context, MessageReference messageReference) {
        log.debug("messageConsumed called.");
        super.messageConsumed(context, messageReference);
    }
}
In this second scenario I could not get both the message and a connection context with which to send the consumed notification: acknowledge/messageDelivered/messageConsumed all have a connection context, but only postProcessDispatch has the message, part of which (the payload is JSON) I need in order to send my outgoing message. I could be eager and use send(), which has both, but it is safer to wait until the message has at least been acknowledged.
I have tried:
@Override
public void postProcessDispatch(MessageDispatch messageDispatch) {
    super.postProcessDispatch(messageDispatch);
    String topic = messageDispatch.getDestination().getPhysicalName();
    if (topic == null || topic.equals("delivered"))
        return;
    try {
        ActiveMQTopic responseTopic = new ActiveMQTopic("delivered");
        ActiveMQTextMessage responseMsg = new ActiveMQTextMessage();
        responseMsg.setPersistent(false);
        responseMsg.setResponseRequired(false);
        responseMsg.setProducerId(new ProducerId());
        responseMsg.setText("Delivered msg: " + messageDispatch.getMessage());
        responseMsg.setDestination(responseTopic);
        String messageKey = ":" + rand.nextLong(); // rand: a Random field on the filter
        MessageId msgId = new MessageId(messageKey);
        responseMsg.setMessageId(msgId);
        ProducerBrokerExchange producerExchange = new ProducerBrokerExchange();
        ConnectionContext context = getAdminConnectionContext();
        producerExchange.setConnectionContext(context);
        producerExchange.setMutable(true);
        producerExchange.setProducerState(new ProducerState(new ProducerInfo()));
        next.send(producerExchange, responseMsg);
    }
    catch (Exception e) {
        log.debug("Exception: " + e);
    }
}
However, the above seems to lead to an unstable server. I think this is related to using getAdminConnectionContext(), which seems wrong.
My factory was setting pubSubDomain to false by default, which disables connections for topic destinations and hence for topic advisory messages. I set it to true and things started working. Note that queue listeners will not work with this set; to get around that I created two factories and named their beans (see the sketch after the code below).
@Bean(name = "main")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // factory.setDestinationResolver(destinationResolver);
    // factory.setPubSubDomain(true);
    factory.setConcurrency("3-10");
    return factory;
}
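For completeness, a sketch of the second, topic-oriented factory (the bean name is illustrative); advisory and other topic listeners then reference it via containerFactory:
// Sketch: a topic-enabled factory for @JmsListener methods that consume topics,
// including the ActiveMQ.Advisory.* destinations.
@Bean(name = "topicFactory")
public DefaultJmsListenerContainerFactory topicListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true); // topics only; queue listeners keep using the "main" factory
    factory.setConcurrency("3-10");
    return factory;
}

// Usage: @JmsListener(destination = "ActiveMQ.Advisory.MessageConsumed.Topic.>", containerFactory = "topicFactory")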
