Currently, when I create a producer to send my records and, for some reason, Kafka is not available, the producer keeps sending the same message indefinitely. How can I stop producing messages after I have received this error 3 times?
Connection to node -1 could not be established. Broker may not be available.
I'm using the Reactor Kafka producer:
@Bean
public KafkaSender<String, String> createSender() {
    return KafkaSender.create(senderOptions());
}

private SenderOptions<String, String> senderOptions() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
    props.put(ProducerConfig.CLIENT_ID_CONFIG, kafkaProperties.getClientId());
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.RETRIES_CONFIG, kafkaProperties.getProducerRetries());
    return SenderOptions.create(props);
}
and then use it to send a record:
sender.send(Mono.just(SenderRecord.create(new ProducerRecord<>(topicName, null, message), message)))
    .flatMap(result -> {
        if (result.exception() != null) {
            return Flux.just(ResponseEntity.badRequest()
                .body(result.exception().getMessage()));
        }
        return Flux.just(ResponseEntity.ok().build());
    })
    .next();
I'm afraid the clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs); call is not involved in the retry, and it blocks for up to maxBlockTimeMs, which is 60000 by default. You can decrease this timeout for the producer via the ProducerConfig.MAX_BLOCK_MS_CONFIG property:
public static final String MAX_BLOCK_MS_CONFIG = "max.block.ms";
private static final String MAX_BLOCK_MS_DOC = "The configuration controls how long <code>KafkaProducer.send()</code> and <code>KafkaProducer.partitionsFor()</code> will block."
+ "These methods can be blocked either because the buffer is full or metadata unavailable."
+ "Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.";
UPDATE
We can fix the problem like this:
@PostMapping(path = "/v1/{topicName}")
public Mono<ResponseEntity<?>> postData(
        @PathVariable("topicName") String topicName, String message) {
    return sender.send(Mono.just(SenderRecord.create(new ProducerRecord<>(topicName, null, message), message)))
        .flatMap(result -> {
            if (result.exception() != null) {
                sender.close();
                return Flux.just(ResponseEntity.badRequest()
                    .body(result.exception().getMessage()));
            }
            return Flux.just(ResponseEntity.ok().build());
        })
        .next();
}
Pay attention to the sender.close() call in the error case.
I think it's time to raise an issue against the Reactor Kafka project to allow closing the producer on error.
You can use the circuit breaker pattern for this type of issue, but before applying that pattern, try to find the root cause; it seems that your ProducerConfig.RETRIES_CONFIG property is being overridden somewhere.
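As an illustrative sketch only (assuming Resilience4j and its resilience4j-reactor module are on the classpath, Reactor 3.3+ for transformDeferred, and a lowered max.block.ms so a failed send actually errors; the breaker name and thresholds are arbitrary), a circuit breaker can be wrapped around the reactive send so that after three consecutive failures further sends fail immediately instead of hammering the broker:
import java.time.Duration;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.reactor.circuitbreaker.operator.CircuitBreakerOperator;

// open the circuit once 3 recorded calls have all failed; while open, further
// sends fail fast with CallNotPermittedException instead of retrying
CircuitBreakerConfig config = CircuitBreakerConfig.custom()
        .slidingWindowSize(3)
        .minimumNumberOfCalls(3)
        .failureRateThreshold(100)
        .waitDurationInOpenState(Duration.ofSeconds(30))
        .build();
CircuitBreaker breaker = CircuitBreaker.of("kafka-producer", config);

sender.send(Mono.just(SenderRecord.create(new ProducerRecord<>(topicName, null, message), message)))
        .transformDeferred(CircuitBreakerOperator.of(breaker))
        .next();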
Rather than focusing on the error, fix the problem: the app is not connecting to the broker.
You're not overriding this in your compose file, so your app is trying to connect to itself:
bootstrap-servers: ${KAFKA_BOOTSTRAP_URL:localhost:9092}
In the compose YAML, it appears you forgot this:
rest-proxy:
  environment:
    KAFKA_BOOTSTRAP_URL: kafka:9092
Alternatively, if possible, you can use the existing Confluent REST Proxy docker image instead of reinventing the wheel
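As a rough sketch, assuming the stock confluentinc/cp-kafka-rest image and a broker service named kafka (adjust the version tag, ports, and hostnames to your setup), that service could look like:
rest-proxy:
  image: confluentinc/cp-kafka-rest:7.3.0
  depends_on:
    - kafka
  ports:
    - "8082:8082"
  environment:
    KAFKA_REST_HOST_NAME: rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: kafka:9092
    KAFKA_REST_LISTENERS: http://0.0.0.0:8082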
Related
I'm experimenting a lot with Apache Kafka in a Spring Boot App at the moment.
My current goal is to write a REST endpoint that takes in some message payload, which will use a KafkaTemplate to send the data to my local Kafka running on port 9092.
This is my producer config:
@Bean
public Map<String, Object> producerConfig() {
    // config settings for creating producers
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.bootstrapServers);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
    configProps.put(ProducerConfig.RETRIES_CONFIG, 0);
    return configProps;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    // creates a kafka producer
    return new DefaultKafkaProducerFactory<>(producerConfig());
}

@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
    // template which abstracts sending data to kafka
    return new KafkaTemplate<>(producerFactory());
}
My REST endpoint forwards to a service; the service looks like this:
@Service
public class KafkaSenderService {

    // logger used by the callbacks below (declaration added; SLF4J assumed)
    private static final Logger logger = LoggerFactory.getLogger(KafkaSenderService.class);

    @Qualifier("kafkaTemplate")
    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaSenderService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessageWithCallback(String message, String topicName) {
        // possibility to add callbacks to define what shall happen in success/error case
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {
            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Your message was delivered with following offset: " + result.getRecordMetadata().offset());
            }
        });
    }
}
The thing now is: I'm expecting the onFailure() method to be called when the message could not be sent. But this does not seem to work. When I change the bootstrapServers variable in the producer config to localhost:9091 (the wrong port, so no connection should be possible), the producer tries to connect to the broker. It makes several connection attempts, and after 5 seconds a TimeoutException occurs. But the onFailure() method does not get called. Is there a way to get the onFailure() method called even if the connection cannot be established?
And by the way, I set the retries count to zero, but the producer still does a second connection attempt after the first one. This is the log output:
EDIT: it seems like the Kafka producer/KafkaTemplate goes into an infinite loop when the broker is not available. Is that really the intended behaviour?
The KafkaTemplate really does nothing fancy about connecting and publishing. Everything is delegated to the KafkaProducer. What you describe here would happen in exactly the same way even if you used the plain Kafka client.
See KafkaProducer.send() JavaDocs:
 * @throws TimeoutException If the record could not be appended to the send buffer due to memory unavailable
 * or missing metadata within {@code max.block.ms}.
Which happens because of the blocking logic in that producer:
/**
 * Wait for cluster metadata including partitions for the given topic to be available.
 * @param topic The topic we want metadata for
 * @param partition A specific partition expected to exist in metadata, or null if there's no preference
 * @param nowMs The current time in ms
 * @param maxWaitMs The maximum time in ms for waiting on the metadata
 * @return The cluster containing topic metadata and the amount of time we waited in ms
 * @throws TimeoutException if metadata could not be refreshed within {@code max.block.ms}
 * @throws KafkaException for all Kafka-related exceptions, including the case where this method is called after producer close
 */
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long nowMs, long maxWaitMs) throws InterruptedException {
Unfortunately this is not explained in the send() JavaDocs, which claim the call is fully asynchronous, but apparently it is not. At least not in this metadata part, which has to be available before we enqueue the record for publishing.
That is something we cannot control, and it is not reflected in the returned Future:
try {
clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), nowMs, maxBlockTimeMs);
} catch (KafkaException e) {
if (metadata.isClosed())
throw new KafkaException("Producer closed while send in progress", e);
throw e;
}
See more info in the Apache Kafka docs on how to adjust the KafkaProducer for this matter: https://kafka.apache.org/documentation/#theproducer
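As a minimal sketch of how to react to that (reusing the kafkaTemplate and logger from the question; whether the metadata timeout is thrown synchronously or passed to the callback can depend on the client version), the exception can be caught around the send() call itself:
public void sendMessageWithCallback(String message, String topicName) {
    try {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
        future.addCallback(new KafkaSendCallback<String, String>() {
            @Override
            public void onFailure(KafkaProducerException ex) {
                logger.warn("Message could not be delivered. " + ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                logger.info("Message delivered with offset: " + result.getRecordMetadata().offset());
            }
        });
    } catch (org.apache.kafka.common.errors.TimeoutException ex) {
        // thrown by send() itself when metadata cannot be fetched within max.block.ms
        logger.warn("Broker unreachable, message not sent. " + ex.getMessage());
    }
}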
Question answered inside the discussion on https://github.com/spring-projects/spring-kafka/discussions/2250# for anyone else stumbling across this thread. In short, kafkaTemplate.getProducerFactory().reset() does the trick.
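For reference, a minimal sketch of what that looks like in the question's failure callback (reset() discards the cached producer so the next send creates a fresh one):
@Override
public void onFailure(KafkaProducerException ex) {
    logger.warn("Message could not be delivered. " + ex.getMessage());
    // drop the cached producer; a new one will be created on the next send
    kafkaTemplate.getProducerFactory().reset();
}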
I currently have a spring cloud stream application that has a listener function that mainly listens to a certain topic and executes the following in sequence:
Consume messages from a topic
Store consumed message in the DB
Call an external service for some information
Process the data
Record the results in DB
Send the message to another topic
Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure that it was sent successfully to the sink queue; otherwise we do not acknowledge it.
My question is, how can such a thing be implemented within Spring Cloud Function, especially since the send is fully handled by the Spring framework (as the result of evaluating the Function interface implementation)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        // renamed so the outgoing message does not shadow the method parameter
        Message<ValidatedEvent> outgoing = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outgoing);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior of your StreamListener code, here is something you can try. Instead of using a Function, you can switch to a Consumer and then use a KafkaTemplate to send on the outbound as you did previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outgoing = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outgoing); // here, make sure the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
Another thing worth looking into is using Kafka transactions, in which case, if it doesn't work end-to-end, no acknowledgment will happen. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
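As a rough sketch (the prefix value is arbitrary), enabling binder-managed transactions is mostly a matter of configuring a transaction id prefix on the Kafka binder, after which the outbound send and the consumed offsets are committed together in one Kafka transaction:
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-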
Spring Cloud Stream has no knowledge of the function. It is just the same message handler as it was before, so the same approach with a callback that you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependant on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean that my Function would be split into a Consumer and some sort of improvised supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I isolated the behaviour for the database update in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.
I am trying to perform Spring Kafka batch process error handling. First of all, I have a few questions.
What is the difference between listener and container error handlers, and which errors fall into these two categories?
Could you please share some samples to help understand this better?
Here is our design:
Poll every certain interval
consume messages in a batch mode
push to local cache (application cache) based on key (to avoid duplicate events)
push all values one by one to another topic once the batch processing is done.
clear the cache once operation 3 is done and acknowledge the offsets manually.
Here is my plan to have error handling:
public ConcurrentKafkaListenerContainerFactory<String, String> myListenerPartitionContainerFactory(String groupId) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory(groupId));
    factory.setConcurrency(partionCount);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    factory.getContainerProperties().setIdleBetweenPolls(pollInterval);
    factory.setBatchListener(true);
    return factory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> myPartitionsListenerContainerFactory() {
    return myListenerPartitionContainerFactory(groupIdPO);
}
@Bean
public RecoveringBatchErrorHandler recoveringBatchErrorHandler(KafkaTemplate<String, String> errorKafkaTemplate) {
    DeadLetterPublishingRecoverer recoverer =
            new DeadLetterPublishingRecoverer(errorKafkaTemplate);
    // push the failed record to the error topic once the back-off is exhausted
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(2L, 5000));
}
@KafkaListener(id = "mylistener", topics = "someTopic", containerFactory = "myPartitionsListenerContainerFactory")
public void listen(List<ConsumerRecord<String, String>> records, @Header(KafkaHeaders.MESSAGE_KEY) String key, Acknowledgment ack) {
    Map<String, ConsumerRecord<String, String>> hashmap = new HashMap<>();
    records.forEach(record -> {
        try {
            // key will be formed based on the input record - it will be an id
            hashmap.put(key, record);
        } catch (Exception e) {
            throw new BatchListenerFailedException("Failed to process", record);
        }
    });
    // once successful, push each message to another topic
    try {
        hashmap.forEach((k, v) -> { /* push to another topic */ });
        hashmap.clear();
        ack.acknowledge();
    } catch (Exception ex) {
        // handle producer exceptions
    }
}
Is this direction good, or do any improvements need to be made? Also, what type of container and listener handlers need to be implemented?
@Gary Russell, could you please help with this?
The listener error handler is intended for request/reply situations where the error handler can return a meaningful reply to the sender.
You need to throw an exception to trigger the container error handler, and you need to know the index in the original batch to tell it which record failed.
If you are using manual acks like that, you can use the nack() method to indicate which record failed (and don't throw an exception in that case).
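A minimal sketch of the nack() variant for a batch listener (process(...) is a hypothetical per-record step; the 5000 ms sleep is arbitrary):
@KafkaListener(id = "mylistener", topics = "someTopic", containerFactory = "myPartitionsListenerContainerFactory")
public void listen(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    for (int i = 0; i < records.size(); i++) {
        try {
            process(records.get(i)); // hypothetical processing of one record
        } catch (Exception e) {
            // commit offsets before record i; record i and the rest are redelivered after 5s
            ack.nack(i, 5000);
            return;
        }
    }
    ack.acknowledge();
}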
I am learning to use Kafka. I have two services, a producer and a consumer.
The producer produces messages that require processing (queries to services and the database). These messages are received by the consumer, which is responsible for processing them and saving the result in a database.
Producer
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
...
kafkaTemplate.send(topic, message);
Consumer
@KafkaListener(topics = "....")
public void listen(@Payload String message) {
    ....
}
I would like all messages to be processed correctly by the consumer.
I do not know how to handle errors on the consumer side in this context. For example, a database might be temporarily unavailable and unable to handle certain messages.
What should I do in these cases?
I know that the responsibility belongs to the consumer.
I could do retries, but retrying several times in a row while the database is down does not seem like a good idea. And if I continue to consume messages, the offset advances and I lose the events that I could not process.
You have control over the Kafka consumer in the form of committing the offsets of the records it has read. Kafka will continue to return the same records unless the offset is committed. You can set offset committing to manual and, based on the success of your business logic, decide whether to commit or not. See the sample below:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "false");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("foo", "bar"));
final int minBatchSize = 200;
List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records) {
buffer.add(record);
}
if (buffer.size() >= minBatchSize) {
insertIntoDb(buffer);
consumer.commitSync();
buffer.clear();
}
}
consumer.commitSync() commits the offsets.
Also see the Kafka consumer documentation here to understand consumer offsets.
This link was very helpful: https://dzone.com/articles/spring-for-apache-kafka-deep-dive-part-1-error-han
Spring provides the DeadLetterPublishingRecoverer class, which handles such errors correctly.
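A minimal sketch of wiring that recoverer into the listener container factory (names are illustrative; depending on the Spring Kafka version the error handler class is SeekToCurrentErrorHandler, as here, or the newer DefaultErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory, KafkaTemplate<String, String> template) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // retry twice with a 1s back-off, then publish the failed record to <topic>.DLT and move on
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));
    return factory;
}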
Spring Cloud Kafka Stream does not retry upon a deserialization error, even after specific configuration. The expectation is that it should retry based on the configured retry policy and, in the end, push the failed message to the DLQ.
The configuration is as below.
spring.cloud.stream.bindings.input_topic.consumer.maxAttempts=7
spring.cloud.stream.bindings.input_topic.consumer.backOffInitialInterval=500
spring.cloud.stream.bindings.input_topic.consumer.backOffMultiplier=10.0
spring.cloud.stream.bindings.input_topic.consumer.backOffMaxInterval=100000
spring.cloud.stream.bindings.iinput_topic.consumer.defaultRetryable=true
public interface MyStreams {

    String INPUT_TOPIC = "input_topic";
    String INPUT_TOPIC2 = "input_topic2";
    String ERROR = "apperror";
    String OUTPUT = "output";

    @Input(INPUT_TOPIC)
    KStream<String, InObject> inboundTopic();

    @Input(INPUT_TOPIC2)
    KStream<Object, InObject> inboundTOPIC2();

    @Output(OUTPUT)
    KStream<Object, outObject> outbound();

    @Output(ERROR)
    MessageChannel outboundError();
}

@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    return myStream.mapValues(this::transform);
}
The metadataRetryOperations in KafkaTopicProvisioner.java is always null and hence it creates a new RetryTemplate in the afterPropertiesSet().
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties, KafkaProperties kafkaProperties) {
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
this.adminClientProperties = kafkaProperties.buildAdminProperties();
this.configurationProperties = kafkaBinderConfigurationProperties;
this.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
}
public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
this.metadataRetryOperations = metadataRetryOperations;
}
public void afterPropertiesSet() throws Exception {
if (this.metadataRetryOperations == null) {
RetryTemplate retryTemplate = new RetryTemplate();
SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(10);
retryTemplate.setRetryPolicy(simpleRetryPolicy);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(100L);
backOffPolicy.setMultiplier(2.0D);
backOffPolicy.setMaxInterval(1000L);
retryTemplate.setBackOffPolicy(backOffPolicy);
this.metadataRetryOperations = retryTemplate;
}
}
The retry configuration only works with MessageChannel-based binders. With the KStream binder, Spring just helps with building the topology in a prescribed way; it is not involved in the message flow once the topology is built.
The next version of spring-kafka (used by the binder) has added the RecoveringDeserializationExceptionHandler (commit here); while it can't help with retries, it can be used with a DeadLetterPublishingRecoverer to send the record to a dead-letter topic.
You can use a RetryTemplate within your processors/transformers to retry specific operations.
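A minimal sketch of that last option, reusing the question's transform(...) call (the retry policy values are arbitrary):
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());

// retry the flaky operation up to 3 times before letting the exception propagate
KStream<Key, outObject> out = myStream.mapValues(value -> retryTemplate.execute(context -> transform(value)));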
Spring cloud Kafka stream does not retry upon deserialization error even after specific configuration.
The behavior you are seeing matches the default settings of Kafka Streams when it encounters a deserialization error.
From https://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-records:
LogAndFailExceptionHandler implements DeserializationExceptionHandler and is the default setting in Kafka Streams. It handles any encountered deserialization exceptions by logging the error and throwing a fatal error to stop your Streams application. If your application is configured to use LogAndFailExceptionHandler, then an instance of your application will fail-fast when it encounters a corrupted record by terminating itself.
I am not familiar with Spring's facade for Kafka Streams, but you probably need to configure the desired org.apache.kafka.streams.errors.DeserializationExceptionHandler instead of configuring retries (they are meant for a different purpose). Or you may want to implement your own custom handler (see the link above for more information) and then configure Spring/KStreams to use it.
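For example, to log and skip poison-pill records instead of failing fast, the corresponding Kafka Streams setting can be supplied through the binder's configuration pass-through (a sketch; the property prefix assumes the Kafka Streams binder):
spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=org.apache.kafka.streams.errors.LogAndContinueExceptionHandler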