Get the partition and offset number to which a Kafka message is sent using StreamBridge - spring

I need to print/log/store the Kafka partition and offset number to which my message is written.
How can I achieve that?
I am using StreamBridge to send the message from the producer, and I am also using the functional Spring Kafka Streams approach:
public void delegateToSupplier(String id, Abc obj) {
    Message<Abc> message = MessageBuilder.withPayload(obj)
            .setHeader(KafkaHeaders.MESSAGE_KEY, id.getBytes())
            .build();
    streamBridge.send("out-topic", message);
}

The record metadata is available (asynchronously) via the metadata channel:
@SpringBootApplication
public class So66436499Application {

    public static void main(String[] args) {
        SpringApplication.run(So66436499Application.class, args);
    }

    @Autowired
    StreamBridge bridge;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            this.bridge.send("myBinding", "test");
            Thread.sleep(5000);
        };
    }

    @ServiceActivator(inputChannel = "meta")
    void meta(Message<?> sent) {
        System.out.println("Sent: " + sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class));
    }

}
spring.cloud.stream.bindings.myBinding.destination=foo
spring.cloud.stream.kafka.bindings.myBinding.producer.record-metadata-channel=meta
Sent: foo-0@5
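Applied to the StreamBridge call in the question, a minimal sketch (assuming out-topic is the binding name that StreamBridge resolves, and using meta as an illustrative channel name) that logs the partition and offset explicitly:
spring.cloud.stream.kafka.bindings.out-topic.producer.record-metadata-channel=meta

@ServiceActivator(inputChannel = "meta")
void meta(Message<?> sent) {
    // RecordMetadata carries the topic, partition and offset assigned by the broker
    RecordMetadata metadata = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
}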

Related

Spring Integration - Convert Service Activator with Java Configuration

I am trying to convert the "Hello World" example from the Spring Integration samples (https://github.com/spring-projects/spring-integration-samples/tree/master/basic/helloworld) from XML to Java configuration (so with the @Configuration annotation).
The configuration class looks like this:
@Configuration
@EnableIntegration
public class BasicIntegrationConfig {

    @Bean
    public DirectChannel inputChannel() {
        return new DirectChannel();
    }

    @Bean
    public QueueChannel outputChannel() {
        return new QueueChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "inputChannel", outputChannel = "outputChannel")
    public MessageHandler fileWritingMessageHandler() {
        MessageHandler mh = new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println("Message payload: " + message.getPayload());
            }
        };
        return mh;
    }

}
To test it, I use the main() supplied with the sample project:
DirectChannel fileChannel = applicationContext.getBean("inputChannel", DirectChannel.class);
QueueChannel outputChannel = applicationContext.getBean("outputChannel", QueueChannel.class);
System.out.println("********** SENDING MESSAGE");
fileChannel.send(new GenericMessage<>("test"));
System.out.println(outputChannel.receive(0).getPayload());
I see "Message payload: test" in the console, but unfortunately I don't receive the message on the output channel (I get a NullPointerException on outputChannel.receive(0)).
Do you have an idea why the Service Activator does not send the message to the output channel?
Your MessageHandler returns void.
You need to subclass AbstractReplyProducingMessageHandler instead.
Thank you Gary, it works perfectly after switching to:
@Bean
@ServiceActivator(inputChannel = "inputChannel")
public AbstractReplyProducingMessageHandler fileWritingMessageHandler() {
    AbstractReplyProducingMessageHandler mh = new AbstractReplyProducingMessageHandler() {

        @Override
        protected Object handleRequestMessage(Message<?> message) {
            String payload = (String) message.getPayload();
            return "Message Payload : ".concat(payload);
        }
    };
    mh.setOutputChannelName("outputChannel");
    return mh;
}
As a side note, I had to remove the outputChannel attribute from the @ServiceActivator annotation and set it on the handler in the method body instead (a bean validation exception is thrown otherwise).

Stop consuming messages for a stream listener

I am looking for a way to stop consuming messages with a stream listener.
@StreamListener(MBinding.M_INPUT)
public void consumeMessage(Message<MerchantEvent> message) {
    // handle the received message
}
cloud:
  stream:
    bindings:
      MInput:
        destination: topicName
        group: groupName
I have googled it but still have no idea how to stop consuming. Does anyone know how to do it?
You can do it using the actuator (see Binding Visualization and Control). Or you can invoke the endpoint programmatically.
@SpringBootApplication
@EnableBinding(Sink.class)
public class So58795176Application {

    public static void main(String[] args) {
        SpringApplication.run(So58795176Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
    }

    @Autowired
    BindingsEndpoint endpoint;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            System.in.read();
            endpoint.changeState("input", State.STOPPED);
            System.in.read();
            endpoint.changeState("input", State.STARTED);
        };
    }

}
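For the actuator route mentioned above, a sketch under the assumption of a Spring Boot 2.x actuator setup: expose the bindings endpoint, then stop or start the binding over HTTP by POSTing {"state":"STOPPED"} (or {"state":"STARTED"}) to /actuator/bindings/input, as described in the Binding Visualization and Control section of the documentation (the binding name input matches the code above).
management.endpoints.web.exposure.include=bindings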

Kafka Consumer is not receiving message in Spring Boot

My Spring/Java consumer is not able to receive the messages produced by the producer. However, when I run a consumer from the console/terminal, it does receive the messages produced by the Spring/Java producer.
Consumer Configuration:
@Component
@ConfigurationProperties(prefix = "kafka.consumer")
public class KafkaConsumerProperties {

    private String bootstrap;
    private String group;
    private String topic;

    public String getBootstrap() {
        return bootstrap;
    }

    public void setBootstrap(String bootstrap) {
        this.bootstrap = bootstrap;
    }

    public String getGroup() {
        return group;
    }

    public void setGroup(String group) {
        this.group = group;
    }

    public String getTopic() {
        return topic;
    }

    public void setTopic(String topic) {
        this.topic = topic;
    }

}
Listener Configuration:
@Configuration
@EnableKafka
public class KafkaListenerConfig {

    @Autowired
    private KafkaConsumerProperties kafkaConsumerProperties;

    @Bean
    public Map<String, Object> getConsumerProperties() {
        Map<String, Object> properties = new HashMap<>();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConsumerProperties.getBootstrap());
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConsumerProperties.getGroup());
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        return properties;
    }

    @Bean
    public Deserializer stringKeyDeserializer() {
        return new StringDeserializer();
    }

    @Bean
    public Deserializer transactionJsonValueDeserializer() {
        return new JsonDeserializer(Transaction.class);
    }

    @Bean
    public ConsumerFactory<String, Transaction> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(getConsumerProperties(), stringKeyDeserializer(), transactionJsonValueDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Transaction> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Transaction> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConcurrency(1);
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

}
Kafka Listener:
@Service
public class TransactionConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(Transaction.class);

    @KafkaListener(topics = {"transactions"}, containerFactory = "kafkaListenerContainerFactory")
    public void onReceive(Transaction transaction) {
        LOGGER.info("transaction = {}", transaction);
    }

}
Consumer Application:
@SpringBootApplication
public class ConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

}
TEST CASE 1 : PASS
I started my Spring/Java producer and ran the consumer from the console. When I produce a message from the producer, my console consumer is able to receive it.
TEST CASE 2 : FAILED
I started my Spring/Java consumer and ran the producer from the console. When I produce a message from the console producer, my Spring/Java consumer is not able to receive it.
TEST CASE 3 : FAILED
I started my Spring/Java consumer and ran the Spring/Java producer. When I produce a message from the Spring/Java producer, my Spring/Java consumer is not able to receive it.
Question
Is there anything wrong in my consumer code?
Am I missing any configuration for my Kafka listener?
Do I need to explicitly start the listener? (I don't think so, since I can see it connecting to the topic in the terminal log, but I am not sure.)
You are missing AUTO_OFFSET_RESET_CONFIG in your consumer configs:
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
auto.offset.reset
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
anything else: throw exception to the consumer
Note: setting auto.offset.reset to earliest only takes effect when Kafka has no committed offset for that consumer group (so in your case you need to add this property together with a new consumer group and restart the application).
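For illustration, a sketch of the adjusted getConsumerProperties() bean from the listener configuration above (the "-new" group suffix is just a hypothetical way to get a consumer group without committed offsets):
@Bean
public Map<String, Object> getConsumerProperties() {
    Map<String, Object> properties = new HashMap<>();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConsumerProperties.getBootstrap());
    // hypothetical fresh group so that auto.offset.reset=earliest actually applies
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConsumerProperties.getGroup() + "-new");
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
    return properties;
}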

Spring cloud stream kafka - A subscribable channel has no output

I have an application which does a lot of data processing (in the order of ~1.3 million at a time) which happens in bursts. The application consumes data from a kafka topic.
I'm using version 2.0.1 of spring-cloud-stream-starter-kafka to consume data.
My code is as follows:
Listener:
@Service
public class ListenerService {

    @Autowired
    private Application2<Foo> application;

    @Override
    @StreamListener(FooStreams.INPUT)
    public void subscribe(@Payload Foo foo) {
        application.sync(foo);
    }

}
Streams:
public interface FooStreams {

    String INPUT = "Foo";

    @Input(value = INPUT)
    SubscribableChannel subscribe();
}
In the main application, I've bound the stream to kafka like this:
@SpringBootApplication
@EnableBinding({FooStreams.class})
public class Application {

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        try {
            SpringApplication.run(Application.class, args);
        }
        catch (Exception e) {
            logger.error("Application failed to start");
        }
    }

}
Is there something I am missing? The issue is that memory utilization spikes during data processing and does not come down after the processing is done.

Spring Kafka asynchronous send calls block

I'm using Spring-Kafka version 1.2.1 and, when the Kafka server is down/unreachable, the asynchronous send calls block for a time. It seems to be the TCP timeout. The code is something like this:
ListenableFuture<SendResult<K, V>> future = kafkaTemplate.send(topic, key, message);
future.addCallback(new ListenableFutureCallback<SendResult<K, V>>() {

    @Override
    public void onSuccess(SendResult<K, V> result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }
});
I've taken a really quick look at the Spring-Kafka code and it seems to just pass the task along to the kafka client library, translating a callback interaction to a future object interaction. Looking at the kafka client library, the code gets more complex and I didn't take the time to understand it all, but I guess it may be making remote calls (metadata, at least?) in the same thread.
As a user, I expected the Spring-Kafka methods that return a future to return immediately, even if the remote kafka server is unreachable.
Any confirmation if my understanding is wrong or if this is a bug would be welcome. I ended up making it asynchronous on my end for now.
Another problem is that the Spring-Kafka documentation says, at the beginning, that it provides synchronous and asynchronous send methods. I couldn't find any methods that do not return futures; maybe the documentation needs updating.
I'm happy to provide any further details if needed. Thanks.
In addition to the @EnableAsync annotation on a configuration class, the @Async annotation needs to be used on the method where you invoke this code.
http://www.baeldung.com/spring-async
Here are some code fragments. Kafka producer config:
@EnableAsync
@Configuration
public class KafkaProducerConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducerConfig.class);

    @Value("${kafka.brokers}")
    private String servers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, GenericMessage> producerFactory(ObjectMapper objectMapper) {
        return new DefaultKafkaProducerFactory<>(producerConfigs(), new StringSerializer(), new JsonSerializer(objectMapper));
    }

    @Bean
    public KafkaTemplate<String, GenericMessage> kafkaTemplate(ObjectMapper objectMapper) {
        return new KafkaTemplate<String, GenericMessage>(producerFactory(objectMapper));
    }

    @Bean
    public Producer producer() {
        return new Producer();
    }
}
And the producer itself:
public class Producer {

    public static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, GenericMessage> kafkaTemplate;

    @Async
    public void send(String topic, GenericMessage message) {
        ListenableFuture<SendResult<String, GenericMessage>> future = kafkaTemplate.send(topic, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, GenericMessage>>() {

            @Override
            public void onSuccess(final SendResult<String, GenericMessage> message) {
                LOGGER.info("sent message= " + message + " with offset= " + message.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(final Throwable throwable) {
                LOGGER.error("unable to send message= " + message, throwable);
            }
        });
    }
}
If I look at the KafkaProducer itself, there are two parts of sending a message:
Storing the message into the internal buffer.
Uploading the message from the buffer into Kafka.
KafkaProducer is asynchronous for the second part, not the first part.
The send() method can still block on the first part and eventually throw TimeoutExceptions, e.g.:
The metadata for the topic is not cached or is stale, so the producer tries to fetch the metadata from the server to find out whether the topic still exists and how many partitions it has.
The buffer is full (32MB by default).
If the server is completely unresponsive, you will probably encounter both issues.
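A minimal sketch of bounding that blocking time (the values are illustrative, not recommendations): max.block.ms caps how long send() may wait for metadata or buffer space, and buffer.memory is the 32 MB accumulator mentioned above.
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 2000);               // fail after 2s instead of the 60s default
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 32 * 1024 * 1024L); // the internal buffer (32 MB is the default)
KafkaTemplate<String, String> template =
        new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));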
Update:
I tested and confirmed this in Kafka 2.2.1. It looks like this behaviour might be different in 2.4 and/or 2.6: KAFKA-3720
The best solution is to add a callback listener at the level of the producer:
@Bean
public KafkaTemplate<String, WebUserOperation> operationKafkaTemplate() {
    KafkaTemplate<String, WebUserOperation> kt = new KafkaTemplate<>(operationProducerFactory());
    kt.setProducerListener(new ProducerListener<String, WebUserOperation>() {

        @Override
        public void onSuccess(ProducerRecord<String, WebUserOperation> record, RecordMetadata recordMetadata) {
            System.out.println("### Callback :: " + recordMetadata.topic() + " ; partition = "
                    + recordMetadata.partition() + " with offset= " + recordMetadata.offset()
                    + " ; Timestamp : " + recordMetadata.timestamp() + " ; Message Size = " + recordMetadata.serializedValueSize());
        }

        @Override
        public void onError(ProducerRecord<String, WebUserOperation> producerRecord, Exception exception) {
            System.out.println("### Topic = " + producerRecord.topic() + " ; Message = " + producerRecord.value().getOperation());
            exception.printStackTrace();
        }
    });
    return kt;
}
Just to be sure: do you have the @EnableAsync annotation applied? I would say that could be the key to the behaviour of the Future<>.
The code below works for me to get the response asynchronously:
ProducerRecord<UUID, Person> record = new ProducerRecord<>(kafkaTemplate.getDefaultTopic(), messageKey, person);
Runnable runnable = () -> kafkaTemplate.send(record).addCallback(new MessageAckHandler());
new Thread(runnable).start();

public class MessageAckHandler implements ListenableFutureCallback<SendResult<UUID, Person>> {

    @Override
    public void onFailure(Throwable exception) {
        log.error("unable to send message: " + exception.getMessage());
    }

    @Override
    public void onSuccess(SendResult<UUID, Person> result) {
        log.debug("sent message with offset={} messageID={}", result.getRecordMetadata().offset(), result.getProducerRecord().key());
    }
}
public class SendResult<K, V> {

    private final ProducerRecord<K, V> producerRecord;

    private final RecordMetadata recordMetadata;

    public SendResult(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        this.producerRecord = producerRecord;
        this.recordMetadata = recordMetadata;
    }

    public ProducerRecord<K, V> getProducerRecord() {
        return this.producerRecord;
    }

    public RecordMetadata getRecordMetadata() {
        return this.recordMetadata;
    }

    @Override
    public String toString() {
        return "SendResult [producerRecord=" + this.producerRecord + ", recordMetadata=" + this.recordMetadata + "]";
    }

}
