Kafka refresh event is not broadcast to all subscribers on a single topic - spring-boot

I am running into an unexpected scenario where not all subscribers (Spring Boot applications) of a single Kafka topic receive Spring Cloud Config configuration-change refresh notifications. Only the one subscriber that holds a Kafka partition gets the refresh notification; the other subscribers are not assigned any Kafka partitions and never receive the refresh event.

That is how Kafka works, so this should be expected; only one active consumer in a consumer group can read any given message from a partition.
You'll need external libraries to distribute the consumed event to other channels.
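To illustrate the consumer-group behaviour, here is a minimal sketch assuming Spring for Apache Kafka; the topic and group names are made up. Because every application instance declares the same group id, Kafka delivers each refresh record to only one of them:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class RefreshEventListener {

    // Every application instance declares the same group id, so Kafka delivers
    // each record on the topic to exactly one instance -- the behaviour described above.
    @KafkaListener(topics = "config-refresh-events", groupId = "config-clients")
    public void onRefreshEvent(String event) {
        System.out.println("Received refresh event: " + event);
    }
}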

Related

How to change offset of a topic during runtime?

I have a producer that keeps pushing messages to a Kafka topic, and another service reading those messages from the topic.
I have a business use-case where the consumer sometimes needs to ignore all the messages already in the topic and start processing only new, incoming messages. Can this be achieved without stopping and restarting the Kafka server?
I am working in Go. If Kafka supports this, is there a way to change the consumer configuration to start consuming from the latest message using the Sarama Go client?
Thank you in advance.
You could use a random UUID for the consumer group id, and/or disable auto-commits; then you can start at the latest offset with
config := sarama.NewConfig()
// Start new consumer groups at the newest (latest) offset instead of the oldest
config.Consumer.Offsets.Initial = sarama.OffsetNewest
(adapted from the Sarama example code)
Otherwise, the Kafka consumer API has a seekToEnd function, but in Sarama it seems to be exposed as getting the high-water-mark offsets from the consumer for every partition and then calling ResetOffset on the consumer group session. Note: the group should be paused before doing that.

How can I connect my Spring Boot application to a Kafka topic as soon as it restarts

How can I connect my Spring Boot application to a Kafka topic as soon as the application starts,
so that when the send method is invoked there is no need to fetch the metadata information?
Kafka clients are required to do an initial metadata fetch to determine the leader broker before actually sending data, but this shouldn't drastically change the startup time of any application, and it doesn't prevent you from calling any Kafka producer actions.
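If you want that metadata fetch to happen at startup rather than on the first send, one option is to touch the producer once as the application comes up. A minimal sketch, assuming Spring for Apache Kafka's KafkaTemplate and a made-up topic name:

import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class ProducerWarmup {

    // Force the producer's initial metadata fetch at startup so that the
    // first real send() does not pay that cost.
    @Bean
    public ApplicationRunner warmUpProducer(KafkaTemplate<String, String> template) {
        return args -> template.execute(producer -> producer.partitionsFor("my-topic"));
    }
}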

Does Spring Kafka producer guarantee delivery by default?

I wonder whether the Spring Kafka producer within Spring Boot guarantees delivery or not.
Does anybody know what happens if some random listener fails to receive a message? Would Spring Kafka retry sending the message?
There are a few concepts here:
The producer produces events and sends them to the Kafka broker. On the producer side you must take care of retries and similar settings yourself if Kafka has downtime or other error scenarios specific to your context (see the sketch below).
Consumers have partitions assigned to them by Kafka; each partition delivers events, and each event has an offset. Consumers poll Kafka for data (they request data; Kafka does not push data to consumers, the consumers go to Kafka and ask for it). Every event that Kafka delivers successfully to a consumer results in an acknowledgment, and Kafka commits the offset of that event, so the next event, with a higher offset, is delivered to the consumer. If a consumer goes down, its partitions are reassigned to other consumers, so you won't lose your data. If you have only one consumer, the data is stored in Kafka, and when the consumer comes back it will request data from the latest/earliest offset.
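As a rough illustration of the producer-side settings mentioned above, here is a minimal sketch assuming Spring for Apache Kafka; the bootstrap address is made up, and the exact values are something you would tune for your own context:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ReliableProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Wait for all in-sync replicas to acknowledge each record.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry transient send failures instead of dropping the record.
        props.put(ProducerConfig.RETRIES_CONFIG, 10);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }
}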

What is the ideal way to store the consumer offset using spring boot kafka consumer client?

I have a Spring Kafka consumer application. The application acts as a pass-through which polls messages from the Kafka broker and sends them to IBM MQ. What would be the best/simplest approach to storing the offset in case of failure?
The simplest approach is to use the default mechanism of storing the offsets in Kafka itself.
If you add a SeekToCurrentErrorHandler, the container will keep redelivering records that fail in the listener, up to 10 times by default, but it can be configured for infinite retries (see the sketch below).
If you add stateful retry, the listener adapter can add a delay between each delivery attempt.
See Stateful Retry.
ackOnError should be set to false.
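A minimal sketch of that container configuration, assuming a spring-kafka version that still ships SeekToCurrentErrorHandler and the ackOnError container property; the consumer factory itself is left to Boot's auto-configuration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ListenerContainerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Redeliver a failed record every 2 seconds, indefinitely, instead of
        // the default limited number of delivery attempts.
        factory.setErrorHandler(
                new SeekToCurrentErrorHandler(new FixedBackOff(2000L, FixedBackOff.UNLIMITED_ATTEMPTS)));
        // Do not commit the offset of a record whose listener threw an exception.
        factory.getContainerProperties().setAckOnError(false);
        return factory;
    }
}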

How can I send a message to disconnected customers with Spring Kafka?

I cannot send messages to disconnected clients.
I use Spring Boot with Apache Kafka as a message broker.
If you assign a consumer group id to each user's inbox, then the consumer protocol will automatically resume reading from the last unread message, as long as you commit the consumed offsets back to Kafka.
Kafka persists the messages itself, and consumers are not required to be online to receive events the moment producers send them.
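As a rough sketch of that idea, assuming Spring for Apache Kafka and made-up topic/group names, here is a listener whose group id identifies one user's inbox; whenever that consumer reconnects, it continues from the last committed offset and picks up everything published while it was offline:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class InboxListener {

    // The group id ties committed offsets to this user's inbox, so messages
    // produced while the consumer was offline are delivered once it reconnects.
    @KafkaListener(topics = "user-inbox", groupId = "inbox-user-42")
    public void onMessage(String message) {
        System.out.println("Delivering message: " + message);
    }
}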
