We need to delay the start of a consumer.
Here's what we need:
Start consumer A (reading topic "xyz")
Once consumer A has processed all messages, we need to start consumer B (reading topic "zyx")
After reading this:
How to find no more messages in kafka topic/partition & reading only after writing to topic is done
We set idleEventInterval on containerProperties of consumer A:
containerProperties.setIdleEventInterval(30000L);
and on consumer B:
container.setAutoStartup(false);
then we have:
@EventListener
public void handleListenerContainerIdleEvent(ListenerContainerIdleEvent event) {
    if (canStartContainer(event.getListenerId())) {
        Optional.ofNullable(containers.get("container-a"))
                .ifPresent(AbstractMessageListenerContainer::start);
    }
}
We found that it's exactly what we need and it works fine, but we ran into one problem: when consumer B starts, it forces a rebalance of all the other consumers.
Can we avoid it?
Request joining group due to: group is already rebalancing
Revoke previously assigned partitions
(Re-)joining group
It's not a big issue, but we use ConsumerSeekAware to reset the offset via seekToBeginning, so the topic is read twice.
You should not use the same group.id with consumers on different topics; it will cause an unnecessary rebalance, as you have found out.
Use different group.ids for consumers on different topics.
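For illustration, a minimal sketch (the listener method names, group ids and listener id are made up, not taken from the question) of the two listeners with separate group.ids, consumer B still set to not start automatically:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DelayedConsumers {

    // Consumer A: its own consumer group, started normally
    @KafkaListener(topics = "xyz", groupId = "group-a")
    public void listenA(String message) {
        // process records from topic "xyz"
    }

    // Consumer B: a separate consumer group, started later from the idle-event handler;
    // starting it no longer triggers a rebalance of consumer A's group
    @KafkaListener(id = "container-b", topics = "zyx", groupId = "group-b", autoStartup = "false")
    public void listenB(String message) {
        // process records from topic "zyx"
    }
}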
Related
We have 5 topics and we want a service that scales to, for example, 5 instances of the same app.
This means I would want to dynamically determine (via, for example, Redis locking or a similar mechanism) which instance should listen to which topic.
I know that we could have 1 topic with 5 partitions, and each node in the same consumer group would pick up a partition. Also, if we have a separately deployed service, we can set the topic via properties.
The issue is that those two approaches are not suitable for our situation, and we want to see if it is possible to do it the way I explained above.
@PostConstruct
private void postConstruct() {
    // Do logic via Redis locking or something to determine the topic
    dynamicallyDeterminedVariable = // SOME LOGIC
}

@KafkaListener(topics = "#{dynamicallyDeterminedVariable}")
void listener(String data) {
    LOG.info(data);
}
Yes, you can use SpEL for the topic name:
#{@someOtherBean.whichTopicToUse()}
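For example, here is a minimal sketch (the bean name topicResolver and its method are made up) of a listener whose topic is resolved from another bean when the container is created:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DynamicTopicListener {

    // The SpEL expression is evaluated once, when the listener container is created,
    // so the topic is fixed for the lifetime of the container.
    @KafkaListener(topics = "#{@topicResolver.whichTopicToUse()}")
    public void listen(String data) {
        // handle the record
    }
}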
Does Shopify/sarama provide an option similar to transactional.id in the JVM API?
The library supports idempotence (Config.Producer.Idempotent, similar to enable.idempotence), but I don't understand how to use it without transactional.id.
Please correct me if I'm wrong - there is a bit of a lack of documentation about these options in Sarama. But according to the JVM docs, idempotence without the identifier is limited to a single producer session. In other words, we lose the guarantee when the producer fails and restarts.
I found relevant properties in the source code and some tests (for example), but don't understand how to use them externally.
Shopify/sarama provides Kafka exactly-once (idempotency) with an idempotent-enabled producer, but for that the configuration below needs to be in place.
From Shopify/sarama/config.go:
if c.Producer.Idempotent {
    if !c.Version.IsAtLeast(V0_11_0_0) {
        return ConfigurationError("Idempotent producer requires Version >= V0_11_0_0")
    }
    if c.Producer.Retry.Max == 0 {
        return ConfigurationError("Idempotent producer requires Producer.Retry.Max >= 1")
    }
    if c.Producer.RequiredAcks != WaitForAll {
        return ConfigurationError("Idempotent producer requires Producer.RequiredAcks to be WaitForAll")
    }
    if c.Net.MaxOpenRequests > 1 {
        return ConfigurationError("Idempotent producer requires Net.MaxOpenRequests to be 1")
    }
}
The way Shopify/sarama does this: there is a producerEpoch ID in the AsyncProducer's transactionManager; you can refer to Shopify/sarama/async_producer.go. This ID is initialised with the producer initialisation and incremented when each message is successfully produced; read the bumpEpoch() function in async_producer.go to see this.
This is the sequence ID for that producer session with the broker, and it is sent with each message and incremented when a message is published successfully.
Read this example. It describes how idempotence works.
You are correct about the producer-session point: exactly-once is promised only for a single producer session. When restarting the producer just after a sequence failure, there can be a duplicate.
When the producer restarts, a new PID gets assigned, so idempotency is promised only for a single producer session. Even though the producer retries requests on failures, each message is persisted in the log exactly once. There can still be duplicates depending on the source the producer is getting data from; Kafka won't take care of duplicate data received by the producer. So, in some cases, you may require an additional de-duplication system.
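For comparison, a minimal sketch of what transactional.id looks like in the JVM producer API (topic, id and bootstrap address are made up): because the id is stable across restarts, the broker can fence the old producer instance and keep the guarantee beyond a single session.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("enable.idempotence", "true");
        // A stable transactional.id lets the broker fence the old ("zombie") instance
        // when a producer with the same id reconnects after a crash or restart.
        props.put("transactional.id", "my-producer-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("some-topic", "key", "value"));
            producer.commitTransaction();
        }
    }
}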
I am trying to consume multiple messages from a topic with manual acks, but the ack only works for all messages at once with a single ack.
@KafkaListener(
        id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}",
        concurrency = "${kafka.buyers.concurrency}"
)
public void listen( List<String> message, Acknowledgment ack ){}
In the above code I am getting 5 messages per poll if I put the following configuration in the Spring Boot properties file:
kafka:
  max-poll-records: 5 # Maximum number of records returned in a single call to poll()
but if I ack in that listener, it acks all 5 messages at the same time.
Actually I want to ack each message separately (that is, 5 messages with 5 acks).
How can I do this in a Spring Boot project?
When using a batch listener, the entire batch is acked when Acknowledgment.acknowledge() is called.
I would recommend using a single record listener rather than a batch listener for this use case.
listen(String msg, Acknowledgment ack)
It's not clear why you would commit offsets for only part of the batch.
If you must use a batch listener, it can still be done, but it is rather more complicated: you would need to receive a List<ConsumerRecord<?, ?>> to get the topic/partition/offset information and also add Consumer<?, ?> consumer to the method parameters (and remove the Acknowledgment); you can then call commitSync() on the consumer with whichever offsets you want. But you MUST call it on the listener thread - the consumer is not thread-safe.
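If you do go the batch route, a rough sketch (topic name made up; the container factory configuration needed to enable batch listeners is omitted) of committing record by record might look like this:

import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;

public class BatchListener {

    @KafkaListener(topics = "some-topic")
    public void listen(List<ConsumerRecord<String, String>> records, Consumer<?, ?> consumer) {
        for (ConsumerRecord<String, String> record : records) {
            process(record);
            // Commit each record individually; the committed offset is the *next* offset to read.
            // This must run on the listener thread - the consumer is not thread-safe.
            consumer.commitSync(Collections.singletonMap(
                    new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1)));
        }
    }

    private void process(ConsumerRecord<String, String> record) {
        // handle the record
    }
}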
Yesterday I found from the logs that Kafka was reconsuming some messages after the Kafka group coordinator initiated a group rebalance. These messages had been consumed two days earlier (confirmed from the logs).
There were two other rebalances reported in the log, but they didn't reconsume any messages. So why did the first rebalance cause messages to be reconsumed? What was the problem?
I am using the Golang Kafka client (Sarama). Here is the code:
config := sarama.NewConfig()
config.Version = version
config.Consumer.Offsets.Initial = sarama.OffsetOldest
and we are handling messages before marking them, so it seems we are using the at-least-once strategy for Kafka. We have three brokers on one machine, and only one consumer thread (goroutine) on the other machine.
Any explanation for this phenomenon?
I think the messages must have been committed, because they were consumed two days ago; why else would Kafka keep offsets for more than two days without committing them?
Consuming Code sample:
func (consumer *Consumer) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    for message := range claim.Messages() {
        realHandler(message)             // consumed data here
        session.MarkMessage(message, "") // mark offset
    }
    return nil
}
Added:
The rebalancing happened after the app restarted. There were two other restarts which didn't cause reconsumption.
Kafka configs:
log.retention.check.interval.ms=300000
log.retention.hours=168
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable = true
auto.create.topics.enable=false
By reading the source code of both the Golang Sarama client and the Kafka server, I finally found the reason, as follows.
The consumer group offset retention time is 24 hours, which is Kafka's default, while the log retention is 7 days, explicitly set by us.
My server app runs in a test environment that few people visit, which means there may be few messages produced by the Kafka producer, so the consumer group has few messages to consume and the consumer may not commit any offset for a long time.
When the consumer offset has not been updated for more than 24 hours, the Kafka broker/coordinator removes the offset for those partitions because of this offset config. The next time Sarama queries the Kafka broker for the offset, the client of course gets nothing. Since we are using sarama.OffsetOldest as the initial value, the Sarama client consumes messages from the start of the messages kept by the Kafka broker, which results in messages being reconsumed; and this is likely to happen because the log retention is 7 days.
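For reference, the broker setting involved is offsets.retention.minutes; the 24 hours mentioned above corresponds to the old default of 1440 minutes. One way to avoid the expiry (a suggestion, not our actual config) is to align it with the log retention, e.g.:
offsets.retention.minutes=10080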
I profiled my Kafka producer Spring Boot application and found many "kafka-producer-network-thread"s running (47 in total), which never stop running, even when no data is being sent.
My application looks a bit like this:
var kafkaSender = KafkaSender(kafkaTemplate, applicationProperties)
kafkaSender.sendToKafka(json, rs.getString("KEY"))
with the KafkaSender:
@Service
class KafkaSender(val kafkaTemplate: KafkaTemplate<String, String>, val applicationProperties: ApplicationProperties) {

    @Transactional(transactionManager = "kafkaTransactionManager")
    fun sendToKafka(message: String, stringKey: String) {
        kafkaTemplate.executeInTransaction { kt ->
            kt.send(applicationProperties.kafka.topic, System.currentTimeMillis().mod(10).toInt(),
                    System.currentTimeMillis().rem(10).toString(), message)
        }
    }

    companion object {
        val log = LoggerFactory.getLogger(KafkaSender::class.java)!!
    }
}
Since each time I want to send a message to Kafka I instantiate a new KafkaSender, I thought a new thread would be created which then sends the message to the Kafka topic.
Currently it looks like a pool of producers is generated but never cleaned up, even when none of them has anything to do.
Is this behaviour intended?
In my opinion the behaviour should be nearly the same as datasource pooling: keep the thread alive for some time, but when there is nothing to do, clean it up.
When using transactions, the producer cache grows on demand and is not reduced.
If you are producing messages on a listener container (consumer) thread, there is a producer for each topic/partition/consumer group. This is required to solve the zombie fencing problem, so that if a rebalance occurs and the partition moves to a different instance, the transactional id will remain the same and the broker can properly handle the situation.
If you don't care about the zombie fencing problem (and you can handle duplicate deliveries), set the producerPerConsumerPartition property to false on the DefaultKafkaProducerFactory and the number of producers will be much smaller.
EDIT
Starting with version 2.8, the default EOSMode is now V2 (aka BETA), which means it is no longer necessary to have a producer per topic/partition/group, as long as the broker version is 2.5 or later.
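If you do decide to give up per-partition producers (accepting possible duplicate deliveries, or running EOSMode V2 with brokers 2.5+), a minimal sketch of the factory configuration might look like this (bootstrap address, serializers and transaction-id prefix are made up):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

@Configuration
public class ProducerFactoryConfig {

    @Bean
    public DefaultKafkaProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        DefaultKafkaProducerFactory<String, String> factory = new DefaultKafkaProducerFactory<>(props);
        factory.setTransactionIdPrefix("tx-");
        // A producer per consumer group/topic/partition is only needed for zombie fencing;
        // disabling it keeps the transactional producer cache (and its network threads) much smaller.
        factory.setProducerPerConsumerPartition(false);
        return factory;
    }
}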