How can I connect my Spring Boot application to a Kafka topic as soon as it restarts - spring-boot

How can I connect my Spring Boot application to a Kafka topic as soon as the application starts,
so that when the send method is invoked there is no need to fetch the metadata information?

Kafka clients are required to do an initial metadata fetch to determine the leader broker before actually sending data, but this shouldn't drastically change the startup time of any application and wouldn't prevent you from calling any Kafka producer actions.
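If you still want to pay that metadata cost at startup rather than on the first send, a minimal sketch could use KafkaTemplate.partitionsFor(), which triggers the metadata fetch. The topic name my-topic and the warm-up-on-startup approach are assumptions, not part of the original answer.

import java.util.List;

import org.apache.kafka.common.PartitionInfo;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class KafkaWarmup {

    // Hypothetical topic name; substitute your own.
    private static final String TOPIC = "my-topic";

    // Runs once at startup; partitionsFor() forces the producer's initial
    // metadata fetch, so the first real send() does not pay that cost.
    @Bean
    public ApplicationRunner warmUpKafka(KafkaTemplate<String, String> template) {
        return args -> {
            List<PartitionInfo> partitions = template.partitionsFor(TOPIC);
            System.out.println("Kafka metadata cached for " + partitions.size() + " partitions");
        };
    }
}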

Related

Kafka refresh event is not broadcast to all subscribers on a single topic

I am running into an unexpected scenario where not all subscribers (Spring Boot applications) of a single Kafka topic are getting Spring Cloud Config configuration-change refresh notifications. Only the one subscriber that holds a Kafka partition gets the refresh notification; the other subscribers are not assigned Kafka partitions and do not get the refresh event.
That is how Kafka works, and so should be expected; only one active consumer in a consumer group can read any single message from a partition.
You'll need external libraries that distribute that consumed event to other channels.
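To illustrate those consumer-group semantics (a minimal sketch; the broker address, topic name, and group naming scheme are all assumptions): consumers that share a group.id split the topic's partitions between them, while a consumer with its own group.id receives every record, i.e. pub/sub rather than queueing.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BroadcastConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        // A unique group.id per instance means every instance gets every record
        // (pub/sub); instances sharing one group.id would split the partitions.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "refresh-" + UUID.randomUUID());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("config-refresh")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("refresh event: " + record.value());
            }
        }
    }
}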

What is the ideal way to store the consumer offset using spring boot kafka consumer client?

I have a Spring Kafka consumer application. The application acts as a pass-through which polls messages from the Kafka broker and sends them to IBM MQ. What would be the best/simplest approach to storing the offset in case of failure?
The simplest approach is to use the default mechanism of storing the offsets in Kafka itself.
If you add a SeekToCurrentErrorHandler, the container will keep redelivering records that fail in the listener, up to 10 times by default, but it can be configured for infinite retries.
If you add stateful retry, the listener adapter can add a delay between each delivery attempt.
See Stateful Retry.
ackOnError should be set to false.
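A minimal configuration sketch, assuming the BackOff-based SeekToCurrentErrorHandler constructor from Spring Kafka 2.3+ (which folds the between-attempts delay into the error handler itself); the interval is illustrative:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaErrorHandlingConfig {

    // Redeliver a failed record every 1s; UNLIMITED_ATTEMPTS retries
    // forever instead of the default of 10 delivery attempts.
    @Bean
    public SeekToCurrentErrorHandler errorHandler() {
        return new SeekToCurrentErrorHandler(
                new FixedBackOff(1000L, FixedBackOff.UNLIMITED_ATTEMPTS));
    }
}

If the error handler is not picked up automatically, wire it into your ConcurrentKafkaListenerContainerFactory via setErrorHandler(...).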

Spring Integration - Kafka Message Driven Channel - Auto Acknowledge

I have used the sample configuration listed in the Spring IO docs and it is working fine.
<int-kafka:message-driven-channel-adapter
    id="kafkaListener"
    listener-container="container1"
    auto-startup="false"
    phase="100"
    send-timeout="5000"
    channel="nullChannel"
    message-converter="messageConverter"
    error-channel="errorChannel" />
However, when I was testing it with a downstream application, where I consume from Kafka and publish to the downstream system, the messages were still getting consumed even when the downstream system was down, and they were not replayed.
Or let's say that after consuming from the Kafka topic, if I find some exception in the service activator, I want to throw an exception that rolls back the transaction so that the Kafka messages can be replayed.
In brief, if the consuming application has some issue, I want to roll back the transaction so that messages are not automatically acknowledged and are replayed again and again until they are successfully processed.
That's not how Apache Kafka works. There are no TX semantics similar to JMS. The offset in a Kafka topic has nothing to do with rollback or redelivery.
I suggest you study Apache Kafka more closely, starting from their official resources.
Spring Kafka brings nothing over the regular Apache Kafka protocol; however, you can consider using the retry capabilities in Spring Kafka to redeliver the same record locally: http://docs.spring.io/spring-kafka/docs/1.2.2.RELEASE/reference/html/_reference.html#_retrying_deliveries
And yes, the ack mode must be MANUAL; do not commit the offset into Kafka automatically after consuming.
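A sketch of that local-retry configuration, assuming the setRetryTemplate hook on the listener container factory; the attempt count and back-off period are illustrative, not prescribed by the answer:

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    // Retry a failing record locally before the offset is committed.
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy policy = new SimpleRetryPolicy();
    policy.setMaxAttempts(5);               // illustrative: 5 local attempts
    retryTemplate.setRetryPolicy(policy);
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(2000L);        // illustrative: 2s between attempts
    retryTemplate.setBackOffPolicy(backOff);
    factory.setRetryTemplate(retryTemplate);

    return factory;
}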

Stop Spring standalone service

I am using Spring Integration in my project.
We have a requirement to stop the Spring standalone service if the database goes down.
In the message listener, when I persist data to the database, I check whether I get a CannotGetJdbcConnectionException and, if so, stop the Spring service using the applicationContext.close() method.
The problem arises if I receive a message on the queue and the database goes down.
When I try to close the Spring service, all resources go down except the DefaultMessageListenerContainer, which holds that message.
If I terminate the process manually, the message goes back to the inbound queue, which is correct.
Is there any way I can stop the Spring service forcefully and put the message back on the inbound queue?
I hope I am clear with my point here.
Thanks
Sachin
You should configure the DMLC with setSessionTransacted(true) (acknowledge="transacted" when using the namespace to define the endpoints).
Then any in-flight messages will be rolled back onto the queue.
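A minimal sketch of such a transacted container; the queue name and listener body are hypothetical:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Bean
public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(connectionFactory);
    dmlc.setDestinationName("inbound.queue");   // hypothetical queue name
    dmlc.setSessionTransacted(true);            // equivalent to acknowledge="transacted"
    dmlc.setMessageListener((MessageListener) message -> {
        // persist to the database here; throwing an exception rolls the
        // JMS session back and the message returns to the inbound queue
    });
    return dmlc;
}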

Does Spring XD re-process the same message when one of its containers goes down while processing the message?

Application Data Flow:
JSON Messages --> ActiveMQ --> Spring XD (Business Logic: transform JSON to Java object) --> Save Data to Target DB --> DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks up the message from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What's the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a RabbitMQ or Kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO, which means messages are acknowledged on success.
