Spring Kafka with Spring JPA - spring-boot

I have two microservices, A and B.
Service A sends a message to the Kafka topic "A-Topic", and service B consumes it.
In service B, the Kafka listener performs the following steps:
1. Persist the data to the database (repo.save(entity))
2. Publish the response message to "B-Topic" (kafkaTemplate.send("B-Topic", message))
I am using Spring's @Transactional annotation at the service level in both services.
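Simplified, the flow in B looks roughly like this (entity, repository, and class names below are placeholders, not the real ones):

@Component
public class ATopicListener {

    private final MessageService messageService;

    public ATopicListener(MessageService messageService) {
        this.messageService = messageService;
    }

    @KafkaListener(topics = "A-Topic")
    public void listen(String message) {
        messageService.process(message);
    }
}

@Service
public class MessageService {

    private final MessageRepository repo;                       // a Spring Data JPA repository (placeholder)
    private final KafkaTemplate<String, String> kafkaTemplate;

    public MessageService(MessageRepository repo, KafkaTemplate<String, String> kafkaTemplate) {
        this.repo = repo;
        this.kafkaTemplate = kafkaTemplate;
    }

    @Transactional
    public void process(String message) {
        repo.save(new MessageEntity(message));   // step 1: persist
        kafkaTemplate.send("B-Topic", message);  // step 2: publish the response
    }
}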
In the success scenario, the data is persisted and the success message is published to the topic exactly once.
In the failure scenario, the database save fails due to an integrity constraint violation, and in that case the failure message is published to Kafka 10 times in a row.
If I remove the @Transactional annotation from the service class, the message is published only once in the failure scenario as well.
I don't understand how the @Transactional annotation causes the message to be published to Kafka 10 times.
Please let me know your inputs.
Thanks in advance.

The default error handler will attempt delivery 10 times; you need to enable Kafka transactions in the listener container so that the Kafka sends are rolled back (and the consumer of "B-Topic" needs isolation.level=read_committed).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
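For reference, a minimal sketch of what enabling Kafka transactions could look like in Java configuration (the broker address, bean names, and transaction-id prefix below are illustrative; with Spring Boot the same thing can be done via the spring.kafka.producer.transaction-id-prefix property):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.transaction.KafkaTransactionManager;

@Configuration
public class KafkaTxConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        // a non-null transaction id prefix switches KafkaTemplate into transactional mode
        pf.setTransactionIdPrefix("b-service-tx-");
        return pf;
    }

    @Bean
    public KafkaTransactionManager<String, String> kafkaTransactionManager(
            ProducerFactory<String, String> pf) {
        // make this transaction manager available to the listener container so the
        // listener's sends are committed or aborted together with the listener invocation
        return new KafkaTransactionManager<>(pf);
    }
}

The consumer properties of whichever service reads "B-Topic" would then include isolation.level=read_committed (ConsumerConfig.ISOLATION_LEVEL_CONFIG) so that aborted sends are never delivered to it.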

Related

Guaranteed Processing - How to implement between two message-based services?

I am developing an application consisting of two services. The services are message-based and communicate over Apache Kafka (one topic for service 1 -> service 2, another topic for service 2 -> service 1).
Workflow:
Service 1 is connected to a database where it reads messages and sends them to service 2.
In service 2 the messages get enriched with additional information and then get sent back to service 1.
Service 1 saves the message back to the database.
Database <--> Service1 <-- Kafka --> Service2
The problem I am facing: I have to make sure that every message gets processed by service 2 and afterwards saved back to the database by service 1.
Example: Service 2 reads a message from Kafka and crashes. The message is lost.
Is there a design pattern to achieve this by simply changing/adding code? Or do I have to make architectural changes?
Further information: The services are Spring Boot based and use Apache Camel / Google Protocol Buffers for messaging.
Firstly, I want to be absolutely clear that this is not the ideal use of Kafka. Kafka is meant for a pub/sub architecture that provides loose coupling between producers and consumers. In your case, however, the consumer processes the message and is supposed to return the enriched message back to the producer. If implementing from scratch, service 2 could have been a gRPC server that takes in a message and returns the enriched message.
That being said, we can achieve what you want using Kafka. We need to make sure the message is acknowledged only after it has been completely processed. In the context of service 2: when a message is read from topic 1, the service enriches it, publishes the result to topic 2, and only then acknowledges to topic 1 that processing is complete.
So in your example: service 2 reads a message from Kafka but goes down. Since the message was never acknowledged, it is redelivered from Kafka whenever service 2 restarts. However, this also means there is a chance of duplicate messages.
I would also suggest you read this link. It gives you an idea of Kafka transactions: your process can read from Kafka, process the record, and write back to Kafka in a single Kafka transaction.
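A rough sketch of that acknowledge-only-after-publishing idea, expressed with plain spring-kafka rather than Camel (the topic names and the enrichment step are illustrative; the listener container's ack mode must be MANUAL or MANUAL_IMMEDIATE for the Acknowledgment argument to be available):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class EnrichingListener {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EnrichingListener(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @KafkaListener(topics = "topic1")
    public void enrich(String message, Acknowledgment ack) throws Exception {
        String enriched = message + " [enriched]";      // placeholder for the real enrichment
        kafkaTemplate.send("topic2", enriched).get();   // block until the publish has completed
        ack.acknowledge();                              // commit the topic1 offset only now;
                                                        // a crash before this line means redelivery
    }
}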

How can I connect my Spring Boot application to a Kafka topic as soon as it restarts

How can I connect my Spring Boot application to a Kafka topic as soon as the application starts,
so that when the send method is invoked there is no need to fetch the metadata information?
Kafka clients are required to do an initial metadata fetch to determine the leader broker before actually sending the data, but this shouldn't drastically change the startup time of any application and wouldn't prevent you from calling any Kafka producer actions.
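If the goal is simply to pay that metadata cost during startup rather than on the first send, one option (a sketch only; the topic name is illustrative) is to touch the topic metadata from an ApplicationRunner:

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaMetadataWarmup implements ApplicationRunner {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaMetadataWarmup(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @Override
    public void run(ApplicationArguments args) {
        // partitionsFor() forces the producer's metadata fetch for this topic,
        // so the first real send() does not have to wait for it
        kafkaTemplate.partitionsFor("my-topic");
    }
}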

JMS message processing with Spring Integration in a cloud environment

I'm currently trying to refactor the processing of JMS messages to work in a distributed/cloud environment. To allow better retry and error handling, the messages are first stored in the database as JPA entities and then read by a Spring Integration JPA inbound adapter. This works fine as long as only a single instance of my service is running. However, when multiple instances are running, they try to process the same message even after introducing a processing state on the persisted messages.
I have already tried saving the JMS messages in a JDBC message store; however, I would then have to define a group identifier by which an instance selects a message, which is not really possible since the number of instances is dynamic and I cannot assign a group ID to each instance. Another possibility could be some kind of distributed lock with a LockRegistry, but I couldn't make that work.
Do you have any hints/advice on how I could best implement the following requirements with Spring Integration:
JMS message should be persisted
Any instance can pick up the message and process it
If the processing fails, it is retried x times (possibly also by another instance)
If an instance crashes or gets killed during processing, the message must not be lost
Is there maybe some Spring Cloud component which could be helpful?
I'm happy about every hint as to which direction I should go.
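For what it's worth, a minimal sketch of the distributed-lock idea mentioned above, using Spring Integration's JdbcLockRegistry (this assumes spring-integration-jdbc and its INT_LOCK schema are in place; the lock key and method names are illustrative):

import java.util.concurrent.locks.Lock;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.stereotype.Component;

@Configuration
public class LockConfig {

    @Bean
    public DefaultLockRepository lockRepository(DataSource dataSource) {
        return new DefaultLockRepository(dataSource);   // backed by the INT_LOCK table
    }

    @Bean
    public JdbcLockRegistry lockRegistry(DefaultLockRepository lockRepository) {
        return new JdbcLockRegistry(lockRepository);
    }
}

@Component
public class LockedMessageProcessor {

    private final JdbcLockRegistry lockRegistry;

    public LockedMessageProcessor(JdbcLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    public void processIfFree(String messageId, String payload) {
        Lock lock = lockRegistry.obtain("jms-message-" + messageId);
        if (lock.tryLock()) {               // another instance may already own this message
            try {
                // ... process the persisted message here ...
            } finally {
                lock.unlock();
            }
        }
    }
}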

Spring Integration - Kafka Message Driven Channel - Auto Acknowledge

I have used the sample configuration listed in the spring.io docs and it is working fine.
<int-kafka:message-driven-channel-adapter
        id="kafkaListener"
        listener-container="container1"
        auto-startup="false"
        phase="100"
        send-timeout="5000"
        channel="nullChannel"
        message-converter="messageConverter"
        error-channel="errorChannel" />
However, I was testing it with a downstream application, where I consume from Kafka and publish to the downstream system. If the downstream system is down, the messages were still getting consumed and were not replayed.
Or, let's say that after consuming from the Kafka topic I hit some exception in the service activator; I want to throw an exception which rolls back the transaction so that the Kafka messages can be replayed.
In brief, if the consuming application has an issue, I want to roll back the transaction so that messages are not automatically acknowledged and are replayed again and again until they are successfully processed.
That's not how Apache Kafka works. There are no TX semantics similar to JMS; the offset in a Kafka topic has nothing to do with rollback or redelivery.
I suggest you study Apache Kafka more closely from its official resources.
Spring Kafka brings nothing over the regular Apache Kafka protocol; however, you can consider using the retry capabilities in Spring Kafka to redeliver the same record locally: http://docs.spring.io/spring-kafka/docs/1.2.2.RELEASE/reference/html/_reference.html#_retrying_deliveries
And yes, the ack mode must be MANUAL; do not commit the offset to Kafka automatically after consuming.
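As a sketch of that last point, the listener container referenced as container1 in the XML would need manual acknowledgments, roughly like this in Java configuration (class and enum locations differ slightly between Spring Kafka versions; the topic name is illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;

@Configuration
public class ContainerConfig {

    @Bean
    public KafkaMessageListenerContainer<String, String> container1(
            ConsumerFactory<String, String> consumerFactory) {
        // the consumer factory must also set enable.auto.commit=false
        ContainerProperties containerProps = new ContainerProperties("my-topic");
        containerProps.setAckMode(ContainerProperties.AckMode.MANUAL);   // no automatic offset commits
        return new KafkaMessageListenerContainer<>(consumerFactory, containerProps);
    }
}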

Does Spring XD re-process the same message when one of its containers goes down while processing the message?

Application Data Flow:
JSON messages --> ActiveMQ --> Spring XD (business logic: transform JSON to Java objects) --> save data to the target DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks up the message from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose that one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What's the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a RabbitMQ or Kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO, which means messages are acknowledged on success.
