Guaranteed Processing - How to implement between two message-based services? - spring-boot

I am developing an application consisting of two services. The services are message-based and communicate over Apache Kafka (one topic for service 1 -> service 2, another topic for service 2 -> service 1).
Workflow:
Service 1 is connected to a database where it reads messages and sends them to service 2.
In service 2 the messages get enriched with additional information and then get sent back to service 1.
Service 1 saves the message back to the database.
Database <--> Service1 <-- Kafka --> Service2
The problem I am facing is that I have to make sure every message gets processed by service 2 and afterwards saved back to the database by service 1.
Example: Service 2 reads a message from Kafka and crashes. The message is lost.
Is there a design pattern to achieve this by simply changing/adding code? Or do I have to make architectural changes?
Further information: The services are Spring Boot based and use Apache Camel / Google Protocol Buffers for messaging.

Firstly, I want to be absolutely clear that this is not the ideal use case for Kafka. Kafka is meant to be used in a pub/sub architecture to provide loose coupling between producers and consumers. In your case, however, the consumer processes the message and is supposed to return the enriched message to the producer. If implementing from scratch, service 2 could have been a gRPC server that takes in a message and returns the enriched message.
That being said, we can achieve what you want using Kafka. We need to make sure that we only acknowledge a message after it has been completely processed. In the context of service 2: when a message is read from topic 1, service 2 would enrich it, persist it to topic 2, and only then acknowledge to topic 1 that processing is complete.
So in your example: service 2 reads a message from Kafka but goes down. Since the message was not acknowledged by service 2, it will be redelivered from Kafka whenever service 2 restarts. However, this also means there is a chance of duplicate messages.
I would also suggest reading this link. It gives you an idea of Kafka transactions: your process can read from Kafka, process the message, and write back to Kafka in a single Kafka transaction.
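The acknowledge-after-processing idea can be sketched with Spring Kafka as below (an assumption on my part -- the question mentions Apache Camel, so translate as needed). The topic names and the enrich() body are placeholders, and the container's AckMode must be set to MANUAL for the Acknowledgment argument to be injected:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class EnrichmentListener {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EnrichmentListener(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @KafkaListener(topics = "topic1")
    public void onMessage(String message, Acknowledgment ack) throws Exception {
        String enriched = enrich(message);            // business logic
        kafkaTemplate.send("topic2", enriched).get(); // wait until the send is confirmed
        ack.acknowledge();                            // only then commit the offset on topic1
    }

    private String enrich(String message) {
        return message; // placeholder for the real enrichment
    }
}
```

If the service crashes before acknowledge(), the offset is never committed and the record is redelivered on restart -- exactly the at-least-once behaviour described above, which is also why the processing should be idempotent.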

Related

Spring Kafka with Spring JPA

I have two microservices, A and B.
Service A sends a message to the Kafka topic "A-Topic"; service B consumes the message.
In service B, the Kafka listener performs the steps below:
1. Persist the data in the database (repo.save(entity))
2. Publish the response message to "B-Topic" (kafkaTemplate.send("B-Topic", message))
I am using the Spring @Transactional annotation at the service level in both services.
In the success scenario, the data is persisted and the success message is published to the topic only once, whereas in the failure scenario the database save fails due to an integrity-constraint violation, and the failure message is published to Kafka 10 times in a row.
If I remove the @Transactional annotation from the service class, then the message is published only once in the failure scenario as well.
I don't understand how the @Transactional annotation causes the message to be published 10 times to Kafka.
Please let me know your inputs.
Thanks in advance.
The default error handler will attempt delivery 10 times; you need to enable Kafka transactions in the listener container so the Kafka sends are rolled back (and the consumer on B-Topic needs isolation.level=read_committed).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
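A minimal Spring Boot configuration sketch for the fix described above (property names from Spring Boot's `spring.kafka.*` namespace; adjust to your serializers and broker setup):

```yaml
spring:
  kafka:
    producer:
      # Setting a transaction-id-prefix enables Kafka transactions; sends made
      # by KafkaTemplate inside the listener then join one transaction that is
      # aborted when the listener throws (e.g. on the constraint violation).
      transaction-id-prefix: tx-
    consumer:
      properties:
        # Consumers of B-Topic must skip records from aborted transactions
        "isolation.level": read_committed
```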

Can we restrict spring boot rabbitmq message processing only between specific timings?

Using the Spring Boot @RabbitListener, we are able to process AMQP messages.
Whenever a message is sent to the queue, it is immediately published to the destination exchange, and using @RabbitListener we are able to process the message immediately.
But we need to process messages only between specific timings, for example 1AM to 6AM.
How can we achieve that?
First of all, you can take a look at the Delayed Exchange feature of RabbitMQ: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
This way, on the producer side you determine how long the message should be delayed before it is routed to the main exchange for actual consumption.
Another way is to take a look at Spring Integration and its Delayer component: https://docs.spring.io/spring-integration/docs/5.2.0.BUILD-SNAPSHOT/reference/html/messaging-endpoints.html#delayer
This way you will consume messages from RabbitMQ but delay them in the target application's logic.
Yet another way is to start()/stop() the listener container according to your timing requirements. This way the messages stay in RabbitMQ until you start the listener container: https://docs.spring.io/spring-amqp/docs/current/reference/html/#containerAttributes
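For the start/stop approach, the time-window decision itself is plain java.time logic. A minimal sketch of the check (the 1AM-6AM window is from the question; in the application this result would decide when to call the listener container's start()/stop(), e.g. from two @Scheduled methods):

```java
import java.time.LocalTime;

public class ProcessingWindow {

    static final LocalTime START = LocalTime.of(1, 0); // 1AM, inclusive
    static final LocalTime END = LocalTime.of(6, 0);   // 6AM, exclusive

    // True while messages should be consumed
    public static boolean isOpen(LocalTime now) {
        return !now.isBefore(START) && now.isBefore(END);
    }

    public static void main(String[] args) {
        System.out.println(isOpen(LocalTime.of(3, 0)));  // inside the window
        System.out.println(isOpen(LocalTime.of(12, 0))); // outside the window
    }
}
```

Calling container.stop() outside the window leaves unacknowledged messages in the queue, so nothing is lost; they are delivered as soon as start() is called again.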

Spring JMS: How to work with `ActiveMQ Advisory Message`

For a Spring Framework app working with ActiveMQ, with/without WebSocket:
The requirement is that prior to sending any message to a Topic, a check should be done on the Number Of Consumers: if it returns 1, the message can be sent safely; if it returns 0, the message can't be sent.
The clients may come from WebSocket, and consider that there is no durable subscription. Thus if a message is sent and there are no clients, the message arrives at the Topic and is practically lost (never consumed), and Messages Enqueued increments by 1.
I already did a research and I have read the following:
JMSTemplate check if topic exists and get subscriber count
Notify ActiveMQ producer if consumer on the destination is down
ActiveMQ get number of consumers listening to a topic from java
Detect change in Consumers of an ActiveMQ topic
Practically all is based on Advisory Message. I already have read:
Advisory Message
Handling Advisory Messages
Apache ActiveMQ Advisory Example
I understand that if a Topic named abc.xyz exists, then ActiveMQ creates ActiveMQ.Advisory.Consumer.Topic.abc.xyz. Up to here I am OK with this pattern, and I can confirm it through the ActiveMQ Web Console.
What is confusing for me is that practically all the examples available from the previous links work around creating a Session and mostly use the onMessage method. For the latter, I know it works as a listener.
Question 01: So who is expected to call that ActiveMQ.Advisory.Consumer.Topic.abc.xyz, i.e. to trigger that onMessage method? That is my confusion.
What I need is to work with the Spring Framework API (the app is already working and running with a CachingConnectionFactory, so a Connection can be retrieved, along with other @Beans for the ActiveMQ infrastructure), get access to that ActiveMQ.Advisory.Consumer.Topic.abc.xyz destination, and retrieve the Number Of Consumers value.
Note: even when an ActiveMQTopic is declared with @Bean and it is possible to retrieve that Destination in some @Component, sadly the API does not offer a method such as getConsumers().
Question 02: How can this be accomplished?
I am assuming the JMS 2.0.x API could perhaps help in some way.
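As to Question 01: the broker itself publishes to ActiveMQ.Advisory.Consumer.Topic.abc.xyz whenever a consumer of abc.xyz subscribes or unsubscribes; your code only listens. A hedged Spring JMS sketch (the "topicListenerFactory" bean name is an assumption -- it should be a JmsListenerContainerFactory with pubSubDomain=true; the "consumerCount" int property is what ActiveMQ sets on consumer advisories):

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.jms.JMSException;
import javax.jms.Message;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ConsumerCountAdvisoryListener {

    private final AtomicInteger consumerCount = new AtomicInteger();

    @JmsListener(destination = "ActiveMQ.Advisory.Consumer.Topic.abc.xyz",
                 containerFactory = "topicListenerFactory")
    public void onAdvisory(Message advisory) throws JMSException {
        // ActiveMQ sets the current number of consumers as an int property
        consumerCount.set(advisory.getIntProperty("consumerCount"));
    }

    // Check this before sending to abc.xyz
    public boolean hasConsumers() {
        return consumerCount.get() > 0;
    }
}
```

The producer would then consult hasConsumers() before sending, which addresses Question 02 without polling the broker.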

JMS with Spring Integration or Spring Batch

Our project is to integrate two applications, using the REST API of each and using JMS (to provide an asynchronous nature). Application 1 writes the message to the queue. The next step is to read the message from the queue, process it, and send it to application 2.
I have two questions:
Should we use one more queue for storing messages after processing and before sending them to application2?
Should we use spring batch or spring integration to read/process the data?
Either you are not showing the whole premise, or you are really over-complicating your app. If you just need to read messages from the queue, it is enough to use Spring JMS directly... On the other hand, with Spring Integration and the power of its adapters, you can simply process messages from the <int-jms:message-driven-channel-adapter> to the <int-http:outbound-channel-adapter>.
I don't see a reason to store the message somewhere else between reading and sending. If some exception happens there, you just roll your message back to the JMS queue.
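A sketch of the flow described above in Spring Integration XML (queue name, channel id, and application 2's URL are placeholders):

```xml
<!-- Read from the queue application 1 writes to -->
<int-jms:message-driven-channel-adapter id="jmsIn"
        destination-name="app1.outbound.queue"
        channel="toApp2"/>

<int:channel id="toApp2"/>

<!-- POST each message to application 2's REST API -->
<int-http:outbound-channel-adapter channel="toApp2"
        url="http://application2/api/messages"
        http-method="POST"/>
```

With transacted JMS consumption, a failure in the HTTP step rolls the message back onto the queue, which is why no intermediate store is needed.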

Does Spring XD re-process the same message when one of it's container goes down while processing the message?

Application Data Flow:
JSON Messages --> ActiveMQ --> Spring XD -- Business Logic (transform JSON to Java object) --> Save Data to Target DB --> DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks up the message from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What's the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a RabbitMQ or Kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO which means messages are acknowledged on success.
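For reference, with the RabbitMQ transport this would be configured in servers.yml roughly as below (property names per the Message Bus Configuration docs mentioned above; verify against your Spring XD version):

```yaml
xd:
  messagebus:
    rabbit:
      default:
        # AUTO: the bus acknowledges only after the module processes the
        # message successfully, so a container crash leads to redelivery
        ackMode: AUTO
```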
