Apache Kafka: produce and consume by the same application - Spring

I need an Apache Kafka producer-consumer REST microservice application where, once I trigger the producer REST endpoint, I should immediately get an acknowledgement message. The workflow that triggers this service then waits at the next step (a wait event), which is to be triggered by the consumer application.
How can I implement this?
Thanks and Regards,
Albin

The tech stack you are looking at is:
1. Dropwizard framework: for spinning up a REST service to serve the incoming requests
2. Apache Kafka Java clients: to produce and consume based on triggers
Step #1. Follow this getting-started Dropwizard tutorial https://www.dropwizard.io/0.9.2/docs/getting-started.html to build a REST service.
Step #2. In Dropwizard's initialization step, start the Kafka producer and the Kafka consumer thread (see the sketch after these steps).
Step #3. The API exposes a POST endpoint to receive the trigger events. Based on whether the event is a produce or a consume, the API resource can proceed with handling it.
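
To make step #2 concrete, here is a minimal sketch using the plain Apache Kafka Java client. The class name, the workflow-events topic, and the handle() hook are my own illustrative assumptions, not part of the tutorial:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaBridge {

    private final KafkaProducer<String, String> producer;
    private volatile boolean running = true;

    public KafkaBridge(Properties producerProps, Properties consumerProps) {
        this.producer = new KafkaProducer<>(producerProps);
        // The consumer gets its own thread so REST requests never block on poll().
        new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
                consumer.subscribe(Collections.singletonList("workflow-events"));
                while (running) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> handle(r.value()));
                }
            }
        }, "kafka-consumer").start();
    }

    // Called from the POST resource: fire-and-forget send, so the resource
    // can return its acknowledgement to the workflow immediately.
    public void publish(String payload) {
        producer.send(new ProducerRecord<>("workflow-events", payload));
    }

    private void handle(String payload) {
        // Signal the waiting workflow step here.
    }

    public void stop() {
        running = false;
        producer.close();
    }
}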

Related

Guaranteed processing - how to implement it between two message-based services?

I am developing an application consisting of two services. The services are message-based and communicate over Apache Kafka (one topic for service 1 -> service 2, another topic for service 2 -> service 1).
Workflow:
Service 1 is connected to a database where it reads messages and sends them to service 2.
In service 2 the messages get enriched with additional information and then get sent back to service 1.
Service 1 saves the message back to the database.
Database <--> Service1 <-- Kafka --> Service2
The problem I am facing is: I have to make sure that every message gets processed by service 2 and afterwards saved back to the database by service 1.
Example: Service 2 reads a message from Kafka and crashes. The message is lost.
Is there a design pattern to achieve this by simply changing/adding code? Or do I have to make architectural changes?
Further information: The services are Spring Boot-based and use Apache Camel and Google Protocol Buffers for messaging.
Firstly, I want to be absolutely clear that this is not the ideal use of Kafka. Kafka is meant for a pub/sub architecture that provides loose coupling between producers and consumers. In your case, however, the consumer processes the message and is supposed to return the enriched message back to the producer. If implementing from scratch, you could have implemented service 2 as a gRPC server that takes in a message and returns the enriched message.
That being said, we can achieve what you want using Kafka. We need to make sure that we always acknowledge a message only after it has been completely processed. In the context of service 2: when a message is read from topic 1, service 2 would enrich it, persist it to topic 2, and only then acknowledge to topic 1 that message processing is complete.
So in your example: service 2 reads a message from Kafka but goes down. Since the message was not acknowledged by service 2, whenever it restarts the message is redelivered to it from Kafka. However, this also means there is a chance of duplicate messages.
I would also suggest you read this link. It gives you an idea of Kafka transactions: your process can read from Kafka, process the message, and write back to Kafka in a single Kafka transaction.
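
A sketch of that acknowledge-only-after-processing loop for service 2, using the plain Kafka Java consumer and producer with auto-commit disabled; the enrich() helper and the serializer settings in the passed-in properties are assumptions, not from the question:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Service2Loop {

    public static void run(Properties consumerProps, Properties producerProps) throws Exception {
        consumerProps.put("enable.auto.commit", "false"); // acknowledge manually
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("topic1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String enriched = enrich(record.value());
                    // Block until Kafka confirms the write to topic2 ...
                    producer.send(new ProducerRecord<>("topic2", enriched)).get();
                }
                // ... and only then commit the consumed offsets. If service 2
                // crashes before this line, the uncommitted messages are
                // redelivered after restart (at-least-once, duplicates possible).
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }

    private static String enrich(String message) {
        return message + " [enriched]"; // placeholder for the real enrichment
    }
}

Committing only after the send to topic2 succeeds gives at-least-once delivery; the duplicate case described above is exactly what the read-process-write Kafka transaction tightens further.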

Create thread and route message from Camel to microservice and back

I'm using Camel with JMS and Spring Boot and would like to build a route for the following scenario:
User 1 (MQTT client) sends a message (topic) to ActiveMQ Artemis.
Camel (using from) catches that message and prints it out with the help of log.
What I would like to do is create a new thread (asynchronous) for each caught message, send that message from Camel to a microservice (a Python program) that takes the message and inserts some extra strings, and then send the changed message back to Camel and ActiveMQ.
In the end, the changed message will be sent from ActiveMQ to User 2.
Can you give me some directions or route examples for how to do something like that?
So the key points are to create a new thread for every message and to create a route to that microservice and back.
The route could look like this:

from("jms:queue:inputQueue")
    .log("${body}")
    .to("http://oldhost")
    .to("jms:queue:outputQueue");
Some notes:
You can call a downstream HTTP endpoint with .to(). The current message body is used as request body. The response of the service overwrites the message body.
I don't know why you want to create a new thread for a synchronous call. You can leverage parallel processing by consuming multiple messages from Artemis in parallel with multiple consumers; that way, every message is processed in its own thread (see the sketch below). If your motivation is resilience, there is also a Circuit Breaker EIP in Camel.
If you use Camel 2.x, use the HTTP4 component ("http4://") to get the newer HTTP client library. In Camel 3.x the old one was dropped, and the new one is simply called the HTTP component.
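
For the parallel-consumption note, a sketch of the same route using the JMS component's concurrentConsumers option (the concurrency value of 5 is arbitrary):

// Up to 5 messages are consumed in parallel, each processed on its own thread.
from("jms:queue:inputQueue?concurrentConsumers=5")
    .log("${body}")
    .to("http://oldhost")             // synchronous call; the response becomes the new body
    .to("jms:queue:outputQueue");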

Can we restrict Spring Boot RabbitMQ message processing to specific timings only?

Using Spring Boot's @RabbitListener, we are able to process AMQP messages.
Whenever a message is sent to the queue, it is immediately published to the destination exchange.
Using @RabbitListener, we are able to process the message immediately.
But we need to process messages only between specific timings, for example 1 AM to 6 AM.
How can we achieve that?
First of all, you can take a look at the Delayed Exchange feature of RabbitMQ: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
This way, on the producer side, you determine how long a message should be delayed before it is routed to the main exchange for actual consumption, as sketched below.
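
A sketch of that producer-side delay with Spring AMQP's RabbitTemplate; it assumes the rabbitmq_delayed_message_exchange plugin is installed and the exchange was declared with setDelayed(true), and the exchange and routing-key names are illustrative:

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class DelayedSender {

    private final RabbitTemplate rabbitTemplate;

    public DelayedSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Publishes to a delayed exchange; the message is routed onward only
    // after delayMillis has elapsed (sets the x-delay header).
    public void sendDelayed(String payload, int delayMillis) {
        rabbitTemplate.convertAndSend("delayed.exchange", "routing.key", payload, message -> {
            message.getMessageProperties().setDelay(delayMillis);
            return message;
        });
    }
}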
Another way is to take a look at Spring Integration and its Delayer component: https://docs.spring.io/spring-integration/docs/5.2.0.BUILD-SNAPSHOT/reference/html/messaging-endpoints.html#delayer
This way you still consume the messages from RabbitMQ as they arrive, but delay them in the target application's logic.
And another option is to start()/stop() the listener container according to your timing requirements; a sketch follows. This way the messages stay in RabbitMQ until you start the listener container: https://docs.spring.io/spring-amqp/docs/current/reference/html/#containerAttributes
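
A sketch of that third option, combining @RabbitListener(autoStartup = "false") with Spring's @Scheduled (requires @EnableScheduling on a configuration class; the listener id, queue name, and cron expressions are illustrative):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightWindowListener {

    private final RabbitListenerEndpointRegistry registry;

    public NightWindowListener(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // Not started on boot; messages simply accumulate in the queue.
    @RabbitListener(id = "nightListener", queues = "work-queue", autoStartup = "false")
    public void onMessage(String payload) {
        // process the message
    }

    @Scheduled(cron = "0 0 1 * * *") // 1 AM: start consuming
    public void openWindow() {
        registry.getListenerContainer("nightListener").start();
    }

    @Scheduled(cron = "0 0 6 * * *") // 6 AM: stop; remaining messages wait in RabbitMQ
    public void closeWindow() {
        registry.getListenerContainer("nightListener").stop();
    }
}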

Event Driven Architecture on Multi-instance services

We are using a microservice architecture in our project. We deploy each service to a cluster using Kubernetes. The services are developed in Java with the Spring Boot framework, and three replicas exist for each service. Services communicate with each other using only events; RabbitMQ is used as the message queue. One of the services is used to send emails, and the details of an email are provided by another service with an event. When a SendingEmail event is published by a service, all three replicas of the email service consume the event, and the same email is sent three times.
How can I prevent the other two replicas from sending the email?
I think it depends on how you work with RabbitMQ.
You can configure RabbitMQ with one queue for these events and make the Spring Boot applications that represent the sending servers competing consumers, as sketched below.
If you configure it like this, only one replica will get an event at a time, and only if it fails to process it will the message return to the queue and become available to the other consumers.
From what you've described, all of them are getting the message, so it currently works like pub/sub (which is also a valid way to work with RabbitMQ; it is just not a good fit in this case).
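
A sketch of the competing-consumers setup; the point is simply that all three replicas listen on the same named queue, rather than each binding its own exclusive queue to a fanout exchange (the queue name and String payload are illustrative):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class EmailEventConsumer {

    // Every replica listens on the SAME named queue, so RabbitMQ delivers each
    // SendingEmail event to exactly one of the three instances. Only if that
    // instance rejects the message (or dies before acking) is it requeued and
    // offered to another replica.
    @RabbitListener(queues = "email-events")
    public void onSendingEmail(String event) {
        // send the one and only email
    }
}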

JMS with Spring Integration or Spring Batch

Our project is to integrate two applications, using the REST API of each and using JMS (to make the integration asynchronous). Application 1 writes a message to the queue. The next step is to read the message from the queue, process it, and send it to application 2.
I have two questions:
Should we use one more queue for storing messages after processing and before sending them to application 2?
Should we use Spring Batch or Spring Integration to read and process the data?
Either you are not showing the whole premise, or you are really over-engineering your app. If you just need to read messages from the queue, plain Spring JMS is enough... On the other hand, with Spring Integration and the power of its adapters, you can simply route messages from an <int-jms:message-driven-channel-adapter> to an <int-http:outbound-channel-adapter>, as sketched below.
I don't see a reason to store the message somewhere else between reading and sending: if an exception occurs, the message is simply rolled back to the JMS queue.
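
A sketch of that adapter pairing using Spring Integration's Java DSL, which maps one-to-one onto the XML elements named above (the queue name and target URL are illustrative):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.http.dsl.Http;
import org.springframework.integration.jms.dsl.Jms;

@Configuration
public class JmsToHttpFlow {

    @Bean
    public IntegrationFlow jmsToApplication2(ConnectionFactory connectionFactory) {
        return IntegrationFlows
                // <int-jms:message-driven-channel-adapter> equivalent
                .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                        .destination("application1.outbox"))
                // <int-http:outbound-channel-adapter> equivalent
                .handle(Http.outboundChannelAdapter("http://application2/api/messages"))
                .get();
    }
}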
