Simple domain class-based Spring Kafka integration

I'm building a set of microservices in the Spring Boot framework, each of them integrating with Kafka for messaging. There appear to be three separate but related Spring libraries offering Kafka integration:
Spring Kafka
Spring Integration Kafka
Spring Cloud Stream
My goal is to abstract away the details of the underlying messaging system and provide a simple messaging service layer for my microservices to send and receive messages. I would like this service layer to work with my domain classes (POJOs) rather than have the microservices concern themselves with building Message instances. For example:
public interface MyMessagingService {
    void send(MyPojo obj);
    MyPojo receive();
}
Secondly, I would like to add Avro support, but first I will get it working with JSON.
To cut to the chase, there seem to be multiple ways of achieving this, which is very confusing, especially with the various Spring libraries available. What is the most straightforward way to provide such a shared messaging layer to my microservices, where they only have to be concerned with domain classes?
I've come across @MessagingGateway from Spring Integration, which looked promising, but it seems tied to send-and-reply semantics, and my services won't be expecting a reply message from Kafka.
The examples I have looked at, some linked below, still seem to have to construct Message instances themselves. Is there a simpler way of doing this?
https://codenotfound.com/spring-kafka-spring-integration-example.html
https://www.baeldung.com/spring-cloud-stream

If your "…goal is to abstract away the details of the underlying messaging system and provide a simple messaging service layer to my microservices to send and receive messages", then why not just use spring-cloud-stream?
A developer doesn't even have to know that the code they write will be part of some messaging system. For example:
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SampleStreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleStreamApplication.class);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
The above is a complete, fully functioning Spring Cloud Stream application that (in the context of the Kafka binder) will receive from the "input" topic and send to the "output" topic the value that was passed through the uppercase(..) function.
Yes, the type conversion is handled transparently for you, for both JSON, Avro, etc.
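For illustration, wiring that function to the topics mentioned above could look like this in application.properties (a sketch assuming the functional binding names that Spring Cloud Stream derives from the uppercase function):

# Map the function's input/output bindings to concrete Kafka topics
spring.cloud.stream.bindings.uppercase-in-0.destination=input
spring.cloud.stream.bindings.uppercase-out-0.destination=output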
Obviously there are some details, but we can certainly discuss them when you have more concrete questions. For now I would suggest going through some of the reference documentation first.

Related

What’s the difference between AbstractMessageSource and MessageProducerSupport in Spring Integration?

When developing inbound channel adapters, I couldn’t find any place that mentions the differences between AbstractMessageSource and MessageProducerSupport in Spring Integration. I’m asking this question in the context of reactive streams, so I’m actually looking at AbstractReactiveMessageSource, but I guess it doesn’t matter for my question. I also wonder whether MessageProducerSupport supports project reactor and doesn’t have an equivalent to AbstractReactiveMessageSource.
There is some documentation about these types of components: https://docs.spring.io/spring-integration/docs/current/reference/html/overview.html#finding-class-names-for-java-and-dsl-configuration
The inbound message flow side has its own components, which are divided into polling and listening behaviors.
So, MessageProducerSupport is for protocols that provide a listening callback for us. We can hook into that callback, build a message, and produce it into the channel provided by the MessageProducer. It is a self-eventing component that has everything it needs to listen to the source system and produce messages from the callback. These channel adapters are called event-driven; examples include JMS, AMQP, HTTP, IMAP, Kinesis, etc.
From here, it is wrong to try to compare MessageProducerSupport with AbstractMessageSource, because they are not related. The one you should look into is SourcePollingChannelAdapter: exactly that is the kind of flow-starting endpoint that is comparable to MessageProducerSupport, the difference being that it relies on a periodically scheduled task to request messages from the provided MessageSource. This type of component is for protocols that don't provide a listening callback, e.g. the local file system, (S)FTP, JDBC, MongoDB, POP3, S3, etc.
You would probably expect something similar to MessageSource at the MessageProducer level, but there is no such layer, because every event-driven protocol has its own specifics, so we cannot extract a common abstraction the way we can for polling protocols.
If your source system provides you with a reactive Publisher, you don't need to look into SourcePollingChannelAdapter and MessageSource at all. You just need a MessageProducerSupport and a call to its subscribeToPublisher(Publisher<? extends Message<?>> publisher) from the start() implementation.
There is no need for a reactive implementation of polling, since a Publisher is not pollable by itself; it is event-driven. It does have its own back-pressure specifics, but those are outside the scope of MessageProducerSupport.
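For illustration, a minimal sketch of such an adapter might look like this (the Publisher source and the String payload are assumptions):

import org.reactivestreams.Publisher;

import org.springframework.integration.endpoint.MessageProducerSupport;
import org.springframework.integration.support.MessageBuilder;

import reactor.core.publisher.Flux;

public class ReactiveEventProducer extends MessageProducerSupport {

    private final Publisher<String> events; // hypothetical event source

    public ReactiveEventProducer(Publisher<String> events) {
        this.events = events;
    }

    @Override
    protected void doStart() {
        // Wrap each event in a Message and let MessageProducerSupport
        // produce it to the configured output channel.
        subscribeToPublisher(Flux.from(this.events)
                .map(event -> MessageBuilder.withPayload(event).build()));
    }
}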
There is also some explanation in this section of the doc: https://docs.spring.io/spring-integration/docs/current/reference/html/reactive-streams.html#source-polling-channel-adapter. And see a couple next paragraphs.

How to produce and consume a RabbitMQ message inside a Spring RestController and send it back to the user

Hello anyone and everyone. I am working on a Spring Boot application. Here is my problem: I have a Spring RestController with a post mapping that takes in some data. I need to send that data over RabbitMQ to another application, which will perform some calculations on it and send the result back, which I in turn want to return to the user.
I know that RabbitMQ is for async communication, but I need my controller to return the result that comes back from RabbitMQ all in one go. Right now I am using:
@EnableBinding(Sink::class)
class OptimizedScheduleMessageListener {

    @StreamListener(Sink.INPUT)
    fun handler(incomingMessage: MyDTO) {
        println(incomingMessage)
    }
}
to retrieve the results from RabbitMQ. Now I just need my Controller to return it.
@PostMapping(produces = ["application/json"])
fun retrieveOptimizedSchedule(): Result<MyDTO> {
    val myUncalculatedDTO = MyDTO()
    // 'source' is an injected Spring Cloud Stream Source binding
    source.output().send(MessageBuilder.withPayload(myUncalculatedDTO).build())
    return ???
}
Any help with this endeavor is much appreciated.
Thanks in advance.
Spring Cloud Stream is not designed for request/reply processing.
See the Spring AMQP (Spring for RabbitMQ) project.
The RabbitTemplate has sendAndReceive and convertSendAndReceive methods to implement the RPC model.
On the server side, a @RabbitListener method can be used for request/reply.
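For illustration, the client side could look something like this (the queue name and endpoint path are assumptions, and a Jackson2JsonMessageConverter would need to be configured on the template for JSON payloads):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ScheduleController {

    private final RabbitTemplate rabbitTemplate;

    public ScheduleController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @PostMapping(value = "/schedule", produces = "application/json")
    public MyDTO retrieveOptimizedSchedule(@RequestBody MyDTO request) {
        // Blocks until the other application replies, or returns null
        // once the template's reply timeout elapses.
        return (MyDTO) rabbitTemplate.convertSendAndReceive("optimization.requests", request);
    }
}

On the server side, a @RabbitListener method that returns a value sends the reply automatically.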
What you are trying to do is not advised, for a couple of reasons:
1. If the other application that consumes the RabbitMQ messages fails, requests will be left blocked on the controller end.
2. There is a limit on how many requests the server can hold open simultaneously.
What you can do is use a communication protocol other than REST for this specific part. A WebSocket might be an ideal solution. If not, you need two REST endpoints: one to submit the job and get back a request id, and another to poll periodically with that id until the processed, completed response is available, as sketched below.
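A minimal sketch of that submit-then-poll variant, assuming an in-memory result store and made-up paths (the RabbitMQ listener that fills the store when the reply arrives is omitted):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AsyncScheduleController {

    // Filled in by the RabbitMQ reply listener (not shown).
    private final Map<String, MyDTO> results = new ConcurrentHashMap<>();

    @PostMapping("/schedule")
    public String submit(@RequestBody MyDTO dto) {
        String requestId = UUID.randomUUID().toString();
        // Send dto to RabbitMQ together with requestId here.
        return requestId;
    }

    @GetMapping("/schedule/{requestId}")
    public ResponseEntity<MyDTO> poll(@PathVariable String requestId) {
        MyDTO result = results.get(requestId);
        // 202 Accepted while still processing; 200 OK with the body once done.
        return result == null
                ? ResponseEntity.accepted().build()
                : ResponseEntity.ok(result);
    }
}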

Spring-Kafka vs. Spring-Cloud-Stream (Kafka)

Using Kafka as a messaging system in a microservice architecture, what are the benefits of using spring-kafka vs. spring-cloud-stream + spring-cloud-starter-stream-kafka?
The Spring Cloud Stream framework supports more messaging systems and therefore has a more modular design. But what about functionality? Is there a gap between the functionality of spring-kafka and spring-cloud-stream + spring-cloud-starter-stream-kafka?
Which API is better designed?
Looking forward to reading your opinions.
Spring Cloud Stream with the Kafka binder relies on spring-kafka, so the former supports all the functionality of the latter, but it is more heavyweight. Here are some points to help you make the choice:
If you might swap Kafka for another messaging middleware in the future, Spring Cloud Stream should be your choice, since it hides the implementation details of Kafka.
If you want to integrate other messaging middleware with Kafka, go for Spring Cloud Stream, since its selling point is making such integration easy.
If you want simplicity and can't accept the performance overhead, choose spring-kafka.
If you plan to migrate to a public cloud service such as AWS Kinesis or Azure Event Hubs, use Spring Cloud Stream, which is part of the Spring Cloud family.
Use Spring Cloud Stream when you are creating a system where one channel is used for input, some processing is done, and the result is sent to one output channel. In other words, it is more of an RPC-style system to replace, say, RESTful API calls.
If you plan to build an event-sourcing system, use spring-kafka, where you can publish and subscribe to the same stream. This is something that Spring Cloud Stream does not let you do easily, as it disallows the following:
public interface EventStream {

    String STREAM = "event_stream";

    @Output(EventStream.STREAM)
    MessageChannel publisher();

    @Input(EventStream.STREAM)
    SubscribableChannel stream();
}
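With plain spring-kafka, by contrast, publishing and subscribing to the same topic is straightforward. A sketch, assuming a String payload and made-up topic and group names:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class EventStreamComponent {

    private static final String TOPIC = "event_stream";

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EventStreamComponent(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(String event) {
        // Publish to the same topic this component also listens on.
        kafkaTemplate.send(TOPIC, event);
    }

    @KafkaListener(topics = TOPIC, groupId = "event-stream")
    public void consume(String event) {
        System.out.println("Received: " + event);
    }
}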
A few things that Spring Cloud Stream helps you avoid doing are:
setting up the serializers and deserializers
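For example, with plain spring-kafka and Spring Boot you would typically configure serialization yourself, along these lines (a properties sketch; the JSON (de)serializers shown are one common choice):

spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.trusted.packages=com.example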

Spring 5 Reactive WebSockets: recommended use

I've been learning a bit about Spring 5 WebFlux, reactive programming, and WebSockets. I've watched Josh Long's Spring Tips: Reactive WebSockets with Spring Framework 5. The code that sends data from server to client through a WebSocket connection uses a Spring Integration IntegrationFlow that publishes to a PublishSubscribeChannel, which has a custom MessageHandler subscribed to it; that handler takes the message, converts it to an object that is then converted to JSON and emitted to the FluxSink from the callback supplied to Flux.create(), which is used to send over the WebSocketConnection.
I was wondering if the use of IntegrationFlow and PublishSubscribeChannel is the recommended way to push events from a background process to the client, or if this is just more convenient in this particular example (monitoring the file system). I'd think if you have control over the background process, you could have it emit to the FluxSink directly?
I'm thinking about use cases similar to the following:
a machine learning process whose progress is monitored
updates to the state of a game world that are sent to players
chat rooms / team collaboration software
...
What I've done in the past that has worked for me is to create a Spring Component that implements WebSocketHandler:
@Component
public class ReactiveWebSocketHandler implements WebSocketHandler {
Then, in the handle method, Spring injects the WebSocketSession object:
@Override
public Mono<Void> handle(WebSocketSession session) {
Then create one or more Flux reactive publishers that emit messages (WebSocketMessage) for the client:
final var output = session.send(Flux.merge(flux1, flux2));
Then you can zip up the incoming and outgoing Flux objects in a Mono, and Spring will take it from there:
return Mono.zip(incomingWebsocketMsgResponse.getWebSocketMsgFlux().then(),
                outputWithErrorMsgs)
        .then();
Example: https://howtodoinjava.com/spring-webflux/reactive-websockets/
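Putting those fragments together, a condensed sketch of the whole handler could look like this (the two outbound sources are placeholders):

import java.time.Duration;

import org.springframework.stereotype.Component;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketMessage;
import org.springframework.web.reactive.socket.WebSocketSession;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Component
public class ReactiveWebSocketHandler implements WebSocketHandler {

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // Outbound: merge two placeholder sources and send them as text frames.
        Flux<WebSocketMessage> output = Flux.merge(
                        Flux.interval(Duration.ofSeconds(1)).map(i -> "tick " + i),
                        Flux.just("hello"))
                .map(session::textMessage);
        // Inbound: log incoming frames.
        Mono<Void> input = session.receive()
                .doOnNext(msg -> System.out.println(msg.getPayloadAsText()))
                .then();
        // Complete when either direction completes, as in the fragments above.
        return Mono.zip(input, session.send(output)).then();
    }
}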
Since this question was asked, Spring has introduced RSocket support. You might think of it like the WebSocket STOMP support that exists in Spring MVC, but much more powerful and efficient, supporting backpressure and advanced communication patterns at the protocol level.
For the use cases you're mentioning, I'd advise using RSocket, as you'd get a powerful programming model with @MessageMapping and all the expected support in Spring (codecs for JSON and CBOR, security, etc.).
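A minimal responder sketch, assuming a made-up route and a periodic String payload (with spring-boot-starter-rsocket on the classpath):

import java.time.Duration;

import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.stereotype.Controller;

import reactor.core.publisher.Flux;

@Controller
public class GameStateController {

    // Request/stream interaction: the client requests the route once
    // and receives a stream of updates with protocol-level backpressure.
    @MessageMapping("game.updates")
    public Flux<String> updates() {
        return Flux.interval(Duration.ofSeconds(1)).map(i -> "state-" + i);
    }
}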

Display results from Bolts of a storm cluster on browser

Is it possible to display the results derived from the bolts of a Storm topology in a web browser or a UI at runtime? How is it done?
Not sure what you are looking for, but it is very much possible to write any bolt output to a file. You simply need a working bolt that writes whatever is passed to it into a file. Your logic to write the stream to a file should live inside the bolt's execute(Tuple tuple) method.
Is that what you are seeking?
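For illustration, a minimal file-writing bolt might look like this (the output path is an assumption; signatures follow Storm 2.x):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class FileWriterBolt extends BaseRichBolt {

    private transient PrintWriter writer;
    private transient OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            this.writer = new PrintWriter(new FileWriter("/tmp/bolt-output.txt", true));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        // Append each incoming tuple's values to the file, then ack it.
        writer.println(tuple.getValues());
        writer.flush();
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: no output stream declared.
    }
}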
UPDATE
How about putting a queue (Kafka/Kestrel) between your bolt and the WebSockets? I've found this article, which says:
"In order to easily integrate between Storm and the front-end (through WebSockets) I chose Apache Camel to do the heavy lifting for me. By having the bolts in the Storm topology write their output to an ActiveMQ queue, I could create a Camel route that subscribes to this queue and pushes the messages to WebSockets, like so:"
public class StreamingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("activemq:storm.queue")
            .to("websocket://storm?sendToAll=true");
    }
}
I also found this article discussing integration between JMS and WebSockets.
