Backpressure in a Spring DeferredResult + Akka actors application - spring

I am thinking of using a chain of Akka workers to model a workflow inside a DeferredResult based Spring MVC web application. Essentially the controller will return a DeferredResult and the actors in the chain will work to populate a CompletableFuture which feeds the DeferredResult when completed.
What I am not able to figure out is:
* Will Akka exert back-pressure if this setup takes on too much load?
* If so, how can I detect that this is happening?

Consider using Alpakka's Spring Web connector, which allows integration of Akka Streams in a Spring Web application. Akka Streams provides backpressure as part of its adherence to the reactive streams specification, and the connector allows the exposure of streams as HTTP endpoints in a Spring application. An example from the Alpakka documentation:
import akka.NotUsed;
import akka.stream.javadsl.Source;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SampleController {

  @RequestMapping("/")
  public Source<String, NotUsed> index() {
    return Source.repeat("Hello world!")
        .intersperse("\n")
        .take(10);
  }
}
In your case, you could model your workflow as a stream.
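As a rough sketch (not from the Alpakka documentation; the worker steps, buffer size, and parallelism below are all hypothetical), each mapAsync stage can stand in for one worker in your chain. Stages only pull new elements when downstream demand exists, so backpressure propagates through the whole pipeline, and a bounded Source.queue makes saturation observable: when offers start being dropped, the pipeline is overloaded and the controller can reject work instead of queueing it.
import akka.actor.ActorSystem;
import akka.stream.OverflowStrategy;
import akka.stream.QueueOfferResult;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.stream.javadsl.SourceQueueWithComplete;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class WorkflowStream {

    private final SourceQueueWithComplete<String> queue;

    public WorkflowStream(ActorSystem system) {
        // Bounded input queue: when downstream stages cannot keep up, new
        // offers are dropped rather than buffered without limit.
        this.queue =
            Source.<String>queue(100, OverflowStrategy.dropNew())
                .mapAsync(4, this::validate) // hypothetical worker step 1
                .mapAsync(4, this::enrich)   // hypothetical worker step 2
                .to(Sink.foreach(System.out::println))
                .run(system);
    }

    // Returns true if the request was accepted; false means the pipeline is
    // saturated (backpressure), so the caller could e.g. answer 503.
    public CompletionStage<Boolean> submit(String request) {
        return queue.offer(request)
            .thenApply(result -> result.equals(QueueOfferResult.enqueued()));
    }

    private CompletionStage<String> validate(String s) {
        return CompletableFuture.completedFuture(s.trim());
    }

    private CompletionStage<String> enrich(String s) {
        return CompletableFuture.completedFuture(s + " [enriched]");
    }
}
The value emitted by the last stage could then complete the CompletableFuture that feeds your DeferredResult.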
The Akka team recently published a blog post about this connector.

Related

How to produce and consume a RabbitMQ message inside a Spring RestController and send it back to the user

Hello Anyone and Everyone. I am working on a Spring Boot application. Here is my problem. I have a Spring RestController with a post-mapping that takes in some data. I then need to send that data over RabbitMQ to another application, which in turn will perform some calculations on it and send the result back, which I then want to return to the user.
I know that RabbitMQ is for async communication, but I need my controller to return the result that comes back from RabbitMQ all in one go. Right now I am using:
@EnableBinding(Sink::class)
class OptimizedScheduleMessageListener {

    @StreamListener(Sink.INPUT)
    fun handler(incomingMessage: MyDTO) {
        println(incomingMessage)
    }
}
to retrieve the results from RabbitMQ. Now I just need my Controller to return it.
@PostMapping(produces = ["application/json"])
fun retrieveOptimizedSchedule(): Result<MyDTO> {
    val myUncalculatedDTO = MyDTO()
    source.output().send(MessageBuilder.withPayload(myUncalculatedDTO).build())
    return ???
}
Any help with this endeavor is much appreciated.
Thanks in Advance.
Spring Cloud Stream is not designed for request/reply processing.
See the Spring AMQP (Spring for RabbitMQ) project.
The RabbitTemplate has sendAndReceive and convertSendAndReceive methods to implement the RPC model.
On the server side, a @RabbitListener method can be used for request/reply.
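A minimal sketch of that RPC model, shown in Java; the calc.requests queue name is hypothetical, MyDTO is the type from the question, and a suitable message converter (e.g. Jackson) is assumed to be configured on both sides:
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ScheduleController {

    private final RabbitTemplate rabbitTemplate;

    public ScheduleController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @PostMapping(produces = "application/json")
    public MyDTO retrieveOptimizedSchedule(@RequestBody MyDTO request) {
        // Blocks until the reply arrives or the template's reply timeout expires.
        return (MyDTO) rabbitTemplate.convertSendAndReceive("calc.requests", request);
    }
}

// In the calculating application: the returned value is sent back to the
// caller's reply-to address automatically.
@Component
class CalculationListener {

    @RabbitListener(queues = "calc.requests")
    public MyDTO calculate(MyDTO request) {
        // ... perform the calculations ...
        return request;
    }
}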
What you are trying to do is not advised, for a couple of reasons:
1. A failure of the other application that consumes the RabbitMQ messages will result in requests being blocked on the controller end.
2. There is a limit on how many requests the server can hold open simultaneously.
What you can do is use a communication protocol other than REST for this specific part; a WebSocket may be an ideal solution. If not, you need two REST endpoints: one to submit the data and get back a request id, and another to poll periodically with that request id until the processed, completed response is available, as sketched below.
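A hypothetical sketch of that two-endpoint variant; the paths, the in-memory result store, and the way the RabbitMQ reply is recorded are all assumptions:
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ScheduleController {

    // Replies arriving from RabbitMQ would be stored here by the listener.
    private final Map<String, MyDTO> results = new ConcurrentHashMap<>();

    @PostMapping("/schedules")
    public String submit(@RequestBody MyDTO dto) {
        String requestId = UUID.randomUUID().toString();
        // Send dto to RabbitMQ, tagging the message with requestId so the
        // listener can file the eventual reply: results.put(requestId, reply)
        return requestId;
    }

    @GetMapping("/schedules/{requestId}")
    public ResponseEntity<MyDTO> poll(@PathVariable String requestId) {
        MyDTO result = results.get(requestId);
        return result == null
                ? ResponseEntity.status(HttpStatus.ACCEPTED).build() // still processing
                : ResponseEntity.ok(result);
    }
}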

Simple domain class-based Spring Kafka integration

I'm building a set of microservices in the Spring Boot framework, each of them integrating with Kafka for messaging. There appear to be three separate but related Spring libraries offering Kafka integration:
Spring Kafka
Spring Integration Kafka
Spring Cloud Stream
My goal is to abstract away the details of the underlying messaging system and provide a simple messaging service layer to my microservices to send and receive messages. I would like this service layer to work with my domain classes (POJOs) rather than have the microservices concerned with building Message instances. For example:
public interface MyMessagingService {
    void send(MyPojo obj);
    MyPojo receive();
}
Secondly, I would like to add Avro support, but first I will get it working with JSON.
To cut to the chase, there seem to be multiple ways of achieving this, which is very confusing, especially with the various Spring libraries available. What is the most straightforward way I can provide such a shared messaging layer to my microservices, where they only have to be concerned with domain classes?
I've come across @MessagingGateway from Spring Integration, which looked promising, but it seems tied to send-and-reply semantics, and my services won't be expecting a reply message from Kafka.
The examples I have looked at, some linked below, still seem to have to construct Message instances themselves. Is there a simpler way of doing this?
https://codenotfound.com/spring-kafka-spring-integration-example.html
https://www.baeldung.com/spring-cloud-stream
If your ". . goal is to abstract away the details of the underlying messaging system and provide a simple messaging service layer to my microservices to send and receive messages", then why not just use spring-cloud-stream?
The code developer doesn't even have to know that the code he/she writes will be part of some message system. For example,
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SampleStreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleStreamApplication.class);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
The above is a complete and fully functioning Spring Cloud Stream application that (in the context of the Kafka binder) will receive from the "input" topic and send to the "output" topic the value that was passed through the uppercase(..) function.
Yes, the type conversion is transparently handled for you, for both JSON and Avro.
Obviously there are some details, but we can certainly discuss them when you have a more concrete question. For now I would suggest going through some of the reference documentation first.
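Since the question is specifically about domain classes: with the same functional model, the function can consume and produce POJOs directly and the framework applies the conversion. A small sketch reusing the question's MyPojo type (the bean name and logic are hypothetical):
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingConfiguration {

    // Inbound payloads are deserialized into MyPojo before the function is
    // invoked; the returned MyPojo is serialized (e.g. to JSON) on the way out.
    @Bean
    public Function<MyPojo, MyPojo> process() {
        return pojo -> {
            // ... domain logic on the POJO, no Message construction needed ...
            return pojo;
        };
    }
}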

AWS kinesis consumer with Java and Spring

I want to write an AWS Kinesis stream consumer in a Spring Boot application, and I'm not sure whether Spring has native support for Kinesis or whether I have to use the Kinesis Client Library.
According to this blog post, org.springframework.integration:spring-integration-aws has it (a RELEASE is available in the Maven repo). However, this example on GitHub uses org.springframework.cloud:spring-cloud-starter-stream-kinesis, which is available only in the Spring snapshots repo as 1.0.0.BUILD-SNAPSHOT.
EDIT: The question is, where can I find an example of KinesisMessageDrivenChannelAdapter?
It's not clear what the question is, though.
If you are looking for a sample, there is indeed none yet. Right now, the solution we have in Spring is a Channel Adapter for Spring Integration, and that KinesisMessageDrivenChannelAdapter is exactly the consumer implementation for AWS Kinesis:
@SpringBootApplication
public class MyConfiguration {

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisInboundChannelAdapter(AmazonKinesis amazonKinesis) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(amazonKinesis, "MY_STREAM");
        adapter.setOutputChannel(kinesisReceiveChannel());
        return adapter;
    }

    // The channel the adapter publishes received records to (referenced but
    // not defined in the original snippet).
    @Bean
    public MessageChannel kinesisReceiveChannel() {
        return new DirectChannel();
    }
}
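To actually consume the records, something needs to subscribe to that output channel; for example, a hypothetical @ServiceActivator handler:
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class KinesisRecordConsumer {

    // Invoked for each record the adapter places on kinesisReceiveChannel.
    @ServiceActivator(inputChannel = "kinesisReceiveChannel")
    public void handle(Message<?> record) {
        System.out.println("Received from Kinesis: " + record.getPayload());
    }
}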
The sample you found on GitHub is for Spring Cloud Stream and is based on the Kinesis Binder, which indeed is still under development.

Spring-Kafka vs. Spring-Cloud-Stream (Kafka)

Using Kafka as a messaging system in a microservice architecture, what are the benefits of using spring-kafka vs. spring-cloud-stream + spring-cloud-starter-stream-kafka?
The Spring Cloud Stream framework supports more messaging systems and therefore has a more modular design. But what about functionality? Is there a gap between the functionality of spring-kafka and spring-cloud-stream + spring-cloud-starter-stream-kafka?
Which API is better designed?
Looking forward to reading your opinions.
Spring Cloud Stream with the Kafka binder relies on spring-kafka, so the former has all the functionality supported by the latter, but it is more heavyweight. Here are some points to help you make the choice:
If you might change Kafka for another message middleware in the future, Spring Cloud Stream should be your choice, since it hides the implementation details of Kafka.
If you want to integrate other message middleware with Kafka, you should go for Spring Cloud Stream, since its selling point is to make such integration easy.
If you want simplicity and cannot accept the performance overhead, choose spring-kafka.
If you plan to migrate to a public cloud service such as AWS Kinesis or Azure Event Hub, use Spring Cloud Stream, which is part of the Spring Cloud family.
Use Spring Cloud Stream when you are creating a system where one channel is used for input, does some processing, and sends to one output channel. In other words, it is more of an RPC system, replacing, say, RESTful API calls.
If you plan to build an event sourcing system, use spring-kafka, with which you can publish and subscribe to the same stream. This is something that Spring Cloud Stream does not allow you to do easily, as it disallows the following (see the spring-kafka sketch after the snippet):
public interface EventStream {

    String STREAM = "event_stream";

    @Output(EventStream.STREAM)
    MessageChannel publisher();

    @Input(EventStream.STREAM)
    SubscribableChannel stream();
}
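For contrast, a rough spring-kafka sketch of publishing and subscribing to the same topic (the topic and group names are hypothetical, and String payloads are assumed):
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class EventStreamComponent {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EventStreamComponent(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publish to the same topic this component also listens on.
    public void publish(String event) {
        kafkaTemplate.send("event_stream", event);
    }

    @KafkaListener(topics = "event_stream", groupId = "event-stream-consumer")
    public void consume(String event) {
        System.out.println("Consumed: " + event);
    }
}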
A few things that Spring Cloud Stream helps you avoid doing are:
* setting up the serializers and deserializers

Spring 5 Reactive WebSockets: recommended use

I've been learning a bit about Spring 5 WebFlux, reactive programming, and WebSockets. I've watched Josh Long's Spring Tips: Reactive WebSockets with Spring Framework 5. The code that sends data from server to client through a WebSocket connection uses a Spring Integration IntegrationFlow that publishes to a PublishSubscribeChannel. A custom MessageHandler subscribed to that channel takes each message, converts it to an object that is then converted to JSON and emitted to the FluxSink from the callback supplied to Flux.create(), and that Flux is used to send over the WebSocket connection.
I was wondering if the use of IntegrationFlow and PublishSubscribeChannel is the recommended way to push events from a background process to the client, or if this is just more convenient in this particular example (monitoring the file system). I'd think if you have control over the background process, you could have it emit to the FluxSink directly?
I'm thinking about use cases similar to the following:
a machine learning process whose progress is monitored
updates to the state of a game world that are sent to players
chat rooms / team collaboration software
...
What I've done in the past that has worked for me is to create a Spring Component that implements WebSocketHandler:
@Component
public class ReactiveWebSocketHandler implements WebSocketHandler {
Then in the handle method, Spring injects the WebSocketSession object:
@Override
public Mono<Void> handle(WebSocketSession session) {
Then create one or more Flux publishers that emit messages (WebSocketMessage) for the client:
final var output = session.send(Flux.merge(flux1, flux2));
Then you can zip up the incoming and outgoing Flux objects in a Mono, and Spring will take it from there:
return Mono.zip(incomingWebsocketMsgResponse.getWebSocketMsgFlux().then(),
        outputWithErrorMsgs)
    .then();
Example: https://howtodoinjava.com/spring-webflux/reactive-websockets/
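Putting those fragments together, a minimal self-contained handler might look like the following sketch (the outgoing ticker and the echo-style logging are placeholders):
import java.time.Duration;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketMessage;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Component
public class ReactiveWebSocketHandler implements WebSocketHandler {

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // Outgoing: push a message to the client once a second.
        Flux<WebSocketMessage> outgoing = Flux.interval(Duration.ofSeconds(1))
                .map(tick -> session.textMessage("tick " + tick));

        // Incoming: log whatever the client sends.
        Mono<Void> incoming = session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .doOnNext(System.out::println)
                .then();

        // Complete when both directions complete, mirroring the Mono.zip
        // pattern from the answer above.
        return Mono.zip(incoming, session.send(outgoing)).then();
    }
}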
Since this question was asked, Spring has introduced RSocket support. You might think of it like the WebSocket STOMP support that exists in Spring MVC, but much more powerful and efficient, supporting backpressure and advanced communication patterns at the protocol level.
For the use cases you're mentioning, I'd advise using RSocket, as you'd get a powerful programming model with @MessageMapping and all the expected support in Spring (codecs for JSON and CBOR, security, etc.).
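For instance, a minimal sketch of an RSocket request-stream endpoint (the route name and payload are hypothetical):
import java.time.Duration;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.stereotype.Controller;
import reactor.core.publisher.Flux;

@Controller
public class ProgressController {

    // Request-stream interaction: a client requesting the "progress" route
    // receives a stream of updates, with backpressure handled by RSocket.
    @MessageMapping("progress")
    public Flux<String> progress() {
        return Flux.interval(Duration.ofSeconds(1))
                .map(tick -> "progress update " + tick);
    }
}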
