Reactor on-demand Flux or a sink - Spring Boot

Consider an HTTP controller endpoint that accepts a request, validates it, and returns an ack, while in the background it does some "heavy work".
There are two approaches with Reactor (that I'm interested in) to achieve this:
First approach
@PostMapping(..)
fun acceptRequest(request: Request): Response {
    if (isValid(request)) {
        Mono.just(request)
            .flatMap(service::doHeavyWork)
            .subscribe(...)
        return Response(202)
    } else {
        return Response(400)
    }
}
Second approach
class Controller {
    private val service = ...
    private val sink = Sinks.many().unicast().onBackpressureBuffer<Request>()
    private val stream = sink.asFlux().flatMap(service::doHeavyWork).subscribe(..)

    fun acceptRequest(request: Request): Response {
        if (isValid(request)) {
            sink.tryEmitNext(request) // for simplicity, not handling errors
            return Response(202)
        } else {
            return Response(400)
        }
    }
}
Which approach is better/worse and why?
The reason I'm asking is that in Akka, building streams on demand was not very efficient, since the stream needed to be materialized every time, so it was better to use the "sink approach". I'm wondering whether the same applies to Reactor, or whether there are other advantages/disadvantages to these approaches.

I'm not too familiar with Akka, but building a reactive chain definitely doesn't incur huge overhead in Reactor - that's the "normal" way of handling a request. So I don't see the need for a separate sink as in your second approach - that just seems to add complexity for little gain. I'd therefore say the first approach is better.
That being said, subscribing yourself as you do in both examples generally isn't recommended - but this kind of "fire and forget" work is one of the few cases where it might make sense. There are just a couple of other potential caveats I'd raise that may be worth considering:
You call the work "heavy", and I'm not sure if that means it's CPU-heavy, just IO-heavy, or simply long-running. If it just takes a long time because it fires off a bunch of requests, that's no big deal. If it's CPU-heavy, however, that could cause an issue if you're not careful: you probably don't want to execute CPU-heavy tasks on your event loop threads. In that case, I'd probably create a separate scheduler backed by your own executor service, and then use subscribeOn() to offload those CPU-heavy tasks, as sketched below.
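To illustrate, here's a minimal sketch of that offloading, reusing the names from the question (the dedicated scheduler itself is my assumption, sized to the CPU count since the work is presumed CPU-bound):

import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers
import java.util.concurrent.Executors

// Dedicated pool for CPU-bound work; keeps the event loop threads free.
val heavyWorkScheduler = Schedulers.fromExecutorService(
    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()))

Mono.just(request)
    .flatMap(service::doHeavyWork)
    .subscribeOn(heavyWorkScheduler) // the whole chain subscribes on the dedicated pool
    .subscribe()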
Remember that the "fire and forget" pattern here really is "forget": you have absolutely no way of knowing whether the heavy task you've offloaded has worked, since you've essentially thrown that information away by self-subscribing. Depending on your use case that might be fine, but if the task is critical, or you need some kind of feedback when it fails, it's worth considering that this may not be the best approach.
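If you do self-subscribe, a cheap mitigation is to pass an error consumer so failures are at least logged rather than silently dropped. A sketch, assuming an SLF4J-style log field (not part of the original code):

Mono.just(request)
    .flatMap(service::doHeavyWork)
    .subscribe(
        { result -> log.debug("heavy work finished: {}", result) },
        { error -> log.error("heavy work failed for request {}", request, error) })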

Related

Quarkus Mutiny and Imperative vs Reactive

TL;DR: which is the better pattern: using Mutiny + imperative RESTEasy, or just using Reactive RESTEasy?
My understanding is that Mutiny allows me to hand Quarkus a longer-running action and have it handle the specifics of how that code gets run in context. Does using Reactive provide equal or greater benefit than Mutiny + imperative? If it's equal or better from a thread-handling perspective, then Reactive would be great, as it requires less code to maintain (creating Unis, etc.). However, if passing back a Uni is significantly better, then it might make sense to use that.
https://quarkus.io/guides/getting-started-reactive#imperative-vs-reactive-a-question-of-threads
Mutiny + imperative:
@GET
@Path("/getState")
@Produces(MediaType.APPLICATION_JSON)
public Uni<State> getState() throws InterruptedException {
    return this.serialService.getStateUni();
}
Reactive:
@GET
@Path("/getState")
@Produces(MediaType.APPLICATION_JSON)
public State getState() throws InterruptedException {
    return this.serialService.getState();
}
As always, it depends.
First, I recommend reading https://quarkus.io/blog/resteasy-reactive-smart-dispatch/, which explains the difference between the two approaches.
It's not about longer actions (async != longer); it's about dispatching strategies.
RESTEasy Reactive uses the I/O thread (event loop) to process the request, and switches to a worker thread only if the signature of the endpoint requires it. Using the I/O thread allows better concurrency (as you do not consume worker threads), reduces memory usage (because you do not need to create worker threads), and also tends to lower response times (as you save a few context switches).
Quarkus detects whether your method can be called on the I/O thread. The heuristics are based on the method signature (including annotations). To reuse the example from the question:
a method returning a State is considered blocking and so will be called on a worker thread
a method returning a Uni<State> is considered non-blocking and so will be called on the I/O thread
a method returning a State but explicitly annotated with @NonBlocking is considered non-blocking and so will be called on the I/O thread (see the sketch below)
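To make that third case concrete, here's a minimal sketch in Kotlin (which Quarkus also supports); getCachedState() is an invented method name, there to emphasize that the body of a @NonBlocking endpoint must never block:

import io.smallrye.common.annotation.NonBlocking
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.Produces
import javax.ws.rs.core.MediaType

@Path("/getState")
class StateResource(private val serialService: SerialService) {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @NonBlocking // plain return type, but dispatched on the I/O thread anyway
    fun getState(): State = serialService.getCachedState() // must not block!
}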
So, the question is, which dispatching strategy should you use?
It really depends on your application and context. If you do not expect many concurrent requests (it's hard to give a general threshold, but it's often between 200 and 500 req/sec), it is perfectly fine to use a blocking/imperative approach. If your application acts as an API Gateway with potential peaks of requests, non-blocking will provide better results.
Remember that even if you choose the imperative/blocking approach, RESTEasy Reactive provides many benefits. As most of the heavy lifting of request/response processing is done on the I/O thread, you get a faster application that uses less memory, for free.

How to implement a cache in a Vertx application?

I have an application that at some point has to perform REST requests towards another (non-reactive) system. It happens that a high number of requests are performed towards exactly the same remote resource (the resulting HTTP request is the same).
I was thinking to avoid flooding the other system by using a simple cache in my app.
I am in full control of the cache and know exactly when to invalidate it, so that is not an issue. Without this cache, I run into other problems, like connection timeouts or read timeouts, with the other system struggling under the high load.
Map<String, Future<Element>> cache = new ConcurrentHashMap<>();

Future<Element> lookupElement(String id) {
    String key = createKey(id);
    return cache
        .computeIfAbsent(key, k -> performRESTRequest(id))
        .onSuccess(element -> {
            // some further processing
        });
}
As mentioned, lookupElement() is invoked from different worker threads with the same id.
The first thread enters computeIfAbsent and performs the remote request, while the other threads are briefly blocked by the ConcurrentHashMap.
However, when the first thread finishes, the waiting threads all receive the same Future object. Imagine 30 "clients" reacting to the same Future instance.
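A small sketch of that fan-out, written against the Vert.x core API from Kotlin (the id and the println calls are only illustrative): every caller gets the same cached Future and attaches its own handler, and only the first call fires the remote request.

import io.vertx.core.Future

val shared: Future<Element> = lookupElement("42")
repeat(30) { i ->
    shared
        .onSuccess { element -> println("client $i got $element") }
        .onFailure { err -> println("client $i failed: $err") }
}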
In my case this works fine and fast up to a particular load, but when the processing input of the app increases, resulting in even more invocations of lookupElement(), my app becomes slower and slower (although it reports 300% CPU usage, it logs slowly) until it starts throwing OutOfMemoryError.
My questions are:
Do you see any Vert.x-specific issue with this approach?
Is there a more Vert.x-friendly caching approach I could use when there is high concurrency on the same cache key?
Is it good practice to cache the Future?
It's a bit unusual to answer my own question, but I managed to solve the problem.
I was having two dilemmas:
Are ConcurrentHashMap and computeIfAbsent() appropriate for Vert.x?
Is it safe to cache a Future object?
I am using this caching approach in two places in my app: one protecting the REST endpoint, and one protecting a more complex database query.
What was happening is that, for the database query, there were up to 1300 "clients" waiting for a response, i.e. 1300 listeners waiting for onSuccess() of the same Future. When that Future completed, strange things were happening: some kind of thread strangulation.
I did a bit of refactoring to eliminate this concurrency on the same resource/key, but I kept both caches, and things went back to normal.
In conclusion, I think my caching approach is safe as long as there is enough spreading, in other words, as long as there isn't such high concurrency on a single resource. Having 20-30 listeners on the same Future works just fine.

How to avoid using Kotlin Coroutines' GlobalScope in a Spring WebFlux controller that performs long-running computations

I have a REST API implemented with Spring WebFlux and Kotlin, with an endpoint that is used to start a long-running computation. As it's not really elegant to just let the caller wait until the computation is done, it should immediately return an ID which the caller can use to fetch the result on a different endpoint once it's available. The computation is started in the background and should simply complete whenever it's ready; I don't really care exactly when it's done, as it's the caller's job to poll for it.
As I'm using Kotlin, I thought the canonical way of solving this is to use coroutines. Here's a minimal example of what my implementation looks like (using Spring's Kotlin DSL instead of traditional controllers):
import org.springframework.web.reactive.function.server.coRouter
// ...
fun route() = coRouter {
    POST("/big-computation") { request: ServerRequest ->
        val params = request.awaitBody<LongRunningComputationParams>()
        val runId = GlobalResultStorage.prepareRun(params)
        coroutineScope {
            launch(Dispatchers.Default) {
                GlobalResultStorage.addResult(runId, longRunningComputation(params))
            }
        }
        ok().bodyValueAndAwait(runId)
    }
}
This doesn't do what I want, though, as the outer Coroutine (the block after POST("/big-computation")) waits until its inner Coroutine has finished executing, thus only returning runId after it's no longer needed in the first place.
The only possible way I could find to make this work is by using GlobalScope.launch, which spawns a Coroutine without a parent that awaits its result, but I read everywhere that you are strongly discouraged from using it. Just to be clear, the code that works would look like this:
POST("/big-computation") { request: ServerRequest ->
val params = request.awaitBody<LongRunningComputationParams>()
val runId = GlobalResultStorage.prepareRun(params);
GlobalScope.launch {
GlobalResultStorage.addResult(runId, longRunningComputation(params))
}
ok().bodyValueAndAwait(runId)
}
Am I missing something painfully obvious that would make my example work using proper structured concurrency or is this really a legitimate use case for GlobalScope? Is there maybe a way to launch the Coroutine of the long running computation in a scope that is not attached to the one it's launched from? The only idea I could come up with is to launch both the computation and the request handler from the same coroutineScope, but because the computation depends on the request handler, I don't see how this would be possible.
Thanks a lot in advance!
Maybe others won't agree with me, but I think this whole aversion to GlobalScope is a little exaggerated. I often have the impression that some people don't really understand what the problem with GlobalScope is, and they replace it with solutions that share similar drawbacks or are effectively the same. But well, at least they don't use the evil GlobalScope anymore...
Don't get me wrong: GlobalScope is bad. Especially because it is just too easy to use, so it is tempting to overuse it. But there are many cases when we don't really care about its disadvantages.
The main goals of structured concurrency are:
Automatically waiting for subtasks, so we don't accidentally move on before our subtasks finish.
Cancellation of individual jobs.
Cancellation/shutdown of the service/component that schedules background tasks.
Propagation of failures between asynchronous tasks.
These features are critical for building reliable concurrent applications, but there are surprisingly many cases where none of them really matters. Take your example: if your request handler runs for the whole lifetime of the application, you need neither waiting for subtasks nor shutting down. You don't want to propagate failures. And cancelling individual subtasks is not really applicable here, because whether we use GlobalScope or a "proper" solution, we do it exactly the same way: by storing the task's Job somewhere.
Therefore, I would say that the main reasons why GlobalScope is discouraged do not apply to your case.
Having said that, I still think it may be worth implementing the solution that is usually suggested as a proper replacement for GlobalScope. Just create a property with your own CoroutineScope and use it to launch coroutines:
private val scope = CoroutineScope(Dispatchers.Default)

fun route() = coRouter {
    POST("/big-computation") { request: ServerRequest ->
        ...
        scope.launch {
            GlobalResultStorage.addResult(runId, longRunningComputation(params))
        }
        ...
    }
}
You won't get too much from it. It won't help you with leaking resources, and it won't make your code more reliable by itself. But at least it keeps background tasks somewhat organized: it becomes technically possible to determine who owns them, you can configure all background tasks in one place (for example, provide a CoroutineName or switch to another thread pool), you can count how many are active at the moment, and it makes it easier to add graceful shutdown should you need it (a sketch follows below).
But most importantly: it is cheap to implement. You won't get much from it, but it won't cost you much either, so why not.
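For completeness, a hedged sketch of what such a scope could look like with a few of those extras wired in; the class name and shutdown wiring are my own illustration around the names from the question:

import kotlinx.coroutines.CoroutineName
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch

class BackgroundComputations {
    // SupervisorJob: one failed computation doesn't cancel its siblings.
    private val scope = CoroutineScope(
        SupervisorJob() + Dispatchers.Default + CoroutineName("big-computations"))

    fun submit(runId: String, params: LongRunningComputationParams) {
        scope.launch {
            GlobalResultStorage.addResult(runId, longRunningComputation(params))
        }
    }

    // Graceful shutdown: cancel everything still running, in one place.
    fun shutdown() = scope.cancel()
}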

Synchronous calls in Akka / actor model

I've been looking into Akka lately and it looks like a great framework for building scalable servers on the JVM. However, most libraries on the JVM are blocking (e.g. JDBC), so don't you lose the performance benefits of an event-based model because your threads will always be blocked? Does Akka do something to get around this? Or is it just something you have to live with until we get more non-blocking libraries on the JVM?
Have a look at CQRS; it greatly improves scalability by separating reads from writes. This means you can scale your reads separately from your writes.
For the kinds of blocking-IO issues you mentioned, Scala provides a language-embedded solution that fits perfectly: Futures. For example:
import scala.concurrent.{ExecutionContext, Future}
implicit val ec: ExecutionContext = ExecutionContext.global

def expensiveDBQuery(key: Key): Future[Result] = Future {
  // ...query the database
}

val dbResult: Future[Result] =
  expensiveDBQuery(...) // non-blocking call
dbResult returns immediately from the function call; the Result will be available "in the future". The cool part about a Future is that you can think of it like any old collection, except you can never call .size on it. Other than that, all the collection-ish functions (e.g. map, filter, foreach, ...) are fair game. Simply think of dbResult as a list of Results. What would you do with such a list?
dbResult.map(_.getValues)
        .filter(values => someTestOnValues(values))
        ...
That sequence of calls sets up a computation pipeline that is invoked whenever the Result actually comes back from the database. You can specify a sequence of computation steps before the data has arrived, all asynchronously.

Transferring typical 3-tier architecture to actors

This question has bothered me for some time now (I hope I'm not the only one). I want to take a typical 3-tier Java EE app and see what it could look like implemented with actors. I would like to find out whether such a transition actually makes sense, and how I could profit from it if it does (maybe performance, better architecture, extensibility, maintainability, etc.).
Here are a typical Controller (presentation), Service (business logic) and DAO (data):
trait UserDao {
  def getUsers(): List[User]
  def getUser(id: Int): User
  def addUser(user: User)
}

trait UserService {
  def getUsers(): List[User]
  def getUser(id: Int): User
  def addUser(user: User): Unit

  @Transactional
  def makeSomethingWithUsers(): Unit
}

@Controller
class UserController {
  @Get
  def getUsers(): NodeSeq = ...

  @Get
  def getUser(id: Int): NodeSeq = ...

  @Post
  def addUser(user: User): Unit = { ... }
}
You can find something like this in many Spring applications. We can take a simple implementation that has no shared state and therefore no synchronized blocks: all state is in the database and the application relies on transactions. Service, controller and DAO each have only one instance. So for each request the application server will use a separate thread, and the threads will not block each other (though they will be blocked by DB IO).
Suppose we try to implement similar functionality with actors. It could look like this:
sealed trait UserActions
case class GetUsers() extends UserActions
case class GetUser(id: Int) extends UserActions
case class AddUser(user: User) extends UserActions
case class MakeSomethingWithUsers() extends UserActions
val dao = actor {
  case GetUsers() => ...
  case GetUser(userId) => ...
  case AddUser(user) => ...
}

val service = actor {
  case GetUsers() => ...
  case GetUser(userId) => ...
  case AddUser(user) => ...
  case MakeSomethingWithUsers() => ...
}

val controller = actor {
  case Get("/users") => ...
  case Get("/user", userId) => ...
  case Post("/add-user", user) => ...
}
I don't think it matters much here how the Get() and Post() extractors are implemented. Suppose I write a framework for this. I can send a message to the controller like this:
controller !! Get("/users")
The same thing would be done by the controller and the service. In this case the whole workflow would be synchronous. Even worse, I could process only one request at a time (in the meantime, all other requests would land in the controller's mailbox). So I need to make it all asynchronous.
Is there any elegant way to perform each processing step asynchronously in this setup?
As far as I understand, each tier should somehow save the context of the message it receives and then send a message to the tier beneath. When the tier beneath replies with some result message, I should be able to restore the initial context and reply with the result to the original sender. Is this correct?
Moreover, at the moment I have only one actor instance per tier. Even if they work asynchronously, I can still only process one controller, one service and one DAO message in parallel. This means I need more actors of the same type, which leads me to a LoadBalancer for each tier. It also means that if I have a UserService and an ItemService, I have to load-balance each of them separately.
I have a feeling that I'm misunderstanding something. All the needed configuration seems overcomplicated. What do you think?
(PS: It would also be very interesting to know how DB transactions fit into this picture, but I think that's overkill for this thread.)
Avoid asynchronous processing unless and until you have a clear reason for doing it. Actors are lovely abstractions, but even they don't eliminate the inherent complexity of asynchronous processing.
I discovered that truth the hard way. I wanted to insulate the bulk of my application from the one real point of potential instability: the database. Actors to the rescue! Akka actors in particular. And it was awesome.
Hammer in hand, I then set about bashing every nail in view. User sessions? Yes, they could be actors too. Um... how about that access control? Sure, why not! With a growing sense of unease, I turned my hitherto simple architecture into a monster: multiple layers of actors, asynchronous message passing, elaborate mechanisms to deal with error conditions, and a serious case of the uglies.
I backed out, mostly.
I retained the actors that were giving me what I needed - fault-tolerance for my persistence code - and turned all of the others into ordinary classes.
May I suggest that you carefully read the Good use case for Akka question/answers? That may give you a better understanding of when and how actors will be worthwhile. Should you decide to use Akka, you might like to view my answer to an earlier question about writing load-balanced actors.
Just riffing, but...
I think if you want to use actors, you should throw away all previous patterns and dream up something new, then maybe re-incorporate the old patterns (controller, dao, etc) as necessary to fill in the gaps.
For instance, what if each User is an individual actor sitting in the JVM, or via remote actors, in many other JVMs. Each User is responsible for receiving update messages, publishing data about itself, and saving itself to disk (or a DB or Mongo or something).
I guess what I'm getting at is that all your stateful objects can be actors just waiting for messages to update themselves.
(For HTTP (if you wanted to implement that yourself), each request spawns an actor that blocks until it gets a reply (using !? or a future), which is then formatted into a response. You can spawn a LOT of actors that way, I think.)
When a request comes in to change the password for user "foo@example.com", you send a message: 'Foo@Example.Com' ! ChangePassword("new-secret").
Or you have a directory process which keeps track of the locations of all User actors. The UserDirectory can itself be an actor (one per JVM) which receives messages about which User actors are currently running and what their names are, relays messages to them from the Request actors, and delegates to other federated Directory actors. You'd ask the UserDirectory where a User is, then send the message directly. The UserDirectory actor is responsible for starting a User actor if one isn't already running. The User actor recovers its state, then accepts updates.
Etc, and so on.
It's fun to think about. Each User actor, for instance, can persist itself to disk, time out after a certain time, and even send messages to Aggregation actors. For instance, a User actor might send a message to a LastAccess actor. Or a PasswordTimeoutActor might send messages to all User actors, telling them to require a password change if their password is older than a certain date. User actors can even clone themselves onto other servers, or save themselves into multiple databases.
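Just to make the shape of that idea concrete, a hedged sketch using Akka Typed's Java API from Kotlin (all names here are invented for illustration; the original riff predates Akka Typed):

import akka.actor.typed.Behavior
import akka.actor.typed.javadsl.Behaviors

sealed interface UserCommand
data class ChangePassword(val newPassword: String) : UserCommand

fun hash(password: String): String = password.reversed() // placeholder, not a real hash

// Each user is its own actor: state lives in the function parameters, and
// handling a message returns the next behavior with the updated state.
fun user(email: String, passwordHash: String?): Behavior<UserCommand> =
    Behaviors.receiveMessage { msg ->
        when (msg) {
            is ChangePassword -> user(email, hash(msg.newPassword))
        }
    }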
Fun!
Large compute-intensive atomic transactions are tricky to pull off, which is one reason why databases are so popular. So if you are asking whether you can transparently and easily use actors to replace all the transactional and highly scalable features of a database (whose power the Java EE model leans on very heavily), the answer is no.
But there are some tricks you can play. For example, if one actor seems to be a bottleneck, but you don't want to go to the effort of creating a dispatcher/worker-farm structure, you may be able to move the intensive work into futures:
val service = actor {
  ...
  case m: MakeSomethingWithUsers =>
    // run the expensive call on another thread; reply to the sender when done
    Futures.future { sender ! myExpensiveOperation(m) }
}
This way, the really expensive tasks get spawned off in new threads (assuming you don't need to worry about atomicity, deadlocks and so on, which you may, but solving those problems is not easy in general) and messages get sent along to wherever they should go regardless.
For transactions with actors, you should take a look at Akka's "transactors", which combine actors with STM (software transactional memory): http://doc.akka.io/transactors-scala
It's pretty great stuff.
As you said, !! = blocking = bad for scalability and performance, see this:
Performance between ! and !!
The need for transactions usually arises when you are persisting state instead of events.
Please have a look at CQRS, DDDD (Distributed Domain-Driven Design) and Event Sourcing because, as you say, we still don't have a distributed STM.
