Problem:
I would like to unblock my Ktor response from portions of the code that take longer and can be executed asynchronously after the fact.
The core business logic for the REST response should not wait for async tasks such as sending an email or a Kafka event.
What I have tried:
I get the desired results with this code example. I can see that the REST response is returned immediately and does not wait on the delayed tasks (email and Kafka message).
I am unclear at this point whether I need to keep these lines inside the runBlocking block:
val patient = PatientService.addPatient()
//Return REST response
call.respond(patient)
Question
If I keep them outside the runBlocking block, the entire REST response is blocked until the email and Kafka code completes.
Is this the right approach to offload certain delayed code execution from the main REST API response in Ktor?
fun Route.patientRoute() {
    route("/patient") {
        post("") {
            runBlocking {
                val patient = PatientService.addPatient() // Business logic to add a new patient
                launch { // unblock the REST response from certain async tasks
                    sendKafkaEvent()
                    sendEmail()
                }
                call.respond(patient) // Return REST response
            }
        }
    }
}

suspend fun sendEmail() {
    delay(5000L) // Mock some delay in the operation
}

suspend fun sendKafkaEvent() {
    delay(5000L) // Mock some delay in the operation
}
I would first launch the asynchronous tasks and then call PatientService.addPatient(), passing its returned value to call.respond.
Additionally, you can specify a different dispatcher for your tasks.
post("") {
launch(Dispatchers.IO) {
sendEmail()
}
launch(Dispatchers.IO) {
sendKafkaEvent()
}
call.respond(PatientService.addPatient())
}
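One caveat worth adding here (my own note, not part of the original answers): Ktor route handlers are already suspending, so runBlocking is unnecessary, and a bare launch starts the coroutine in the call's scope, where it can be cancelled if the client disconnects before the tasks finish. A sketch that detaches the work from the call, assuming a Ktor version where Application implements CoroutineScope (true in recent releases):

post("") {
    val patient = PatientService.addPatient()
    // Launch in the application scope so the tasks outlive this call;
    // Dispatchers.IO keeps the blocking work off the request threads.
    call.application.launch(Dispatchers.IO) {
        sendKafkaEvent()
        sendEmail()
    }
    call.respond(patient) // returns immediately; background tasks keep running
}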
Related
I have two data sources, each returning a Mono:
class CacheCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}

class MasterCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to immediately return a success response to the caller; however, I still want my service to continue processing the result of the other Mono stream so that, if an error was encountered, it can be logged.
The problem seems to be that as soon as a value is returned to the controller, a cancel signal is propagated back through the stream by Spring WebFlux and, thus, no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"));
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"));
    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but then whichever source is lagging behind is cancelled, meaning that any error is never logged. I've also tried scheduling these two sources on their own Schedulers, but that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?
You can achieve that by transforming your operators into "hot" publishers using the share() operator:
The first subscriber triggers the upstream operator, and additional subscribers get the result cached from the first subscription:
Further Subscriber will share [...] the same result.
Once a second subscription has been made, the publisher is no longer cancellable:
It's worth noting this is an un-cancellable Subscription.
So, to achieve your requirement:
Apply share() to each of your publishers
Launch a subscription on the shared publishers to trigger processing
Use the shared publishers in your pipeline (here, firstWithValue)
Example:
import java.time.Duration;

import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono succeeding quickly
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error.
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                .<String>then(Mono.error(new Exception("ERROR !")))
                .doOnCancel(() -> System.out.println("CANCELLED"))
                .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result
        Mono.firstWithValue(sharedQuick, sharedLong)
            .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for the subscriptions to finish.
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message has been propagated properly and that the upstream publisher has not been cancelled.
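Mapped back onto the controller from the question, a sketch could look like the following (same names as in the question; Mono#share() is used exactly as in the sample above):

public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"))
            .share();
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"))
            .share();

    // Trigger both calls eagerly; errors are already logged by doOnError,
    // and the empty error consumer prevents onErrorDropped noise.
    cacheCreate.subscribe(v -> {}, err -> {});
    masterCreate.subscribe(v -> {}, err -> {});

    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}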
I have an API call that I want to kick off a long-running job and then return a 200 OK. Currently it kicks off the job and moves on, but once the initial function finishes what it needs to do, it still seems to wait until the coroutine has finished. I'm sure this is related to my relatively new understanding of Kotlin coroutines.
fun apiCall(): ResponseEntity<Void> {
    log.info("started")
    longJob()
    log.info("finished")
    return ResponseEntity.ok().build()
}

fun longJob() {
    runBlocking {
        launch {
            // do stuff...
        }
    }
}
So basically I expected to see the logs print, the longJob kick off, and then a 200 in Postman. I'm actually getting both logs printed and the job kicked off, but I don't see my 200 OK until the job finishes.
If I understand correctly, you want to launch longJob in the background and return a 200 OK status without waiting for it to finish. If that is the case, then you can't use runBlocking here: it blocks the current thread until all jobs launched in it have finished. You can instead create a CoroutineScope and launch-and-forget the long-running task. Sample code:
val scope = CoroutineScope(Dispatchers.IO) // or CoroutineScope(Dispatchers.IO + SupervisorJob())

fun apiCall(): ResponseEntity<Void> {
    log.info("started")
    scope.launch { longJob() }
    log.info("finished")
    return ResponseEntity.ok().build()
}
In this sample the logs "started" and "finished" will be printed before longJob() starts executing.
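One addition worth hedging (my note, not part of the original answer): with launch-and-forget, an exception thrown by longJob() is delivered to the scope rather than to the caller, and by default it is only printed to stderr. If failures should be logged, a CoroutineExceptionHandler can be installed on the scope. A sketch, where log stands in for whatever logger the service uses:

import kotlinx.coroutines.*
import org.springframework.http.ResponseEntity

// Handler receives exceptions from fire-and-forget jobs in this scope.
val handler = CoroutineExceptionHandler { _, e ->
    log.error("longJob failed", e)
}

// SupervisorJob: one failed job does not cancel the scope for later jobs.
val scope = CoroutineScope(Dispatchers.IO + SupervisorJob() + handler)

fun apiCall(): ResponseEntity<Void> {
    scope.launch { longJob() } // fire and forget; failures reach `handler`
    return ResponseEntity.ok().build()
}

suspend fun longJob() {
    // do stuff... (no runBlocking needed any more)
}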
I am using MassTransit with RabbitMQ. As part of a deployment procedure, at some point I need my service to disconnect and stop receiving any messages.
Assuming that I won't need the bus until the next restart of the service, will it be OK to use bus.StopAsync()?
Is there a way to get the list of endpoints and then stop them from listening?
You should StopAsync the bus, and then when ready, call StartAsync to bring it back up (or start it at the next service restart).
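A minimal sketch of that sequence, assuming you kept a reference to the IBusControl returned when the bus was created (the method name here is illustrative):

public static async Task PauseForDeploymentAsync(IBusControl busControl)
{
    // Disconnects and stops all receive endpoints.
    await busControl.StopAsync();

    // ... deployment work; the service receives no messages in the meantime ...

    // Reconnects and resumes receiving (or omit this and start the bus
    // at the next service restart instead).
    await busControl.StartAsync();
}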
To stop receiving messages without stopping the bus, I needed a solution that prevents the consume pipeline from consuming any type of message. I tried observers, but without success. I ended up with a custom consume filter.
The filter part looks like this:
public class ConsumersBlockingFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    public void Probe(ProbeContext context)
    {
        context.CreateFilterScope("messageFilter");
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Check if the service is degraded (true for this demo)
        var isServiceDegraded = true;
        if (isServiceDegraded)
        {
            // Suspend the message for 5 seconds
            await Task.Delay(TimeSpan.FromMilliseconds(5000), context.CancellationToken);
            if (!context.CancellationToken.IsCancellationRequested)
            {
                // Republish the message
                await context.Publish(context.Message);
                Console.WriteLine($"Message {context.MessageId} has been republished");
            }
            // NotifyConsumed so the message is not flagged as skipped
            await context.NotifyConsumed(TimeSpan.Zero, "messageFilter");
        }
        else
        {
            // Call the next filter in the pipe
            await next.Send(context);
        }
    }
}
The main idea is to delay with the cancellation token and then republish the message. After that, call context.NotifyConsumed (so the message is not flagged as skipped) and return normally without invoking the next pipeline filters.
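For completeness, the filter still has to be plugged into the consume pipeline. A sketch using MassTransit's container integration (UseConsumeFilter is the scoped-filter registration API, though exact overloads vary across versions; PatientConsumer is only an illustrative consumer name):

services.AddMassTransit(x =>
{
    x.AddConsumer<PatientConsumer>(); // illustrative consumer

    x.UsingRabbitMq((context, cfg) =>
    {
        // Apply the blocking filter to every consumed message type.
        cfg.UseConsumeFilter(typeof(ConsumersBlockingFilter<>), context);
        cfg.ConfigureEndpoints(context);
    });
});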
I have an endpoint streamed as in the sample code block. When streaming, I call an async method through streamHelper.getStreamSuspendCount(). I stop this async method from a session-scoped variable when the state changes. But I cannot access this async method when the browser is closed and the session is terminated. How can I access this scope when the session is closed?
@RequestMapping(value = "/stream/{columnId}/suspendCount", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<Integer> suspendCount(@PathVariable String columnId) {
    ColumnObject columnObject = streamHelper.findColumnObjectInListById(columnId);
    return streamHelper.getStreamSuspendCount(columnObject);
}
getStreamSuspendCount(ColumnObject columnObject) {
    ...
    // async flux
    Flux<?> newFlux = beSubscribeFlow.get(i);
    Disposable disposable = newFlux.subscribe();
    beDisposeFlow.add(disposable); // my session-scoped variable; if the state changes, I dispose() it
    ...
    return Flux.fromStream(Stream.generate(() -> columnObject.getPendingObject().size()))
            .distinctUntilChanged()
            .doOnNext(i -> System.out.println(i));
}
I think part of the problem is that you are attempting to get hold of a Disposable that you want to dispose at the end of the session. But in doing so, you are subscribing to the sequence yourself. Spring Framework will also subscribe to the Flux returned by getStreamSuspendCount, and it is THAT subscription that needs to be cancelled for the SSE client to get notified.
Now how to achieve this? What you need is a sort of "valve" that will cancel its source upon receiving an external signal. This is what takeUntilOther(Publisher<?>) does.
So now you need a Publisher<?> that you can tie to the session lifecycle (more specifically the session close event): as soon as it emits, takeUntilOther will cancel its source.
Two options there:
the session close event is exposed in a listener-like API: use Mono.create
you really need to manually trigger the cancel: use MonoProcessor.create() and when the time comes, push any value through it
Here are simplified examples with made up APIs to clarify:
Mono.create:
return theFluxForSSE.takeUntilOther(Mono.create(sink ->
        sessionEvent.registerListenerForClose(closeEvent -> sink.success(closeEvent))
));
MonoProcessor
MonoProcessor<String> processor = MonoProcessor.create();
beDisposeFlow.add(processor); // make it available to your session scope?
return theFluxForSSE.takeUntilOther(processor); //Spring will subscribe to this
Let's simulate the session close with a scheduled task:
Executors.newSingleThreadScheduledExecutor().schedule(() ->
        processor.onNext("STOP") // that's the key part: manually sending data through the processor to signal takeUntilOther
        , 2, TimeUnit.SECONDS);
Here is a simulated unit test example that you can run to better understand what happens:
@Test
public void simulation() {
    Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));
    MonoProcessor<String> processor = MonoProcessor.create();
    Executors.newSingleThreadScheduledExecutor().schedule(() -> processor.onNext("STOP"), 2, TimeUnit.SECONDS);

    theFluxForSSE.takeUntilOther(processor.log())
            .log()
            .blockLast();
}
I have the following middleware code:
public class UoWMiddleware : OwinMiddleware
{
    readonly IUoW uow;

    public UoWMiddleware(OwinMiddleware next, IUoW uow) : base(next)
    {
        this.uow = uow;
    }

    public override async Task Invoke(IOwinContext context)
    {
        try
        {
            await Next.Invoke(context);
        }
        catch
        {
            uow.RollBack();
            throw;
        }
        finally
        {
            if (uow.Status == Base.SharedDomain.UoWStatus.Running)
            {
                var response = context.Response;
                if (response.StatusCode < 400)
                {
                    Thread.Sleep(1000);
                    uow.Commit();
                }
                else
                    uow.RollBack();
            }
        }
    }
}
Occasionally we observe in Fiddler that the response returns to the client before uow.Commit() is called. For example, we put a breakpoint on uow.Commit and see the response returned to the client while we are still waiting on the breakpoint. This is somewhat unexpected: I would think the response would strictly be returned after the Invoke method ends. Am I missing something?
In Owin/Katana the response body (and, of course, the headers) are sent to the client at the precise moment when a middleware calls Write on the Response object of the IOwinContext.
This means that if your next middleware is writing the response body your client will receive it before your server-side code returns from the call to await Next.Invoke().
That's how Owin is designed; it relies on the fact that the Response stream may be written only once in a single request/response life-cycle.
Looking at your code, I can't see any major problem in such behavior, because you are simply reading the response headers after the response is written to the stream, and thus not altering it.
If, instead, you need to alter the response written by the next middleware, or you strictly need to write the response after executing further server-side logic, then your only option is to buffer the response body into a memory stream and then copy it into the real response stream (as per this answer) when you are ready.
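A simplified sketch of that buffering idea, applied to the middleware above (error handling and the rollback branch omitted, so this is not production-ready):

public override async Task Invoke(IOwinContext context)
{
    var realBody = context.Response.Body;
    using (var buffer = new MemoryStream())
    {
        // Downstream middleware now writes into memory, not the socket.
        context.Response.Body = buffer;
        await Next.Invoke(context);

        // Server-side logic runs before the client sees any bytes.
        uow.Commit();

        // Copy the buffered body to the real response stream.
        buffer.Seek(0, SeekOrigin.Begin);
        await buffer.CopyToAsync(realBody);
        context.Response.Body = realBody;
    }
}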
I have successfully tested this approach in a different use case (but sharing the same concept) that you may find looking at this answer: https://stackoverflow.com/a/36755639/3670737
Reference:
Changing the response object from OWIN Middleware