Immediately return the first emitted value from two Monos while continuing to process the other asynchronously

I have two data sources, each returning a Mono:
class CacheCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
class MasterCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to return a success response to the caller immediately. However, I still want my service to continue processing the result of the other Mono, so that if an error was encountered it can be logged.
The problem seems to be that as soon as a value is returned to the controller, Spring WebFlux propagates a cancel signal back through the stream, and thus no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"));
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"));
    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but whichever source is lagging behind is then cancelled, meaning its error is never logged. I've also tried scheduling the two sources on their own Schedulers, but that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?

You can achieve that by transforming your publishers into "hot" publishers using the share() operator:
The first subscriber triggers the upstream publisher, and additional subscribers get back the result cached from the first one:
Further Subscriber will share [...] the same result.
Once a second subscription has been made, the publisher is no longer cancellable:
It's worth noting this is an un-cancellable Subscription.
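To see the problem in isolation first, here is a minimal sketch without share(), using the same kind of mocks as the sample further below: the slower mono is simply cancelled once the quicker one wins, so its error is never observed.
import java.time.Duration;
import reactor.core.publisher.Mono;

public class TestCancellableMono {
    public static void main(String[] args) throws Exception {
        Mono<String> quick = Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
        Mono<String> slow = Mono.delay(Duration.ofSeconds(1))
                .<String>then(Mono.error(new Exception("ERROR !")))
                .doOnCancel(() -> System.out.println("CANCELLED"))
                .doOnError(err -> System.out.println(err.getMessage()));

        // Without share(), firstWithValue cancels the loser as soon as a value arrives
        Mono.firstWithValue(quick, slow)
                .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        Thread.sleep(2000); // prints "SUCCESS !" then "CANCELLED", never "ERROR !"
    }
}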
So, to achieve your requirement:
Apply share() on each of your publishers
Launch a subscription on the shared publishers to trigger processing
Use the shared publishers in your pipeline (here firstWithValue)
Example:
import java.time.Duration;
import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono succeeding quickly
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                .<String>then(Mono.error(new Exception("ERROR !")))
                .doOnCancel(() -> System.out.println("CANCELLED"))
                .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result
        Mono.firstWithValue(sharedQuick, sharedLong)
                .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for processing to end
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message has been propagated properly and that the upstream publisher has not been cancelled.
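Applied back to the method from the question, the fix might look like this sketch (names are taken from the question; the empty subscribers are just there to eagerly trigger and drain both calls):
public Flux<Entity> createCustomer(final Customer customer) {
    // share() turns each Mono into a hot publisher whose result is cached
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"))
            .share();
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"))
            .share();

    // Eager subscriptions keep both calls running even after firstWithValue
    // has emitted; failures are already logged by doOnError above
    cacheCreate.subscribe(v -> { }, err -> { });
    masterCreate.subscribe(v -> { }, err -> { });

    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}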

Related

Ktor REST response and async code execution

Problem:
I would like to unblock my Ktor response from portions of the code that take longer and can be executed asynchronously after the fact.
The core business logic for the REST response should not wait for async tasks such as sending an email, a Kafka event, etc.
What I have tried:
I get the desired results with this code example: the REST response is returned immediately and does not wait for the delayed tasks (email and Kafka message).
I am unclear at this point whether I need to keep these lines inside the runBlocking block:
val patient = PatientService.addPatient()
// Return REST response
call.respond(patient)
Question:
If I keep them out of the runBlocking block, the entire REST response is blocked until the email and Kafka event code completes.
Is this the right approach to offload certain delayed code execution from the main REST API response in Ktor?
fun Route.patientRoute() {
    route("/patient") {
        post("") {
            runBlocking {
                val patient = PatientService.addPatient() // Business logic to add a new patient
                launch { // unblock the REST response from certain async tasks
                    sendKafkaEvent()
                    sendEmail()
                }
                call.respond(patient) // Return REST response
            }
        }
    }
}

suspend fun sendEmail() {
    delay(5000L) // Mock some delay in the operation
}

suspend fun sendKafkaEvent() {
    delay(5000L) // Mock some delay in the operation
}
I would first launch the asynchronous tasks and then call PatientService.addPatient(), passing its returned value to call.respond.
Additionally, you can specify a different dispatcher for your tasks:
post("") {
launch(Dispatchers.IO) {
sendEmail()
}
launch(Dispatchers.IO) {
sendKafkaEvent()
}
call.respond(PatientService.addPatient())
}

MassTransit: how to disconnect from RabbitMQ

I am using MassTransit with RabbitMQ. As part of a deployment procedure, at some point I need my service to disconnect and stop receiving any messages.
Assuming that I won't need the bus until the next restart of the service, is it OK to use bus.StopAsync()?
Is there a way to get a list of endpoints and then stop them from listening?
You should StopAsync the bus, and then when ready, call StartAsync to bring it back up (or start it at the next service restart).
To stop receiving messages without stopping the bus, I needed a solution that prevents the consume message pipeline from consuming any type of message. I tried observers, but without success. My solution ended up being a custom consuming message filter.
The filter part looks like this
public class ConsumersBlockingFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    public void Probe(ProbeContext context)
    {
        var scope = context.CreateFilterScope("messageFilter");
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Check if the service is degraded (true for this demo)
        var isServiceDegraded = true;
        if (isServiceDegraded)
        {
            // Suspend the message for 5 seconds
            await Task.Delay(TimeSpan.FromMilliseconds(5000), context.CancellationToken);
            if (!context.CancellationToken.IsCancellationRequested)
            {
                // Republish the message
                await context.Publish(context.Message);
                Console.WriteLine($"Message {context.MessageId} has been republished");
            }
            // NotifyConsumed to avoid a skipped message
            await context.NotifyConsumed(TimeSpan.Zero, "messageFilter");
        }
        else
        {
            // Call the next filter in the pipe
            await next.Send(context);
        }
    }
}
The main idea is to delay with the cancellation token and then republish the message. After that, call context.NotifyConsumed to bypass the remaining pipeline filters and return normally.

Restarting an infinite Flux on error with PubSubReactiveFactory

I'm developing an application which uses Reactor libraries to connect with Google Pub/Sub, so I have a Flux of messages. I want it to always consume from the queue, no matter what happens; this means handling all errors so they don't terminate the Flux. I was thinking about the (very unlikely) event that the connection to Pub/Sub is lost, or whatever else may cause the just-created Flux to signal an error. I came up with this solution:
private final PubSubReactiveFactory pubSubReactiveFactory;
private final String requestSubscription;
private final Long requestPollTime;
private final Flux<AcknowledgeablePubsubMessage> requestFlux;

@Autowired
public FluxContainer(/* Field args... */) {
    // init stuff...
    this.requestFlux = initRequestFlux();
}

private Flux<AcknowledgeablePubsubMessage> initRequestFlux() {
    return pubSubReactiveFactory.poll(requestSubscription, requestPollTime)
            .doOnError(e -> log.error("FATAL ERROR: could not retrieve message from queue. Resetting flux", e))
            .onErrorResume(e -> initRequestFlux());
}

@EventListener(ApplicationReadyEvent.class)
public void configureFluxAndSubscribe() {
    log.info("Setting up requestFlux...");
    this.requestFlux
            .doOnNext(AcknowledgeablePubsubMessage::ack)
            // ...many more concatenated calls handling the flux
}
Does it make sense? I'm concerned about memory allocation (I'm relying on the GC to clean things up). Any comment is welcome.
What I think you're looking for is basically a Flux that restarts itself when it terminates in any situation except the subscription being disposed. In my case, I have a source that generates infinite events from the Docker daemon, which can disconnect "successfully".
Let sourceFlux be the flux providing your data; it is something you'd want to restart on error or complete, but stop on subscription disposal.
Create a recovery function:
Function<Throwable, Publisher<Integer>> recoverFromThrow =
        throwable -> sourceFlux;
Create a new flux that recovers from a throw:
var recoveringFromThrowFlux =
        sourceFlux.onErrorResume(recoverFromThrow);
Create a Flux generator that keeps generating the flux that recovers from a throw (note that the explicit generic type is needed):
var foreverFlux =
        Flux.<Flux<Integer>>generate((sink) -> sink.next(recoveringFromThrowFlux))
                .flatMap(flux -> flux);
foreverFlux is the flux that does the self recovery.
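Putting the pieces together, here is a self-contained sketch of the pattern. The mock source, the AtomicInteger counter, and the flatMap concurrency of 1 are my own assumptions for the demo; limiting the concurrency keeps a single recovering flux active at a time, since flatMap's default concurrency would eagerly subscribe to many copies of it.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;

public class ForeverFluxDemo {
    public static void main(String[] args) {
        AtomicInteger attempt = new AtomicInteger();
        // Mock source: each subscription emits three values, then either
        // completes normally or fails, imitating a connection that drops.
        Flux<Integer> sourceFlux = Flux.defer(() -> {
            int n = attempt.incrementAndGet();
            Flux<Integer> values = Flux.range(n * 10, 3);
            return n % 2 == 0
                    ? values.concatWith(Flux.error(new RuntimeException("connection lost")))
                    : values;
        });

        Function<Throwable, Publisher<Integer>> recoverFromThrow = throwable -> sourceFlux;
        var recoveringFromThrowFlux = sourceFlux.onErrorResume(recoverFromThrow);

        var foreverFlux = Flux.<Flux<Integer>>generate(sink -> sink.next(recoveringFromThrowFlux))
                .flatMap(flux -> flux, 1); // concurrency 1: one active subscription at a time

        // Bounded for the demo; without take() it would keep restarting forever.
        foreverFlux.take(12).doOnNext(System.out::println).blockLast();
    }
}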

RxJava2: have remote data override local data in an Observable

Currently I have a method in a repository class which fetches data from both a local cache and a remote API.
public Observable<List<Items>> getItemsForUser(String userId) {
    return Observable.concatArrayEager(
            getUserItemsLocal(userId), // returns Observable<List<Items>>
            getUserItemsRemote(userId) // returns Observable<List<Items>>
    );
}
Currently, the method fetches the local data first (which may be outdated), returns it, and then follows up with the fresh data from the remote API.
I want to change the implementation to use Observable.merge so that if the remote API request completes first, that data gets shown first. However, if I just use Observable.merge I'm concerned that the local database request may return stale data, which will then overwrite the fresh data from the remote.
Basically, I want something like:
public Observable<List<ShoutContent>> getItemsForUser(String userId, ErrorCallback errorCallback) {
    return Observable.merge(
            getUserItemsRemote(userId),
            getUserItemsLocal(userId)
                    .useOnlyIfFirstResponse() // imaginary operator
    );
}
So if the remote API request completes first, then that response is the only one that gets returned. But if the local request completes first, I want to return that, and then return the remote request once it is completed. Does RxJava have anything like this built in?
Edit: I would like to add that getUserItemsRemote does update the local database when the Observable emits, but I don't think that I can ensure that the database will be updated before the local request completes, which leaves the possibility that the local request will respond with stale data.
You can make use of the takeUntil operator.
takeUntil returns an Observable that emits the items emitted by the source Observable until a second ObservableSource emits an item.
In your case, you need to stop observing the local observable once the remote Observable emits. The code is demonstrated below:
public Observable<String> getUserItemsLocal() {
    return Observable.just("Local db response")
            .delay(5, TimeUnit.SECONDS); // assume the local db takes 5 seconds to emit
}

public Observable<String> getUserItemsRemote() {
    return Observable.just("Remote Data")
            .delay(1, TimeUnit.SECONDS); // remote data comes quicker, in 1 second
}
Your repository code then goes like this:
Observable<String> remoteResponse = getUserItemsRemote();
// Note: remoteResponse is cold, so takeUntil and mergeWith each subscribe to it.
// In a real repository, consider .share() or .cache() so the remote call only runs once.
getUserItemsLocal().takeUntil(remoteResponse)
        .mergeWith(remoteResponse)
        .subscribe(new Consumer<String>() {
            @Override
            public void accept(String s) throws Exception {
                Log.d(TAG, "result: " + s);
            }
        });
With the delays above, only "result: Remote Data" is logged, because the remote response arrives first and takeUntil stops the local stream; if the local db responded first, its value would be logged and then followed by the remote value.

How to handle an SSE connection being closed?

I have an endpoint streamed as in the sample code block below. When streaming, I call an async method through streamHelper.getStreamSuspendCount(). I stop this async method in the session scope when the state changes. But I cannot access this async method when the browser is closed and the session is terminated. How can I access this scope when the session is closed?
@RequestMapping(value = "/stream/{columnId}/suspendCount", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<Integer> suspendCount(@PathVariable String columnId) {
    ColumnObject columnObject = streamHelper.findColumnObjectInListById(columnId);
    return streamHelper.getStreamSuspendCount(columnObject);
}
Flux<Integer> getStreamSuspendCount(ColumnObject columnObject) {
    ...
    // async flux
    Flux<?> newFlux = beSubscribeFlow.get(i);
    Disposable disposable = newFlux.subscribe();
    beDisposeFlow.add(disposable); // my session-scoped variable; if the state changes, I dispose() it
    ...
    return Flux.fromStream(Stream.generate(() -> columnObject.getPendingObject().size()))
            .distinctUntilChanged()
            .doOnNext(i -> System.out.println(i));
}
I think part of the problem is that you are attempting to get hold of a Disposable that you want to dispose at the end of the session. But in doing so, you are subscribing to the sequence yourself. Spring Framework will also subscribe to the Flux returned by getStreamSuspendCount, and it is THAT subscription that needs to be cancelled for the SSE client to be notified.
Now how to achieve this? What you need is a sort of "valve" that will cancel its source upon receiving an external signal. This is what takeUntilOther(Publisher<?>) does.
So now you need a Publisher<?> that you can tie to the session lifecycle (more specifically the session close event): as soon as it emits, takeUntilOther will cancel its source.
There are two options:
the session close event is exposed in a listener-like API: use Mono.create
you really need to manually trigger the cancel: use MonoProcessor.create() and, when the time comes, push any value through it
Here are simplified examples with made-up APIs to clarify:
Mono.create:
return theFluxForSSE.takeUntilOther(Mono.create(sink ->
        sessionEvent.registerListenerForClose(closeEvent -> sink.success(closeEvent))
));
MonoProcessor:
MonoProcessor<String> processor = MonoProcessor.create();
beDisposeFlow.add(processor); // make it available to your session scope
return theFluxForSSE.takeUntilOther(processor); // Spring will subscribe to this
Let's simulate the session close with a scheduled task:
Executors.newSingleThreadScheduledExecutor().schedule(() ->
        processor.onNext("STOP") // the key part: manually sending data through the processor to signal takeUntilOther
, 2, TimeUnit.SECONDS);
Here is a simulated unit test example that you can run to better understand what happens:
@Test
public void simulation() {
    Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));
    MonoProcessor<String> processor = MonoProcessor.create();
    Executors.newSingleThreadScheduledExecutor().schedule(() -> processor.onNext("STOP"), 2, TimeUnit.SECONDS);

    theFluxForSSE.takeUntilOther(processor.log())
            .log()
            .blockLast();
}
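As a side note, MonoProcessor is deprecated in more recent Reactor versions (3.4+) in favor of Sinks. The same manual-trigger pattern can be sketched with Sinks.One instead; the class name and scheduling here simply mirror the test above:
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class TakeUntilOtherWithSinks {
    public static void main(String[] args) {
        Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));
        // Sinks.One replaces MonoProcessor as the manually-triggered signal
        Sinks.One<String> stopSignal = Sinks.one();
        Executors.newSingleThreadScheduledExecutor().schedule(
                () -> stopSignal.tryEmitValue("STOP"), 2, TimeUnit.SECONDS);
        // takeUntilOther cancels the SSE flux as soon as the sink emits
        theFluxForSSE.takeUntilOther(stopSignal.asMono().log())
                .log()
                .blockLast();
    }
}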
