How to cancel an ongoing Spring Flux?

I'm using a Spring Flux to send parallel requests to a service; this is a very simplified version of it:
Flux.fromIterable(customers)
    .flatMap { customer ->
        client.call(customer)
    } ...
I was wondering how I could cancel this flux, as in, grab a reference to the flux somehow and tell it to shut down.

As you probably know, with reactive objects, all operators are lazy. This means execution of the pipeline is delayed until the moment you subscribe to the reactive stream.
So, in your example, there is nothing to cancel yet because nothing is happening at that point.
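For instance, assembling the pipeline alone triggers nothing; here is a small sketch reusing the hypothetical customers and client from your question, assuming client.call returns a Publisher:
// Assembly only: nothing executes and nothing is printed here.
Flux<String> pipeline = Flux.fromIterable(customers)
    .doOnNext(c -> System.out.println("about to call for " + c))
    .flatMap(customer -> client.call(customer));
// Only a later pipeline.subscribe() starts the calls.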
But supposing your example was extended to:
Disposable disp = Flux.fromIterable(customers)
    .flatMap { customer ->
        client.call(customer)
    }
    .subscribe();
Then, as you can see, your subscription returns a Disposable object that you can use to cancel the entire thing if you want, e.g.
disp.dispose()
Documentation of dispose says:
Cancel or dispose the underlying task or resource.
There's another section of the documentation that says the following:
These variants [of operators] return a reference to the subscription that you can use to cancel the subscription when no more data is needed. Upon cancellation, the source should stop producing values and clean up any resources it created. This cancel and clean-up behavior is represented in Reactor by the general-purpose Disposable interface.
Cancelling the execution of a stream is therefore not free of complications on the reactive object side, because you want to make sure to leave the world in a consistent state if you cancel the stream in the middle of its processing. For example, if you were in the process of building something, you may want to discard partial aggregation results, close files or channels, release memory or any other resources you hold, and potentially undo changes or compensate for them.
You may want to read the documentation on cleanup about this, so that you also consider what you can do on the reactive object side. For example:
Flux<String> bridge = Flux.create(sink -> {
    sink.onRequest(n -> channel.poll(n))
        .onCancel(() -> channel.cancel())
        .onDispose(() -> channel.close());
});
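If the cleanup belongs in the pipeline itself rather than in a create() bridge, a minimal sketch (not from the docs, reusing the question's hypothetical customers and client) can hook doOnCancel and doFinally:
Disposable disp = Flux.fromIterable(customers)
    .flatMap(customer -> client.call(customer))
    .doOnCancel(() -> System.out.println("cancelled: discard partial results here"))
    .doFinally(signal -> System.out.println("terminated with " + signal)) // runs on complete, error and cancel
    .subscribe();

disp.dispose(); // triggers doOnCancel, then doFinally(SignalType.CANCEL)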

The answer from @Edwin is precise. As long as you don't call subscribe, there is nothing to cancel, because no code will be executed.
Just wanted to add an example to make it clear.
public static void main(String[] args) throws InterruptedException {
    List<String> lists = Lists.newArrayList("abc", "def", "ghi");
    Disposable disposable = Flux.fromIterable(lists)
        .delayElements(Duration.ofSeconds(3))
        .map(String::toLowerCase)
        .subscribe(System.out::println);
    Thread.sleep(5000); // sleep so that some elements of the flux get printed
    disposable.dispose();
    Thread.sleep(10000); // sleep to show that, even after waiting, nothing gets printed once the flux is cancelled
}
But I would say a much cleaner (more functional) way is to use operators like takeUntil or take. For instance, I can stop the stream in the above example like this as well:
List<String> lists = Lists.newArrayList("abc", "def", "End", "ghi");
Flux.fromIterable(lists)
    .takeUntil(s -> s.equalsIgnoreCase("End"))
    .delayElements(Duration.ofSeconds(3))
    .map(String::toLowerCase)
    .subscribe(System.out::println);
or
List<String> lists = Lists.newArrayList("abc", "def", "ghi");
Flux.fromIterable(lists)
    .take(2)
    .delayElements(Duration.ofSeconds(2))
    .map(String::toLowerCase)
    .subscribe(System.out::println);

Subscribing to my flux another time and then calling dispose did it for me:
// Set up the flux and populate it
Flux<String> myFlux = controller.get(json);

// Subscribe
FlowSubscriber<String> sub = new FlowSubscriber<String>();
myFlux.subscribe(sub);

// Work on elements in the subscription
String myString = sub.consumedElements.get(0);
... do work ...

// Cancel
myFlux.subscribe().dispose();

Related

How to tell RSocket to read a data stream via a Java 8 Stream backed by a blocking queue

I have the following scenario whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream backing the Flux emits it. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the stream supplier being triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(
        () -> Stream.generate(() -> streamValue(listenerQ))).map(q -> {
            System.out.println("I got an event : " + q.getResult());
            return q;
        });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should first come up with a clean solution to turn the BlockingQueue into a Flux; the Spring Boot part then becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation, like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in: How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like the flux builder you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the builder. You will also need to improve this example to handle cancellation, or however you cancel the listener and interrupt the thread doing the take; one way is sketched below.
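As a sketch of that cancellation handling, back in Java: Flux.create exposes onCancel/onDispose hooks, so the cancel signal can interrupt the thread blocked in take(). Datalistener.register is from the question; the unregister hook is a hypothetical cleanup counterpart.
Flux<QueryResult> bridge = Flux.create(sink -> {
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    Thread poller = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                sink.next(listenerQ.take()); // blocks until a message arrives
            }
        } catch (InterruptedException e) {
            // interrupted by cancellation: stop polling
        }
    });
    sink.onCancel(() -> poller.interrupt()); // cancelling the subscription unblocks take()
    sink.onDispose(() -> Datalistener.unregister(clientName, listenerQ)); // hypothetical cleanup hook
    poller.start();
});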

Restarting an infinite Flux on error with pubSubReactiveFactory

I'm developing an application which uses Reactor libraries to connect with Google Pub/Sub, so I have a Flux of messages. I want it to always consume from the queue, no matter what happens; this means handling all errors so that the flux never terminates. I was thinking about the (very unlikely) event that the connection to Pub/Sub is lost, or whatever else might cause the freshly created Flux to signal an error. I came up with this solution:
private final PubSubReactiveFactory pubSubReactiveFactory;
private final String requestSubscription;
private final Long requestPollTime;
private final Flux<AcknowledgeablePubsubMessage> requestFlux;

@Autowired
public FluxContainer(/* Field args... */) {
    // init stuff...
    this.requestFlux = initRequestFlux();
}

private Flux<AcknowledgeablePubsubMessage> initRequestFlux() {
    return pubSubReactiveFactory.poll(requestSubscription, requestPollTime)
        .doOnError(e -> log.error("FATAL ERROR: could not retrieve message from queue. Resetting flux", e))
        .onErrorResume(e -> initRequestFlux());
}

@EventListener(ApplicationReadyEvent.class)
public void configureFluxAndSubscribe() {
    log.info("Setting up requestFlux...");
    this.requestFlux
        .doOnNext(AcknowledgeablePubsubMessage::ack)
        // ...many more concatenated calls handling flux
}
Does it make sense? I'm concerned about memory allocation (I'm relying on the GC to clean things up). Any comment is welcome.
What I think you're looking for is basically a Flux that restarts itself when it terminates, for any situation except the subscription being disposed. In my case I had a source that generates infinite events from the Docker daemon, which can disconnect "successfully".
Let sourceFlux be the flux providing your data, i.e. the thing you want to restart on error or completion, but stop on subscription disposal.
First, create a recovery function:
Function<Throwable, Publisher<Integer>> recoverFromThrow =
    throwable -> sourceFlux;
Then create a new flux that recovers from a throw:
var recoveringFromThrowFlux =
    sourceFlux.onErrorResume(recoverFromThrow);
Finally, create a Flux generator that generates the flux that recovers from a throw (note that the generic coercion is needed):
var foreverFlux =
    Flux.<Flux<Integer>>generate((sink) -> sink.next(recoveringFromThrowFlux))
        .flatMap(flux -> flux);
foreverFlux is the flux that does self recovery.
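To see the whole pattern run end to end, here is a minimal self-contained sketch assembling the three fragments above. One deliberate change: it uses concatMap instead of flatMap, so that only one copy of the source is subscribed at a time (flatMap's default concurrency would eagerly subscribe to several inner fluxes at once).
import java.time.Duration;
import java.util.function.Function;
import org.reactivestreams.Publisher;
import reactor.core.Disposable;
import reactor.core.publisher.Flux;

public class ForeverFluxDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a source that can terminate: emits 1..3, then completes "successfully".
        Flux<Integer> sourceFlux = Flux.just(1, 2, 3)
            .delayElements(Duration.ofMillis(100));

        Function<Throwable, Publisher<Integer>> recoverFromThrow =
            throwable -> sourceFlux;

        Flux<Integer> recoveringFromThrowFlux =
            sourceFlux.onErrorResume(recoverFromThrow);

        // generate() hands out a fresh copy of the recovering flux each time the
        // previous one terminates; concatMap keeps exactly one of them subscribed.
        Flux<Integer> foreverFlux =
            Flux.<Flux<Integer>>generate(sink -> sink.next(recoveringFromThrowFlux))
                .concatMap(flux -> flux);

        Disposable disposable = foreverFlux.subscribe(System.out::println);
        Thread.sleep(1000); // prints 1 2 3 1 2 3 ... until the disposal below
        disposable.dispose(); // disposing the subscription is the only way to stop it
    }
}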

Reactor Flux conditional emit

Is it possible to allow emitting values from a Flux conditionally based on a global boolean variable?
I'm experimenting with Flux.delayUntil(...) but I haven't been able to fully grasp its functionality, or my assumptions are wrong.
I have a global AtomicBoolean that represents the availability of a downstream connection and only want the upstream Flux to emit if the downstream is ready to process.
To represent the scenario, I created a (not working) test sample:
// Randomly generates a boolean value every 5 seconds
private Flux<Boolean> signalGenerator() {
    return Flux.range(1, Integer.MAX_VALUE)
        .delayElements(Duration.ofMillis(5000))
        .map(integer -> new Random().nextBoolean());
}
and
Flux.range(1, Integer.MAX_VALUE)
    .delayElements(Duration.ofMillis(1000))
    .delayUntil(evt -> signalGenerator()) // ?? only proceed when signalGenerator returns true
    .subscribe(System.out::println);
I have another scenario where a downstream process can accept only x messages a second. In the current non-reactive implementation we have a Semaphore of x permits, and the thread blocks when no more permits are available, with the permits resetting every second.
In both scenarios I want the upstream Flux to emit only when there is demand from the downstream process, and I do not want to buffer.
You might consider using Mono.fromRunnable() as an input to delayUntil(), like below.
Helper class:
public class FluxCondition {
    CountDownLatch latch = new CountDownLatch(10); // it depends, might be managed somehow

    Runnable r = () -> {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    };

    public Mono<Void> lock() { return Mono.fromRunnable(r); }

    public void release() { latch.countDown(); }
}
Usage:
FluxCondition delayCondition = new FluxCondition();
Flux.range(1, 10).delayUntil(o -> delayCondition.lock()).subscribe();
.....
delayCondition.release(); // shall be called for each element
I guess there might be a better solution using sink.emitNext, but that would also require a condition variable for controlling the flow of the Flux.
In my understanding of reactive programming, your data should be considered at every operator step, so it might be better to design your consumer as a reactive processor. In my case I had no chance and followed the approach described above.
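For the second scenario (at most x messages per second), a Semaphore-free sketch, not from the answer above, is to zip the upstream with a ticker Flux so that each element has to wait for its tick. Note that zipWith is backpressure-driven but still prefetches a small batch from upstream (32 elements by default):
int x = 5; // permits per second, standing in for the Semaphore's permit count
Flux<Long> ticks = Flux.interval(Duration.ofMillis(1000L / x));

Flux.range(1, Integer.MAX_VALUE)
    .zipWith(ticks, (value, tick) -> value) // each element is held until its tick arrives
    .subscribe(System.out::println);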

Mono returned by ServerRequest.bodyToMono() not extracting the body if I return ServerResponse immediately

I am using web reactive in Spring WebFlux. I have implemented a handler function for a POST request, and I want the server to return immediately. So, I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called, which means the body is not getting extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using Flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function is getting called and sometimes not.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, which is then subscribed in the background. However, it's kind of ugly, and it's not really what Reactor is designed to do. You may be misunderstanding the idea behind Reactor / reactive programming here:
- It's not written with the idea of "returning a quick result and then doing stuff in the background"; that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'être is instead to be non-blocking, so that a single thread is never idly blocked waiting for something else to complete.
- The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
- Reactor uses a single thread by default, so the "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
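A minimal sketch of that last approach (the process(String) helper standing in for the long-running work is hypothetical):
public Mono<ServerResponse> handle(ServerRequest request) {
    return request.bodyToMono(String.class) // read and buffer the whole request body first
        .doOnNext(body -> Schedulers.boundedElastic().schedule(() -> process(body))) // kick off the separate execution
        .then(ServerResponse.status(HttpStatus.CREATED).build()); // then return the response
}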

How to handle an SSE connection being closed?

I have an endpoint streamed as in the sample code block. When streaming, I call an async method through streamHelper.getStreamSuspendCount(). I stop this async method, in session scope, when the state changes. But I cannot access this async method once the browser is closed and the session is terminated. How can I access this scope when the session is closed?
@RequestMapping(value = "/stream/{columnId}/suspendCount", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<Integer> suspendCount(@PathVariable String columnId) {
    ColumnObject columnObject = streamHelper.findColumnObjectInListById(columnId);
    return streamHelper.getStreamSuspendCount(columnObject);
}
getStreamSuspendCount(ColumnObject columnObject) {
    ...
    // async flux
    Flux<?> newFlux = beSubscribeFlow.get(i);
    Disposable disposable = newFlux.subscribe();
    beDisposeFlow.add(disposable); // my session scope variable; if the state changes, I kill the disposable (dispose())
    ...
    return Flux.fromStream(Stream.generate(() -> columnObject.getPendingObject().size()))
        .distinctUntilChanged()
        .doOnNext(i -> {
            System.out.println(i);
        });
}
I think part of the problem is that you are attempting to get hold of a Disposable that you want to dispose at the end of the session. But in doing so, you are subscribing to the sequence yourself. Spring Framework also subscribes to the Flux returned by getStreamSuspendCount, and it is THAT subscription that needs to be cancelled for the SSE client to get notified.
Now how to achieve this? What you need is a sort of "valve" that will cancel its source upon receiving an external signal. This is what takeUntilOther(Publisher<?>) does.
So now you need a Publisher<?> that you can tie to the session lifecycle (more specifically the session close event): as soon as it emits, takeUntilOther will cancel its source.
There are 2 options:
- the session close event is exposed in a listener-like API: use Mono.create
- you really need to manually trigger the cancel: use MonoProcessor.create() and, when the time comes, push any value through it
Here are simplified examples with made up APIs to clarify:
Using Mono.create:
return theFluxForSSE.takeUntilOther(Mono.create(sink ->
    sessionEvent.registerListenerForClose(closeEvent -> sink.success(closeEvent))
));
Using MonoProcessor:
MonoProcessor<String> processor = MonoProcessor.create();
beDisposeFlow.add(processor); // make it available to your session scope?
return theFluxForSSE.takeUntilOther(processor); // Spring will subscribe to this
Let's simulate the session close with a scheduled task:
Executors.newSingleThreadScheduledExecutor().schedule(() ->
    processor.onNext("STOP") // that's the key part: manually sending data through the processor to signal takeUntilOther
, 2, TimeUnit.SECONDS);
Here is a simulated unit test example that you can run to better understand what happens:
@Test
public void simulation() {
    Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));
    MonoProcessor<String> processor = MonoProcessor.create();
    Executors.newSingleThreadScheduledExecutor().schedule(() -> processor.onNext("STOP"), 2, TimeUnit.SECONDS);

    theFluxForSSE.takeUntilOther(processor.log())
        .log()
        .blockLast();
}
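One caveat: MonoProcessor is deprecated as of Reactor 3.4, with the Sinks API as its replacement. A minimal sketch of the same simulation using Sinks.One instead:
Sinks.One<String> stopSignal = Sinks.one();
Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));

Executors.newSingleThreadScheduledExecutor()
    .schedule(() -> stopSignal.tryEmitValue("STOP"), 2, TimeUnit.SECONDS);

theFluxForSSE.takeUntilOther(stopSignal.asMono().log())
    .log()
    .blockLast();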
