quarkus/mutiny: how to trigger a side job without waiting for it

Quarkus reactive uses Mutiny to handle tasks asynchronously.
But the flow always waits for every job to finish before returning the result.
Sometimes I just want to trigger a job and let it run in the background without waiting for it to be done.
Any suggestion or example?
Uni<Integer> mainJob() {
    // fake logic
    return Uni.createFrom().item(1);
}

Uni<Void> sideJob(int n) {
    // fake logic
    logger.log("result = " + n);
    return Uni.createFrom().voidItem();
}

@Path("test")
Uni<Integer> testExample() {
    return mainJob().onItem().call(n -> sideJob(n));
}
The code above only returns after sideJob() is done. But I just want to return the result immediately once mainJob() is done, with sideJob() triggered and running in the background.
Any suggestion on it?
ManagedExecutor may be a way to do it, but it seems unnatural in this case. The side job may or may not be long-running.

According to the Uni interface documentation:
To trigger the computation, a UniSubscriber must subscribe to the Uni. It will be notified of the outcome once there is an item or failure event fired by the observed Uni. A subscriber receives (asynchronously) a UniSubscription and can cancel the demand at any time.
Thus, the only way to start the execution of a Uni is by subscribing to it. Even calling uni.await().indefinitely() is, in fact, subscribing to the Uni, as we can see in the documentation of the indefinitely() method:
Subscribes to the Uni and waits (blocking the caller thread) indefinitely until a item event is fired or a failure event is fired by the upstream uni.
Invoking the call() method is nothing more than chaining a new function into the stream that will be executed when the Uni is subscribed. This way, when the testExample() method returns the result of call(), it is not executing or waiting for the Uni to finish; it returns immediately.
However, whoever receives the final result must wait for the Uni stream to finish, so the client waiting for the HTTP response will wait for sideJob() to complete before receiving the original value. But, once again, your testExample() method is not waiting for anything: it returns the Uni immediately, without waiting for it to be executed.
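If you do want the fire-and-forget behaviour described in the question, one possible sketch (not part of the answer above, and assuming the mainJob()/sideJob()/logger names from the question) is to subscribe to the side Uni yourself inside invoke(), so that it runs detached from the HTTP response stream:

@Path("test")
Uni<Integer> testExample() {
    return mainJob()
        .onItem().invoke(n ->
            // subscribing here triggers sideJob() independently, so the
            // returned Uni completes as soon as mainJob() emits its item
            sideJob(n).subscribe().with(
                ignored -> { /* side job finished */ },
                failure -> logger.log("side job failed: " + failure)));
}

The Cancellable returned by with() could be kept if the side job ever needs to be cancelled, for example on shutdown.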

Related

Project Reactor - react to a timeout that happened downstream

Project Reactor has a variety of timeout() operators.
The most basic implementation raises a TimeoutException if no item arrives within the given Duration. The exception is propagated downstream, and a cancel signal is sent upstream.
Basically my question is: is it possible to somehow react (and do something) specifically to a timeout that happened downstream, not just to the cancellation that is sent after the timeout happened?
My question is based on the requirements of my real business case, and I'm also wondering whether there is a straightforward solution.
I'll simplify my code to better illustrate what I want to achieve.
Let's say I have the following reactive pipeline:
Flux.fromIterable(List.of(firstClient, secondClient))
.concatMap(Client::callApi) // making API calls sequentially
.collectList() // collecting results of API calls for further processing
.timeout(Duration.ofMillis(3000)) // the entire process should not take more than duration specified
.subscribe();
I have multiple clients for making API calls. The business requirement is to call them sequentially, so I call them with concatMap(). Then I should collect all the results, and the entire process should not take more than the specified Duration.
The Client interface:
interface Client {
Mono<Result> callApi();
}
And the implementations:
Client firstClient = () ->
Mono.delay(Duration.ofMillis(2000L)) // simulating delay of first api call
.map(__ -> new Result())
// !!! Pseudo-operator just to demonstrate what I want to achieve
.doOnTimeoutDownstream(() ->
log.info("First API call canceled due to downstream timeout!")
);
Client secondClient = () ->
Mono.delay(Duration.ofMillis(1500L)) // simulating delay of second api call
.map(__ -> new Result())
// !!! Pseudo-operator just to demonstrate what I want to achieve
.doOnTimeoutDownstream(() ->
log.info("Second API call canceled due to downstream timeout!")
);
So, if I have not received and collected all the results during the amount of time specified, I need to know which API call was actually canceled due to downstream timeout and have some callback for this "event".
I know I could put a doOnCancel() callback on every client call (instead of the pseudo-operator I demonstrated) and it would work, but this callback reacts to cancellation, which may happen for reasons other than a timeout.
Of course, with proper exception handling (onErrorResume(), for example) it would work as I expect; however, I'm interested in whether there is a straightforward way to react specifically to the timeout in this case.
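For illustration, here is a rough sketch of the kind of workaround described above (there is no built-in "doOnTimeoutDownstream" operator): it assumes the Client/Result types and log from the snippets, and the idea of labelling each call and tracking which one is in flight is my addition.

AtomicReference<String> inFlightCall = new AtomicReference<>("none");

Flux.fromIterable(List.of(
        Map.entry("first", firstClient),
        Map.entry("second", secondClient)))
    .concatMap(entry -> entry.getValue().callApi()
        // remember which sequential call is currently running
        .doOnSubscribe(subscription -> inFlightCall.set(entry.getKey())))
    .collectList()
    .timeout(Duration.ofMillis(3000))
    // react specifically to the downstream timeout, not to every cancellation
    .doOnError(TimeoutException.class, e ->
        log.info(inFlightCall.get() + " API call canceled due to downstream timeout!"))
    .subscribe(
        results -> log.info("Collected " + results.size() + " results"),
        error -> { /* the timeout has already been logged above */ });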

Running a Mono in background while returning a response when using Spring Webflux

This question is related to Return immediately in spring web flux, but I don't think it's the same (at least the answer there is not satisfactory for me).
I have a function returning a Mono that when invoked starts a long-running job. This function is invoked when a call is made to a Spring Webflux HTTP API. Here's an example:
@PutMapping("/{jobId}")
fun startNewJob(@PathVariable("jobId") jobId: String,
                request: ServerHttpRequest): Mono<ResponseEntity<Unit>> {
    val longRunningJob: Mono<Job> = startNewJob(jobId)
    return longRunningJob.map { job ->
        val jobUri = generateJobUri(request, job.id)
        ResponseEntity.created(jobUri).build<Unit>()
    }
}
The problem with the code above is that "201 Created" is returned only after the long-running job is completed. I want to kick off the longRunningJob in the background and return "201 Created" immediately.
I could perhaps do something like this:
@PutMapping("/{jobId}")
fun startNewJob(@PathVariable("jobId") jobId: String,
                request: ServerHttpRequest): Mono<ResponseEntity<Unit>> {
    startNewJob(jobId)
        .subscribeOn(Schedulers.newSingle("thread"))
        .subscribe()
    val jobUri = generateJobUri(request, jobId)
    val response = ResponseEntity.created(jobUri).build<Unit>()
    return Mono.just(response)
}
But it doesn't seem very idiomatic to me to have to call subscribe() manually (e.g. IntelliJ is complaining that I call subscribe() in a non-blocking scope). Isn't there a better way to compose the two "streams" without using an explicit subscribe? If so, how do I modify the startNewJob function above to achieve this?
AFAIK, using one of the subscribe methods is the only way to really start a job in the background with its own lifecycle (not tied to the returned publisher).
If you were to use one of the operators to combine the job publisher and the response publisher (e.g. zip or merge), then the lifecycle of the job publisher would be tied to the response publisher, which is not what you want for a background job.
One thing you might want to consider is kicking off the background job within the response publisher stream, rather than directly in the method body, e.g. via doOnSubscribe or from an operator upstream of the response.
This would tie the start of the background job to the onSubscribe events of the response publisher, but still allow it to complete in the background.
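As a rough sketch of that idea (in Java, matching the snippet further below; runJob() stands in for the question's job-starting function, and generateJobUri() is assumed to build the job URI as in the question):

@PutMapping("/{jobId}")
Mono<ResponseEntity<Void>> startNewJob(@PathVariable("jobId") String jobId,
                                       ServerHttpRequest request) {
    return Mono.fromCallable(() ->
                ResponseEntity.created(generateJobUri(request, jobId)).<Void>build())
        .doOnSubscribe(subscription ->
            // fire-and-forget: the job's lifecycle stays detached from the response,
            // but it only starts once the response publisher is actually subscribed
            runJob(jobId)
                .subscribeOn(Schedulers.boundedElastic())
                .subscribe());
}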
Also note, that if you want to be able to cancel the background job (e.g. maybe during application shutdown), you'll need to save the Disposable returned from subscribe so you can later call dispose on it. This might be better done from some type of BackgroundJobManager that could keep track of all the jobs running.
Alternatively, for a simple fire-and-forget task you could schedule the work directly on a dedicated Scheduler:
private static final Scheduler backgroundTaskScheduler = Schedulers.newParallel("backgroundTaskScheduler", 2);
backgroundTaskScheduler.schedule(() -> doBackgroundJob());

Using Observables to process queue messages which require a callback at end of processing?

This is a bit of a conceptual question, so let me know if it's off topic.
I'm looking at writing yet another library to process messages off a queue - in this case an Azure storage queue. It's pretty easy to create an observable and throw a message into it every time a message is available.
However, there's a snag here that I'm not sure how to handle. The issue is this: when you're done processing the message, you need to call an API on the storage queue to actually delete the message. Otherwise the visibility timeout will expire and the message will reappear to be dequeued again.
As an example, here's how this loop looks in C#:
public event EventHandler<string> OnMessage;
public void Run()
{
    while (true)
    {
        // Read message
        var message = queue.GetMessage();
        if (message != null)
        {
            // Run any handlers
            OnMessage?.Invoke(this, message.AsString);
            // Delete off queue when done
            queue.DeleteMessage(message);
        }
        else
        {
            Thread.Sleep(2500);
        }
    }
}
The important thing here is that we read the message, trigger any registered event handlers to do things, then delete the message after the handlers are done. I've omitted error handling here, but in general if the handler fails we should NOT delete the message, but instead let it return to visibility automatically and get redelivered later.
How do you handle this kind of thing using Rx? Ideally I'd like to expose the observable for anyone to subscribe to. But I need to do stuff at the end of processing for that message, whatever the "end" happens to mean here.
I can think of a couple of possible solutions, but I don't really like any of them. One would be to have the library call a function supplied by the consumer that takes in the source observable, hooks up whatever it wants, and then returns a new observable that the library can subscribe to in order to do the final cleanup. But that seems quite restrictive, as consumers basically get only one shot at hooking up to the messages.
I guess I could put the call to delete the message after the call to onNext, but then I don't know whether the processing succeeded or failed, unless there's some sort of back channel in that API that I don't know about?
Any ideas/suggestions/previous experience here?
Try having a play with this:
using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;

IObservable<int> source =
    Observable
        .Range(0, 3)
        .Select(x =>
            Observable
                .Using(
                    // the "resource" is the cleanup action (here, deleting the message)
                    () => Disposable.Create(() => Console.WriteLine($"Removing {x}")),
                    // the observable handed to subscribers; the resource is disposed
                    // when it terminates or the subscription is disposed
                    d => Observable.Return(x)))
        .Merge();

source
    .Subscribe(x => Console.WriteLine($"Processing {x}"));
It produces:
Processing 0
Removing 0
Processing 1
Removing 1
Processing 2
Removing 2

Spring #Async cancel and start?

I have a Spring MVC app where a user can kick off report generation via a button click. This process could take a few minutes (~10-20 mins).
I use Spring's @Async annotation around the service call so that report generation happens asynchronously, while I show the user a message indicating the job is currently running.
Now what I want is that if another user (Admin) kicks off report generation via the button, it should cancel/stop the currently running @Async task and start the new task.
To do this, I call the following:
.. ..
future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone())
    future.cancel(true);
service.generateReport(id);
How can I make it so that service.generateReport() waits while the future.cancel() call kills all the running threads?
According to the documentation, after I call future.cancel(true), isDone() will return true, and isCancelled() will also return true, so there is no way of knowing whether the job has actually stopped.
I can only start a new report generation when the old one is cancelled or completed, so that it does not dirty the data.
From the documentation of the cancel() method:
Subsequent calls to isCancelled() will always return true if this method returned true
Try this.
future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone()) {
    boolean terminatedImmediately = future.cancel(true);
    if (terminatedImmediately) {
        service.generateReport(id);
    } else {
        // Inform the user the existing job couldn't be stopped and to try again later
    }
}
Assuming the code above runs in thread A, and your recently cancelled report is running in thread B, you need thread A to stop before service.generateReport(id) and wait until thread B completes or is cancelled.
One approach to achieve this is to use a Semaphore. Assuming there can be only one report running concurrently, first create a semaphore object accessible by all threads (normally on the report runner service class):
Semaphore semaphore = new Semaphore(1);
At any point in your code where you need to run the report, call the acquire() method. This method will block until a permit is available. Similarly, when the report execution is finished or cancelled, make sure release() is called. The release method puts the permit back and wakes up any waiting thread.
semaphore.acquire();
// run report..
semaphore.release();
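To make this concrete, here is a rough sketch of how the semaphore might sit inside the report service; the ReportService class, the doGenerate() helper and the AsyncResult return value are illustrative assumptions, not from the question:

import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // one permit: at most one report generation can run at a time
    private final Semaphore semaphore = new Semaphore(1);

    @Async
    public Future<Void> generateReport(String id) throws InterruptedException {
        semaphore.acquire();            // blocks until the previous run has released its permit
        try {
            doGenerate(id);             // the actual long-running report generation
        } finally {
            semaphore.release();        // always free the permit, even if cancelled or failed
        }
        return new AsyncResult<>(null);
    }

    private void doGenerate(String id) {
        // ... long-running work, checking Thread.currentThread().isInterrupted()
        // so that future.cancel(true) can stop it early ...
    }
}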

android: AsyncTask onPostExecute keeps running even if I start a new activity from doInBackground

I am building an application for clients to get questions from a server and answer them. If the server doesn't have questions, I want to go to a new screen and print a message saying to try again in a few minutes. Getting questions happens in an AsyncTask. If the server doesn't have questions, it sends a header isFindAQuestion with the value false in the response. Here is the client code that checks for false; I print to LogCat and I see message = false. My problem is that even though I start a new activity with the intent, this activity keeps working and shows me an exception, a NullPointerException, because onPostExecute will receive a null parameter and try to process it. I put finish() at the end of the false branch, but it doesn't finish the activity.
if (response.getFirstHeader("isFindAQuestion").getValue()
        .toString().equals("false")) {
    Log.d("message", "false");
    Bundle basket = new Bundle();
    basket.putString("Message", "sorry no enought questions");
    Intent goToAnswerQuestion = new Intent(AnswerQuestion.this,
            FinishTime.class);
    goToAnswerQuestion.putExtras(basket);
    startActivity(goToAnswerQuestion);
    finish();
}
Edit: is it because AsyncTask is working on a thread, so even if the activity is finished, that thread will keep working? And if so, how can I stop that thread?
doInBackground is not executed on the UI thread, but in a separate thread:
invoked on the background thread immediately after onPreExecute()
finishes executing. This step is used to perform background
computation that can take a long time.
If you want to stop your background operation and perform some action on the UI thread, the best thing is to call cancel() and then do whatever you need in the onCancelled callback, which is executed on the UI thread.
From the AsyncTask documentation:
A task can be cancelled at any time by invoking cancel(boolean).
Invoking this method will cause subsequent calls to isCancelled() to return true. After invoking this method, onCancelled(Object), instead of onPostExecute(Object) will be invoked after doInBackground(Object[]) returns.
To ensure that a task is cancelled as quickly as possible, you should always check the return value of isCancelled() periodically from doInBackground(Object[]), if possible (inside a loop for instance.)
protected void onCancelled (Result result)
Runs on the UI thread after cancel(boolean) is invoked and doInBackground(Object[]) has finished.
The default implementation simply invokes onCancelled() and ignores the result. If you write your own implementation, do not call super.onCancelled(result).
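For illustration, a minimal sketch of that pattern, assuming a task similar to the one in the question (the class name and the placeholder body are mine, not from the question):

private class GetQuestionTask extends AsyncTask<Void, Void, String> {

    @Override
    protected String doInBackground(Void... params) {
        // placeholder for the HTTP call that fetches questions from the server
        String response = null;
        if (isCancelled()) {
            return null;   // the result is ignored; onCancelled() runs instead of onPostExecute()
        }
        return response;
    }

    @Override
    protected void onPostExecute(String result) {
        // only reached when the task was NOT cancelled; result may still be null
    }

    @Override
    protected void onCancelled(String result) {
        // runs on the UI thread after cancel(true), once doInBackground() has returned
    }
}

// keep a reference to the running task so the activity can stop it, e.g. before finish():
// questionTask.cancel(true);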
