What is the best way to get notified when a task finishes in F#? - events

I have a pool of tasks and I am trying to figure out the best way to be notified, through an event, when one is finished.
Since the tasks are quite varied, I don't want to add a piece of code inside each task itself, since that would mean putting it in several places. These are long-running tasks; I'm not awaiting their completion anywhere. They just get started, do their work (minutes to days), and then finish.
The ugly-but-could-work solution is to wrap each work task in another task that waits for the work task to complete and then sends an event, but I'm hoping there is something more elegant.

In a comment you explained that you're starting your tasks like this:
Async.StartAsTask (runner.Start(), TaskCreationOptions.LongRunning, cancellationSource.Token)
Instead of doing that, start them like this:
startMyTask runner cancellationSource (fun() -> printfn "Task completed!")
Where:
let startMyTask (runner: RunnerType) (s: CancellationTokenSource) onDone =
    let wrapper = async {
        do! runner.Start()
        onDone()
    }
    Async.StartAsTask (wrapper, TaskCreationOptions.LongRunning, s.Token)
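Since you want the notification to come through an event, here is a minimal sketch, assuming the startMyTask helper above and the runner / cancellationSource values from your comment; the event's string payload and the "runner 1" name are illustrative assumptions:

// Illustrative event wiring; the payload type and the task name are assumptions.
let taskFinished = Event<string>()
taskFinished.Publish.Add(fun name -> printfn "Task '%s' completed" name)

// Raise the event when the runner's work finishes.
let task1 =
    startMyTask runner cancellationSource (fun () -> taskFinished.Trigger "runner 1")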

Related

How is the value extracted from a Task in F#?

If I make this call in F#
let mailServers = task {
    let! ms = lookupClient.QueryAsync(domain, QueryType.MX, QueryClass.IN, CancellationToken.None)
    return ms
}
mailServers is a Task<IDnsQueryResponse>.
I would like to get at the IDnsQueryResponse value wrapped in the task. How can I change this async call to get the actual value?
In your example you already have the IDnsQueryResponse as ms within the task expression. Usually, once you start working with Tasks you want to keep working with Tasks until all the work is done, so stay inside the task expression.
If you don't mind blocking the thread, you can just call mailServers.Result to wait for the value.
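For example, a minimal sketch, assuming DnsClient.NET's lookupClient and domain from the question; printing the answer-record count is purely illustrative:

open System.Threading

// Stay inside the task expression: here `ms` is the plain IDnsQueryResponse.
let printMailServers () = task {
    let! ms = lookupClient.QueryAsync(domain, QueryType.MX, QueryClass.IN, CancellationToken.None)
    printfn "Got %d answer records" ms.Answers.Count
}

// Or, if blocking the calling thread is acceptable (e.g. in a console app):
let response = mailServers.Result // IDnsQueryResponse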

Using wait_for with a timeout on a list of tasks

So, I have a list of tasks which I want to schedule concurrently in a non-blocking fashion.
Basically, gather should do the trick.
Like
tasks = [asyncio.create_task(some_task()) for _ in bleh]
results = await asyncio.gather(*tasks)
But then, I also need a timeout. What I want is that any task which takes > timeout time cancels and I proceed with what I have.
I found the asyncio.wait primitive:
https://docs.python.org/3/library/asyncio-task.html#waiting-primitives
But then the doc says:
Run awaitable objects in the aws set concurrently and block until the condition specified by return_when.
Which seems to suggest that it blocks...
It seems that asyncio.wait_for will do the trick
https://docs.python.org/3/library/asyncio-task.html#timeouts
But how do I pass in a list of awaitables rather than just a single awaitable?
What I want is that any task which takes > timeout time cancels and I proceed with what I have.
This is straightforward to achieve with asyncio.wait():
# Wait for tasks to finish, but no more than a second.
done, pending = await asyncio.wait(tasks, timeout=1)
# Cancel the ones not done by now.
for fut in pending:
    fut.cancel()
# Results are available as x.result() on futures in `done`
Which seems to suggest that [asyncio.wait] blocks...
It only blocks the current coroutine, the same as gather or wait_for.
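Putting it together, a minimal runnable sketch; some_task, the five-task count, and the one-second timeout are placeholders for illustration:

import asyncio
import random

async def some_task(i):
    # Simulated work of varying length.
    await asyncio.sleep(random.uniform(0.5, 2.0))
    return i

async def main():
    tasks = [asyncio.create_task(some_task(i)) for i in range(5)]

    # Wait up to one second; unfinished tasks end up in `pending`.
    done, pending = await asyncio.wait(tasks, timeout=1)

    # Cancel the stragglers and proceed with what finished in time.
    for fut in pending:
        fut.cancel()

    print("completed:", [fut.result() for fut in done])

asyncio.run(main())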

Gradle - Capturing output written to out / err on a per task basis

I'm trying to capture output written by each task as it is executed. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks run in parallel it also picks up output written by the other tasks.
The API documentation says the following about the getLogging method on Task. From what it says, I conclude that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task.
https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
    task.ext.capturedOutput = []
    def listener = { task.capturedOutput << it } as StandardOutputListener
    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)
    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages because you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map, keyed by the listener object itself. So you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth filing an issue to get some clarity on whether Gradle should support per-task listeners; alternatively, raise it on the dev mailing list.

Trouble trying to start two asynchronous tasks

I want to start two AsyncTasks, but the second will not start until the first has completed.
From what I've googled, people usually suggest this approach:
new MyAsyncTask().execute(params);
new MyAsyncTask().execute(params);
However, I need to instantiate them separately and also keep handles to the tasks (to pass messages, for example). Therefore, I SORT OF do this:
onStart()
{
    taskA = new MyAsyncTask(paramsA);
    taskB = new MyAsyncTask(paramsB);
}

onButtonPress()
{
    taskA.execute();
    taskB.execute();
}
Edit:
I've noticed that taskB does not actually start executing until taskA completes (taskA runs a TCP/IP server, so it takes a long time). I cannot figure out why. Any thoughts or comments?
The short answer is that, depending on your version of Android, all AsyncTask instances may share a single background thread, so only one can run at a time. There are two ways around this:
1. Use Runnable instead of AsyncTask.
2. Replace one call to execute with executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, params), as sketched below.
Clearly, try #2 first - it's less of a code change. But if that doesn't work pretty quickly, I'd switch to #1; in that case, you don't have to worry about how Android might change in the future.
If you want more details about the threading model for AsyncTask, have a look at the Android doc entry.
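For option #2, a hedged sketch; taskA, taskB, MyAsyncTask and the params are the names from the question, and the empty argument lists assume your params go to the constructor as shown:

// Run both tasks on the shared thread pool so the second doesn't queue
// behind the first.
taskA = new MyAsyncTask(paramsA);
taskB = new MyAsyncTask(paramsB);

// On API 11+ execute() runs tasks serially by default; executeOnExecutor
// with THREAD_POOL_EXECUTOR runs them concurrently.
taskA.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
taskB.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);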

Spring scheduler (Quartz): how to pauseAll schedulers, reschedule jobs, and then resumeAll

The requirement is this: first I pauseAll the schedulers, and before resumeAll I want to reschedule the jobs (that is, change the trigger expressions) so that they resume with THE NEW trigger expressions, not the former ones.
Is it possible to reschedule jobs while the scheduler is paused? In other words, is it OK to do the following?
scheduler.pauseAll(); // pause first
scheduler.rescheduleJob(...); // reschedule while it is paused??
scheduler.resumeAll(); // resume All with the new job-trigger expressions as above
(I cannot test the exact scenario right now because of restrictions in the project structure; I need time to build, test, and adapt it to the project.)
Thanks in advance.
I figured out today that it is possible to reschedule jobs even while the scheduler is paused. Thus, I got it done with the following piece of code, as I mentioned above:
scheduler.pauseAll(); // pause first
scheduler.rescheduleJob(...); // reschedule while it is paused
scheduler.resumeAll(); // resume All
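As a concrete, hedged sketch against the Quartz API (the trigger name/group and the cron expression are placeholders):

import org.quartz.*;

void rescheduleWhilePaused(Scheduler scheduler) throws SchedulerException {
    // Placeholder trigger identity; use your real trigger name and group.
    TriggerKey key = TriggerKey.triggerKey("myTrigger", "myGroup");

    // Build a replacement trigger carrying the new cron expression.
    Trigger newTrigger = TriggerBuilder.newTrigger()
            .withIdentity(key)
            .withSchedule(CronScheduleBuilder.cronSchedule("0 0/15 * * * ?"))
            .build();

    scheduler.pauseAll();                      // pause first
    scheduler.rescheduleJob(key, newTrigger);  // swap the trigger while paused
    scheduler.resumeAll();                     // resume with the new schedule
}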
