Why am I not allowed to break a Promise?

The following simple Promise is vowed and I am not allowed to break it.
my $my_promise = start {
    loop {}        # or sleep x;
    'promise response'
}
say 'status : ', $my_promise.status;   # status : Planned
$my_promise.break('promise broke');    # Access denied to keep/break this Promise; already vowed
                                       # in block <unit> at xxx line xxx
Why is that?

Because the Promise is vowed, you cannot change it: only something that actually has the vow can keep or break the Promise. That is the intent of the vow functionality.
What are you trying to achieve by breaking the promise as you showed? Is it to stop the work being done inside of the start block? Breaking the Promise would not do that. And the vow mechanism was explicitly added to prevent you from thinking it can somehow stop the work inside a start block.
If you want work inside a start block to be interruptible, you will need to add some kind of semaphore that is regularly checked, for instance:
my int $running = 1;
my $my_promise = start {
    while $running {
        # do stuff
    }
    $running
}
# do other stuff
$running = 0;
await $my_promise;
Hope this made sense.
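For comparison, the same cooperative-cancellation idea in Python (a sketch only; the thread, flag name, and iteration cap are illustrative, not part of the original answer):

```python
import threading

running = True          # the shared flag, like `my int $running = 1`
result = []

def work():
    # the flag is checked on every iteration, so the worker
    # notices promptly once the main thread clears it
    while running:
        result.append("stuff")       # do stuff
        if len(result) >= 5:         # demo only: cap the iterations
            break

worker = threading.Thread(target=work)
worker.start()
# do other stuff ...
running = False                      # ask the worker to stop
worker.join()                        # the equivalent of `await $my_promise`
```

The point is the same in both languages: the main thread never kills the worker; it only asks, and the worker cooperates by checking the flag.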

The reasons why you cannot directly keep/break a Promise from the outside, or stop it on the thread pool, are explained in Jonathan's comment here.
A common misuse of Promises stems from the timeout pattern.
await Promise.anyof(
    start { sleep 4; say "finished"; },
    Promise.in(1)
);
say "moving on...";
sleep;
This will print "finished". When users realize that, the next logical step they try is to kill the obsolete Promise. But the only correct way to solve this is to make the Promise aware that its work is no longer needed, for example by periodically checking some shared variable.
Things get complicated if you have blocking code in a Promise (for example a database query) that runs for too long and you want to terminate it from the main thread. That is not doable with Promises. All you can do is ensure that the Promise will run in finite time (for example, on MySQL, by setting MAX_EXECUTION_TIME before running the query). And then you have a choice:
You can grit your teeth and patiently wait for the Promise to finish. For example, if you really must disconnect the database in the main thread.
Or you can move on immediately and allow the "abandoned" Promise to finish on its own, without ever receiving its result. In this case you should control how many of those Promises can stack up in the background, by using a Semaphore or by running them on a dedicated ThreadPoolScheduler.
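The same trap can be sketched with Python's concurrent.futures, used here purely as a comparison (the job and timeouts are illustrative): the timeout abandons the wait, not the work.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

finished = []

def slow_job():
    time.sleep(0.2)          # stands in for `sleep 4`
    finished.append(True)    # "finished"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_job)
    try:
        future.result(timeout=0.05)   # the shorter timer wins, like Promise.in(1)
    except TimeoutError:
        print("moving on...")
    # slow_job is still running: leaving the `with` block waits for it,
    # which is exactly the "abandoned promise" situation described above
```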

Related

Wrap a call in a task, to add a timeout?

We have an SDK that we are using from a 3rd-party. We have no access or insight into the code at all, or ability to change anything with it.
We're running into a problem where we make a bunch of updates to an object in the SDK, and then when we call their .Commit() method, it goes off into oblivion and never comes back. Their Commit has no timeout parameter or any other way to tell it: hey, give up already.
So when their code goes into oblivion, so too does our program.
I'm wondering if there is a way I can use async/await to essentially add a timeout to the call to their Commit. I've not done any async work before, though, so I'm not sure whether this is possible. I would still need it to be synchronous in terms of our program's process flow.
Essentially, I'm envisioning something along the lines of
... <setting a bunch of sdkObject fields> ...
var done = false;
await new Task(function(ref sdkObject, ref done) {
    sdkObject.Commit();
    done = true;
}, timeout: 60000);
if (done) {
    <perform post-success code>
} else {
    <perform post-failure code>
}
This then would allow us to artificially put a timeout around their Commit method, so that even if it goes off into oblivion, never to be seen again, our code would at least be able to try to wrap up gracefully and continue on with the next record to process.
I'm wondering if there is a way that I can use async/await stuff to essentially add a timeout to the call to their Commit.
Well... sort of.
You can wrap the call into a Task.Run and then use WaitAsync to create a cancelable wait, as such:
try {
    await Task.Run(() => sdkObject.Commit()).WaitAsync(TimeSpan.FromSeconds(60));
    <perform post-success code>
} catch (TimeoutException) {
    <perform post-failure code>
}
However, this will probably not work as expected. WaitAsync gives you a way to cancel the wait - it doesn't give you a way to cancel Commit. The Commit will just keep on executing; it's just that your application no longer cares if, when, or how it completes.
The library you're using may or may not tolerate another Commit being called while the last one is still running, so this may not actually work for your use case.
The only way to truly cancel uncancelable code is to wrap the code into a separate process and kill the process when you want to force cancellation. This is quite involved but sometimes you have no choice.

Possible to use Promise.in with infinite time?

Is there a direct way to use Promise.in (or some other sub/method/class) to achieve an indefinite amount of time - in other words, a Promise that is never resolved?
Currently I'm checking $time when the promise is kept to see whether an indefinite time was requested (indicated by a zero or negative value), and preventing the react block from exiting.
It isn't a terrible solution, but is there a more idiomatic way of achieving this?
my $time = 0;
react {
    whenever Promise.in($time) {
        # check if time is 0
        done if $time > 0;
    }
    whenever signal(SIGINT) {
        done;
    }
    # whenever Supply ... {
    # }
}
You can actually pass Inf to Promise.in, like this:
await Promise.in(Inf);
say "never happens";
whenever Promise.new { ... }
pretty much gives you a promise that will never be kept, so the associated code will never fire. Not sure why you would do that, though.
If you want a promise that is never fulfilled, simply running Promise.new gives you one.
Somebody could still call .keep on that promise, unless you obtain a vow to prevent that.
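For comparison, the closest Python analogue of a never-kept Promise is a Future that nobody resolves (a sketch; the timeout exists only so the demo terminates):

```python
import asyncio

async def main():
    never = asyncio.get_running_loop().create_future()   # never resolved
    try:
        # without the timeout this would wait forever, like `await Promise.new`
        await asyncio.wait_for(never, timeout=0.05)
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(main())
```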

queueScheduler in rx.js 6.3 is synchronous - why doesn't this example cause a stack overflow if I use queueScheduler?

I have interesting example, not a real-life task but anyway:
import { Subject, queueScheduler } from 'rxjs';
import { take, observeOn } from 'rxjs/operators';

const signal = new Subject();
let count = 0;
const somecalculations = (count) => console.log('do some calculations with ', count);

console.log('Start');
signal.pipe(take(1500)/*, observeOn(queueScheduler)*/)
    .subscribe(() => {
        somecalculations(count);
        signal.next(count++);
        console.log('check if reached ', count);
    });
signal.next(count++);
console.log('Stop');
Subject.next works synchronously, so if I comment out observeOn(queueScheduler) it causes a stack overflow (I control the number of iterations with the take operator; on my computer, anything above 1370 causes the SO).
But if I put queueScheduler there, it works fine. QueueScheduler is synchronous, yet somehow it allows the current onNext handler to finish running before the next scheduled run starts.
Can someone explain this to me in depth, with source-code details? I've tried to dig into it, with only partial success so far. It is about how observeOn works with QueueScheduler, but the answer is escaping me.
(See the sources of observeOn, QueueScheduler.ts, and asyncScheduler.)
Thanks to cartant for the support. Here is why I think the queue scheduler works without a SO:
1. When signal.next is called the first time from observeOn's _next, queueScheduler.schedule -> AsyncScheduler.schedule -> Scheduler.schedule causes QueueAction.schedule to be called.
2. QueueAction.flush is called, and this.scheduler.flush leads to QueueScheduler.flush -> AsyncScheduler.flush.
3. The first time through, the queue is empty and no task is executing, so this.active is false. Because of this, action.execute is called. Everything runs synchronously.
4. action.execute causes the onNext function to run again. That onNext calls signal.next, which goes through points 1-3 again, but now this.active is true (we are actually still inside the previous signal.next run), so the action is merely queued.
5. The second signal.next has now been handled, and we return to action.execute of the first signal.next call. It runs in a do-while loop and shifts actions off the queue one by one. When it finishes running the first signal.next action, there is one more in the queue from the second, recursive signal.next call, so it runs action.execute for that one too.
6. The situation repeats: the first flush call manages all the subsequent ones - active is true, so each nested call just adds its action to the queue, and the outer flush drains them from it.
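The mechanism boils down to a trampoline: enqueue first, and let only the outermost call drain the queue. A minimal Python sketch of that idea (not RxJS code; names are illustrative):

```python
queue = []
active = False
log = []

def schedule(n):
    global active
    queue.append(n)            # like QueueScheduler: always enqueue first
    if active:
        return                 # an outer flush is already draining; just queue
    active = True
    try:
        while queue:           # the drain loop, like AsyncScheduler.flush
            handle(queue.pop(0))
    finally:
        active = False

def handle(n):
    log.append(n)              # the subscriber's onNext
    if n < 3:
        schedule(n + 1)        # "recursive" next: gets queued, not nested

schedule(0)                    # stack depth stays constant however long the chain
```

Because each nested `schedule` returns immediately instead of recursing into `handle`, the call stack never grows, which is exactly why the RxJS version avoids the stack overflow.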

Why does loop.run_forever() block my main thread?

While learning asyncio I was trying this code:
import asyncio
from asyncio.coroutines import coroutine

@coroutine
def coro():
    counter: int = 0
    while True:
        print("Executed " + str(counter))
        counter += 1
        yield

loop = asyncio.get_event_loop()
loop.run_until_complete(coro())
loop.run_forever()
print("Finished!")
I was expecting the coroutine to be executed only once because it contains a yield and should have returned control to the caller. The output I was expecting was:
Executed 0
Finished!
I was expecting this behaviour because I thought the loop was going to run the coroutine once every "frame", returning to the caller after each execution (something like a background thread, but in a cooperative way). But instead, it runs the coroutine forever without ever returning. The output is the following:
Executed 0
Executed 1
Executed 2
Executed 3
...
Could anyone explain why this happens instead of my expectations?
Cheers.
You have a couple of problems. When you call run_until_complete, it waits for coro to finish before moving on to your run_forever call. As you've defined it, coro never finishes. It contains an infinite loop that does nothing to break out of the loop. You need a break or a return somewhere inside the loop if you want to move on to the next step in your application.
Once you've done that, though, your next call is to run_forever, which, just as its name suggests, will run forever. And in this case it won't have anything to do because you've scheduled nothing else with the event loop.
I was expecting the coroutine to be executed only once because it contains a yield and should have returned control to the caller.
Setting aside the exact mechanics of your generator-based coroutine, awaiting (or yielding from, depending on which syntax you choose) does not return control to the caller of run_until_complete or run_forever. It returns control to the event loop, so that it can check for anything else that has been awaited and is ready to resume.
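A sketch of code that would produce the originally expected output: let the coroutine finish after yielding control once (modern async/await syntax; the list stands in for the prints):

```python
import asyncio

results = []

async def coro():
    results.append("Executed 0")
    await asyncio.sleep(0)    # returns control to the event loop, then resumes
    # falling off the end makes the coroutine complete

loop = asyncio.new_event_loop()
loop.run_until_complete(coro())   # now returns, because coro() finishes
loop.close()
results.append("Finished!")
```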

Efficient daemon in Vala

I'd like to make a daemon in Vala that only executes a task every X seconds.
I was wondering which would be the best way:
Thread.usleep() or Posix.sleep()
GLib.MainLoop + GLib.Timeout
other?
I don't want it to eat too many resources when it's doing nothing.
If you spend your time sleeping in a system call, there won't be any appreciable difference from a performance perspective. That said, it probably makes sense to use the MainLoop approach, for two reasons:
You're going to need to set up signal handlers so that your daemon can die instantly when it is given SIGTERM. If you quit your main loop by binding SIGTERM via Posix.signal, that's probably going to be more readable code than checking whether the sleep was successful.
If you ever decide to add complexity, the MainLoop will make it more straightforward.
You can use GLib.Timeout.add_seconds the following way:
Timeout.add_seconds (5, () => {
    /* Do what you want here */

    // Return Source.CONTINUE to run again in 5 seconds,
    // or Source.REMOVE to stop the timeout.
    return Source.CONTINUE;
}, Priority.LOW);
Note: the Timeout is set to Priority.LOW since it runs in the background and should give priority to other tasks.
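For comparison, the same shape in Python using only the standard library (a sketch; the stop-after-three cap exists just so the demo ends): a periodic task that sleeps cheaply when idle and still shuts down instantly on SIGTERM.

```python
import signal
import threading

stop = threading.Event()
# die instantly when the daemon is told to quit
signal.signal(signal.SIGTERM, lambda signum, frame: stop.set())

ticks = []

def do_task():
    ticks.append(len(ticks))
    if len(ticks) >= 3:        # demo only: stop ourselves after three runs
        stop.set()

def daemon_loop(interval=0.01):
    while not stop.is_set():
        do_task()
        # Event.wait sleeps in a system call (cheap when idle),
        # but returns early the moment stop.set() is called
        stop.wait(interval)

daemon_loop()
```

The Event plays the role of the MainLoop here: the process is blocked in a wait that costs nothing, yet the signal handler can wake it immediately.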
