NSOperationQueue operations and operationsCount deprecated. How to cancel operations of a specific class?

This documentation says that operations is deprecated with no hint of a replacement function:
https://developer.apple.com/documentation/foundation/nsoperationqueue/1415168-operations?language=objc
Xcode suggests addBarrierBlock: as a possible replacement, but with no documentation.
I have a dozen operations of class RenderOperation and a single operation of class RenderCompleteOperation that is dependent on all the RenderOperation objects.
My problem is that if I call cancelAllOperations, I still need my single RenderCompleteOperation to run - and if it is still pending completion of its dependencies, then its main method will never run.
So I need a way to cancel just the RenderOperation objects and can't see how to do this without calling operations.

An operation can be dependent on an operation in a different queue, so you can put your final operation on a separate queue from the worker operations and use cancelAllOperations on the worker queue.
Another option is to override the final operation's cancel function to do nothing and set its executing and finished flags manually when it finishes its task.
A third option is to keep an array of your worker operations and cancel each of them yourself in a loop (which is all cancelAllOperations does anyway).
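A minimal sketch of the first option, assuming RenderOperation and RenderCompleteOperation are your own Operation subclasses with plain initializers:

import Foundation

// Two queues: workers on one, the completion operation on the other.
let workerQueue = OperationQueue()
let finalQueue = OperationQueue()

let workers = (0..<12).map { _ in RenderOperation() }
let completion = RenderCompleteOperation()
workers.forEach { completion.addDependency($0) } // dependencies may span queues

workerQueue.addOperations(workers, waitUntilFinished: false)
finalQueue.addOperation(completion)

// Cancels only the workers. Cancelled operations still finish, which
// satisfies the dependencies, so the completion operation runs anyway.
workerQueue.cancelAllOperations()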

You can call cancel on all the dependencies of your RenderCompleteOperation:
renderCompleteOperation.dependencies.forEach { $0.cancel() }
It will then execute as soon as the cancelled dependencies finish - which, for operations that have not yet started, is almost immediately.
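If the completion operation ever gains dependencies of other classes, the same idea can be narrowed by type; a sketch using the class names from the question:

renderCompleteOperation.dependencies
    .compactMap { $0 as? RenderOperation }
    .forEach { $0.cancel() }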


Cancel currently running function/goroutine

I'm using gin-gonic as my HTTP handler. I want to prerender some graphical resources after my users make a POST request. For this, I added a middleware that assigns a function (with a timer inside) to a map[string]func() and calls this function directly after the assignment.
The problem is that when the user makes two subsequent requests, the function is called twice.
Is there any way to clear the function reference and/or its currently running call, like clearInterval or clearTimeout in JavaScript?
Thanks
No; whatever function you've scheduled to run as a goroutine needs to either return or call runtime.Goexit.
If you're looking for a way to build cancellation into your worker, Go provides a primitive to handle that, which is already part of any HTTP request - contexts. Check out these articles from the Go blog:
Concurrency Patterns: Context
Pipelines and cancellation
I suppose your rendering function is calling into a library, so you don't have control over the code where the bulk of the time is spent. If you do have such control, pass a channel into the goroutine, periodically check whether the channel is closed, and return from the goroutine if it is.
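A minimal sketch of that channel check; render and its step loop are stand-ins for the real prerendering work:

package main

import (
    "fmt"
    "time"
)

// render does its work in small chunks and checks the done channel between chunks.
func render(name string, done <-chan struct{}) {
    for step := 0; step < 100; step++ {
        select {
        case <-done:
            fmt.Println(name, "cancelled at step", step)
            return
        default:
        }
        time.Sleep(10 * time.Millisecond) // one chunk of rendering work
    }
    fmt.Println(name, "finished")
}

func main() {
    done := make(chan struct{})
    go render("hero.png", done)

    time.Sleep(50 * time.Millisecond)
    close(done) // cancels the goroutine at its next check
    time.Sleep(20 * time.Millisecond)
}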
But actually I would recommend a different, and simpler, solution: keep track (in a map) of the file names (or hashes) of the files that are currently being processed, and check that map before launching a second render of the same file.
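A sketch of that bookkeeping (tryStart and finish are illustrative names; the mutex is needed because HTTP handlers run concurrently):

package prerender

import "sync"

var (
    mu         sync.Mutex
    inProgress = map[string]bool{}
)

// tryStart reports whether a render for file may begin;
// false means one is already running.
func tryStart(file string) bool {
    mu.Lock()
    defer mu.Unlock()
    if inProgress[file] {
        return false
    }
    inProgress[file] = true
    return true
}

// finish must be called (e.g. deferred) when the render completes.
func finish(file string) {
    mu.Lock()
    defer mu.Unlock()
    delete(inProgress, file)
}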

Why does MvxCachingFragmentCompatActivity call ExecutePendingTransactions manually?

I'm using MvvmCross for Xamarin.Android. Why does MvxCachingFragmentCompatActivity call ExecutePendingTransactions manually in its ShowFragment method? What does this call actually do? As far as I can tell, it can take about a second on some devices during navigation.
Thanks
It is basically necessary if the next actions rely on the commit() having been processed before anything else is done.
For MVVMCross especially:
OnFragmentChanging(fragInfo, ft);
ft.Commit();
SupportFragmentManager.ExecutePendingTransactions();
OnFragmentChanged(fragInfo);
The snippet above is taken from MvxCachingFragmentCompatActivity; the OnFragmentChanged event should obviously only be raised once the changes have actually been applied.
From the API documentation:
After a FragmentTransaction is committed with FragmentTransaction.commit(), it is scheduled to be executed asynchronously on the process's main thread. If you want to immediately execute any such pending operations, you can call this function (only from the main thread) to do so. Note that all callbacks and other related behavior will be done from within this call, so be careful about where this is called from.

What happens when an async_write() operation never ends and there is a strand involved?

I know that the next async_write() should only be started once the previous one has finished (with or without errors, but finished).
I would like to know what happens if, while making async_write() calls, one of them takes a long time for some reason or even never completes (I assume there are no timeouts here, unlike with synchronous operations). When will that operation be considered failed? When is an operation that never ends finally removed by the OS internally?
Or maybe there are timeouts involved and my assumptions are wrong?
I mean, the write operation is handed to the OS; could it block indefinitely?
In that case the handler is never called and the next async_write() calls never happen.
NOTE: I am assuming that we are calling run() in several threads, but the write operations should be sent in order, so I am also assuming that the write handlers are wrapped with a strand.
Thank you for your time.
There are no explicit timeouts for asynchronous operations, but they can be cancelled through the IO object's cancel() member function. These operations will be considered to have failed only when the underlying OS call itself fails in a manner where a retry cannot reasonably occur. For example, if the write fails with:
EINTR, then the write will immediately be reattempted.
EWOULDBLOCK, EAGAIN, or ERROR_RETRY, then Boost.Asio will push the operation back into the job queue. This could occur if the write buffer was full, so pushing the operation back into the queue defers its reattempt, allowing other operations to be attempted.
Other errors will cause the operation to fail.
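As a sketch of imposing your own timeout with that cancel() mechanism (the five-second deadline and names are illustrative; in the question's multi-threaded setup both handlers would additionally be bound to the strand):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <memory>
#include <string>

namespace asio = boost::asio;

// Pairs an async_write with a deadline timer; whichever completes
// first cancels the other.
void write_with_timeout(asio::ip::tcp::socket& socket,
                        asio::steady_timer& timer,
                        std::shared_ptr<std::string> data)
{
    timer.expires_after(std::chrono::seconds(5));
    timer.async_wait([&socket](const boost::system::error_code& ec) {
        if (!ec)             // timer fired, i.e. it was not cancelled
            socket.cancel(); // pending write completes with operation_aborted
    });

    asio::async_write(socket, asio::buffer(*data),
        [&timer, data](const boost::system::error_code& ec, std::size_t n) {
            timer.cancel();  // the write finished first; stop the timer
            if (ec == asio::error::operation_aborted)
                std::cerr << "write timed out\n";
            else if (!ec)
                std::cout << "wrote " << n << " bytes\n";
        });
}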
There should not be an indefinite block in the system call. Boost.Asio sets the underlying IO objects to non-blocking, and provides the blocking behavior of synchronous writes by waiting on the associated file descriptor if a write failed with EWOULDBLOCK, EAGAIN, or ERROR_RETRY.
A strand is not affected by long-running asynchronous operations. Strands are used to provide strict sequential invocation of handlers, not of the operations themselves. In the case of composed operations, such as boost::asio::async_write, the intermediate handlers will also be invoked through the same strand as the final handler. Overall, this behavior helps provide thread safety, as:
All async_write_some operations initiated from intermediate handlers are within the strand.
The operation itself is not within the strand. This allows other handlers to run while the actual write is occurring.
The user handler will be invoked within the strand.
This answer may provide some more insight into composed operations and strands.
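For completeness, a sketch of the setup the question describes: several threads running the io_context while a strand serializes only the handlers (connecting the socket is elided):

#include <boost/asio.hpp>
#include <memory>
#include <string>
#include <thread>
#include <vector>

namespace asio = boost::asio;

int main()
{
    asio::io_context io;
    asio::strand<asio::io_context::executor_type> strand(io.get_executor());
    asio::ip::tcp::socket socket(io);
    // ... connect the socket ...

    auto data = std::make_shared<std::string>("hello");
    asio::async_write(socket, asio::buffer(*data),
        asio::bind_executor(strand,
            [data](const boost::system::error_code&, std::size_t) {
                // Handlers bound to the strand run one at a time, even with
                // four threads in io.run(); the writes themselves are not
                // serialized by the strand.
            }));

    std::vector<std::thread> pool;
    for (int i = 0; i != 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool)
        t.join();
}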

What's the proper way to use coroutines for event handling?

I'm trying to figure out how to handle events using coroutines (in Lua). I see that a common way of doing it is to create wrapper functions that yield the current coroutine and then resume it when the thing you're waiting for has occurred. That seems like a nice solution, but what about these problems?
How do you wait for multiple events at the same time, and branch depending on which one comes first? Or should the program be redesigned to avoid such situations?
How to cancel the waiting after a certain period? The event loop can have timeout parameters in its socket send/receive wrappers, but what about custom events?
How do you trigger the coroutine to change its state from outside? For example, I would want a function that when called, would cause the coroutine to jump to a different step, or start waiting for a different event.
EDIT:
Currently I have a system where I register a coroutine with an event, and the coroutine gets resumed with the event name and info as parameters every time the event occurs. With this system, 1 and 2 are not issues, and 3 can be solved by having the coroutine expect a special event name that makes it jump to the different step, and resuming it with that name as an argument. Also, custom objects can have methods to register event handlers the same way.
I just wonder if this is considered the right way to use coroutines for event handling. For example, if I have a read event and a timer event (as a timeout for the read), and the read event happens first, I have to manually cancel the timer. It just doesn't seem to fit the sequential nature of handling events with coroutines.
How do you wait for multiple events at the same time, and branch depending on which one comes first?
If you need to use coroutines for this, rather than just a Lua function that you register (for example, if you have a function that does stuff, waits for an event, then does more stuff), then this is pretty simple. coroutine.yield will return all of the values passed to coroutine.resume when the coroutine is resumed.
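As a tiny illustration of that round trip:

local co = coroutine.create(function()
    local event = coroutine.yield()   -- receives "click" from the resume below
    print("woke up on: " .. event)
end)

coroutine.resume(co)          -- runs the coroutine up to its yield
coroutine.resume(co, "click") -- the argument becomes yield's return value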
So just pass the event, and let the script decide for itself if that's the one it's waiting for or not. Indeed, you could build a simple function to do this:
function WaitForEvents(...)
    local events = {...}
    assert(#events ~= 0, "You must pass at least one parameter")
    repeat
        -- Registers the coroutine with the system, so that it will be
        -- resumed when an event is fired.
        RegisterForAnyEvent(coroutine.running())
        local event = coroutine.yield()
        for _, testEvt in ipairs(events) do
            if event == testEvt then
                return event -- tell the caller which event fired
            end
        end
    until false
end
This function will continue to yield until one of the events it is given has been fired. The loop assumes that RegisterForAnyEvent is temporary, registering the coroutine for just one event, so you need to re-register every time an event is fired.
How to cancel the waiting after a certain period?
Put a counter in the above loop, and leave after a certain period of time. I'll leave that as an exercise for the reader; it all depends on how your application measures time.
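For one possible shape of that exercise, assuming os.clock() is an acceptable time source and timeout becomes a new first argument:

function WaitForEventsWithTimeout(timeout, ...)
    local events = {...}
    local deadline = os.clock() + timeout
    repeat
        RegisterForAnyEvent(coroutine.running())
        local event = coroutine.yield()
        for _, testEvt in ipairs(events) do
            if event == testEvt then
                return event
            end
        end
    -- note: the deadline is only checked when some event resumes the coroutine
    until os.clock() >= deadline
    return nil -- timed out before any of the requested events fired
end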
How do you trigger the coroutine to change its state from outside?
You cannot magic a Lua function into a different "state". You can only call functions and have them return results. So if you want to skip around within some process, you must write your Lua function system to be able to be skippable.
How you do that is up to you. You could have each set of non-waiting commands be a separate Lua function. Or you could just design your wait states to be able to skip ahead. Or whatever.

data structures for scheduling workflow?

I'm wondering what kind(s) of data structures / algorithms might help facilitate handling the following situation; I'm not sure if I need a single FIFO, or a priority queue, or multiple FIFOs.
I have N objects that must proceed through a predefined workflow. Each object must complete step 1, then step 2, then step 3, then step 4, etc. Each step is either done quickly or involves a "wait" that depends on something external to finish (like the completion of a file operation or whatever). Each object maintains its own state. If I had to define an interface for these objects, it would be something like this (written below in pseudo-Java, but this question is language-agnostic):
public interface TaskObject
{
    public enum State { READY, WAITING, DONE };
    // READY = ready to execute next step
    // WAITING = awaiting some external condition
    // DONE = finished all steps

    public int getCurrentStep();
    // returns # of current step

    public int getEndStep();
    // returns # of the step which is the DONE case

    public State getState();
    // checks state and returns it.
    // multiple calls will always be identical,
    // except WAITING which can transition to READY or DONE.

    public State executeStep();
    // if READY, executes next step and returns getState().
    // otherwise, returns getState().
}
I need to write a single-threaded scheduler that calls executeStep() on the "next" object. My problem is, I'm not sure exactly what technique I should use to determine what the "next" object is. I want it to be fair (first-come, first-serve for objects not in the WAITING state).
My gut call is to have 3 FIFOs: READY, WAITING and DONE. In the beginning all objects are placed in the READY queue, and the scheduler repeats a loop where it takes the first object off the READY queue, calls executeStep(), and places it onto the queue appropriate to the result of executeStep(). Except that items in the WAITING queue need to be put into the READY or DONE queue when their state changes... argh!
Any advice?
If this has to be single-threaded you can use a single FIFO queue for the ready and waiting objects and use your thread to process each object as it comes out. If it is not yet DONE, simply stick it back into the queue and it will be reprocessed.
Something like (pseudocode):
TaskObject item = queue.getNextItem();
TaskObject.State state = item.executeStep();
if (state == TaskObject.State.DONE)
    doneList.add(item); // add to the collection of done objects
else
    queue.addItem(item); // READY or WAITING: requeue for another pass
Depending on the time executeStep takes to run, you may need to introduce a delay (a sleep, not a busy-wait) to prevent a tight polling loop. Ideally you would have the objects publish state-change events and do away with the polling altogether.
This is the kind of timeslicing approach that was commonplace in hardware and comms software before multithreading was widespread.
You don't have any way for the task object to notify you when it changes from WAITING to READY except polling it, so the WAITING and READY queues could really just be one. You can just loop around it calling executeStep() on each one in turn. If as a return value from executeStep() you receive DONE, then you remove it from that queue and stick it on the DONE queue and forget about it.
If you wanted to give "more priority" towards READY objects and attempt to run through all possible READY objects before wasting any resources polling WAITING you can maintain 3 queues like you said and only process the WAITING queue when you have nothing in the READY queue.
I personally would spend some effort to eliminate the polling of the state, and instead define an interface that the object could use to notify your scheduler when a state changes.
You might want to study the design of an operating system scheduler. Check out Linux and *BSD, for example.
Some pointers for the Linux scheduler: Inside the Linux scheduler and Understanding the Linux Kernel
NOTE - this does not address your question of how to schedule, but I would use a separate state class that defines the states and transitions. The objects should not know what states they should go through. They can be informed of what "Step" they are at, etc.
There are some patterns for that as well.
You should read up a little on operating systems - specifically the scheduler. Your example is a scaled down set of that problem and if you copy the relevant parts it should work great for you.
You can then add priority, etc.
The simplest technique that satisfies the requirements in your question is to repeatedly iterate over all TaskObjects calling executeStep() on each one.
This requires only one construct to hold the TaskObjects, and it can be any iterable structure, e.g. an array.
Since a TaskObject can transition from WAITING to READY asynchronously, you have to poll every TaskObject that you don't know is DONE.
The performance gained from not polling the DONE TaskObjects may be negligible. It depends on the processing load of calling executeStep() on a DONE TaskObject, which should be small.
A simple round-robin polling assures that once a READY TaskObject has executed a step, it will not execute another step until all other TaskObjects have had a chance to execute.
One obvious additional requirement is detecting when all TaskObjects are in the DONE state so you can stop processing.
To avoid polling DONE TaskObjects you will need to either maintain a flag for each one, or chain the TaskObjects in two queues: READY/WAITING and DONE.
If you store the TaskObjects in an array, make it an array of records, with members DoneFlag and TaskObject.
If for some reason you are storing the TaskObjects in a queue, with available enqueue() and dequeue() methods, then the overhead of two queues instead of one may be small.
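A minimal sketch of that round-robin sweep, using the TaskObject interface from the question (RoundRobinScheduler is an illustrative name):

import java.util.ArrayList;
import java.util.List;

public final class RoundRobinScheduler {
    // Each sweep gives every remaining task exactly one executeStep() call,
    // which is fair first-come, first-served.
    public static void run(List<TaskObject> tasks) {
        List<TaskObject> pending = new ArrayList<>(tasks);
        while (!pending.isEmpty()) {
            // executeStep() is a no-op for WAITING tasks, so this line both
            // polls them and advances READY ones; DONE tasks drop out.
            pending.removeIf(t -> t.executeStep() == TaskObject.State.DONE);
        }
    }
}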
-Al.
Take a look at this link.
Boost state machines vs uml
Boost has state machines. Why reinvent?
