Note that I am using Python, but this could apply to any other GLib bindings.
I have a class that sets up several socket connections via glib.io_add_watch() with a callback method called foo(). In addition, I have a glib.idle_add() callback to a method called bar(). foo() creates or updates a list (a class member) of elements that can be any value, including None. bar() removes any None item from that list -- we're done with those and no longer care about them. In effect it cleans things up.
Does glib guarantee that only one callback will be called at any one time per thread?
If I were to run this code so that foo() is in thread one and bar() in thread two, there would be a race condition. I assume that a simple mutex would solve this, but is there a more efficient way to do it?
Callbacks added via g_io_add_watch and g_idle_add are executed in the main loop's thread, regardless of which thread they were added from.
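As a minimal sketch of what that guarantee means in practice (using PyGObject's GLib; foo, bar, and the shared list stand in for the class members described in the question, and read_from() is a hypothetical helper), both callbacks are dispatched by the same GLib.MainLoop, so they never run concurrently and the shared list needs no mutex as long as everything stays on that loop's thread:

from gi.repository import GLib

items = []  # shared list of results; entries may be None

def foo(source, condition):
    # I/O watch callback, run by the main loop when the socket is readable.
    items.append(read_from(source))  # read_from() is a hypothetical helper; may return None
    return True                      # returning True keeps the watch installed

def bar():
    # Idle callback, run by the same main loop: prune the None entries.
    items[:] = [x for x in items if x is not None]
    return True                      # returning True keeps the idle callback scheduled

# GLib.io_add_watch(sock, GLib.IO_IN, foo)  # one watch per socket; the exact signature varies by bindings version
GLib.idle_add(bar)
GLib.MainLoop().run()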
This documentation says that the operations property is deprecated, with no hint of a replacement:
https://developer.apple.com/documentation/foundation/nsoperationqueue/1415168-operations?language=objc
Xcode lists a possible replacement as addBarrierBlock: but with no documentation.
I have a dozen operations of class RenderOperation and a single operation of class RenderCompleteOperation that is dependent on all the RenderOperation objects.
My problem is that if I call cancelAllOperations, I still need my single RenderCompleteOperation to run - and if it is still pending completion of its dependencies, then its main method will never run.
So I need a way to cancel just the RenderOperation objects and can't see how to do this without calling operations.
An operation can be dependent on an operation in a different queue, so you can put your final operation on a separate queue from the worker operations and use cancelAllOperations on the worker queue.
Another option is to override the final operation's cancel function to do nothing and set executing and finished manually when it finishes its task.
A third option is to keep an array of your worker operations and cancel each of them yourself in a loop (which is all cancelAllOperations does anyway).
You can call cancel on all the dependencies of your RenderCompleteOperation:
renderCompleteOperation.dependencies.forEach { $0.cancel() }
It will then execute immediately.
I noticed the asyncio library has a loop.add_signal_handler(signum, callback, *args) method.
So far I have just been catching Unix signals in the main file using the signal module alongside my asynchronous code, like this:
signal.signal(signal.SIGHUP, callback)
async def main():
...
Is that an oversight on my part?
The add_signal_handler documentation is sparse [1], but looking at the source, it appears that the main added value compared to signal.signal is that add_signal_handler ensures that the signal wakes up the event loop and allows the loop to invoke the signal handler along with other queued callbacks and runnable coroutines.
So far I have just been catching unix signals in the main file using the signals module [...] Is that an oversight on my part?
That depends on what the signal handler is doing. Printing a message or updating a global is fine, but if it is invoking anything in any way related to asyncio, it's most likely an oversight. A signal can be delivered at (almost) any time, including during execution of an asyncio callback, a coroutine, or even during asyncio's own bookkeeping.
For example, the implementation of asyncio.Queue freely assumes that the access to the queue is single-threaded and non-reentrant. A signal handler adding something to a queue using q.put_nowait() would be disastrous if it interrupted an on-going invocation of q.put_nowait() on the same queue. Similar to typical race conditions experienced in multi-threaded code, an interruption in the middle of assignment to _unfinished_tasks might well cause it to get incremented only once instead of twice (once for each put_nowait).
Asyncio code is designed for cooperative multitasking, where the points at which a function may suspend are clearly denoted by the await and related keywords. The add_signal_handler function ensures that your signal handler gets invoked at such a point, so you're free to implement it as you'd implement any other asyncio callback.
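For example, here is a minimal sketch of that pattern (the SIGHUP handler, the queue, and the "reload" message are made up for illustration). The handler registered with add_signal_handler runs as an ordinary loop callback, so calling put_nowait() from it is safe; the commented-out signal.signal variant could instead fire at an arbitrary point:

import asyncio
import signal

def handle_hup(queue):
    # Runs as an ordinary event-loop callback, never in the middle of other
    # asyncio code, so touching asyncio objects here is safe.
    queue.put_nowait("reload")

async def main():
    queue = asyncio.Queue()  # hypothetical work queue consumed below
    loop = asyncio.get_running_loop()
    loop.add_signal_handler(signal.SIGHUP, handle_hup, queue)
    # signal.signal(signal.SIGHUP, lambda signum, frame: queue.put_nowait("reload"))
    # ^ by contrast, this handler could interrupt asyncio's own bookkeeping.
    while True:
        print(await queue.get())

asyncio.run(main())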
[1] When this answer was originally written, the add_signal_handler documentation was briefer than it is today and didn't cover the difference from signal.signal at all. This question prompted it being expanded in the meantime.
I remember there was a way to do this, something similar to Unix signals but not so widely used, though I can't remember the term. No events/mutexes are used: the thread is just interrupted at a random place, the function is called, and when it returns, the thread continues.
Windows has Asynchronous Procedure Calls, which can call a function in the context of a specific thread. APCs do not just interrupt a thread at a random place (that would be dangerous - the thread could be in the middle of writing to a file, obtaining a lock, or running in kernel mode). Instead, an APC will be dispatched when the target thread enters an alertable wait by calling a specific function (see the APC documentation).
If the reason that you need to call code in a specific thread is because you are interacting with the user interface, it would be more direct to send or post a window message to the window handle that you want to update. Window messages are always processed in the thread that created the window.
You can look up RtlRemoteCall, though it is an undocumented routine. Windows also has APCs, which are semantically similar to Unix signals; however, an APC requires the target thread to be in an alertable state before it can be delivered, and it is not guaranteed that this condition is ever met.
I'm trying to figure out how to handle events using coroutines (in Lua). I see that a common way of doing it is to create wrapper functions that yield the current coroutine and then resume it when the thing you're waiting for has occurred. That seems like a nice solution, but what about these problems?
How do you wait for multiple events at the same time, and branch depending on which one comes first? Or should the program be redesigned to avoid such situations?
How to cancel the waiting after a certain period? The event loop can have timeout parameters in its socket send/receive wrappers, but what about custom events?
How do you trigger the coroutine to change its state from outside? For example, I would want a function that when called, would cause the coroutine to jump to a different step, or start waiting for a different event.
EDIT:
Currently I have a system where I register a coroutine with an event, and the coroutine gets resumed with the event name and info as parameters every time the event occurs. With this system, 1 and 2 are not issues, and 3 can be solved by having the coroutine expect a special event name that makes it jump to the different step, and resuming it with that name as an argument. Also, custom objects can have methods to register event handlers the same way.
I just wonder if this is considered the right way to use coroutines for event handling. For example, if I have a read event and a timer event (as a timeout for the read), and the read event happens first, I have to manually cancel the timer. It just doesn't seem to fit the sequential nature of handling events with coroutines.
How do you wait for multiple events at the same time, and branch depending on which one comes first?
If you need to use coroutines for this, rather than just a Lua function that you register (for example, if you have a function that does stuff, waits for an event, then does more stuff), then this is pretty simple. coroutine.yield will return all of the values passed to coroutine.resume when the coroutine is resumed.
So just pass the event, and let the script decide for itself if that's the one it's waiting for or not. Indeed, you could build a simple function to do this:
function WaitForEvents(...)
    local events = {...}
    assert(#events > 0, "You must pass at least one parameter")
    repeat
        -- Register the coroutine with the system, so that it will be resumed
        -- when any event is fired.
        RegisterForAnyEvent(coroutine.running())
        local event = coroutine.yield()
        for _, testEvt in ipairs(events) do
            if event == testEvt then
                return event
            end
        end
    until false
end
This function will keep yielding until one of the events it was given has been fired, and will then return that event. The loop assumes that RegisterForAnyEvent is temporary, registering the coroutine for just the next event, so you need to re-register every time an event is fired.
How to cancel the waiting after a certain period?
Put a counter in the above loop, and leave after a certain period of time. I'll leave that as an exercise for the reader; it all depends on how your application measures time.
How do you trigger the coroutine to change its state from outside?
You cannot magic a Lua function into a different "state". You can only call functions and have them return results. So if you want to skip around within some process, you must write your Lua functions so that they can be skipped.
How you do that is up to you. You could have each set of non-waiting commands be a separate Lua function. Or you could just design your wait states to be able to skip ahead. Or whatever.
I've created two threads A & B using the CreateThread Windows API. I'm trying to send data from thread A to thread B.
I know I can create an Event object in one thread and wait for it in the other using the WaitForSingleObject function. But all that event does is signal the thread; that's it. How can I send data along with it? Also, I don't want thread B to block until thread A signals: it has its own job to do, so I can't make it wait.
I can't find a Windows function that will let me send data between the worker thread and the main thread, referencing the worker thread either by thread ID or by the returned HANDLE. I do not want to introduce an MFC dependency into my project and would like to hear any suggestions as to how others would handle, or have handled, this situation. Thanks in advance for any help!
First of all, you should keep in mind that Windows provides a number of mechanisms to deal with threading for you: I/O Completion Ports, old thread pools and new thread pools. Depending on what you're doing any of them might be useful for your purposes.
As to "sending" data from one thread to another, you have a couple of choices. Windows message queues are thread-safe, and a a thread (even if it doesn't have a window) can have a message queue, which you can post messages to using PostThreadMessage.
I've also posted code for a thread-safe queue in another answer.
As far as having the thread continue executing, but take note when a change has happened, the typical method is to have it call WaitForSingleObject with a timeout value of 0, then check the return value -- if it's WAIT_OBJECT_0, the Event (or whatever) has been set, so it needs to take note of the change. If it's WAIT_TIMEOUT, there's been no change, and it can continue executing. Either way, WaitForSingleObject returns immediately.
Since the two threads are in the same process (at least that's what it sounds like), then it is not necessary to "send" data. They can share it (e.g., a simple global variable). You do need to synchronize access to it via either an event, semaphore, mutex, etc.
Depending on what you are doing, it can be very simple.
#include <windows.h>

int    g_data;   /* "some global data" shared between the threads */
HANDLE g_sem;    /* semaphore created elsewhere, e.g. CreateSemaphore(NULL, 0, 1, NULL) */

DWORD WINAPI Thread1Func(LPVOID param) {
    g_data = 42;                          /* set some global data */
    ReleaseSemaphore(g_sem, 1, NULL);     /* signal that it is available */
    return 0;
}

DWORD WINAPI Thread2Func(LPVOID param) {
    WaitForSingleObject(g_sem, INFINITE); /* check/wait until the data is available */
    /* ... use g_data ... */
    return 0;
}
If you are concerned with minimizing Windows dependencies, and assuming you are coding in C++, then I recommend using Boost.Threads, which is a pretty nice, Posix-like C++ threading interface. This will give you easy portability between Windows and Linux.
If you go this route, then use a mutex to protect any data shared across threads, and a condition variable (combined with the mutex) to signal one thread from the other.
Don't use a mutex when you are only working within a single process, because it has more overhead (it is a kernel object that can be shared system-wide)... Place a critical section around your data and try to enter it (as Jerry Coffin did in his code for the thread-safe queue).