How to deal with a second event-loop with message-dispatch? - windows

I am working on a program which is essentially single-threaded: its only thread is the main event-loop thread. Consequently, none of its data structures are protected by anything like a critical section.
Things worked fine until it recently integrated some new functionality based on the DirectShow API. Some DirectShow calls open a second event loop and dispatch messages from within it (i.e. they invoke other event-handling callbacks unpredictably). So when a second event handler is invoked this way, it can corrupt the data structure that is still being accessed by the function that invoked the DirectShow API.
I have some experience in kernel programming. What comes to mind is that, for a single-threaded program, handling its data structures is very much like how a kernel handles per-CPU data: when a kernel function accesses per-CPU data, it must disable interrupts (which are very like the message dispatching in a second event loop). However, I can find no easy way either to avoid invoking the DirectShow API or to prevent it from creating a second event loop. Is there any way?

mutexes. semaphores. locking. whatever name you want to call it, that's what you need.

There are several possible solutions that come to mind, depending on exactly what's going wrong in your code:
Make sure your data structures are in a consistent state before calling any APIs that run a modal loop.
If that's not possible, you can use a simple boolean variable to protect the structure: if it's set, then simply abort any attempt to update it, or queue the update for later (see the sketch below). Another option is to abort the previous operation.
If the problem is user-generated events, then disable the problematic menus or buttons while the operation is in progress. Alternatively, you could display a modal dialog.
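To make the flag option concrete, here is a minimal sketch, assuming a single-threaded program; the names (Change, g_inModalOperation, UpdateModel, RunDirectShowOperation, and the stub helpers) are hypothetical stand-ins, not anything from DirectShow itself:

```cpp
#include <vector>

// Hypothetical stand-ins for the program's real data and operations.
struct Change { int value; };
static std::vector<Change> g_pendingChanges;
static bool g_inModalOperation = false;   // true while a DirectShow call may pump messages

static void ApplyChange(const Change&) { /* touch the unprotected data structures */ }
static void CallDirectShowApiThatPumpsMessages() { /* the call that runs a second loop */ }

// Message handler that can be re-entered from the second event loop.
void UpdateModel(const Change& change)
{
    if (g_inModalOperation)
    {
        g_pendingChanges.push_back(change);   // defer instead of touching data mid-operation
        return;
    }
    ApplyChange(change);
}

void RunDirectShowOperation()
{
    g_inModalOperation = true;
    CallDirectShowApiThatPumpsMessages();     // re-entrant dispatch can happen in here
    g_inModalOperation = false;

    for (const Change& c : g_pendingChanges)  // now it is safe to apply deferred updates
        ApplyChange(c);
    g_pendingChanges.clear();
}
```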

Related

Monitoring files asynchronously

On Unix: I’ve been through FAM and Gamin, and both seem to provide a client/server file monitoring system. I would rather have a system where I tell the kernel to monitor some inodes and it pokes me back when events occur. Inotify looked promising at first on that front: inotify_init1 let me pass IN_NONBLOCK, which in turn caused poll() to return immediately. However, I understood that I would have to call it regularly if I wanted news about the monitored files. Now I’m a bit short of ideas.
Is there something to monitor files asynchronously?
PS: I haven’t looked on Windows yet, but I would love to have some answers about it too.
As Celada says in the comments above, inotify and poll are the right way to do this.
Signals are not a mechanism for reasonable asynchronous programming -- and signal handlers are remarkably dangerous for the inexperienced and even for the experienced. One does not use them for such purposes voluntarily.
Instead, one should structure one's program around an event loop (see http://en.wikipedia.org/wiki/Event-driven_programming for an overall explanation) using poll, select, or some similar system call as the core of your program's event handling mechanism.
Alternatively, you can use threads, or threads plus an event loop.
However interesting your answers are, I am sorry, but I can’t accept a mechanism based on blocking calls to poll or select when the question says “asynchronously”, regardless of how deeply the blocking is hidden.
On the other hand, I found out that one can run inotify asynchronously by passing the IN_NONBLOCK flag to inotify_init1. Signals are not triggered as they would have been with aio, and a read call that would otherwise block instead returns immediately with errno set to EWOULDBLOCK.
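For what it's worth, a minimal sketch of that approach might look like the following; the watched path /tmp and the sleep interval are arbitrary examples:

```cpp
#include <sys/inotify.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

int main()
{
    // IN_NONBLOCK means read() on this descriptor never blocks.
    int fd = inotify_init1(IN_NONBLOCK);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    if (inotify_add_watch(fd, "/tmp", IN_MODIFY | IN_CREATE | IN_DELETE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    for (;;) {
        // Buffer aligned for struct inotify_event, as the man page recommends.
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        ssize_t len = read(fd, buf, sizeof buf);        // returns immediately
        if (len < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                usleep(100 * 1000);                     // nothing pending: do other work
                continue;
            }
            perror("read");
            return 1;
        }
        for (char* p = buf; p < buf + len; ) {
            const struct inotify_event* ev = (const struct inotify_event*)p;
            printf("mask=0x%x name=%s\n", (unsigned)ev->mask, ev->len ? ev->name : "");
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
}
```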

Should I use Mutex OR Critical Section for Windows Mobile RIL

I am using the Radio Interface Layer (RIL) Native APIs in a Windows Mobile application. In this API, the results of most functions are not returned immediately but are delivered through a callback function that you pass to the RIL API.
Some usage examples are found at XDA Development Tools and Google Gears Geolocation API.
What I noticed is that, in these two examples, a mutex is used to guard the data instead of some other synchronization object.
Now, will a critical section do fine here in the use cases described by both examples? Which thread or process will actually call the callback functions?
Edit:
My data is accessed by my code only from inside my process, but which thread/process calls the callback functions in the RIL API? I mean, I passed a callback function to the RIL API, but is the callback called from another process? In that case, it would give another explanation of why the samples are using a mutex. If the RIL API actually creates a thread inside my process and calls my callback functions from it, then I think a critical section would be fine (and it's faster than a mutex).
Update:
I have data which is (1) accessed by my code from within my own process and (2) also modified from a callback function. The callback is made by the RIL API.
My Question: Which thread/process is calling the callback functions in the RIL API?
The Story so far:
Me: Hi Mr RIL, please put some data into my office (a.k.a variables).
RIL: OK Sir. I will put the data later and I will signal you when it is done (I used an event here).
An access card is required to enter my office. If Mr RIL is from the same company as me, Mr RIL can use his own access card to enter my office (in my case, it means a Critical Section). If he is from another company, I will need to set up an access card/visitor card for him (in my case, I need a mutex here).
If Mr RIL uses his own access card, it means I don't need to set up an access card/visitor card for him and that means less trouble for me. (i.e. Critical Section is faster than a Mutex)
The problem is, I just met this Mr RIL a few days ago and I don't know much about him. I don't know if he is from the same company as me. One option, as mentioned by nobugz, is to set up an access card for Mr RIL regardless of whether he is from the same company as me. This way, Mr RIL is guaranteed to be able to enter my office. (my data/variables are guaranteed to be safe)
Right now I use a mutex in my code (I set up a possibly redundant access card for Mr RIL).
Aha! I just got an idea while writing this. I think I will just ask Mr RIL which company he is from. That way, I don't have to set up an access card for him in the future if he turns out to be from the same company as me. (i.e. put GetCurrentProcessId() and GetCurrentThreadId() in the callback function)
The Windows Mobile RIL normally resides in device.exe (for WM6.x). However, when your process invokes the RIL, your call passes via the RIL Proxy.
The RIL proxy is linked with, and resides in, your process, and handles all of the issues associated with process boundaries for you (as an aside, this is at least part of the reason why all RIL data structures need to be packed into a single block of memory of known size). Internally, the RIL proxy creates a thread on which your callback is executed.
This means that your code can use a CRITICAL_SECTION object to provide the necessary synchronization/protection.
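A minimal sketch of that, with a simplified, hypothetical callback signature (the real RIL callback prototype takes more parameters) and hypothetical names such as g_result and OnRilResult:

```cpp
#include <windows.h>

static CRITICAL_SECTION g_cs;    // initialise once with InitializeCriticalSection(&g_cs)
static DWORD g_result;           // shared state written by the callback, read by your code

// Hypothetical, simplified callback; in reality you register it when initialising the RIL,
// and it runs on the RIL proxy's callback thread inside your process.
void CALLBACK OnRilResult(const void* lpData, DWORD cbData)
{
    EnterCriticalSection(&g_cs);
    if (lpData && cbData >= sizeof(DWORD))
        g_result = *(const DWORD*)lpData;     // copy the result into shared state
    LeaveCriticalSection(&g_cs);
    // SetEvent(...) could signal the waiting code here, as in the question.
}

DWORD ReadResult()                            // called from your own threads
{
    EnterCriticalSection(&g_cs);
    DWORD value = g_result;
    LeaveCriticalSection(&g_cs);
    return value;
}
```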
The point of using the mutex is that you don't know what thread might make the callback. Yes, a critical section would work too. Careful: getting it wrong causes random failures that are very hard to diagnose.
A critical section is a mutex. A critical section is different from a normal mutex (at least primarily) in one way: it's specific to one process, whereas a mutex can be used across processes.
So, in this case, the basic question is exactly what you're protecting -- if it's the data inside your program, that won't be accessible to another process, then a critical section should do the job nicely. If you're protecting something that would be shared by the two processes if the user were to run two instances of your program at once, then you probably need a mutex.
Edit: As far as having to use a critical section to protect what the RIL itself does, no, that isn't (or at least definitely shouldn't be) needed. With a mutex, you're counting on all the processes cooperating by opening a mutex with the same name to control access to the shared resource(s). You can't count on that, so if it were needed the interface would be completely broken.
Update: unless they're doing something really unusual in the RIL, the callback will happen within your process, so a critical section should be adequate. If it's modifying your data, that means your data is mapped and visible to that code -- which means the data in the critical section will also be mapped and visible, and it'll work. The time a critical section doesn't work is when you're dealing with separate processes, where the data in one isn't mapped/visible to the other.
Well, one other difference between a mutex and a critical section (Windows implementations, of course) is that a critical section is re-entrant - i.e. the same thread can acquire the critical section twice without having to release it.
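For illustration, a tiny sketch of that re-entrancy (Outer and Inner are hypothetical names): the owning thread may enter the same CRITICAL_SECTION again without deadlocking, as long as every Enter is balanced by a Leave:

```cpp
#include <windows.h>

static CRITICAL_SECTION cs;

void Inner() { EnterCriticalSection(&cs); /* ... */ LeaveCriticalSection(&cs); }
void Outer() { EnterCriticalSection(&cs); Inner(); LeaveCriticalSection(&cs); }  // no deadlock

int main()
{
    InitializeCriticalSection(&cs);
    Outer();
    DeleteCriticalSection(&cs);
    return 0;
}
```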

How can I implement a blocking process in a single slot without freezing the GUI?

Let's say I have an event and the corresponding function is called. This function interacts with the outside world and so can sometimes have long delays. If the function waits or hangs, then my UI will freeze, and this is not desirable. On the other hand, having to break my function up into many parts and re-emit signals is tedious and can fragment the code a lot, which makes it harder to debug, less readable, and slows down development. Is there a special feature in event-driven programming which would enable me to just write the process as one function call and still let the main thread do its job while it is waiting? For example, the compiler could recognize a keyword, implement a return there, and re-emit signals connected to new slots automatically. Why do I think this would be a great idea ;) I'm working with Qt.
Your two options are threading, or breaking your function up somehow.
With threading, it sounds like your ideal solution would be QtConcurrent. If all of your processing is already in one function, and the function is pretty self-contained (doesn't reference member variables of the class), this would be easy to do. If not, things might get a little more complicated.
For breaking your function up, you can either do it as you suggested and split it into different functions, with the different parts being called one after another, or you can do it in a more figurative way, by scattering calls that allow other processing inside your function. I believe calling processEvents() would do what you want, but I haven't come across its use in a long time. Of course, you can run into other problems with that unless you understand that it might cause other parts of your class to run again (in response to other events), so you have to treat it almost as multi-threaded code and protect variables that are in an indeterminate state while you are computing.
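As an illustration of the QtConcurrent route, here is a minimal sketch; the names longCalculation and Controller are hypothetical, and it assumes the long-running work can live in a self-contained free function:

```cpp
#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QObject>

// Hypothetical long-running, blocking computation (no access to GUI objects).
static int longCalculation(int input)
{
    // ... slow work here ...
    return input * 2;
}

class Controller : public QObject
{
    Q_OBJECT
signals:
    void resultReady(int value);

public slots:
    void startWork()
    {
        auto* watcher = new QFutureWatcher<int>(this);
        connect(watcher, &QFutureWatcher<int>::finished, this, [this, watcher]() {
            emit resultReady(watcher->result());   // delivered back on the GUI thread
            watcher->deleteLater();
        });
        // Runs longCalculation on a thread-pool thread; the GUI loop keeps spinning.
        watcher->setFuture(QtConcurrent::run(longCalculation, 42));
    }
};
```

Connect resultReady to whatever slot updates your widgets; the GUI never blocks while the calculation runs.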
"Is there a special feature in event driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when its waiting?"
That would be a non-blocking process.
But your original query was, "How can I implement a blocking process in a single slot without freezing the GUI?"
Perhaps what you're looking for is a way to stop other processing when some - any - process decides it's time to block? There are typically ways to do this, yes, by calling a method on one of the parental objects, which, of course, will depend on the specific objects you are using (e.g. a frame).
Look to the parent objects and see what methods they have that you'd like to use. You may need to override one of them to get exactly the results you want.
If you want to handle a GUI event by beginning a long-running task, and don't want the GUI to wait for the task to finish, you need to do it concurrently, by creating either a thread or a new process to perform the task.
You may be able to avoid creating a thread or process if the task is I/O-bound and occasional callbacks to handle I/O would suffice. I'm not familiar with Qt's main loop, but I know that GTK's supports adding event sources that can integrate into a select() or poll()-style loop, running handlers after either a timeout or when a file descriptor becomes ready. If that's the sort of task you have, you could make your event handler add such an event source to the application's main loop.

What can I access from a BackgroundWorker without "Cross Threading"?

I realise that I can't access Form controls from the DoWork event handler of a BackgroundWorker. (And if I try to, I get an Exception, as expected).
However, am I allowed to access other (custom) objects that exist on my Form?
For instance, I've created a "Settings" class and instantiated it in my Form and I seem to be able to read and write to its properties.
Is it just luck that this works?
What if I had a static class? Would I be able to access that safely?
@Engram:
You've got the gist of it - CrossThreadCalls are just a nice feature MS put into the .NET Framework to prevent the "bonehead" type of parallel programming mistakes. It can be overridden, as I'm guessing you've already found out, by setting the "AllowCrossThreadCalls" property on the class (and not on an instance of the class, e.g. set Label.AllowCrossThreadCalls and not lblMyLabel.AllowCrossThreadCalls).
But more importantly, you're right about the need to use some kind of locking mechanism. Whenever you have multiple threads of execution (be they threads, processes or whatever), you need to make sure that when one thread is reading or writing a variable, no other thread barges in and changes that value under the feet of the first thread.
The .NET Framework actually provides several other mechanisms which might be more useful, depending on circumstances, than locking in code. The first is to use a Monitor class, which has the effect of locking a particular object. When you use this, other threads can continue to execute, as long as they don't try to lock that same object. Another very useful and common parallel-programming idea is the Mutex (or Semaphore). The Mutex is basically like a game of Capture the Flag between your threads. If one thread grabs the flag, no other threads can grab it until the first thread drops it. (A Semaphore is just like a Mutex, except that there can be more than one flag in a game.)
Obviously, none of these concepts will work in every particular problem - but having a few more tools to help you out might come in handy some day :)
You should communicate to the user interface through the ProgressChanged and RunWorkerCompleted events (and never the DoWork() method as you have noted).
In principle, you could use InvokeRequired, but the designers of the BackgroundWorker class created the ProgressChanged callback event for the purpose of updating UI elements.
[Note: BackgroundWorker events are not marshaled across AppDomain boundaries. Do not use a BackgroundWorker component to perform multithreaded operations in more than one AppDomain.]
MSDN Ref.
Ok, I've done some more research on this and I think have an answer. (Let the votes decide if I'm right!)
The answer is... you can access any custom object that's in scope; however, your access will not be thread-safe.
To ensure that it is thread-safe you should probably be using lock. The lock keyword prevents more than one thread executing a particular piece of code. (Subject to actually using it properly!)
The Cross Threading Exception that occurs when you try to access a Control is a safety mechanism designed especially for Controls. (It's easier and probably more efficient to get the user to make thread-safe calls than it is to design the controls themselves to be thread-safe.)
You can't access controls that were created in one thread from another thread.
You can either use the Settings class that you mentioned, or use the InvokeRequired property and Invoke methods of the control.
I suggest you look at the examples on those pages:
http://msdn.microsoft.com/en-us/library/ms171728.aspx
http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired.aspx

OS X inter thread communication question

I am developing a multi-threaded application in Cocoa. The main thread takes values from the user, and when a button is clicked I invoke a secondary thread in which a long calculation takes place. Now from this thread I have to return the output of every step of the calculation to the main thread. I want to periodically send data from one thread to the other. I can't find any simple example that does this. Any ideas?
There are a number of ways to do this, in rough order of complexity (easiest first):
use NSObject's performSelectorOnMainThread:withObject:waitUntilDone: which is pretty self explanatory.
use performSelector:onThread:withObject:waitUntilDone:, which will let you go the other way
use an NSNotification (and NSDistributedNotificationCenter), though you can easily run into a race condition if you're not careful
Use NSPorts to send data back and forth
Check out the doc that Abizer mentioned for details on all of these.
performSelectorOnMainThread:withObject:waitUntilDone: is often the easiest way to update the UI with a background thread's progress. You could also create your own storage area that's safe to access between threads using NSLock or a similar mechanism, or even use distributed objects (which also works between processes or over a network).
Then there's NSOperationQueue and NSOperation, which do help a lot to simplify multi-threaded programming, although a lot of programmers have been avoiding them since they can cause a crash in certain circumstances under Leopard.
Have a look at the Apple docs for this.
You may need to create an ADC member account, but this is free
Multi-threaded Cocoa Programs

Resources