Monitoring files asynchronously - Windows

On Unix: I’ve been through FAM and Gamin, and both seem to provide a client/server file monitoring system. I would rather have a system where I tell the kernel to monitor some inodes and it pokes me back when events occur. On that front, inotify looked promising at first: inotify_init1 lets me pass IN_NONBLOCK, which in turn causes poll() to return immediately. However, I understood that I would still have to call it regularly if I wanted news about the monitored files. Now I’m a bit short of ideas.
Is there something to monitor files asynchronously?
PS: I haven’t looked on Windows yet, but I would love to have some answers about it too.

As Celada says in the comments above, inotify and poll are the right way to do this.
Signals are not a mechanism for reasonable asynchronous programming -- and signal handlers are remarkably dangerous for the inexperienced and even for the experienced. One does not use them for such purposes voluntarily.
Instead, one should structure one's program around an event loop (see http://en.wikipedia.org/wiki/Event-driven_programming for an overall explanation) using poll, select, or some similar system call as the core of your program's event handling mechanism.
Alternatively, you can use threads, or threads plus an event loop.
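A minimal sketch of such an event loop, assuming inotify is available; the watched path /tmp and the event mask are arbitrary choices for illustration:

#include <sys/inotify.h>
#include <poll.h>
#include <unistd.h>
#include <stdio.h>
#include <errno.h>

int main(void)
{
    int ifd = inotify_init1(IN_NONBLOCK);
    if (ifd < 0 || inotify_add_watch(ifd, "/tmp", IN_CREATE | IN_MODIFY | IN_DELETE) < 0) {
        perror("inotify");
        return 1;
    }
    struct pollfd pfd = { .fd = ifd, .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) < 0) {           /* sleep until the kernel has news */
            if (errno == EINTR)
                continue;
            perror("poll");
            return 1;
        }
        char buf[4096] __attribute__((aligned(8)));
        ssize_t len = read(ifd, buf, sizeof buf);
        for (char *p = buf; len > 0 && p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            printf("mask=0x%x name=%s\n", (unsigned)ev->mask, ev->len ? ev->name : "");
            p += sizeof(*ev) + ev->len;
        }
    }
}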

However interesting your answers are, I am sorry, but I can’t accept a mechanism based on blocking calls to poll or select when the question says “asynchronously”, regardless of how deeply the blocking is hidden.
On the other hand, I found out that one can run inotify asynchronously by passing the IN_NONBLOCK flag to inotify_init1. No signals are raised, as they would be with aio; instead, a read call that would otherwise block returns immediately with errno set to EWOULDBLOCK.
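For example (a small sketch, where ifd is assumed to be the descriptor returned by inotify_init1(IN_NONBLOCK)):

char buf[4096];
ssize_t len = read(ifd, buf, sizeof buf);                  /* never blocks */
if (len < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    /* no events pending right now; go do something else and retry later */
}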

Related

Async signal or notification between processes on Windows

There are two processes running on Windows. They communicate with each other through a named pipe. When one of them is ready to send a message, I want to notify the other process asynchronously, like a signal on Linux, so that the other process doesn't need to check the pipe continuously. Is there anything similar to the signal mechanism on Windows, or some other way to solve my problem?
A direct signal mechanism which conceptually works the same way does not exist (one could probably simulate it with a thread injection hack, but don't even think about that). That is not much of a problem, though, since you can achieve the same effect in other ways.
Every waitable kernel object which can take a name such as an event or a semaphore can be accessed by different processes.
You can WaitForSingleObject on the synchronization primitive until the other process signals it. That would be a Unix-like readiness notification mechanism (not quite as elegant, but to the same effect).
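As an illustration (a sketch, not part of the original answer; the event name "my_pipe_ready" is made up), both processes simply open the same named event:

/* process A: announce that a message is ready */
HANDLE ev = CreateEventA(NULL, FALSE, FALSE, "my_pipe_ready");   /* auto-reset event */
SetEvent(ev);

/* process B: the same name refers to the same kernel object */
HANDLE ev = CreateEventA(NULL, FALSE, FALSE, "my_pipe_ready");
WaitForSingleObject(ev, INFINITE);   /* wakes up when the peer calls SetEvent */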
However, that isn't even necessary. Named pipes (not true for anonymous pipes!) can be used with overlapped I/O, which means you can use ReadFileEx to initiate a read from the pipe, and it will linger there in the background until it can complete.
You can think of this kind of I/O as "fire and forget". Your process continues running while the read operation is pending. When the read operation completes, it signals an event, or posts a completion message to a completion port (which you can query), or posts an asynchronous procedure call ("APC", a fancier name for "callback") to the thread that originally called it. That's as close to a "signal" as you can get under Windows.
Unfortunately, APCs don't quite work as one would wish, since they only execute at well-defined points (when a thread is in an "alertable wait state", which you must enter explicitly by setting the alertable flag in a wait function or by calling NtTestAlert).
The likely reasoning behind the Windows designers making it work this way is that it is "safer", but it is also more annoying from a usability point of view. Alas, that is how it works.
Note that the overlapped I/O model is the exact opposite of the readiness notification system under e.g. Linux. Rather than asking the OS whether a descriptor is ready to be read, you tell the OS to read it, and you can have yourself be notified (or verify) whether this has completed.
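A hedged sketch of that overlapped model on the client end of a named pipe; the pipe name \\.\pipe\example is an assumption, and a server must already have created the pipe:

#include <windows.h>
#include <stdio.h>

static char g_buf[512];

/* APC: runs on the thread that called ReadFileEx, but only while that
   thread sits in an alertable wait */
static VOID CALLBACK on_read_done(DWORD err, DWORD nread, LPOVERLAPPED ov)
{
    printf("read completed: error=%lu bytes=%lu\n", err, nread);
}

int main(void)
{
    HANDLE pipe = CreateFileA("\\\\.\\pipe\\example", GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = {0};
    if (!ReadFileEx(pipe, g_buf, sizeof g_buf, &ov, on_read_done))
        return 1;

    /* ... do other work here; the read lingers in the background ... */

    SleepEx(INFINITE, TRUE);          /* alertable wait: lets the APC fire */
    CloseHandle(pipe);
    return 0;
}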

Is it a good idea to implement a TCP/IP socket client-server with signals?

To clarify, I am wondering what the cons and pros are of writing a "multiple simultaneous clients to a single server" setup using TCP/IP sockets and signal handlers that are called in response to "can read / can write" signal conditions on client socket file descriptors. As far as I understand, at least the Linux kernel can use signals to notify a process of conditions related to socket descriptors. Obviously one has to be careful in a signal handler, which, again as I understand it, interrupts the process - reentrancy, atomicity, undefined state for variables, etc.
But one does not have to have the signal handlers do most of the work; in fact, quite the opposite: the handler could simply add the socket to a set of sockets ready for reading or writing, much like select, poll and epoll_wait do, and let the normal process code flow work with these sets. In effect, one emulates much the same pattern as with the functions mentioned. Purely in principle, is it doable, and is it worth it?
There are already a couple of such methods. One is the SIGIO signal; check man 7 socket and look for the section named "Signals" for more information.
The other method is standardized by POSIX and called async I/O. The functions to use are all prefixed with aio_ (for example aio_read). See this link for an example on how to use this or check the manual page.
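A sketch of the SIGIO approach (the function name enable_sigio is mine, not from the man page):

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t io_ready;

static void on_sigio(int sig) { (void)sig; io_ready = 1; }   /* only set a flag here */

static int enable_sigio(int sockfd)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigio;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGIO, &sa, NULL) < 0)
        return -1;

    /* deliver SIGIO to this process when the socket becomes readable/writable */
    if (fcntl(sockfd, F_SETOWN, getpid()) < 0)
        return -1;
    int flags = fcntl(sockfd, F_GETFL);
    return fcntl(sockfd, F_SETFL, flags | O_ASYNC | O_NONBLOCK);
}

The main loop can then check the io_ready flag (or wait with sigsuspend) instead of blocking on the socket itself.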

Triggering kevent by force

I'm using kqueue for socket synchronization in OS X. I can register an event of interest like the following:
struct kevent change;
EV_SET(&change, connected_socket, EVFILT_READ, EV_ADD, 0, 0, NULL);
kevent(k_queue_, &change, 1, NULL, 0, NULL);
And the question is, is there a way to trigger this event by force so that the waiting kevent call would return?
Some possibilities aside from natural writing of data to the other side of the socket :)
shutdown(2) the read side of that socket - you'll get EV_EOF in flags (silly),
Use the timeout argument and call the same handling function,
Use the self-pipe trick when you need to break the wait.
My question though: why do you need this?
Edit:
If I understand your comments correctly, you are looking for a way to get around edge-triggered behavior (EV_CLEAR) for write events. I believe that the proper way of doing this is to un-register your socket from EVFILT_WRITE when you don't have anything in the outgoing queue, then re-register it again when there's data to send. It's a bit more work, but that's how it works, and you don't need any additional system calls since kevent(2) accepts both changes and results. Take a look into libevent and see how it handles this sort of stuff. And you are using non-blocking sockets, right?
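A small sketch of that un-register/re-register dance (the function name is mine):

/* call with want_write = 1 when the outgoing queue becomes non-empty,
   and with 0 once it has drained */
void set_write_interest(int kq, int fd, int want_write)
{
    struct kevent ev;
    EV_SET(&ev, fd, EVFILT_WRITE, want_write ? EV_ADD | EV_CLEAR : EV_DELETE, 0, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);
}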
I would recommend a slightly different solution.
Add another registered event to the kqueue. Specifically a EVFILT_USER.
You can use this to trigger whatever behavior you want to wake the kevent() thread up for without the code looking weird or being hard to maintain.
The OS X sources have a really rough test for it in
http://www.opensource.apple.com/source/xnu/xnu-1699.24.23/tools/tests/xnu_quick_test/kqueue_tests.c
OS X 10.6 and FreeBSD 8.1 add support for EVFILT_USER, which we can use to wake up the event loop from another thread.
Note that if you use this to implement your own condition and timedwait, you still need locks in order to avoid race conditions, as explained in this excellent answer.
See my other answer for a full code example: https://stackoverflow.com/a/31174803/432
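A sketch of the EVFILT_USER idea (the ident value and the function names are mine):

#include <sys/event.h>

#define WAKEUP_IDENT 42

/* register once, on the thread that owns the kqueue */
void add_wakeup_event(int kq)
{
    struct kevent ev;
    EV_SET(&ev, WAKEUP_IDENT, EVFILT_USER, EV_ADD | EV_CLEAR, 0, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);
}

/* call from any other thread to make the waiting kevent() call return */
void wake_event_loop(int kq)
{
    struct kevent ev;
    EV_SET(&ev, WAKEUP_IDENT, EVFILT_USER, 0, NOTE_TRIGGER, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);
}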

How to deal with a second event-loop with message-dispatch?

I am working on a program which is essentially single-threaded, and its only thread is the main event-loop thread. Consequently, none of its data structures are protected by anything like a critical region.
Things worked fine until I recently integrated some new functionality based on the DirectShow API. Some DirectShow calls open a second event loop and, within that second loop, dispatch messages (i.e. invoke other event-handling callbacks unpredictably). So when a second event-handling function is invoked, it might corrupt the data structure that is being accessed by the function that invoked the DirectShow API.
I have some experience in kernel programming, and what comes to mind is that, for a single-threaded program, handling its data structures is very much like how a kernel handles per-CPU data structures. In the kernel, when a function accesses per-CPU data, it must disable interrupts (which play much the same role as message dispatching in a second event loop). However, I find there is no easy way either to avoid calling the DirectShow API or to prevent it from creating a second event loop within those calls. Is there any way?
mutexes. semaphores. locking. whatever name you want to call it, that's what you need.
There are several possible solutions that come to mind, depending on exactly what's going wrong and what your code looks like:
Make sure your data structures are in a consistent state before calling any APIs that run a modal loop.
If that's not possible, you can use a simple boolean variable to protect the structure (sketched after this list). If it's set, then simply abort any attempt to update it, or queue the update for later. Another option is to abort the previous operation.
If the problem is user-generated events, then disable the problematic menus or buttons while the operation is in progress. Alternatively, you could display a modal dialog.
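For the boolean-guard option, a minimal sketch of the idea (the names are made up; since the program is single-threaded, a plain flag is enough):

static int structure_busy = 0;     /* set while the data structure is inconsistent */

void update_structure(void)
{
    if (structure_busy)            /* re-entered from the second (modal) event loop */
        return;                    /* bail out here, or queue the update for later */
    structure_busy = 1;
    /* ... mutate the data structure; DirectShow calls that spin a modal loop
           may re-enter event handlers here, but they will see the flag ... */
    structure_busy = 0;
}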

How can I implement a blocking process in a single slot without freezing the GUI?

Let's say I have an event and the corresponding function is called. This function interacts with the outside world and so can sometimes have long delays. If the function waits or hangs, then my UI will freeze, and this is not desirable. On the other hand, having to break up my function into many parts and re-emit signals is tedious and can fragment the code a lot, which makes it harder to debug, less readable, and slows down the development process. Is there a special feature in event-driven programming which would enable me to just write the process in one function call and still let the main thread do its job while it is waiting? For example, the compiler could recognize a keyword, then implement a return and re-emit signals connected to new slots automatically? Why do I think this would be a great idea ;) I'm working with Qt.
Your two options are threading, or breaking your function up somehow.
With threading, it sounds like your ideal solution would be QtConcurrent. If all of your processing is already in one function, and the function is pretty self-contained (doesn't reference member variables of the class), this would be easy to do. If not, things might get a little more complicated.
For breaking your function up, you can either do it as you suggested and break it into different functions, with the different parts being called one after another, or you can do it in a more figurative way, by scattering calls that allow other processing inside your function. I believe calling processEvents() would do what you want, but I haven't come across its use in a long time. Of course, you can run into other problems with that unless you understand that it might cause other parts of your class to run once more (in response to other events), so you have to treat it almost as multi-threaded code, protecting variables that have an indeterminate state while you are computing.
"Is there a special feature in event driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when its waiting?"
That would be a non-blocking process.
But your original query was, "How can I implement a blocking process in a single slot without freezing the GUI?"
Perhaps what you're looking for is a way to stop other processing when some - any - process decides it's time to block? There are typically ways to do this, yes, by calling a method on one of the parent objects, which, of course, will depend on the specific objects you are using (e.g. a frame).
Look to the parent objects and see what methods they have that you'd like to use. You may need to override one of them to get exactly the results you want.
If you want to handle a GUI event by beginning a long-running task, and don't want the GUI to wait for the task to finish, you need to do it concurrently, by creating either a thread or a new process to perform the task.
You may be able to avoid creating a thread or process if the task is I/O-bound and occasional callbacks to handle I/O would suffice. I'm not familiar with Qt's main loop, but I know that GTK's supports adding event sources that can integrate into a select() or poll()-style loop, running handlers after either a timeout or when a file descriptor becomes ready. If that's the sort of task you have, you could make your event handler add such an event source to the application's main loop.
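For instance, with GLib/GTK something along these lines would work (a sketch; on_fd_ready and watch_fd are my names, and fd stands for whatever descriptor your I/O-bound task exposes):

#include <glib.h>

/* called from the main loop whenever fd becomes readable */
static gboolean on_fd_ready(GIOChannel *src, GIOCondition cond, gpointer data)
{
    /* read a chunk, update the GUI, ... */
    return TRUE;                      /* keep the watch installed */
}

static void watch_fd(int fd)
{
    GIOChannel *ch = g_io_channel_unix_new(fd);
    g_io_add_watch(ch, G_IO_IN, on_fd_ready, NULL);
}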

Resources