Race: callbacks and removing callbacks during unload of a kext on macOS

There doesn't appear to be any synchronization between establishing/removing callbacks (e.g. kauth_unlisten_scope) and the callbacks themselves (in the xnu codebase; yes, I know, it's dated). This puts the burden of tracking/draining callbacks and synchronizing with calls on the extension itself. But that is problematic too, in that there is a window between a thread noting that it has exited the callback and that thread actually returning out of the extension code.
Is there any pattern that correctly avoids this race? Or is there any documentation from Apple indicating that they've synchronized this correctly?

As far as I'm aware there's no 100% reliable way of preventing kauth callback deregistration race conditions; the API is just badly designed. Apple themselves implement and recommend a simple atomic-counter-based mechanism, which you can see in the Kauth-O-Rama example: search for gActivationCount in the KauthORama.c source file. There's still a small chance that a thread is running code before the increment or after the decrement in the callback, but I have never seen a crash caused by this.
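For reference, the mechanism has roughly this shape. This is a condensed sketch modeled on what Kauth-O-Rama does, not a drop-in implementation; names like MyListener and RemoveListener are mine:

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/systm.h>
#include <sys/kauth.h>
#include <libkern/OSAtomic.h>

static kauth_listener_t gListener;
static SInt32 gActivationCount;    // number of threads currently inside the callback

static int MyListener(kauth_cred_t cred, void *idata, kauth_action_t action,
                      uintptr_t arg0, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
{
    (void) OSIncrementAtomic(&gActivationCount);

    // ... actual authorization work goes here ...

    (void) OSDecrementAtomic(&gActivationCount);
    return KAUTH_RESULT_DEFER;
}

// Called on unload: deregister, then poll until in-flight callbacks drain.
// The residual race described above lives between the OSDecrementAtomic and
// the callback's return executing in soon-to-be-unmapped extension code.
static void RemoveListener(void)
{
    if (gListener != NULL) {
        kauth_unlisten_scope(gListener);
        gListener = NULL;
    }
    do {
        struct timespec oneSecond = { 1, 0 };
        (void) msleep(&gActivationCount, NULL, PUSER, "listener_drain", &oneSecond);
    } while (gActivationCount > 0);
}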

Related

What is the use-case for TryEnterCriticalSection?

I've been using Windows CRITICAL_SECTION since the 1990s and I've been aware of the TryEnterCriticalSection function since it first appeared. I understand that it's supposed to help me avoid a context switch and all that.
But it just occurred to me that I have never used it. Not once.
Nor have I ever felt I needed to use it. In fact, I can't think of a situation in which I would.
Generally when I need to get an exclusive lock on something, I need that lock and I need it now. I can't put it off until later. I certainly can't just say, "oh well, I won't update that data after all". So I need EnterCriticalSection, not TryEnterCriticalSection.
So what exactly is the use case for TryEnterCriticalSection?
I've Googled this, of course. I've found plenty of quick descriptions of how to use it but almost no real-world examples of why. I did find this example from Intel that, frankly, doesn't help much:
CRITICAL_SECTION cs;
void threadfoo()
{
    while (TryEnterCriticalSection(&cs) == FALSE)
    {
        // some useful work
    }
    // Critical Section of Code
    LeaveCriticalSection(&cs);
    // other work
}
What exactly is a scenario in which I can do "some useful work" while I'm waiting for my lock? I'd love to avoid thread-contention but in my code, by the time I need the critical section, I've already been forced to do all that "useful work" in order to get the values that I'm updating in shared data (for which I need the critical section in the first place).
Does anyone have a real-world example?
As an example you might have multiple threads that each produce a high volume of messages (events of some sort) that all need to go on a shared queue.
Since there's going to be frequent contention on the lock on the shared queue, each thread can have a local queue and then, whenever the TryEnterCriticalSection call succeeds for the current thread, it copies everything it has in its local queue to the shared one and releases the CS again.
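A sketch of that pattern follows; the names and fixed-size buffers are illustrative, not from any particular library, and InitializeCriticalSection(&gQueueLock) is assumed to have been called once at startup:

#include <windows.h>

#define LOCAL_MAX  256
#define SHARED_MAX 65536

static CRITICAL_SECTION gQueueLock;   // protects gShared and gSharedCount
static int gShared[SHARED_MAX];
static int gSharedCount;

typedef struct {
    int items[LOCAL_MAX];
    int count;
} LocalQueue;                         // one per thread, touched only by its owner

static void Flush(LocalQueue *local)  // caller must hold gQueueLock
{
    for (int i = 0; i < local->count && gSharedCount < SHARED_MAX; i++)
        gShared[gSharedCount++] = local->items[i];
    local->count = 0;                 // overflow handling omitted for brevity
}

static void Produce(LocalQueue *local, int msg)
{
    local->items[local->count++] = msg;          // fast path: no lock at all

    if (TryEnterCriticalSection(&gQueueLock)) {
        Flush(local);                            // got the lock cheaply: drain the batch
        LeaveCriticalSection(&gQueueLock);
    } else if (local->count == LOCAL_MAX) {
        EnterCriticalSection(&gQueueLock);       // buffer full: now we must wait
        Flush(local);
        LeaveCriticalSection(&gQueueLock);
    }
    // Otherwise keep producing; we'll retry the flush on the next message.
}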
In C++11 there is std::lock, which employs a deadlock-avoidance algorithm.
In C++17 this was elaborated into the std::scoped_lock class.
The algorithm tries to lock the mutexes in one order, then in another, until it succeeds, and it takes try_lock to implement this approach.
Having a try_lock method is what the C++ Lockable named requirement adds, whereas mutexes with only lock and unlock are merely BasicLockable.
So if you build a C++ mutex on top of CRITICAL_SECTION and want it to satisfy Lockable, or if you want to implement deadlock avoidance directly on CRITICAL_SECTION, you'll need TryEnterCriticalSection.
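For illustration, here is a minimal sketch of that back-off algorithm applied directly to two critical sections (both assumed already initialized). The key property is that it never blocks on one lock while holding the other, which is what rules out the classic ABBA deadlock:

#include <windows.h>

static void LockBoth(CRITICAL_SECTION *a, CRITICAL_SECTION *b)
{
    for (;;) {
        EnterCriticalSection(a);         // block on the first lock
        if (TryEnterCriticalSection(b))
            return;                      // got both: done
        LeaveCriticalSection(a);         // back off completely

        EnterCriticalSection(b);         // retry in the opposite order
        if (TryEnterCriticalSection(a))
            return;
        LeaveCriticalSection(b);
    }
}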
Additionally, you can implement a timed mutex on top of TryEnterCriticalSection: do a few iterations of TryEnterCriticalSection, then call Sleep with an increasing delay, until TryEnterCriticalSection succeeds or the deadline has expired. It is not a very good idea, though; real timed mutexes based on user-space Windows synchronization objects are implemented on SleepConditionVariableSRW, SleepConditionVariableCS, or WaitOnAddress.
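A crude sketch of that spin-and-back-off approach, for illustration only (as just noted, a condition-variable-based design is what you'd want in real code):

#include <windows.h>

static BOOL TryEnterWithTimeout(CRITICAL_SECTION *cs, DWORD timeoutMs)
{
    ULONGLONG deadline = GetTickCount64() + timeoutMs;
    DWORD delay = 1;

    while (!TryEnterCriticalSection(cs)) {
        if (GetTickCount64() >= deadline)
            return FALSE;                // deadline expired, lock not taken
        Sleep(delay);
        if (delay < 32)
            delay *= 2;                  // back off with an increasing delay
    }
    return TRUE;                         // lock acquired
}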
Because Windows critical sections are recursive, TryEnterCriticalSection also allows a thread to check whether it already owns a CS without any risk of stalling.
Another case would be if you have a thread that occasionally needs to perform some locked work but usually does something else, you could use TryEnterCriticalSection and only perform the locked work if you actually got the lock.

Is os.File's Write() threadsafe?

I was wondering if calling Write() on an os.File is thread safe. I'm having a hard time finding any mention of thread safety in the docs.
The convention (at least for the standard library) is the following: No function/method is safe for concurrent use unless explicitly stated (or obvious from the context).
It is not safe to write concurrently to an os.File via Write() without external synchronization.
After browsing the source code a little bit I found the following method which is eventually called by file.Write(). Since there are race condition checks in place, I'm assuming that the call is in fact not thread-safe within Go (Source).
However, it seemed unlikely that the underlying system calls wouldn't be thread-safe at the OS level. After some browsing I came upon this interesting answer that reinforced my suspicion. For Windows, the source indicates a call to WriteFile, which also appears to be thread-safe.
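To make the OS-level point concrete, here is a minimal POSIX C sketch (not Go): two threads share one descriptor opened with O_APPEND, so per POSIX semantics each individual write(2) lands atomically at the end of the file and no line is torn, even though the interleaving order of lines is unspecified. That per-call atomicity is what higher-level synchronization (a mutex around the os.File, a single writer goroutine, etc.) still has to add when a logical record spans several calls:

#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static int fd;   // shared by both threads

static void *writer(void *arg)
{
    const char *line = (const char *)arg;
    for (int i = 0; i < 1000; i++)
        (void) write(fd, line, strlen(line));   // one syscall per record
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    fd = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    pthread_create(&a, NULL, writer, "from thread A\n");
    pthread_create(&b, NULL, writer, "from thread B\n");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    close(fd);
    return 0;
}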

Monitoring files asynchronously

On Unix: I’ve been through FAM and Gamin, and both seem to provide a client/server file-monitoring system. I would rather have a system where I tell the kernel to monitor some inodes and it pokes me back when events occur. inotify looked promising at first on that front: inotify_init1 lets me pass IN_NONBLOCK, which in turn causes poll() to return directly. However, I understood that I would have to call it regularly if I wanted news about the monitored files. Now I’m a bit short of ideas.
Is there something to monitor files asynchronously?
PS: I haven’t looked on Windows yet, but I would love to have some answers about it too.
As Celada says in the comments above, inotify and poll are the right way to do this.
Signals are not a mechanism for reasonable asynchronous programming -- and signal handlers are remarkably dangerous for the inexperienced and even for the experienced. One does not use them for such purposes voluntarily.
Instead, one should structure one's program around an event loop (see http://en.wikipedia.org/wiki/Event-driven_programming for an overall explanation) using poll, select, or some similar system call as the core of your program's event handling mechanism.
Alternatively, you can use threads, or threads plus an event loop.
However interesting your answers are, I am sorry but I can’t accept a mechanism based on blocking calls to poll or select when the question says “asynchronously”, regardless of how deeply the blocking is hidden.
On the other hand, I found out that one can run inotify asynchronously by passing the IN_NONBLOCK flag to inotify_init1. Signals are not triggered as they would be with aio, and a read call that would otherwise block instead fails with errno set to EWOULDBLOCK.
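A minimal sketch of that non-blocking usage, watching /tmp as a stand-in path (event parsing elided):

#include <sys/inotify.h>
#include <errno.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    int fd = inotify_init1(IN_NONBLOCK);
    inotify_add_watch(fd, "/tmp", IN_MODIFY | IN_CREATE | IN_DELETE);

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    for (;;) {
        // Timeout 0: poll() returns immediately, so this check can sit
        // inside an existing event loop without ever blocking it.
        if (poll(&pfd, 1, 0) > 0) {
            ssize_t n = read(fd, buf, sizeof buf);   // never blocks (IN_NONBLOCK)
            if (n < 0 && errno != EAGAIN)
                break;
            // ... walk the struct inotify_event records in buf[0..n) ...
        }
        // ... do other, unrelated work here ...
    }
    close(fd);
    return 0;
}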

What are event handles?

I had a leaking-handle problem ("Not enough quota available to process this command.") in some inherited C# WinForms code, so I went and used Sysinternals' Handle tool to track it down. Turns out it was event handles that were leaking, so I tried googling it (it took a couple of tries to find a query that didn't return "Did you mean: event handler?"). According to Junfeng Zhang, event handles are generated by the use of Monitor, and there may be some weird rules as far as event handle disposal and the synchronization primitives go.
I'm not entirely sure that the source of my leaking handles is simply long-lived objects calling lots of synchronization stuff, as this code also deals with HID interfaces and lots of win32 marshaling and interop, and was not doing any synchronization that I was aware of. Either way, I'm just going to run this in windbg and start tracing down where the handles originate, and also spend a lot of time learning this section of the code, but I had a very hard time finding information about what event handles are in the first place.
The msdn page for the event kernel object just links to the generic synchronization overview... so what are event handles, and how are they different from mutexes/semaphores/whatever?
The NT kernel uses event objects to allow signals to be transferred to entities that wait on the signal. A mutex and a semaphore are also waitable kernel objects (kernel dispatcher objects), but with different semantics. The only time I ever came across them was when waiting for IO to complete in drivers.
So my theory on your problem is a possibly faulty driver. Or are you relying on specialised hardware?
Edit: more info (from Windows Internals, 5th Edition, Chapter 3: System Mechanisms)
Some kernel dispatcher objects (e.g. mutex, semaphore) have the concept of ownership: when the object is signalled, one waiting thread is released and grabs the resource, and the others have to continue to wait. Events are not owned, hence are available to be reset by any thread.
Also there are three types of events:
Notification : On signalled all waiting threads are released
Synchronisation : On signalled one waiting thread is released but the event is reset
Keyed : On signalled one waiting thread in the same process as the signaller is released.
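In Win32 terms, the first two map onto the bManualReset parameter of CreateEvent; a small illustrative sketch:

#include <windows.h>

int main(void)
{
    // Manual-reset ("notification") event: once signalled it stays
    // signalled, releasing every waiter, until ResetEvent is called.
    HANDLE hNotify = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Auto-reset ("synchronisation") event: releases exactly one waiter
    // and resets itself automatically.
    HANDLE hSync = CreateEvent(NULL, FALSE, FALSE, NULL);

    SetEvent(hNotify);
    WaitForSingleObject(hNotify, INFINITE);   // returns at once; still signalled

    SetEvent(hSync);
    WaitForSingleObject(hSync, INFINITE);     // returns at once; event now reset

    CloseHandle(hNotify);
    CloseHandle(hSync);
    return 0;
}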
Another interesting thing that I've learned is that critical sections (the lock primitive in C#) are actually not kernel objects; rather, they are implemented out of a keyed event, or a mutex or semaphore, as required.
If you're talking about kernel event objects, then an event handle is a handle (an integer) that the system keeps on the object so that other objects can reference it, i.e. keep a 'handle' on it.
Hope this helps!

How to deal with a second event-loop with message-dispatch?

I am working on a program which is essentially single-threaded: its only thread is the main event-loop thread. Consequently, its data structures are basically not protected by anything like critical regions.
Things worked fine until it recently integrated some new functionality based on the DirectShow API. Some DirectShow APIs open a second event loop, and within that second loop they dispatch messages (i.e. invoke other event-handling callbacks unpredictably). So when a second event-handling function is invoked, it may corrupt the data structure that is being accessed by the function that invoked the DirectShow API.
I have some experience in kernel programming, and what comes to mind is that how a single-threaded program should deal with its data structures is very like how a kernel deals with per-CPU data structures. In the kernel, when a function accesses per-CPU data, it must disable interrupts (much like suppressing the message dispatching of the second event loop). However, I find there is no easy way either to avoid invoking the DirectShow APIs or to prevent them from creating a second event loop. Is there any way?
mutexes. semaphores. locking. whatever name you want to call it, that's what you need.
There are several possible solutions that come to mind, depending on exactly what's going wrong and on your code:
Make sure your data structures are in a consistent state before calling any APIs that run a modal loop.
If that's not possible, you can use a simple boolean variable to protect the structure: if it's set, simply abort any attempt to update the structure, or queue the update for later (there is a sketch of this after the list). Another option is to abort the previous operation.
If the problem is user generated events, then disable the problematic menus or buttons while the operation is in progress. Alternatively, you could display a modal dialog.
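A minimal sketch of the boolean-guard idea mentioned above (the names are hypothetical). It works precisely because everything runs on one thread: the modal loop can re-enter the handler between messages, but it can never preempt it mid-statement, so a plain flag is enough:

#include <stdbool.h>

static bool gUpdating = false;   // set while the shared data is inconsistent

void OnSomeEvent(void)
{
    if (gUpdating) {
        // Re-entered via the DirectShow modal loop: defer or drop the work.
        return;
    }
    gUpdating = true;
    // ... mutate the shared data structures ...
    // ... call the DirectShow API that runs the second message loop ...
    gUpdating = false;
}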
