How to create locks in VC++? - windows

Let's say I am protecting some array with a critical section in VC++. How do I do it using locks?

You need the API functions for critical sections:
InitializeCriticalSection Call once, from any thread, but typically the main thread, to initialize the lock. Initialize before you do anything else with it.
EnterCriticalSection Call from any thread to acquire the lock. If another thread holds the lock, the call blocks until it can acquire it. Critical sections are re-entrant, meaning a thread successfully acquires the lock even if it already holds it.
LeaveCriticalSection Release the lock. Each call to EnterCriticalSection must be paired with a matching call to LeaveCriticalSection. Don't let exceptions stop these acquire/release calls being paired up.
DeleteCriticalSection Call once, from any thread, but typically the main thread, to finalize the lock. Do this when no threads hold the lock. After you call this the lock is invalid and you can't attempt to acquire it again.
MSDN helpfully provides a trivial example.
If you are using MFC then you would probably use CCriticalSection which wraps up the Win32 critical section APIs in a class.
As for your array: your threads will only execute blocks of code protected by the lock one at a time. You need the lock to prevent race conditions where two threads try to read and write the same memory location simultaneously, or indeed other more subtle conditions that can break your algorithm.
If you were to describe the array, its contents, and how you operate on it, then it might be possible to give you some specific advice. Exactly how you operate on this array will have a large bearing on the ideal synchronisation strategy, and in certain cases you may be able to use lock-free methods.

Create a mutex via CreateMutex, take ownership of it via WaitForSingleObject, release ownership via ReleaseMutex, and delete it when you are done with CloseHandle.
Alternatives you can look up include CriticalSections, Semaphores, and Events.

If you're using VS 2010, a critical_section class is included in the header file ppl.h.
Note there is also a concurrent_vector class template which is synchronized (i.e. locks aren't needed).

Related

What is the use-case for TryEnterCriticalSection?

I've been using Windows CRITICAL_SECTION since the 1990s and I've been aware of the TryEnterCriticalSection function since it first appeared. I understand that it's supposed to help me avoid a context switch and all that.
But it just occurred to me that I have never used it. Not once.
Nor have I ever felt I needed to use it. In fact, I can't think of a situation in which I would.
Generally when I need to get an exclusive lock on something, I need that lock and I need it now. I can't put it off until later. I certainly can't just say, "oh well, I won't update that data after all". So I need EnterCriticalSection, not TryEnterCriticalSection.
So what exactly is the use case for TryEnterCriticalSection?
I've Googled this, of course. I've found plenty of quick descriptions of how to use it but almost no real-world examples of why. I did find this example from Intel that, frankly, doesn't help much:
CRITICAL_SECTION cs;
void threadfoo()
{
    while (TryEnterCriticalSection(&cs) == FALSE)
    {
        // some useful work
    }
    // critical section of code
    LeaveCriticalSection(&cs);
    // other work
}
What exactly is a scenario in which I can do "some useful work" while I'm waiting for my lock? I'd love to avoid thread-contention but in my code, by the time I need the critical section, I've already been forced to do all that "useful work" in order to get the values that I'm updating in shared data (for which I need the critical section in the first place).
Does anyone have a real-world example?
As an example you might have multiple threads that each produce a high volume of messages (events of some sort) that all need to go on a shared queue.
Since there's going to be frequent contention on the lock on the shared queue, each thread can have a local queue and then, whenever the TryEnterCriticalSection call succeeds for the current thread, it copies everything it has in its local queue to the shared one and releases the CS again.
In C++11 there is std::lock, which employs a deadlock-avoidance algorithm.
In C++17 this has been elaborated into the std::scoped_lock class.
The algorithm tries to lock the mutexes in one order, then in another, until it succeeds. It takes try_lock to implement this approach.
Having a try_lock method is what the C++ Lockable named requirement adds, whereas mutexes with only lock and unlock are BasicLockable.
So if you build a C++ mutex on top of CRITICAL_SECTION and want it to satisfy Lockable, or you want to implement deadlock avoidance directly on CRITICAL_SECTION, you'll need TryEnterCriticalSection.
Additionally, you can implement a timed mutex on top of TryEnterCriticalSection: do a few iterations of TryEnterCriticalSection, then call Sleep with increasing delays until TryEnterCriticalSection succeeds or the deadline expires. It is not a very good idea, though; real timed mutexes based on user-space Windows synchronization objects are implemented with SleepConditionVariableSRW, SleepConditionVariableCS, or WaitOnAddress.
Because Windows critical sections are recursive, TryEnterCriticalSection also lets a thread acquire a CS it may already own without any risk of stalling.
Another case would be if you have a thread that occasionally needs to perform some locked work but usually does something else, you could use TryEnterCriticalSection and only perform the locked work if you actually got the lock.

Are boost::unique_locks granted in the order that they are called?

I've examined the documentation on Boost Synchronization, and I cannot seem to determine if a boost::unique_lock will attain its lock in order.
In other words, if two threads are contending to lock a mutex which is already locked, will the order that they attempt to lock be maintained after the lock is released?
Unique lock is not a lock (it's a lock adaptor that can act on any Lockable or TimedLockable type, see cppreference).
The order in which threads get the lock (or resources in general) is likely implementation defined. These implementations usually have well-documented scheduling semantics, so applications can avoid resource starvation, soft-locks, and deadlocks.
As an interesting example, condition variables preserve scheduling order only if you take care to signal them under the mutex (if you don't, things will usually work unless your scheduling crucially depends on fair scheduling):
man pthread_cond_signal
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().

OSSpinLockLock when already locked?

What happens when I use OSSpinLockLock when the lock is already held in the same thread? (hence it should "let me in").
I know it doesn't have a counter, but it's a problem to implement one, because then I'd need to verify the current thread is the owner, that the count is zero, and all of this would probably need to be locked as well...
If you attempt to lock a spin lock from the thread that already owns it, you will deadlock. Spin locks are not recursive.
You should either look at pthread recursive mutexes, or change your design to avoid having to lock recursively.

Recursive mutex on Windows?

As far as I understand, on Windows CRITICAL_SECTION can be used only as a non-recursive mutex. To get recursive mutex you have to use OpenMutex and friends.
However, AFAIU, Win32 Mutex cannot be used with condition variable (InitializeConditionVariable et al.)
Is there a way to use recursive mutex in conjunction with condition variable on Windows?
valdo's comment is right. CRITICAL_SECTION is recursive. Here's a quotation from MSDN: "After a thread has ownership of a critical section, it can make additional calls to EnterCriticalSection or TryEnterCriticalSection without blocking its execution." Problem solved.
That just wouldn't make any sense. Semantically, the point of a condition variable is that you atomically release the lock when you wait -- thus allowing other threads to acquire the lock to do the thing you are waiting for. However, the "release" operation on a recursive mutex may not actually unlock it, so waiting after a release could deadlock. The fact that you want a way to do this strongly suggests something is wrong with your design or your understanding of condition variables.
Think about it -- what should happen when a function that holds a lock on the recursive mutex calls a function that acquires a second lock and then calls the sleep function? If the lock is released, the first function's logic will break since the object will be modified while it held a lock on it. If the lock is not released, the wait will deadlock since the thing it is waiting for can never happen because it holds the lock another thread would need to make it happen.
There is no sensible way to use a condition variable without knowing whether or not you have a lock on it already. And if you know whether or not you have a lock, there's no need for a recursive lock function. If you know you already have a lock, don't bother calling the lock function. If you know you don't already have a lock, the lock function will work fine even if it's not recursive.

Is it a good idea to use the existence of a named mutex as an indicator?

I'm using a named mutex to detect other instances of my application and exit accordingly, and found that there are two ways of doing this:
Create the mutex; ignore the indication whether it already existed; try to acquire it; use the fact that acquire succeeded/failed.
Create the mutex; use the indication whether it already existed.
I can't decide whether to acquire the mutex (and release on exit). On the one hand, acquiring+releasing even though it makes no known difference looks like cargo culting, but on the other hand the existence of a mutex object sounds like a side-effect of its actual intended functionality.
So, should I do #1 or #2 to detect if the app is already running?
The indication that the mutex already existed is sufficient to let you know that there is at least one other process. There is no need to take the mutex for that.
But as long as you have the mutex, you can take it if you need to lock other instances out of some piece of code.
For instance, you can take the mutex until you get out of your initialization code. That way only one instance of your program can be in initialization at a time. If you take the mutex after opening it, the one that got the mutex first knows that no other instance is in its init code. But more importantly, the one that didn't create the mutex knows that the one that created it has finished initialization.
That way, if instance 2 wants to talk to instance 1, it knows that instance 1 is ready to listen once it has been able to enter the mutex at least once. This works better if you create the mutex as initially owned, to be absolutely sure that the creator gets to be the first owner.
I'm not sure of it, but the named mutex may still exist if the program crashes and doesn't terminate properly. If so, the existence test would succeed even though no other instance is running. Thus, I personally would prefer to try to acquire it ;-)
#1 sounds the way you should go.
Create the mutex; ignore the indication whether it already existed; try to acquire it; use the fact that acquire succeeded/failed
Because your app launching code might be executed twice (on a resume or similar OS behaviour), and the acquire will succeed even if the mutex already exists, as it was created by the same app.
