What exactly is lock with adopt_lock_t - c++11

When I checked adopt_lock_t, the documentation says that adopt_lock_t assumes the calling thread already has ownership of the mutex.
So what does the word "assume" mean here? What if another thread is already holding the mutex when I construct the lock with adopt_lock_t on the same mutex?

If the assumption is wrong, the program's behavior is not constrained by the C++ standard.
So don't get it wrong.
What happens? That is outside of what the standard dictates. Anything. A compliant C++ compiler could check for that condition and, if true, transfer your web browsing history to your parents. Or it could crash. Or deadlock. Or your hard drive could be formatted.
As a matter of quality of implementation (QoI), what would most likely happen is a crash, or your concurrency code becoming nonsense (an underflow on a lock counter, maybe). The C++ standard states that implementors are not responsible for any kind of reasonable behavior there, and compilers are free to optimize based on the assumption that the lock is already held.
The most likely implementation is that the constructor doesn't lock the mutex, but the destructor does unlock it. What happens when you unlock a mutex in a thread that doesn't have it locked, whether it is locked by another thread or not, is implementation dependent.
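For reference, a minimal sketch of the intended usage, where the thread really does own the mutex before adopting it, so the lock_guard only takes over responsibility for unlocking:

#include <mutex>

std::mutex m;

void correct_use()
{
    m.lock();                                            // this thread now owns the mutex
    std::lock_guard<std::mutex> g(m, std::adopt_lock);   // adopt that existing ownership
    // ... work under the lock ...
}                                                        // g's destructor unlocks m exactly once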

Related

What is the use-case for TryEnterCriticalSection?

I've been using Windows CRITICAL_SECTION since the 1990s and I've been aware of the TryEnterCriticalSection function since it first appeared. I understand that it's supposed to help me avoid a context switch and all that.
But it just occurred to me that I have never used it. Not once.
Nor have I ever felt I needed to use it. In fact, I can't think of a situation in which I would.
Generally when I need to get an exclusive lock on something, I need that lock and I need it now. I can't put it off until later. I certainly can't just say, "oh well, I won't update that data after all". So I need EnterCriticalSection, not TryEnterCriticalSection.
So what exactly is the use case for TryEnterCriticalSection?
I've Googled this, of course. I've found plenty of quick descriptions of how to use it but almost no real-world examples of why. I did find this example from Intel that, frankly, doesn't help much:
CRITICAL_SECTION cs;
void threadfoo()
{
    while (TryEnterCriticalSection(&cs) == FALSE)
    {
        // some useful work
    }
    // Critical Section of Code
    LeaveCriticalSection(&cs);
    // other work
}
What exactly is a scenario in which I can do "some useful work" while I'm waiting for my lock? I'd love to avoid thread-contention but in my code, by the time I need the critical section, I've already been forced to do all that "useful work" in order to get the values that I'm updating in shared data (for which I need the critical section in the first place).
Does anyone have a real-world example?
As an example you might have multiple threads that each produce a high volume of messages (events of some sort) that all need to go on a shared queue.
Since there's going to be frequent contention on the lock on the shared queue, each thread can have a local queue and then, whenever the TryEnterCriticalSection call succeeds for the current thread, it copies everything it has in its local queue to the shared one and releases the CS again.
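A rough sketch of that pattern, assuming a hypothetical Message type and a std::vector as the shared queue protected by the critical section:

#include <windows.h>
#include <vector>

struct Message { /* ... payload ... */ };   // hypothetical message type

CRITICAL_SECTION g_cs;                      // protects g_sharedQueue
std::vector<Message> g_sharedQueue;

void FlushLocalQueue(std::vector<Message>& local)
{
    if (local.empty())
        return;
    if (TryEnterCriticalSection(&g_cs))     // flush only if the lock happens to be free
    {
        g_sharedQueue.insert(g_sharedQueue.end(), local.begin(), local.end());
        local.clear();
        LeaveCriticalSection(&g_cs);
    }
    // otherwise keep buffering locally and try again on the next call
}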
In C++11 there is std::lock, which employs a deadlock-avoidance algorithm.
In C++17 this has been elaborated into the std::scoped_lock class.
The algorithm tries to lock the mutexes in one order, then in another, until it succeeds, and it needs try_lock to implement this approach.
Having a try_lock method is what the C++ Lockable named requirement describes, whereas mutexes with only lock and unlock are merely BasicLockable.
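For example, acquiring two mutexes without risking deadlock (a minimal C++11 sketch):

#include <mutex>

std::mutex a, b;

void use_both()
{
    std::lock(a, b);                                     // deadlock-avoidance: locks both in some safe order
    std::lock_guard<std::mutex> ga(a, std::adopt_lock);  // adopt ownership so both unlock on scope exit
    std::lock_guard<std::mutex> gb(b, std::adopt_lock);
    // ... work with both resources ...
}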
So if you build a C++ mutex on top of CRITICAL_SECTION and want it to satisfy Lockable, or if you want to implement deadlock avoidance directly on CRITICAL_SECTION, you'll need TryEnterCriticalSection.
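As a sketch only (the class name is made up here), a minimal Lockable wrapper over a critical section could look like this:

#include <windows.h>

// Minimal Lockable wrapper over a Win32 critical section (sketch, not production code).
class win32_mutex
{
public:
    win32_mutex()  { InitializeCriticalSection(&cs_); }
    ~win32_mutex() { DeleteCriticalSection(&cs_); }

    win32_mutex(const win32_mutex&) = delete;
    win32_mutex& operator=(const win32_mutex&) = delete;

    void lock()     { EnterCriticalSection(&cs_); }
    void unlock()   { LeaveCriticalSection(&cs_); }
    bool try_lock() { return TryEnterCriticalSection(&cs_) != FALSE; }  // this is what makes it Lockable

private:
    CRITICAL_SECTION cs_;
};

// Such a type can then be used with std::lock / std::scoped_lock, e.g. std::lock(m1, m2);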
Additionally, you can implement a timed mutex on top of TryEnterCriticalSection: do a few iterations of TryEnterCriticalSection, then call Sleep with an increasing delay, until TryEnterCriticalSection succeeds or the deadline expires. It is not a very good idea, though. In practice, timed mutexes based on user-space Windows synchronization objects are implemented with SleepConditionVariableSRW, SleepConditionVariableCS, or WaitOnAddress.
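A rough sketch of that spin-then-sleep approach (the function name and the millisecond-based deadline are invented for the example):

#include <windows.h>

// Sketch: try to enter the critical section within roughly timeoutMs milliseconds.
bool TryEnterWithTimeout(CRITICAL_SECTION* cs, DWORD timeoutMs)
{
    DWORD start = GetTickCount();
    DWORD delay = 1;
    for (;;)
    {
        if (TryEnterCriticalSection(cs))
            return true;
        if (GetTickCount() - start >= timeoutMs)
            return false;
        Sleep(delay);                        // back off with an increasing delay
        if (delay < 32)
            delay *= 2;
    }
}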
Because Windows critical sections are recursive, TryEnterCriticalSection also allows a thread to check whether it already owns a CS without any risk of stalling.
Another case: if you have a thread that occasionally needs to perform some locked work but usually does something else, you can use TryEnterCriticalSection and only perform the locked work if you actually got the lock.

Are boost::unique_locks granted in the order that they are called?

I've examined the documentation on Boost Synchronization, and I cannot seem to determine if a boost::unique_lock will attain its lock in order.
In other words, if two threads are contending to lock a mutex which is already locked, will the order that they attempt to lock be maintained after the lock is released?
Unique lock is not a lock (it's a lock adaptor that can act on any Lockable or TimedLockable type, see cppreference).
The order in which threads get the lock (or resources in general) is most likely implementation defined. These implementations usually have well-documented scheduling semantics, so applications can avoid resource starvation, soft locks and deadlocks.
As an interesting example, condition variables preserve scheduling order only if you take care to signal them under the mutex (if you don't, things will usually still work unless your program crucially depends on fair scheduling):
man pthread_cond_signal
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
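For illustration, a small C++ sketch of signalling "under the mutex" so the waiters are scheduled predictably (the flag and function names are made up):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void producer()
{
    std::lock_guard<std::mutex> lk(m);   // hold the mutex while changing the state...
    ready = true;
    cv.notify_one();                     // ...and while signalling, for predictable scheduling
}

void consumer()
{
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return ready; });   // wakes up with the mutex held and ready == true
}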

OSSpinLockLock when already locked?

What happens when I use OSSpinLockLock when the lock is already held by the same thread? (Hence it should "let me in".)
I know it doesn't have a counter, and it's a problem to implement one, because then I'd need to verify which thread holds it and whether the count is zero, and all of that would probably need to be locked as well...
If you attempt to lock a spin lock from the thread that already owns it, you will deadlock. Spin locks are not recursive.
You should either look at pthread recursive mutexes, or change your design to avoid having to lock recursively.
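If you go the pthread route, a minimal sketch of creating and using a recursive mutex looks like this:

#include <pthread.h>

pthread_mutex_t lock;

void init_recursive_lock()
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);  // allow re-locking from the owning thread
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

void already_holding_it()
{
    pthread_mutex_lock(&lock);
    pthread_mutex_lock(&lock);   // same thread: the lock count goes to 2 instead of deadlocking
    // ...
    pthread_mutex_unlock(&lock);
    pthread_mutex_unlock(&lock);
}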

How to create locks in VC++?

Let's say I am implementing a critical section to protect some array in VC++; how do I do it using locks in VC++?
You need the API functions for critical sections:
InitializeCriticalSection Call once, from any thread, but typically the main thread, to initialize the lock. Initialize before you do anything else with it.
EnterCriticalSection Call from any thread to acquire the lock. If another thread has the lock, it will block until it can acquire the lock. Critical sections are re-entrant, meaning a thread successfully acquires the lock even if it already holds it.
LeaveCriticalSection Release the lock. Each call to EnterCriticalSection must be paired with a matching call to LeaveCriticalSection. Don't let exceptions stop these acquire/release calls being paired up.
DeleteCriticalSection Call once, from any thread, but typically the main thread, to finalize the lock. Do this when no threads hold the lock. After you call this the lock is invalid and you can't attempt to acquire it again.
MSDN helpfully provides a trivial example.
If you are using MFC then you would probably use CCriticalSection which wraps up the Win32 critical section APIs in a class.
As for how you do it with your array. Well, your threads will only execute blocks of code protected by the lock one at a time. You need the lock to stop race conditions where two threads try to read/write to the same memory location simultaneously, or indeed other more subtle conditions that can break your algorithm.
If you were to describe the array, its contents, and how you operate on it, then it might be possible to give you some specific advice. Exactly how you operate on this array will have a large bearing on the ideal synchronisation strategy, and in certain cases you may be able to use lock-free methods.
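As a rough sketch (the array and its element type are invented for the example), protecting every read and write of a shared array could look like this:

#include <windows.h>

CRITICAL_SECTION g_cs;   // protects g_values
int g_values[64];

void InitLock()
{
    InitializeCriticalSection(&g_cs);
}

void WriteValue(int index, int value)
{
    EnterCriticalSection(&g_cs);
    g_values[index] = value;         // only one thread at a time executes this
    LeaveCriticalSection(&g_cs);
}

int ReadValue(int index)
{
    EnterCriticalSection(&g_cs);
    int v = g_values[index];
    LeaveCriticalSection(&g_cs);
    return v;
}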
Create a mutex via CreateMutex, take ownership of it via WaitForSingleObject, release ownership via ReleaseMutex, and delete it when you are done via CloseHandle.
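A bare-bones sketch of that sequence:

#include <windows.h>

void Example()
{
    HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);   // unnamed mutex, not initially owned

    WaitForSingleObject(hMutex, INFINITE);            // take ownership
    // ... touch the shared data ...
    ReleaseMutex(hMutex);                             // release ownership

    CloseHandle(hMutex);                              // delete it when done
}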
Alternatives you can look up include CriticalSections, Semaphores, and Events.
If you're using VS 2010, a critical_section object is included in the header file ppl.h.
Note there is also a concurrent_vector class template which is synchronized (i.e. locks aren't needed).
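A small sketch of both (the exact headers pulled in may vary with your VS 2010 setup, so treat this as an illustration):

#include <ppl.h>                  // brings in concurrency::critical_section
#include <concurrent_vector.h>    // concurrency::concurrent_vector

concurrency::critical_section cs;
concurrency::concurrent_vector<int> values;   // push_back is already thread-safe

void Worker(int x)
{
    values.push_back(x);          // no explicit lock needed for the concurrent container

    concurrency::critical_section::scoped_lock guard(cs);   // RAII lock for other shared state
    // ... touch data protected by cs ...
}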

Correct lock to use in linux character driver

I am writing a simple character device driver. (kernel 2.6.26)
Multiple concurrent readers & writers are expected.
I am not sure what type of lock is best used to synchronize a short access to internal structures.
Any advice will be most appreciated
Compare with http://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/c214.html . It is an old document from before mutexes existed, but since mutexes are sleeping locks, they count towards the user-context rules described there.
spinlock — spinlock_bh — mutex — semaphore
If your data structures are only ever accessed by functions whose execution is triggered by userspace, all locking primitives are available to you. It depends on a gut feeling of how short a "short access" really is.
And then there is RCU as a fifth way of doing things, though it is not quite a locking primitive in its own right. (It is used together with one of the locking primitives.)
Start with a mutex. Once you've got it working you can think about reworking the locking.
