synchronization, lock passing with broadcast - algorithm

I learned about synchronization in class recently, and I'm a little confused about the difference between signal and broadcast. I know that with signal, the first thread in the wait list is woken up, and that thread claims the lock after the signalling thread unlocks. But what happens with broadcast? When broadcast is called, all the waiting threads are woken up. Then, when the broadcasting thread unlocks, which of these threads gets to take that lock?

All the threads are unblocked. All of them try to acquire the lock. Whichever one succeeds first returns from its wait function holding the lock. When that thread later releases the lock, one of the threads still trying to acquire it will get it.
In practice I suspect that on a broadcast the OS will move the waitlist directly across and add it to the list of threads waiting to acquire the lock (accounting for priority if it orders such lists by priority). But that's an implementation detail.
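As a rough illustration, here is a minimal sketch using C++'s std::condition_variable, where notify_all() plays the role of broadcast (the variable names are just for the sketch, not from the question):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void waiter(int id) {
    std::unique_lock<std::mutex> lock(m);      // acquire the lock
    cv.wait(lock, [] { return ready; });       // atomically release the lock and sleep
    // Each woken thread must re-acquire the lock before wait() returns,
    // so only one waiter at a time runs this part.
    std::cout << "thread " << id << " holds the lock\n";
}   // lock released here; the next woken thread can now acquire it

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(waiter, i);

    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    cv.notify_all();   // "broadcast": all four waiters are unblocked, but they
                       // return from wait() one at a time, each holding the lock in turn

    for (auto& t : threads) t.join();
}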

Related

Is it possible to share the same message queue between two threads?

What I would like to do is to have one thread waiting for messages (WaitMessage) and another processing the logic of the application. The first thread would wake up on every message, somehow signal this event to the other thread, go back to sleep, and so on. Is this possible?
UPDATE
Consider the following situation. We have a GUI thread, and this thread is busy in a long calculation. If there is no other thread, there is no option but to check for new messages from time to time. Otherwise, the GUI would become unresponsive during the long calculation. Right now my system uses this "polling" approach (it has a single thread that checks the message queue from time to time). However, I would like to know whether this other solution is possible: have another thread waiting on the OS message queue of the GUI thread, so that when a Windows message arrives this thread will wake up and tell the other about the message. Note that I'm not asking how to communicate the news between threads, but whether it is possible for the second thread to wait for OS messages that arrive in the queue of the first thread.
I should also add that I cannot have two different threads, one for the GUI and another for the calculations, because the system I'm working on is a Virtual Machine on top of which runs a Smalltalk image that is not thread safe. That's why having a thread that only signals new OS messages would be the ideal solution (if possible).
This depends on what the second thread needs to do once the first thread has received a message.
If the second thread simply needs to know the first thread received a message, the first thread could signal an Event object using SetEvent() or PulseEvent(), and the second thread could wait on that event using WaitForSingleObject().
If the second thread needs data from the first thread, it could use an I/O Completion Port. The first thread could wrap the data inside a dynamically allocated struct and post it to the port using PostQueuedCompletionStatus(), and the second thread could wait for the data using GetQueuedCompletionStatus() and then free it when done using it.
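A minimal sketch of that completion-port pattern, assuming a simple MessageData struct as the payload (the struct and its fields are illustrative, not from the question):

#include <windows.h>
#include <iostream>

struct MessageData {          // hypothetical payload passed between the threads
    UINT msg;
    WPARAM wParam;
    LPARAM lParam;
};

int main() {
    // A completion port not associated with any file handle works as a
    // thread-safe queue.
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    // Producer side (would run in the thread that receives the message):
    MessageData* data = new MessageData{ WM_USER + 1, 0, 0 };
    PostQueuedCompletionStatus(port, sizeof(*data), 0,
                               reinterpret_cast<LPOVERLAPPED>(data));

    // Consumer side (would run in the second thread):
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED overlapped = NULL;
    if (GetQueuedCompletionStatus(port, &bytes, &key, &overlapped, INFINITE)) {
        MessageData* received = reinterpret_cast<MessageData*>(overlapped);
        std::cout << "got message " << received->msg << "\n";
        delete received;               // free the dynamically allocated struct
    }

    CloseHandle(port);
}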
Update: based on new information you have provided, it is not possible for one thread to wait on or service another thread's message queue. Only the thread that created and owns the queue can poll messages from its queue. Each thread has its own message queue.
You really need to move your long calculations to a different thread; they don't belong in the GUI thread to begin with. Let the GUI thread manage the GUI and service messages, and do any long-running work in another thread.
If you can't do that because your chosen library is not thread safe, then you have 4 options:
find a different library that is thread safe.
have the calculations poll the message queue periodically when running in the GUI thread.
break up the calculations into small chunks that can be triggered by the GUI thread posting messages to itself (see the sketch after this list). Post a message and return to the message loop. When the message is received, do a little bit of work, post the next message, and return to the message loop. Repeat as needed until the work is done. This allows the GUI thread to continue servicing the message queue in between each calculation step.
move the library to a separate process that communicates back with your main app as needed.
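A rough sketch of option 3 (Win32, C++), showing only the window-procedure side; WM_APP_CALC_STEP, g_hWnd and DoOneChunk() are hypothetical names that do not come from the question:

#include <windows.h>

const UINT WM_APP_CALC_STEP = WM_APP + 1;   // hypothetical private message

HWND g_hWnd;                                // main window of the GUI thread

bool DoOneChunk() {                         // hypothetical: one small slice of the calculation
    static int step = 0;
    return ++step < 1000;                   // true while more work remains
}

void StartCalculation() {
    PostMessage(g_hWnd, WM_APP_CALC_STEP, 0, 0);        // kick off the first step
}

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_APP_CALC_STEP:
        if (DoOneChunk())                               // do a little bit of work...
            PostMessage(hWnd, WM_APP_CALC_STEP, 0, 0);  // ...then queue the next step
        return 0;                                       // back to the message loop, so the
                                                        // GUI keeps servicing messages
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}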

Deadlock avoidance with semaphore?

How can deadlock be prevented? Is there any algorithm that can do this? I have two processes: one holds a semaphore and the other waits for it. When the process that holds the semaphore dies, the deadlock occurs. My question is: is there any way (in the semaphore or the operating system) to avoid such a situation? Thanks!
Because threads can become blocked, and because objects can have synchronized methods that keep other threads from accessing the object until the thread using it is done, it is possible for one thread to get stuck waiting for another thread, which in turn waits for another thread, and so on.
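As a minimal illustration of that circular wait (using C++ standard-library mutexes rather than the semaphores in the question, purely for the sketch), C++17's std::scoped_lock is one way to avoid the problem by acquiring both locks together:

#include <mutex>
#include <thread>

std::mutex a, b;

void deadlock_prone_1() {
    std::lock_guard<std::mutex> la(a);   // holds a...
    std::lock_guard<std::mutex> lb(b);   // ...then waits for b
}

void deadlock_prone_2() {
    std::lock_guard<std::mutex> lb(b);   // holds b...
    std::lock_guard<std::mutex> la(a);   // ...then waits for a: a circular wait
}

void deadlock_free() {
    std::scoped_lock both(a, b);         // acquires a and b together using a
                                         // deadlock-avoidance algorithm
}

int main() {
    // Running deadlock_prone_1 and deadlock_prone_2 concurrently can hang forever;
    // running deadlock_free in both threads cannot.
    std::thread t1(deadlock_free);
    std::thread t2(deadlock_free);
    t1.join();
    t2.join();
}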

Is a Win32 Event object a recursive mutex?

I searched MSDN: a mutex can be locked twice (recursively) by the same thread, but there isn't a word about recursively acquiring the same event object twice in the same thread.
Can we lock a Win32 event twice in the same thread?
Edit: what do I mean by locking an event? Here I assume the event is auto-reset.
Lock: a thread is woken up from WaitForXXX (e.g. WaitForSingleObject).
Unlock: a thread calls SetEvent or PulseEvent.
A mutex is fundamentally different to an event. Whereas a mutex is used to provide MUTual EXclusion, so that only one thread may access a resource at a time, an event is just a notification mechanism. An auto-reset event provides single-wakeup notifications, whereas a manual-reset event provides multiple-wakeup notifications.
If you signal an auto-reset event, only one thread will receive that signal, and that thread only once; any other threads --- or any other calls to a wait function for that event from the same thread --- will wait until there is a second call to SetEvent.
If you signal a manual-reset event then it stays signalled until you reset it, so multiple threads can wake, and multiple calls to a wait function for that event from the same thread will succeed until some thread calls ResetEvent.
An event doesn't have an "owner" either way: just because thread A was woken from its call to a wait function last time by another thread setting the event, there is nothing that prevents it waiting again, and nothing that specifies whether thread A or B will be woken if both wait on the same auto-reset event. There is also nothing that requires any particular thread to call SetEvent: any thread in the system may do so, whether or not that thread ever calls a wait function for that event. Indeed, a common use case has one thread calling SetEvent, and one or more other threads waiting.
So: yes, you can wait for an event from a thread that just waited for that event, but this is not a lock, and other threads may also wait for the event, and may also wake if the event is signalled.
Update for edited question:
You can use an event to provide a lock, but that is not part of the inherent semantics. You may call WaitForSingleObject twice in succession using the same auto-reset event handle. This is not an error as far as Windows is concerned: you just need to ensure that some other thread or threads calls SetEvent twice, in such a way that the waiting thread wakes from the first call to WaitForSingleObject before the second call to SetEvent happens, in order to avoid "lost" wakeups: SetEvent doesn't count the calls, it just sets the flag.
Also: do not use PulseEvent. It does not guarantee that a thread will wake, even if there is one currently waiting.
I agree with Anthony Williams.
One note that I'd like to add is that many people (not just you) don't quite understand the difference between a mutex and an auto-reset event. They actually behave similarly and may (from the technical perspective) be used for resource locking.
The major difference between them is that a mutex "knows" which thread holds it. That is, when WaitForSingleObject (or similar) acquires a mutex, it is automatically "assigned" to the calling thread. This has two consequences:
Mutex may be acquired recursively by the same thread. This won't work with an auto-reset event of course.
If the thread owning a mutex exits - the mutex is automatically "freed". The appropriate WaitXXXX function will return with WAIT_ABANDONED.
Events, OTOH, may be seen as particular cases of semaphores: an auto-reset event is equivalent to a semaphore whose count never exceeds 1, and a manual-reset event to a semaphore with an effectively unlimited count.
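A small sketch of that difference (Win32, C++); the handle names are illustrative. The owning thread can wait on a mutex again, while a second wait on an auto-reset event finds it non-signalled:

#include <windows.h>
#include <iostream>

int main() {
    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    HANDLE event = CreateEvent(NULL, FALSE, TRUE, NULL);  // auto-reset, initially signalled

    // Mutex: the owning thread may wait again; each wait needs a matching release.
    WaitForSingleObject(mutex, INFINITE);   // acquire
    WaitForSingleObject(mutex, INFINITE);   // recursive acquire: succeeds immediately
    ReleaseMutex(mutex);
    ReleaseMutex(mutex);

    // Auto-reset event: the first wait consumes the signal and resets the event,
    // so a second wait by the same thread times out (it would block with INFINITE).
    WaitForSingleObject(event, 0);          // returns WAIT_OBJECT_0, event now reset
    DWORD r = WaitForSingleObject(event, 0);
    std::cout << (r == WAIT_TIMEOUT ? "second wait timed out\n" : "unexpected\n");

    CloseHandle(mutex);
    CloseHandle(event);
}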

Does a thread waiting on Windows Events need to get scheduled on CPU to wake up from sleeping?

It is best to describe my question in an example:
We create a Windows Event handle by CreateEvent, with manualReset as FALSE.
We create 4 threads. Ensure that they all start running and waiting on the above event by WaitForSingleObject.
In the main thread, in a for loop, we signal this event 4 times by calling SetEvent, like so:
for (int i = 0; i < 4; ++i) ::SetEvent(event);
My question is, can we say that all these 4 threads will certainly be woken up from waiting on this event?
According to my understanding of Windows Event, the answer is YES. Because when the event is set, there is always a thread waiting for it.
However, I read on MSDN that "Setting an event that is already set has no effect". Since the waiting threads probably do not get a chance to run while the main thread is setting the event in the loop, can they still be notified and reset the event to nonsignaled? If the event is not reset, the following SetEvent in the loop is obviously useless.
Or does the OS kernel know which thread should be notified when an event is set, and reset the event immediately if there is a waiting thread, so that the waiting thread does not need to be scheduled in order to reset the event to nonsignaled?
Any clarification or references are welcome. Thanks.
Because when the event is set, there is always a thread waiting for it.
No, you don't know that. A thread may be suspended indefinitely for some reason just before the NtWaitForSingleObject system call.
Since the waiting threads probably do not get a chance to run while the main thread is setting the event in the loop.
If a thread is waiting for an object, it doesn't run at all - that's the whole point of being able to block on a synchronization object.
Can they still be notified and reset the event to nonsignaled? If the event is not reset, the following SetEvent in the loop is obviously useless.
The thread that sets the event is the one that resets the signal state back to 0, not the thread that gets woken up. Of course, if there's no thread waiting the signal state won't be reset.
Or the OS kernel knows which thread should be notified when an event is set, and reset this event immediately if there is a waiting thread.
Yes, the kernel does know. Every dispatcher object has a wait list, and when a thread waits on an object it pushes a wait block onto that list.
In a word? No.
There's no guarantee that each and every call to Set() will signal a waiting thread. MSDN describes this behavior as follows:
There is no guarantee that every call to the Set method will release a thread from an EventWaitHandle whose reset mode is EventResetMode::AutoReset. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen. Also, if Set is called when there are no threads waiting and the EventWaitHandle is already signaled, the call has no effect.
(Source)
If you want to ensure that a specific number of threads will be signaled, you should use a more suitable kind of synchronization primitive, such as a Semaphore.
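A minimal sketch of that suggestion using the Win32 semaphore API (the Worker function and the counts are just for illustration). Releasing the semaphore with a count of 4 guarantees that exactly four waits will complete, even if none of the waiters is running at the moment of the release:

#include <windows.h>
#include <iostream>

HANDLE g_sem;

DWORD WINAPI Worker(LPVOID param) {
    WaitForSingleObject(g_sem, INFINITE);   // each successful wait consumes one count
    std::cout << "thread " << (int)(INT_PTR)param << " released\n";
    return 0;
}

int main() {
    g_sem = CreateSemaphore(NULL, 0, 4, NULL);   // initial count 0, maximum 4

    HANDLE threads[4];
    for (int i = 0; i < 4; ++i)
        threads[i] = CreateThread(NULL, 0, Worker, (LPVOID)(INT_PTR)i, 0, NULL);

    ReleaseSemaphore(g_sem, 4, NULL);       // add 4 to the count: all four waiters
                                            // will eventually get through

    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    for (HANDLE h : threads) CloseHandle(h);
    CloseHandle(g_sem);
}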
When you call SetEvent(event), since manual reset is FALSE for the event, one of the four threads (Windows doesn't specify any preference) will get past WaitForSingleObject(), and on subsequent calls the other three threads would be selected at random, since the event is auto-reset after releasing each thread.
If you're implying that the threads are re-entrant, the thread released each time would again be one of the four, chosen at the OS's discretion.

CriticalSection

I'm not sure about something.
When I use a critical_section/mutex/semaphore in C++, for example, how is the busy-wait problem prevented?
What I mean is: when a thread reaches a critical section that is occupied by another thread, what prevents the thread from wasting cycle time waiting for nothing?
For example, should I call TryEnterCriticalSection, check whether the thread obtained ownership, and otherwise call Sleep(0)?
I'm a bit perplexed.
Thanks
This is Windows specific, but Linux will be similar.
Windows has the concept of a ready queue of threads. These are threads that are ready to run, and will be run at some point on an available processor. Which threads are selected to run immediately is a bit complicated - threads can have different priorities, their priorities can be temporarily boosted, etc.
When a thread waits on a synchronization primitive like a CRITICAL_SECTION or mutex, it is not placed on the ready queue - Windows will not even attempt to run the thread and will run other threads if possible. At some point the thread will be moved back to the ready queue, for instance when the thread owning the CS or mutex releases it.
The thread is not going to be using any processor time, because it will be marked as "waiting". As soon as the thread occupying the critical region finishes, it will send out a signal that moves the waiting thread to the ready queue.
These control structures prevent the thread that can't enter from busy-waiting by letting it sleep until it is woken when the thread inside the critical section finishes. Because the thread is asleep, it is not using processor cycles, so there is no busy wait.
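A minimal sketch of what the answers recommend: just call EnterCriticalSection and let the scheduler block the thread; no TryEnterCriticalSection/Sleep(0) polling loop is needed (the names here are illustrative):

#include <windows.h>

CRITICAL_SECTION g_cs;
int g_shared = 0;

DWORD WINAPI Worker(LPVOID) {
    EnterCriticalSection(&g_cs);   // blocks the thread (no busy wait) if another
                                   // thread currently owns the section
    ++g_shared;                    // the protected work
    LeaveCriticalSection(&g_cs);   // wakes one waiter, if any
    return 0;
}

int main() {
    InitializeCriticalSection(&g_cs);

    HANDLE threads[2];
    for (int i = 0; i < 2; ++i)
        threads[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);

    for (HANDLE h : threads) CloseHandle(h);
    DeleteCriticalSection(&g_cs);
}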

Resources