What I would like to do is have one thread waiting for messages (WaitMessage) and another processing the logic of the application. The first thread would wake up on every message, somehow signal this event to the other thread, go back to sleep, and so on. Is this possible?
UPDATE
Consider the following situation. We have a GUI thread, and this thread is busy in a long calculation. If there is no other thread, there is no option but to check for new messages from time to time; otherwise, the GUI would become unresponsive during the long calculation. Right now my system uses this "polling" approach (it has a single thread that checks the message queue from time to time). However, I would like to know whether this other solution is possible: have another thread waiting on the OS message queue of the GUI so that when a Windows message arrives, this thread will wake up and tell the other about the message. Note that I'm not asking how to communicate the news between threads, but whether it is possible for the second thread to wait for OS messages that arrive in the queue of the first thread.
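For reference, the current polling approach looks roughly like this in Win32 terms (a minimal sketch; IsCalculationDone and DoOneCalculationStep are hypothetical placeholders for the application's own logic):

#include <windows.h>

// Hypothetical placeholders for the application's own logic.
bool IsCalculationDone();
void DoOneCalculationStep();

// Sketch of the single-threaded "polling" approach: the long calculation
// periodically drains the message queue so the GUI stays responsive.
void RunLongCalculationWithPolling()
{
    while (!IsCalculationDone())
    {
        DoOneCalculationStep();   // a small slice of the work

        MSG msg;
        // Pump any pending messages without blocking.
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}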
I should also add that I cannot have two different threads, one for the GUI and another for the calculations, because the system I'm working on is a virtual machine on top of which runs a Smalltalk image that is not thread safe. That's why having a thread that only signals the arrival of new OS messages would be the ideal solution (if possible).
This depends on what the second thread needs to do once the first thread has received a message.
If the second thread simply needs to know the first thread received a message, the first thread could signal an Event object using SetEvent() or PulseEvent(), and the second thread could wait on that event using WaitForSingleObject().
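A minimal sketch of that event-based signaling, assuming the first thread is the one running the message loop (thread and variable names are illustrative):

#include <windows.h>

HANDLE g_hMsgArrived;  // auto-reset event shared by both threads

// Second thread: wakes up whenever the first thread signals the event.
DWORD WINAPI WorkerThread(LPVOID)
{
    for (;;)
    {
        WaitForSingleObject(g_hMsgArrived, INFINITE);
        // ... react to the fact that a message was received ...
    }
}

// Inside the first thread's message loop:
void MessagePump()
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        SetEvent(g_hMsgArrived);   // tell the worker a message arrived
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

int main()
{
    g_hMsgArrived = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset
    CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
    MessagePump();  // the first thread services its own message queue
}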
If the second thread needs data from the first thread, it could use an I/O Completion Port. The first thread could wrap the data inside a dynamically allocated struct and post it to the port using PostQueuedCompletionStatus(), and the second thread could wait for the data using GetQueuedCompletionStatus() and then free it when done using it.
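A sketch of the completion-port variant; the MsgInfo struct and the function names are invented for illustration:

#include <windows.h>

// Payload handed from the first thread to the second (illustrative type).
struct MsgInfo
{
    UINT   message;
    WPARAM wParam;
    LPARAM lParam;
};

HANDLE g_hPort;  // completion port shared by both threads

// First thread: package the message data and post it to the port.
void ForwardMessage(const MSG& msg)
{
    MsgInfo* info = new MsgInfo{ msg.message, msg.wParam, msg.lParam };
    PostQueuedCompletionStatus(g_hPort, 0, 0, reinterpret_cast<LPOVERLAPPED>(info));
}

// Second thread: block until data arrives, use it, then free it.
DWORD WINAPI WorkerThread(LPVOID)
{
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED pOvl = NULL;
        if (GetQueuedCompletionStatus(g_hPort, &bytes, &key, &pOvl, INFINITE) && pOvl)
        {
            MsgInfo* info = reinterpret_cast<MsgInfo*>(pOvl);
            // ... handle info->message ...
            delete info;
        }
    }
}

int main()
{
    // Create a completion port not associated with any file handle.
    g_hPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
    // ... first thread's message loop calls ForwardMessage(msg) for each message ...
}

Passing the struct pointer through the lpOverlapped parameter works here because, for manually posted completions, GetQueuedCompletionStatus simply hands back whatever value was supplied to PostQueuedCompletionStatus.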
Update: based on new information you have provided, it is not possible for one thread to wait on or service another thread's message queue. Only the thread that created and owns the queue can poll messages from its queue. Each thread has its own message queue.
You really need to move your long calculations to a different thread; they don't belong in the GUI thread to begin with. Let the GUI thread manage the GUI and service messages, and do any long-running work in another thread.
If you can't do that because your chosen library is not thread safe, then you have 4 options:
find a different library that is thread safe.
have the calculations poll the message queue periodically when running in the GUI thread.
break up the calculations into small chunks that can be triggered by the GUI thread posting messages to itself. Post a message and return to the message loop. When the message is received, do a little bit of work, post the next message, and return to the message loop. Repeat as needed until the work is done. This allows the GUI thread to continue servicing the message queue in between each calculation step (see the sketch after this list).
move the library to a separate process that communicates back with your main app as needed.
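For option 3, a minimal sketch of the self-posted-message pattern (the WM_APP-based message, the chunk function, and the globals are all illustrative):

#include <windows.h>

// Illustrative private message and progress state; a real app would track
// progress somewhere better than globals.
const UINT WM_DO_WORK_CHUNK = WM_APP + 1;
int  g_step = 0;
const int g_totalSteps = 1000;

void DoOneChunkOfCalculation(int /*step*/) { /* ... a small slice of the work ... */ }

LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_DO_WORK_CHUNK:
        DoOneChunkOfCalculation(g_step++);
        if (g_step < g_totalSteps)
            PostMessage(hWnd, WM_DO_WORK_CHUNK, 0, 0); // schedule the next chunk
        // Returning here lets the message loop service other messages
        // before the next chunk runs.
        return 0;
    default:
        return DefWindowProc(hWnd, uMsg, wParam, lParam);
    }
}

// To start the calculation from the GUI thread:
//   g_step = 0;
//   PostMessage(hWnd, WM_DO_WORK_CHUNK, 0, 0);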
Let's say I have a manual-reset event handle h (created with CreateEvent).
There are several threads in my application, and some thread(s) might be waiting for this event (WaitForSingleObject, WaitForMultipleObjects).
At certain times in my application, I want to assert that no thread is waiting for this handle h.
Is there a Windows API function that tells me whether any thread is waiting for event h at that moment in time?
I don't believe that the Windows API provides any public mechanism for giving out that information (whether or not threads are waiting for a synchronization object). It is something that a typical application should not need to know and would likely result in race conditions if it were provided.
For example, if the application checked to verify that no threads were waiting and then made a decision based on that, it could easily be wrong because a thread may in the very next clock cycle actually start waiting for the event, so the information would be stale and potentially wrong immediately after the check.
I have a question that has been haunting me for a long time.
Short version:
What's the working paradigm of Windows Message Loop?
Detailed version:
When we start a Windows application (not a console application), we can interact with it through the mouse or keyboard. The application retrieves all kinds of messages representing our actions from its message queue, and it is Windows that is responsible for collecting our actions and properly feeding messages into this queue. But doesn't this scenario mean that Windows has to run indefinitely?
I think the Windows scheduler should be running all the time. It could possibly be invoked by a timer interrupt at a pre-defined interval. When the scheduler is triggered by the timer interrupt, it switches out the current thread for the next pending thread. A single thread can only get its messages with GetMessage() when it is scheduled to run.
I am wondering: if there's only one Windows application running, will this application get more chances to get its messages?
Update - 1 (9:59 AM 11/22/2010)
Here is my latest finding:
According to Windows via C/C++, 5th Edition, Chapter 7, section "Thread Priorities":
...For example, if your process' primary thread calls GetMessage() and the system sees that no messages are pending, the system suspends your process' thread, relinquishes the remainder of the thread's time slice, and immediately assigns the CPU to another waiting thread.
If no messages show up for GetMessage to retrieve, the process' primary thread stays suspended and is never assigned to a CPU. However, when a message is placed in the thread's queue, the system knows that the thread should no longer be suspended and assigns the thread to a CPU if no higher-priority threads need to execute.
My current understanding is:
In order for the system to know when a message is placed in a thread's queue, I can think of 2 possible approaches:
1 - Centralized approach: The system is responsible for always checking EVERY thread's queue, even if that thread is blocked for lack of messages. If any message is available, the system changes the state of that thread to schedulable. But this checking could be a real burden on the system, in my opinion.
2 - Distributed approach: The system doesn't check every thread's queue. When a thread calls GetMessage and finds that no message is available, the system simply changes the thread's state to blocked, so it is no longer schedulable. From then on, no matter who places a message into the blocked thread's queue, it is this "who" (not the system) that is responsible for changing the thread's state from blocked to ready (or whatever state). So, as far as GetMessage is concerned, the thread is disqualified from scheduling by the system and re-qualified by someone else. All the system cares about is scheduling the runnable threads; it doesn't care where these schedulable threads come from. This approach avoids the burden in approach 1, and thus avoids the possible bottleneck.
In fact, the key point here is: how are the states of the threads changed? I am not sure whether it is really a distributed paradigm as shown in approach 2, but could it be a good option?
Applications call GetMessage() in their message loop. If the message queue is empty, the thread will just block until another message becomes available. Thus, GetMessage is a process's way of telling Windows that it doesn't have anything to do at the moment.
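The typical loop looks like this; the blocking happens inside the GetMessage call itself:

#include <windows.h>

// A conventional Windows message loop. GetMessage does not return until a
// message is available (or WM_QUIT arrives), so the thread consumes almost
// no CPU while the queue is empty.
void RunMessageLoop()
{
    MSG msg;
    BOOL result;
    while ((result = GetMessage(&msg, NULL, 0, 0)) != 0)
    {
        if (result == -1)
            break;                 // an error occurred
        TranslateMessage(&msg);    // translate keyboard input
        DispatchMessage(&msg);     // hand the message to the window procedure
    }
}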
I am wondering if there's only one Windows application running, will this application get more chances to get its messages?
Well, yeah, probably, but I think you might be missing a crucial point. Extracting a message from the queue is a blocking call. The data structure used is usually referred to as a blocking queue. The dequeue operation is designed to voluntarily yield the current thread's execution if the queue is empty. Threads can stay parked using various methods, but in this case the thread most likely remains in a waiting state via kernel-level mechanisms. Once the signal is given that the queue has items available, the thread may go into a ready state and the scheduler will start assigning it its fair share of the CPU. In other words, if there are no messages pending for that application, it just sits there in an idle state consuming close to zero CPU time.
The fewer threads you have running (time slices are scheduled to threads, not processes), the more chances any single application will have to pull messages from its queue. Actually, this has nothing to do with Windows messages; it's true for all multithreading; the more threads of the same or higher priority which are running, the fewer time slices any thread will get.
Beyond that, I'm not sure what you are really asking, though...
It is best to describe my question in an example:
We create a Windows event handle with CreateEvent, with bManualReset set to FALSE.
We create 4 threads and ensure that they are all running and waiting on the above event via WaitForSingleObject.
In the main thread, in a for loop, we signal this event 4 times with SetEvent, like this:
for (int i = 0; i < 4; ++i) ::SetEvent(event);
My question is: can we say that all 4 of these threads will certainly be woken up from waiting on this event?
According to my understanding of Windows Event, the answer is YES. Because when the event is set, there is always a thread waiting for it.
However, I read on MSDN that "Setting an event that is already set has no effect." Since the waiting threads probably do not get a chance to run while the main thread is setting the event in the loop, can they still be notified and reset the event to nonsignaled? If the event is not reset, the following SetEvent in the loop is obviously useless.
Or does the OS kernel know which thread should be notified when an event is set, and reset the event immediately if there is a waiting thread, so that the waiting thread does not need to be scheduled in order to reset the event to nonsignaled?
Any clarification or references are welcome. Thanks.
Because when the event is set, there is always a thread waiting for it.
No, you don't know that. A thread may be suspended indefinitely for some reason just before the NtWaitForSingleObject system call.
Since the waiting threads probably do not get a chance to run while the main thread is setting the event in the loop.
If a thread is waiting for an object, it doesn't run at all - that's the whole point of being able to block on a synchronization object.
Can they still be notified and reset the event to nonsignaled? If the event is not reset, the following SetEvent in the loop is obviously useless.
The thread that sets the event is the one that resets the signal state back to 0, not the thread that gets woken up. Of course, if there's no thread waiting the signal state won't be reset.
Or does the OS kernel know which thread should be notified when an event is set, and reset the event immediately if there is a waiting thread.
Yes, the kernel does know. Every dispatcher object has a wait list, and when a thread waits on an object it pushes a wait block onto that list.
In a word? No.
There's no guarantee that each and every call to Set() will signal a waiting thread. MSDN describes this behavior as follows:
There is no guarantee that every call to the Set method will release a thread from an EventWaitHandle whose reset mode is EventResetMode::AutoReset. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen. Also, if Set is called when there are no threads waiting and the EventWaitHandle is already signaled, the call has no effect.
(Source)
If you want to ensure that a specific number of threads will be signaled, you should use a more suitable kind of synchronization primitive, such as a Semaphore.
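For example, releasing a Win32 semaphore with a count of 4 guarantees that exactly four waits will be satisfied, unlike calling SetEvent four times on an auto-reset event (a sketch using the native API rather than the .NET EventWaitHandle from the quote):

#include <windows.h>

HANDLE g_hSem;

DWORD WINAPI WorkerThread(LPVOID)
{
    // Each successful wait consumes exactly one count from the semaphore.
    WaitForSingleObject(g_hSem, INFINITE);
    // ... do the work this wake-up represents ...
    return 0;
}

int main()
{
    // Initial count 0, maximum count 4.
    g_hSem = CreateSemaphore(NULL, 0, 4, NULL);

    HANDLE threads[4];
    for (int i = 0; i < 4; ++i)
        threads[i] = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);

    // Unlike four SetEvent calls on an auto-reset event, this adds 4 to the
    // count, so all four waiting threads are guaranteed to be released.
    ReleaseSemaphore(g_hSem, 4, NULL);

    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    return 0;
}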
When you call SetEvent(event), since manual reset is set to FALSE for the event, one of the four threads (Windows doesn't specify any preference) gets past its WaitForSingleObject() call, and on the subsequent calls the remaining 3 threads are released one at a time in whatever order the OS chooses, since the event auto-resets after releasing each thread.
If you're trying to imply that the threads are re-entrant, the thread released each time would again be one of the four, chosen at the OS's discretion.
I'm not sure about something.
When I use a critical section/mutex/semaphore in C++, for example, how is the busy-wait problem prevented?
What I mean is: when a thread reaches a critical section that is occupied by another thread, what prevents the thread from wasting cycles waiting for nothing?
For example,
should I call TryEnterCriticalSection, check whether the thread obtained ownership, and otherwise call Sleep(0)?
I'm a bit perplexed.
Thanks.
This is Windows specific, but Linux will be similar.
Windows has the concept of a ready queue of threads. These are threads that are ready to run, and will be run at some point on an available processor. Which threads are selected to run immediately is a bit complicated - threads can have different priorities, their priorities can be temporarily boosted, etc.
When a thread waits on a synchronization primitive like a CRITICAL_SECTION or mutex, it is not placed on the ready queue - Windows will not even attempt to run the thread and will run other threads if possible. At some point the thread will be moved back to the ready queue, for instance when the thread owning the CS or mutex releases it.
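In practice that means you can simply call EnterCriticalSection and let the OS do the waiting; no TryEnterCriticalSection/Sleep(0) loop is needed (a minimal sketch):

#include <windows.h>

CRITICAL_SECTION g_cs;
int g_sharedCounter = 0;

DWORD WINAPI WorkerThread(LPVOID)
{
    // If another thread owns the critical section, this thread blocks:
    // the OS may spin very briefly as an optimization, then puts the
    // thread into a wait state rather than busy-waiting.
    EnterCriticalSection(&g_cs);
    ++g_sharedCounter;             // protected work
    LeaveCriticalSection(&g_cs);
    return 0;
}

int main()
{
    InitializeCriticalSection(&g_cs);

    HANDLE threads[2];
    for (int i = 0; i < 2; ++i)
        threads[i] = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    DeleteCriticalSection(&g_cs);
    return 0;
}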
The thread is not going to be taking any system resources, because it will be marked as "waiting". As soon as the thread occupying the critical region finishes, it will send out a signal that will move the waiting thread to the ready queue.
These control structures prevent the thread that can't enter from busy-waiting by letting it sleep until it is signaled by the thread inside the critical section finishing its execution. Because the thread is asleep, it is not using processor cycles, so there is no busy wait.
I've read the documentation for ReadDirectoryChangesW() and also seen the CDirectoryChangeWatcher project, but neither says why one would want to call it asynchronously. I understand that the current thread will not block, but, at least for the CDirectoryChangeWatcher code that uses a completion port, when it calls GetQueuedCompletionStatus(), that thread blocks anyway (if there are no changes).
So if I can just call ReadDirectoryChangesW() synchronously in a separate thread whose blocking I don't care about, why would I ever want to call ReadDirectoryChangesW() asynchronously?
When you call it asynchronously, you have more control over which thread does the waiting. It also allows you to have a single thread wait for multiple things, such as a directory change, an event, and a message. Finally, even if you're doing the waiting in the same thread that set up the watch in the first place, it gives you control over how long you're willing to wait. GetQueuedCompletionStatus has a timeout parameter that ReadDirectoryChangesW doesn't offer by itself.
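A sketch of that completion-port approach with a timeout; the directory path, buffer size, and notify filters are placeholders, and error checking is omitted:

#include <windows.h>

int main()
{
    // Open the directory for overlapped I/O (path is a placeholder).
    HANDLE hDir = CreateFileW(L"C:\\watched",
                              FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED,
                              NULL);

    // Associate the directory handle with a completion port.
    HANDLE hPort = CreateIoCompletionPort(hDir, NULL, 0, 0);

    alignas(DWORD) BYTE buffer[4096];  // results buffer must be DWORD-aligned
    OVERLAPPED ovl = {};
    DWORD bytesReturned = 0;

    // Issue the asynchronous watch; the call returns immediately.
    ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE,
                          FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                          &bytesReturned, &ovl, NULL);

    // Wait for a change, but give up after 5 seconds -- something the
    // synchronous form of ReadDirectoryChangesW cannot do by itself.
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED pOvl = NULL;
    if (GetQueuedCompletionStatus(hPort, &bytes, &key, &pOvl, 5000))
    {
        // ... walk the FILE_NOTIFY_INFORMATION records in 'buffer' ...
    }

    CloseHandle(hPort);
    CloseHandle(hDir);
    return 0;
}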
You would call ReadDirectoryChangesW such that it returns its results asynchronously if you ever needed the calling thread to not block. A tautology, but the truth.
Candidates for such threads: the UI thread & any thread that is solely responsible for servicing a number of resources (Sockets, any sort of IPC, independent files, etc.).
Not being familiar with the project, I'd guess the CDirectoryChangeWatcher doesn't care if its worker thread blocks. Generally, that's the nature of worker threads.
I tried using ReadDirectoryChangesW synchronously in a worker thread, and guess what: it blocked, so the thread wouldn't exit by itself at program exit.
So if you don't want to use evil things like TerminateThread, you should use asynchronous calls.
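For example, with the completion-port form the worker can be woken for a clean shutdown by posting a sentinel completion instead of terminating it (a sketch; the SHUTDOWN_KEY value is arbitrary):

#include <windows.h>

// Arbitrary sentinel key used only to tell the worker to exit.
const ULONG_PTR SHUTDOWN_KEY = 1;

DWORD WINAPI WatcherThread(LPVOID param)
{
    HANDLE hPort = static_cast<HANDLE>(param);
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED pOvl = NULL;
        GetQueuedCompletionStatus(hPort, &bytes, &key, &pOvl, INFINITE);
        if (key == SHUTDOWN_KEY)
            break;                       // clean, cooperative exit
        // ... otherwise process the directory-change completion ...
    }
    return 0;
}

// At program exit, instead of TerminateThread:
//   PostQueuedCompletionStatus(hPort, 0, SHUTDOWN_KEY, NULL);
//   WaitForSingleObject(hWatcherThread, INFINITE);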