I'm not sure about something.
When I use a critical section/mutex/semaphore in C++, for example, how is the busy-wait problem prevented?
What I mean is: when a thread reaches a critical section that is already occupied by another thread, what prevents that thread from wasting CPU cycles waiting for nothing?
For example, should I call TryEnterCriticalSection, check whether the thread obtained ownership, and otherwise call Sleep(0)?
I'm a bit perplexed.
Thanks.
This is Windows specific, but Linux will be similar.
Windows has the concept of a ready queue of threads. These are threads that are ready to run, and will be run at some point on an available processor. Which threads are selected to run immediately is a bit complicated - threads can have different priorities, their priorities can be temporarily boosted, etc.
When a thread waits on a synchronization primitive like a CRITICAL_SECTION or mutex, it is not placed on the ready queue - Windows will not even attempt to run the thread and will run other threads if possible. At some point the thread will be moved back to the ready queue, for instance when the thread owning the CS or mutex releases it.
The thread is not going to consume any CPU time, because it is marked as "waiting". As soon as the thread occupying the critical section finishes, it sends out a signal that moves the waiting thread back to the ready queue.
These control structures prevent the thread that can't enter from busy waiting by letting it sleep until it is woken by the thread in the critical section finishing its work. Because the thread is asleep, it is not using processor cycles, so there is no busy wait.
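To make that concrete, here is a minimal Win32 sketch (Worker and g_counter are made-up names for illustration): a plain EnterCriticalSection call is all you need; there is no reason for a TryEnterCriticalSection/Sleep(0) loop, because the blocked thread is descheduled by the kernel until the owner calls LeaveCriticalSection.

    #include <windows.h>

    CRITICAL_SECTION g_cs;   // guards g_counter (names made up for this sketch)
    long g_counter = 0;

    DWORD WINAPI Worker(LPVOID)
    {
        // EnterCriticalSection blocks without busy waiting: if the CS is owned
        // by another thread, this thread is descheduled until the owner calls
        // LeaveCriticalSection.
        EnterCriticalSection(&g_cs);
        ++g_counter;                       // protected work
        LeaveCriticalSection(&g_cs);
        return 0;
    }

    int main()
    {
        InitializeCriticalSection(&g_cs);
        HANDLE h[2];
        h[0] = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
        h[1] = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);
        CloseHandle(h[0]);
        CloseHandle(h[1]);
        DeleteCriticalSection(&g_cs);
        return 0;
    }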
Related
Let's say I want to create a thread and have the necessary resources allocated for it; however, I'd like to defer actually launching that thread.
I'm working on a thread pool, so I'd like to have some threads ready (but not running) before I start the thread pool.
Is there a way to do so in C++11?
You could have all the threads wait on a semaphore as soon as they start up. And then you can just signal them when it's time for them to actually start running.
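C++11 has no standard semaphore (std::counting_semaphore only arrived in C++20), but the same gating can be done with a std::condition_variable. A minimal sketch, with made-up names:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex m;
    std::condition_variable cv;
    bool go = false;                          // the "start" signal

    void worker(int id)
    {
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return go; });   // parked here, not running
        }
        std::cout << "thread " << id << " running\n";
    }

    int main()
    {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back(worker, i);     // created, but held at the gate

        // ... finish setting up the rest of the thread pool ...

        {
            std::lock_guard<std::mutex> lk(m);
            go = true;
        }
        cv.notify_all();                      // release all workers at once

        for (auto& t : pool)
            t.join();
    }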
This sounds similar to the "Thread Pool / Task" behavior present in a number of languages (and probably several C++ libraries like boost). A Thread Pool has one or more threads, and can queue Tasks. When it doesn't have tasks, a Thread Pool just waits for input. They can also, as implied, queue up tasks if the threads are busy.
What I would like to do is have one thread waiting for messages (WaitMessage) and another processing the logic of the application. The first thread would wake up on every message, somehow signal this event to the other thread, go back to sleep, and so on. Is this possible?
UPDATE
Consider the following situation. We have a GUI thread, and this thread is busy in a long calculation. If there is no other thread, there is no option but to check for new messages from time to time; otherwise, the GUI would become unresponsive during the long calculation. Right now my system uses this "polling" approach (it has a single thread that checks the message queue from time to time). However, I would like to know whether this other solution is possible: have another thread wait on the OS message queue of the GUI thread, so that when a Windows message arrives this thread wakes up and tells the other about the message. Note that I'm not asking how to communicate the news between threads, but whether it is possible for the second thread to wait for OS messages that arrive in the queue of the first thread.
I should also add that I cannot have two different threads, one for the GUI and another for the calculations, because the system I'm working on is a virtual machine on top of which runs a Smalltalk image that is not thread-safe. That's why having a thread that only signals new OS messages would be the ideal solution (if possible).
This depends on what the second thread needs to do once the first thread has received a message.
If the second thread simply needs to know the first thread received a message, the first thread could signal an Event object using SetEvent() or PulseEvent(), and the second thread could wait on that event using WaitForSingleObject().
If the second thread needs data from the first thread, it could use an I/O Completion Port. The first thread could wrap the data inside a dynamically allocated struct and post it to the port using PostQueuedCompletionStatus(), and the second thread could wait for the data using GetQueuedCompletionStatus() and then free it when done using it.
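A minimal sketch of the Event approach (the function and variable names are made up; in the real program the message loop would belong to the GUI thread):

    #include <windows.h>

    HANDLE g_msgEvent;   // auto-reset event: "a message has arrived"

    // Thread 1: owns the message queue (in the real program, the GUI thread).
    DWORD WINAPI MessageThread(LPVOID)
    {
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            SetEvent(g_msgEvent);            // notify the logic thread
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }

    // Thread 2: sleeps until the event is signalled, burning no CPU meanwhile.
    DWORD WINAPI LogicThread(LPVOID)
    {
        for (;;)
        {
            WaitForSingleObject(g_msgEvent, INFINITE);
            // ... react to the news that a message arrived ...
        }
    }

    int main()
    {
        g_msgEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);  // auto-reset
        HANDLE hLogic = CreateThread(nullptr, 0, LogicThread, nullptr, 0, nullptr);
        MessageThread(nullptr);              // run the message loop on this thread
        CloseHandle(hLogic);
        CloseHandle(g_msgEvent);
        return 0;
    }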
Update: based on new information you have provided, it is not possible for one thread to wait on or service another thread's message queue. Only the thread that created and owns the queue can poll messages from its queue. Each thread has its own message queue.
You really need to move your long calculations to a different thread, they don't belong in the GUI thread to begin with. Let the GUI thread manage the GUI and service messages, do any long-running things in another thread.
If you can't do that because your chosen library is not thread safe, then you have 4 options:
find a different library that is thread safe.
have the calculations poll the message queue periodically when running in the GUI thread.
break up the calculations into small chunks that can be triggered by the GUI thread posting messages to itself. Post a message and return to the message loop. When the message is received, do a little bit of work, post the next message, and return to the message loop. Repeat as needed until the work is done. This allows the GUI thread to continue servicing the message queue in between each calculation step (see the sketch after this list).
move the library to a separate process that communicates back with your main app as needed.
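As a sketch of the third option, the relevant piece of the window procedure might look like this (WM_DO_CHUNK, g_step, and g_totalSteps are made-up names):

    #include <windows.h>

    const UINT WM_DO_CHUNK = WM_APP + 1;
    int g_step = 0;
    const int g_totalSteps = 1000;

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg)
        {
        case WM_DO_CHUNK:
            // Do one small slice of the long calculation...
            ++g_step;
            if (g_step < g_totalSteps)
                PostMessage(hwnd, WM_DO_CHUNK, 0, 0);   // schedule the next slice
            return 0;                                    // back to the message loop
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }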
I have a hand-made thread pool. The threads read from a completion port and do some other work. One particular thread has to be ended. How do I interrupt its wait if it is blocked in GetQueuedCompletionStatus() or GetQueuedCompletionStatusEx()?
A finite timeout (100-1000 ms) plus an exit flag is far from elegant, causes delays, and is left as a last resort.
CancelIo(completionPortHandle) within APC in target thread causes ERROR_INVALID_HANDLE.
CancelSynchronousIo(completionPortHandle) causes ERROR_NOT_FOUND.
PostQueuedCompletionStatus() with a termination packet doesn't let me choose which thread receives it.
A rough TerminateThread() combined with a mutex should work (I haven't tested it), but is it really a good idea?
I tried waiting on a special event and the completion port together. WaitForMultipleObjects() returned immediately as if the completion port were signalled, but GetQueuedCompletionStatus() showed that nothing had actually been returned.
I read Overlapped I/O: How to wake a thread on a completion port event or a normal event? and googled a lot.
Probably the problem itself, ending a particular thread's work, is a sign of bad design, and all my threads should be equal and combined into a normal thread pool. In that case, the PostQueuedCompletionStatus() approach should work. (Although I have doubts that this approach is elegant and concise, especially if threads use GetQueuedCompletionStatusEx() to get multiple packets at once.)
If you just want to reduce the size of the thread pool it doesn't matter which thread exits.
However, if for some reason you need to signal to a particular thread that it needs to exit, rather than allowing any thread to exit, you can use this method.
If you use GetQueuedCompletionStatusEx you can do an alertable wait by passing TRUE for fAlertable. You can then use QueueUserAPC to queue an APC to the thread you want to quit.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684954(v=vs.85).aspx
If the thread is busy then you will still have to wait for the current work item to be completed.
Certainly don't call TerminateThread.
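A rough sketch of that approach; the names and the shared g_quit flag are assumptions for illustration, and checking the flag keeps the worker from depending on the exact GetLastError() value produced when an APC interrupts the wait:

    #include <windows.h>
    #include <atomic>

    std::atomic<bool> g_quit{false};

    // The APC body runs in the context of the worker thread it was queued to.
    void CALLBACK QuitApc(ULONG_PTR /*param*/)
    {
        g_quit.store(true);
    }

    DWORD WINAPI Worker(LPVOID param)
    {
        HANDLE iocp = static_cast<HANDLE>(param);
        OVERLAPPED_ENTRY entries[8];
        ULONG count = 0;

        for (;;)
        {
            // Alertable wait: an APC queued to this thread interrupts the call.
            BOOL ok = GetQueuedCompletionStatusEx(
                iocp, entries, 8, &count, INFINITE, TRUE /*fAlertable*/);

            if (g_quit.load())
                break;                    // this particular thread was asked to exit

            if (!ok)
                continue;                 // interrupted or failed, nothing dequeued

            for (ULONG i = 0; i < count; ++i)
            {
                // ... process entries[i] ...
            }
        }
        return 0;
    }

    // Elsewhere, to stop one specific worker:
    //     QueueUserAPC(QuitApc, hThatWorkerThread, 0);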
Unfortunately, I/O completion port handles are always in a signaled state and as such cannot really be used in WaitFor* functions.
GetQueuedCompletionStatus[Ex] is the only way to block on the completion port. With an empty queue, the function will return only if the thread becomes alerted. As mentioned by Ben, QueueUserAPC will make the thread alerted and cause GetQueuedCompletionStatus to return.
However, QueueUserAPC allocates memory and thus can fail in low-memory conditions or when memory quotas are in effect. The same holds for PostQueuedCompletionStatus. As such, using any of these functions on an exit path is not a good idea.
Unfortunately, the only robust way seems to be calling the undocumented NtAlertThread exported by ntdll.dll.
extern "C" NTSTATUS __stdcall NtAlertThread(HANDLE hThread);
Link with ntdll.lib. This function will put the target thread into an alerted state without queuing anything.
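A short sketch of how it could be used, assuming the worker loop checks an atomic g_quit flag after its alertable GetQueuedCompletionStatusEx call (as in the earlier sketch); the names are made up:

    #include <windows.h>
    #include <winternl.h>    // NTSTATUS
    #include <atomic>

    extern "C" NTSTATUS __stdcall NtAlertThread(HANDLE hThread);

    std::atomic<bool> g_quit{false};   // checked by the worker after its alertable wait

    void StopWorker(HANDLE hWorkerThread)
    {
        g_quit.store(true);            // no memory allocation on this path
        NtAlertThread(hWorkerThread);  // breaks the worker out of its alertable wait
    }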
I have a question that has been haunting me for a long time.
Short version:
What's the working paradigm of Windows Message Loop?
Detailed version:
When we start a Windows application (not a console application), we can interact with it through the mouse or keyboard. The application retrieves all kinds of messages representing our actions from its message queue, and it is Windows that is responsible for collecting our actions and properly feeding messages into this queue. But doesn't this scenario mean that Windows has to run indefinitely?
I think the Windows scheduler should be running all the time. It could possibly be invoked by a timer interrupt at a pre-defined interval. When the scheduler is triggered by the timer interrupt, it switches the current thread for the next pending thread. A single thread can only get its messages with GetMessage() when it is scheduled to run.
I am wondering: if there's only one Windows application running, will this application get more chances to get its messages?
Update - 1 (9:59 AM 11/22/2010)
Here is my latest finding:
According to Windows via C/C++, 5th Edition, Chapter 7, section "Thread Priorities":
"...For example, if your process' primary thread calls GetMessage() and the system sees that no messages are pending, the system suspends your process' thread, relinquishes the remainder of the thread's time slice, and immediately assigns the CPU to another waiting thread.
If no messages show up for GetMessage to retrieve, the process' primary thread stays suspended and is never assigned to a CPU. However, when a message is placed in the thread's queue, the system knows that the thread should no longer be suspended and assigns the thread to a CPU if no higher-priority threads need to execute."
My current understanding is:
In order for the system to know when a message is placed in a thread's queue, I can think of 2 possible approaches:
1 - Centralized approach: the system is responsible for always checking EVERY thread's queue, even if a thread is blocked because it has no messages. If any message is available, the system changes the state of that thread to schedulable. But this checking could be a real burden on the system, in my opinion.
2 - Distributed approach: the system doesn't check every thread's queue. When a thread calls GetMessage and finds that no message is available, the system simply changes the thread's state to blocked, so it is no longer schedulable. From then on, no matter who places a message into the blocked thread's queue, it is this "who" (not the system) that is responsible for changing the thread's state from blocked to ready (or whatever state). So, with respect to GetMessage, the thread is disqualified from scheduling by the system and re-qualified by someone else. All the system cares about is scheduling the runnable threads; it doesn't care where these schedulable threads come from. This approach avoids the burden in approach 1, and thus avoids the possible bottleneck.
In fact, the key point here is: how are the states of the threads changed? I am not sure whether it is really a distributed paradigm as shown in approach 2, but could it be a good option?
Applications call GetMessage() in their message loop. If the message queue is empty, the process will just block until another message becomes available. Thus, GetMessage is a process's way of telling Windows that it doesn't have anything to do at the moment.
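For reference, a typical message loop is just a few lines; a minimal sketch:

    #include <windows.h>

    // Inside WinMain, after the window has been created:
    int RunMessageLoop()
    {
        MSG msg;
        // GetMessage blocks (the thread is descheduled) until a message
        // arrives; it returns 0 once WM_QUIT is retrieved.
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return static_cast<int>(msg.wParam);
    }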
"I am wondering if there's only one Windows application running, will this application get more chances to get its messages?"
Well, yeah, probably, but I think you might be missing a crucial point. Extracting a message from the queue is a blocking call. The data structure used is usually referred to as a blocking queue. The dequeue operation is designed to voluntarily yield the current thread's execution if the queue is empty. Threads can stay parked using a variety of methods, but in this case the thread most likely remains in a waiting state via kernel-level mechanisms. Once the signal is given that the queue has items available, the thread may move to a ready state and the scheduler will start assigning it its fair share of the CPU. In other words, if there are no messages pending for that application, it just sits there in an idle state consuming close to zero CPU time.
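In C++ terms, such a blocking dequeue is usually built on a condition variable; a minimal sketch (the class name is made up):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class BlockingQueue
    {
    public:
        void push(T value)
        {
            {
                std::lock_guard<std::mutex> lk(m_);
                q_.push(std::move(value));
            }
            cv_.notify_one();             // wake one waiting consumer
        }

        T pop()
        {
            std::unique_lock<std::mutex> lk(m_);
            // The wait releases the lock and parks the thread until an item
            // is available, so no CPU is burned while the queue is empty.
            cv_.wait(lk, [this] { return !q_.empty(); });
            T value = std::move(q_.front());
            q_.pop();
            return value;
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<T> q_;
    };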
The fewer threads you have running (time slices are scheduled to threads, not processes), the more chances any single application will have to pull messages from its queue. Actually, this has nothing to do with Windows messages; it's true for all multithreading; the more threads of the same or higher priority which are running, the fewer time slices any thread will get.
Beyond that, I'm not sure what you are really asking, though...
From my experience, when the main thread is ready to exit, it should wait until the other threads exit normally.
But from this link http://msdn.microsoft.com/en-us/library/ms686722(v=VS.85).aspx, it looks like when a process is terminated, all related resources are freed, so if a certain worker thread is doing heavy work, the wait may be a little longer. Can I skip the waiting?
Also, in the above link, I find:
"Do not terminate a process unless its threads are in known states. If a thread is waiting on a kernel object, it will not be terminated until the wait has completed. This can cause the application to hang."
This is too brief for me to understand why killing threads that are in unknown states when the process exits is wrong.
Can someone give me more detail about the problem?
Thanks.
So, when a thread is waiting on a kernel object, it will not be terminated until the wait is over.
So, let's say you have an application with 3 threads, in the following states:
Main thread, currently idle
UI handling thread, currently idle
A thread waiting on a kernel object
If you kill the process, thread #2 will die, killing the UI input handling and therefore giving the appearance that the application is unresponsive (hung). Until thread #3 finishes waiting on the kernel object, the main thread won't exit, so the process keeps running and its resources don't get released.
So I think it's basically saying that it's better to ask a process to exit normally instead of sending it a kill signal, because you can get into a situation like the one described if any of the process's threads are waiting on kernel objects.
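As an illustration of "waiting for the other threads normally", a common Win32 pattern is to signal a shutdown event and then wait on the thread handles before returning from main; a minimal sketch with made-up names:

    #include <windows.h>

    HANDLE g_stopEvent;   // manual-reset "please exit" event

    DWORD WINAPI Worker(LPVOID)
    {
        // Do work until asked to stop.
        while (WaitForSingleObject(g_stopEvent, 0) == WAIT_TIMEOUT)
        {
            // ... one unit of work ...
        }
        return 0;
    }

    int main()
    {
        g_stopEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
        HANDLE worker = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);

        // ... run the application ...

        SetEvent(g_stopEvent);                  // ask the worker to finish
        WaitForSingleObject(worker, INFINITE);  // wait for a clean, known-state exit
        CloseHandle(worker);
        CloseHandle(g_stopEvent);
        return 0;
    }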