Is there a way to create a new thread with space allocated for that thread, but defer its execution in C++11?

Let's say I want to create a thread and have the necessary space allocated for it, but I'd like to defer launching that thread.
I'm working on a thread pool, so I'd like to have some threads ready (but not running) before I start the thread pool.
Is there a way to do so in C++11?

You could have all the threads wait on a semaphore as soon as they start up, and then signal them when it's time for them to actually start running.
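C++11 has no standard semaphore type, so a minimal sketch of this idea can use a std::condition_variable as the gate; the StartGate name and its wait/release members are made up for illustration:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Gate that pooled threads wait on before doing any work.
struct StartGate {
    std::mutex m;
    std::condition_variable cv;
    bool open = false;
    void wait() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return open; });
    }
    void release() {
        { std::lock_guard<std::mutex> lock(m); open = true; }
        cv.notify_all();
    }
};

int main() {
    StartGate gate;
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&gate] {
            gate.wait();   // thread exists, but does no work until released
            // ... run the pool's work here ...
        });
    gate.release();        // "launch" the pool
    for (auto& t : pool) t.join();
}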

This sounds similar to the "Thread Pool / Task" behavior present in a number of languages (and probably several C++ libraries, like Boost). A Thread Pool has one or more threads and can queue Tasks. When it doesn't have tasks, a Thread Pool just waits for input. It can also, as implied, queue up tasks while the threads are busy.
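For reference, the worker loop inside such a pool is typically something like the following sketch (condition variable plus task queue; all names are illustrative):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<std::function<void()>> tasks;
bool stopping = false;

// Each pool thread runs this loop: sleep while the queue is empty,
// otherwise pop one task and execute it outside the lock.
void worker() {
    for (;;) {
        std::function<void()> task;
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return stopping || !tasks.empty(); });
            if (stopping && tasks.empty()) return;
            task = std::move(tasks.front());
            tasks.pop();
        }
        task();
    }
}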

Related

Is it possible to share the same message queue between two threads?

What I would like to do is to have one thread waiting for messages (WaitMessage) and another processing the logic of the application. The first thread would wake up on every message, somehow signal this event to the other thread, go back to sleep, and so on. Is this possible?
UPDATE
Consider the following situation. We have a GUI thread, and this thread is busy with a long calculation. If there is no other thread, there is no option but to check for new messages from time to time; otherwise, the GUI would become unresponsive during the long calculation. Right now my system uses this "polling" approach (it has a single thread that checks the message queue from time to time). However, I would like to know whether this other solution is possible: have another thread waiting on the OS message queue of the GUI so that when a Windows message arrives, this thread will wake up and tell the other about the message. Note that I'm not asking how to communicate the news between threads, but whether it is possible for the second thread to wait for OS messages that arrive in the queue of the first thread.
I should also add that I cannot have two different threads, one for the GUI and another for the calculations, because the system I'm working on is a Virtual Machine on top of which runs a Smalltalk image that is not thread safe. That's why having a thread that only signals new OS messages would be the ideal solution (if possible.)
This depends on what the second thread needs to do once the first thread has received a message.
If the second thread simply needs to know the first thread received a message, the first thread could signal an Event object using SetEvent() or PulseEvent(), and the second thread could wait on that event using WaitForSingleObject().
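A rough sketch of that pattern (error handling omitted; the handle and function names are arbitrary):

#include <windows.h>

// Auto-reset event, initially non-signaled.
HANDLE hMsgReceived;

void firstThread_onMessage() {
    SetEvent(hMsgReceived);                      // tell the other thread a message arrived
}

void secondThread_wait() {
    WaitForSingleObject(hMsgReceived, INFINITE); // block until the first thread signals
}

int main() {
    hMsgReceived = CreateEvent(NULL, FALSE, FALSE, NULL);
    // ... create the two threads ...
    CloseHandle(hMsgReceived);
}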
If the second thread needs data from the first thread, it could use an I/O Completion Port. The first thread could wrap the data inside a dynamically allocated struct and post it to the port using PostQueuedCompletionStatus(), and the second thread could wait for the data using GetQueuedCompletionStatus() and then free it when done using it.
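A sketch of the completion-port variant; MyData and the functions wrapping each thread's side are made up for illustration:

#include <windows.h>

struct MyData { int payload; };

HANDLE hPort; // created once with CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0)

// First thread: wrap the data in a heap allocation and post it to the port.
void postData(int value) {
    MyData* data = new MyData{ value };
    PostQueuedCompletionStatus(hPort, sizeof(*data), 0,
                               reinterpret_cast<LPOVERLAPPED>(data));
}

// Second thread: wait for the data, use it, then free it.
void receiveData() {
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED ov = NULL;
    if (GetQueuedCompletionStatus(hPort, &bytes, &key, &ov, INFINITE)) {
        MyData* data = reinterpret_cast<MyData*>(ov);
        // ... process data->payload ...
        delete data;
    }
}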
Update: based on new information you have provided, it is not possible for one thread to wait on or service another thread's message queue. Only the thread that created and owns the queue can poll messages from its queue. Each thread has its own message queue.
You really need to move your long calculations to a different thread, they don't belong in the GUI thread to begin with. Let the GUI thread manage the GUI and service messages, do any long-running things in another thread.
If you can't do that because your chosen library is not thread safe, then you have 4 options:
find a different library that is thread safe.
have the calculations poll the message queue periodically when running in the GUI thread.
break up the calculations into small chunks that the GUI thread drives by posting messages to itself: post a message and return to the message loop; when the message arrives, do a small piece of work, post the next message, and return to the loop again. Repeat until the work is done. This lets the GUI thread keep servicing the message queue between calculation steps (see the sketch after this list).
move the library to a separate process that communicates back with your main app as needed.
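A minimal sketch of the third option (self-posted messages); WM_APP_DOWORK and DoOneSmallChunkOfWork are hypothetical names:

#include <windows.h>

#define WM_APP_DOWORK (WM_APP + 1)

// Hypothetical helper: does one small piece of the calculation and
// returns true while there is still work left.
bool DoOneSmallChunkOfWork();

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_APP_DOWORK:
        if (DoOneSmallChunkOfWork())
            PostMessage(hWnd, WM_APP_DOWORK, 0, 0); // schedule the next chunk
        return 0;                                   // back to the message loop
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}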

Deadlock avoidance with semaphore?

How do I prevent deadlock? Is there any algorithm that can do this? I have two processes: one holds a semaphore and the other waits for it. When the process that holds the semaphore dies, the deadlock occurs. My question is: is there any way (in the semaphore or the operating system) to avoid such a situation? Thanks!
Because threads can become blocked, and because objects can have synchronized methods that keep other threads from accessing the object until the owning thread is done with it, it is possible for one thread to get stuck waiting for another thread, which in turn waits for another thread, and so on.
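To make that chain concrete, here is the classic two-lock illustration (names are illustrative): each thread holds one mutex and waits forever for the other's.

#include <mutex>
#include <thread>

std::mutex a, b;

void thread1() {
    std::lock_guard<std::mutex> la(a); // holds a...
    std::lock_guard<std::mutex> lb(b); // ...and waits for b
}

void thread2() {
    std::lock_guard<std::mutex> lb(b); // holds b...
    std::lock_guard<std::mutex> la(a); // ...and waits for a -> possible deadlock
}

// A common avoidance rule: always acquire the locks in the same order,
// or acquire them together, e.g. std::lock(a, b);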

Forcing context switch in Windows

Is there a way to force a context switch in C++ to a specific thread, assuming I have the thread handle or thread ID?
No, you won't be able to force the operating system to run the thread you want. You can use yield to force a context switch, though...
The yield in the Win32 API is the function SwitchToThread(). If there is no other thread available to run, it returns zero and the current thread simply keeps running.
You can only encourage the Windows thread scheduler to pick a certain thread; you can't force it. You do so first by making the thread block on a synchronization object and signaling it, and secondarily by bumping up its priority.
Explicit context switching is supported, but you'll have to use fibers. Review SwitchToFiber(). A fiber is not a thread by a long shot; it is similar to a co-routine of old. Fibers' heyday has come and gone: they are not competitive with threads anymore, they have very poor CPU cache locality, and they cannot take advantage of multiple cores.
The only way to force a particular thread to run is by using process/thread affinity, but I can't imagine ever having a problem for which this was a reasonable solution.
The only way to force a context switch is to force a thread onto a different processor using affinity.
In other words, what you are trying to do isn't really viable.
Calling SwitchToThread() will result in a context switch if there is another thread ready to run that is eligible to run on this processor. The documentation states it as follows:
If calling the SwitchToThread function causes the operating system to switch execution to another thread, the return value is nonzero.
If there are no other threads ready to execute, the operating system does not switch execution to another thread, and the return value is zero.
You can temporarily bump the priority of the other thread while looping with Sleep(0) calls; Sleep(0) passes control to other threads. Suppose that the other thread has incremented a lock variable and you need to wait until it becomes zero again:
// Wait until the other thread releases the lock (drops it back to zero).
// lock is a shared volatile LONG; InterlockedCompareExchange(&lock, 0, 0) reads it atomically.
SetThreadPriority(otherThread, THREAD_PRIORITY_ABOVE_NORMAL);
while (InterlockedCompareExchange(&lock, 0, 0) != 0)
    Sleep(0);   // give up the rest of this time slice
SetThreadPriority(otherThread, THREAD_PRIORITY_NORMAL);
I would check out the book Concurrent Programming for Windows. The scheduler seems to do a few things worth noting.
Sleep(0) only yields to higher priority threads (or possibly others at the same priority). This means you cannot fix priority inversion situations with just a Sleep(0), where other lower priority threads need to run. You must use SwitchToThread, Sleep a non-zero duration, or fully block on some kernel HANDLE.
You can create two synchronization objects (such as two events) and use the API SignalObjectAndWait.
If hObjectToWaitOn is non-signaled and your other thread is waiting on hObjectToSignal, the OS can, in theory, perform a quick context switch inside this API, before the end of the time slice.
And if you want the current thread to resume automatically, simply pass a small value (such as 50 or 100) as dwMilliseconds.
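A sketch of that hand-off; the two event handles are illustrative, and the 100 ms timeout lets the current thread resume on its own if the other side never signals back:

#include <windows.h>

HANDLE hWakeOther; // event the other thread waits on
HANDLE hWakeMe;    // event this thread waits on

void handOff() {
    // Signal the other thread and start waiting in a single call,
    // giving the scheduler a chance to switch to it immediately.
    SignalObjectAndWait(hWakeOther, hWakeMe, 100, FALSE);
}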

Question about message loop

I have a question that has been haunting me for a long time.
Short version:
What's the working paradigm of Windows Message Loop?
Detailed version:
When we start a Windows application (not a console application), we can interact with it through the mouse or keyboard. The application retrieves all kinds of messages representing our actions from its message queue, and it is Windows that is responsible for collecting our actions and feeding the messages into this queue. But doesn't this scenario mean that Windows has to run all the time?
I think the Windows scheduler should be running all the time. It could possibly be invoked by a timer interrupt at a pre-defined interval. When the scheduler is triggered by the timer interrupt, it switches the current thread for the next pending thread. A single thread can only get its messages with GetMessage() when it is scheduled to run.
I am wondering: if there's only one Windows application running, will this application get more chances to retrieve its messages?
Update - 1 (9:59 AM 11/22/2010)
Here is my latest finding:
According to Windows via C/C++, 5th Edition, Chapter 7, section "Thread Priorities":
...For example, if your process' primary thread calls GetMessage() and the system sees that no messages are pending, the system suspends your process' thread, relinquishes the remainder of the thread's time slice, and immediately assigns the CPU to another waiting thread.
If no messages show up for GetMessage to retrieve, the process' primary thread stays suspended and is never assigned to a CPU. However, when a message is placed in the thread's queue, the system knows that the thread should no longer be suspended and assigns the thread to a CPU if no higher-priority threads need to execute.
My current understanding is:
In order for the system to know when a message is placed in a thread's queue, I can think of 2 possible approaches:
1 - Centralized approach: the system is responsible for always checking EVERY thread's queue, even if that thread is blocked for lack of messages. If any message is available, the system changes the state of that thread to schedulable. But this checking could be a real burden on the system, in my opinion.
2 - Distributed approach: the system doesn't check every thread's queue. When a thread calls GetMessage and finds that no message is available, the system just changes the thread's state to blocked, so it is no longer schedulable. From then on, no matter who places a message into a blocked thread's queue, it is this "who" (not the system) that is responsible for changing the thread's state from blocked to ready (or whatever state). So the thread is disqualified from scheduling by the system and re-qualified by someone else with regard to GetMessage. All the system cares about is scheduling the runnable threads; it doesn't care where these schedulable threads come from. This approach avoids the burden of approach 1, and thus the possible bottleneck.
In fact, the key point here is: how are the states of the threads changed? I am not sure whether it is really a distributed paradigm as shown in approach 2, but could it be a good option?
Applications call GetMessage() in their message loop. If the message queue is empty, the calling thread will just block until another message becomes available. Thus, GetMessage is a process's way of telling Windows that it doesn't have anything to do at the moment.
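For reference, a standard message loop looks like this; GetMessage blocks while the queue is empty and returns 0 when WM_QUIT arrives:

#include <windows.h>

void messageLoop() {
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) { // blocks here when there is nothing to do
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}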
I am wondering: if there's only one Windows application running, will this application get more chances to retrieve its messages?
Well, yeah, probably, but I think you might be missing a crucial point. Extracting a message from the queue is a blocking call; the data structure used is usually referred to as a blocking queue. The dequeue operation is designed to voluntarily yield the current thread's execution if the queue is empty. Threads can stay parked using various methods, but in this case the thread likely remains in a waiting state via kernel-level mechanisms. Once the signal is given that the queue has items available, the thread may go into a ready state and the scheduler will start assigning it its fair share of the CPU. In other words, if there are no messages pending for that application, it just sits there in an idle state consuming close to zero CPU time.
The fewer threads you have running (time slices are scheduled to threads, not processes), the more chances any single application will have to pull messages from its queue. Actually, this has nothing to do with Windows messages; it's true for all multithreading: the more threads of the same or higher priority that are running, the fewer time slices any one thread will get.
Beyond that, I'm not sure what you are really asking, though...

CriticalSection

I'm not sure about something.
When I use a critical section/mutex/semaphore in C++, for example, how is the busy-wait problem prevented?
What I mean is: when a thread reaches a critical section and the critical section is occupied by another thread, what prevents the thread from wasting cycle time waiting for nothing?
For example,
should I call TryEnterCriticalSection, check whether the thread obtained ownership, and otherwise call Sleep(0)?
I'm a bit perplexed.
Thanks
This is Windows specific, but Linux will be similar.
Windows has the concept of a ready queue of threads. These are threads that are ready to run, and will be run at some point on an available processor. Which threads are selected to run immediately is a bit complicated - threads can have different priorities, their priorities can be temporarily boosted, etc.
When a thread waits on a synchronization primitive like a CRITICAL_SECTION or mutex, it is not placed on the ready queue - Windows will not even attempt to run the thread and will run other threads if possible. At some point the thread will be moved back to the ready queue, for instance when the thread owning the CS or mutex releases it.
The thread is not going to consume any CPU time, because it will be marked as "waiting". As soon as the thread occupying the critical region finishes, it will send out a signal that moves the waiting thread to the ready queue.
These control structures prevent the thread that can't enter from busy-waiting by letting it sleep until it is woken when the thread inside the critical section finishes. Because the thread is asleep, it is not using processor cycles, so there is no busy wait.
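In other words, there is no need for a TryEnterCriticalSection/Sleep(0) loop; a plain EnterCriticalSection already blocks efficiently, roughly like this sketch:

#include <windows.h>

CRITICAL_SECTION cs;

void worker() {
    EnterCriticalSection(&cs); // if another thread owns it, this thread is put
                               // into a wait state and burns no CPU cycles
    // ... touch the shared data ...
    LeaveCriticalSection(&cs);
}

int main() {
    InitializeCriticalSection(&cs);
    // ... create the worker threads ...
    DeleteCriticalSection(&cs);
}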
