SetThreadPriority and SetPriorityClass - Windows

I don't understand how to use SetThreadPriority and SetPriorityClass to lower and raise the priority of a thread.
My understanding is that SetPriorityClass selects the range of priorities available to a process and SetThreadPriority sets the relative priority within that class.
For instance, what is the result of doing this for a thread:
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
Thanks for your help.

One thing about PROCESS_MODE_BACKGROUND_BEGIN that I have observed, but that is apparently not documented, is that at least under Windows 7 it empties the working set of the process PERMANENTLY, no matter how the process accesses the memory - until the background mode ends.
For example, normally, without PROCESS_MODE_BACKGROUND_BEGIN, when my machine has gigabytes of free memory and the process needs to consume and constantly process gigabytes of memory, the process working set will be about equal to the allocation size. That is, the process gets all the memory it uses into its working set. Good.
Now, with PROCESS_MODE_BACKGROUND_BEGIN the working set will be a few tens of megabytes!
The bad result is that this causes constant page faults and the computation runs much slower! The page faults likely go not to the page file but to the Windows cache, yet they still slow the computation down significantly, while also consuming CPU with meaningless work.
In conclusion, PROCESS_MODE_BACKGROUND_BEGIN is not suitable for low-priority background work. The work will be very time and energy inefficient.
PROCESS_MODE_BACKGROUND_BEGIN is suitable only when the process really does not intend to do anything consuming in the background.
In contrast, THREAD_MODE_BACKGROUND_BEGIN does not have such dreadful effects, even when the thread is the only thread in the process.
Note also that you need to turn off PROCESS_MODE_BACKGROUND_BEGIN for good using PROCESS_MODE_BACKGROUND_END; it is not enough to call THREAD_MODE_BACKGROUND_END after PROCESS_MODE_BACKGROUND_BEGIN.
So Arno is not quite correct in claiming that THREAD_MODE_BACKGROUND_END undoes the effects of PROCESS_MODE_BACKGROUND_BEGIN, even for a single thread.
Additional note: SetProcessPriorityBoost with bDisablePriorityBoost = TRUE does not have any such effect on the working set.
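For the kind of low-priority background work described above, the per-thread variant is usually the better fit. A minimal sketch of my own (assuming Windows Vista or later; note that THREAD_MODE_BACKGROUND_BEGIN only applies to the calling thread):

    #include <windows.h>

    // Put only the calling thread into background mode, do the work,
    // then leave background mode on the same thread.
    DWORD WINAPI BackgroundWorker(LPVOID)
    {
        if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN))
        {
            // Already in background mode or an unsupported OS: fall back to
            // a plain low thread priority.
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
        }

        // ... low-priority background work goes here ...

        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
        return 0;
    }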

The process priority class and the thread priority together build the base priority of a thread. See Scheduling Priorities to find out how the priorities are assembled. Looking at this list makes it clear that your understanding is essentially correct: within a certain priority class the base priority can take various values, determined by the thread priority.
The PROCESS_MODE_BACKGROUND_BEGIN value for SetPriorityClass and the THREAD_MODE_BACKGROUND_END value for SetThreadPriority are not supported on all Windows versions.
PROCESS_MODE_BACKGROUND_BEGIN:
The system lowers the resource scheduling priorities of the process (and its threads) so that it can perform background work without significantly affecting activity in the foreground.
THREAD_MODE_BACKGROUND_END:
End background processing mode. The system restores the resource scheduling priorities of the thread as they were before the thread entered background processing mode.
The consequence of the scenario in question here is predictable: SetPriorityClass will put the process, with all of its threads, into background processing mode. The following SetThreadPriority will only release the calling thread from background processing mode; all other threads of the process will stay in background processing mode.
Note: Only the combination of process priority class and thread priority determines the base priority. Therefore neither a call to GetThreadPriority nor a call to GetPriorityClass will return the base priority; only their combination yields the base priority described in the "Scheduling Priorities" link above. Unfortunately the new background processing mode values aren't yet included in the base priority list. But the name base priority tells what matters here: based on the base priority (derived from process priority class and thread priority), the scheduler is allowed to adapt the scheduling priority dynamically. Background mode is just another way to fine-tune the scheduling priority. Another way is priority boosts. The priority boost functionality has existed for some time; the new background processing mode values for SetThreadPriority and SetPriorityClass expose this capability directly. In Windows XP this had to be done by a call to SetProcessPriorityBoost.
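As a concrete illustration (my own sketch, not from the question): lower the class for the whole process, then pick a relative priority within it; reading either value back gives only half of the pair from which the base priority is derived.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Lower the priority class for the whole process ...
        SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);
        // ... and the relative priority of this thread within that class.
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);

        // Neither value alone is the base priority; the scheduler derives it
        // from the combination (see the Scheduling Priorities table).
        printf("class = 0x%lx, thread priority = %d\n",
               GetPriorityClass(GetCurrentProcess()),
               GetThreadPriority(GetCurrentThread()));
        return 0;
    }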

Related

Could a process running only on one processor have threads running on other processors?

Is it possible, in a multiprocessor environment (PC), that one Windows process is configured to run only on one processor (affinity mask = 1, or SetProcessAffinityMask(GetCurrentProcess(), 1)), but its threads are spawned on other processors?
(The question came from a discussion started at one company regarding the use of synchronization objects (Events, Mutexes, Semaphores) and WinAPIs like WaitForSingleObject, etc., especially SignalObjectAndWait, for which MSDN states
"Note that the "signal" and "wait" are not guaranteed to be performed
as an atomic operation. Threads executing on other processors can
observe the signaled state of the first object before the thread
calling SignalObjectAndWait begins its wait on the second object"
Does it mean that for a single processor it is guaranteed to be atomic?
P.S. Are there any differences in Windows context switching between multiple processors and a single processor with several real cores?
P.P.S. Please be patient with this question if I didn't use exact and concrete terms - this area is still not well known to me.
No.
The set of processor cores a thread can run on is the intersection of the process affinity mask and the thread affinity mask.
To get the behavior you describe, one would set the thread affinity mask for the main thread, and not mess with the process mask.
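A small sketch of that approach (my own example, assuming the main thread is the one to pin): restrict only the current thread and leave the process affinity mask untouched, so worker threads can still run anywhere.

    #include <windows.h>

    int main(void)
    {
        // Restrict the current (main) thread to CPU 0. The effective CPU set
        // of every thread is the intersection of the process affinity mask
        // and that thread's affinity mask, so other threads are unaffected.
        DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
        if (previous == 0)
        {
            // Failed, e.g. the mask does not intersect the process mask.
            return 1;
        }

        // ... create worker threads here; they keep the full process mask ...
        return 0;
    }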
For your followup questions: If it isn't atomic, it isn't atomic. There are additional guarantees for threads sharing a core, because preemption follows certain rules, but they are very complex, since relative priority and dynamic priority are important factors in thread scheduling. Because of the complexity, it is best to use proper synchronization.
Notably, race conditions between threads of equal priority certainly still exist on a single core (or single core restricted) system, but they are far less frequent and therefore far more difficult to find and debug.
Is it possible, in a multiprocessor environment (PC), that one Windows process is configured to run only on one processor (affinity mask = 1, or SetProcessAffinityMask(GetCurrentProcess(), 1)), but its threads are spawned on other processors?
If CPU affinity is not set to only one core, could one process run on multiple cores?
What's the difference between processes and threads?
Could a thread have processes, or could a process have threads?
Could a process be seen from a thread's point of view, or vice versa?
What is the notion of atomicity?
When could the number 1 be seen as a multidimensional unit?
Could we divide 1 by 0 (zero)? When could we, and when couldn't we?
Does it mean that for a single processor it is guaranteed to be atomic?
One CPU: do you remember "terminate and stay resident"? Good old times!
Then Unix: multiprocessing, multithreading, etc. :)
Note:
You couldn't ask a question without knowing the answer to that question.
Try to ask something you don't know - that's impossible! You're asking because you have an answer. Look inside your question. The answer is evident. :)

Is it advisable to use the Windows API 'SetThreadPriority' within a parallel_for_each loop

I would like to reduce the thread-priority of the threads servicing a parallel_for_each, because under heavy load conditions they consume too much processor time relative to other threads in my system.
Questions:
1) Do the servicing threads of a parallel_for_each inherit the thread-priority of the calling thread? In this case I could presumably call SetThreadPriority before and after the parallel_for_each, and everything should be fine.
2) Alternatively is it advisable to call SetThreadPriority within the parallel_for_each? This will clearly invoke the API multiple times for the same threads. Is there a large overhead of doing this?
2.b) Assuming that I do this, will it affect thread priorities the next time parallel_for_each is called - i.e., do I need to somehow reset the priority of each thread afterwards?
3) I'm wondering about thread priorities in general. Would anyone like to comment: supposing that I had 2 threads contending for a single processor and one was set to "below normal" while the other was "normal" priority, roughly what percentage more processor time would one thread get compared to the other?
All threads initially start at THREAD_PRIORITY_NORMAL. So you'd have to reduce the priority of each thread. Or reduce the priority of the owning process.
There is little overhead in calling SetThreadPriority. Once you have woken up a thread, the additional cost of calling SetThreadPriority is negligible. Once you set the thread's priority it will remain at that value until changed.
Suppose that you have one processor and two threads ready to run. The scheduler will always choose to run the thread with the higher priority. This means that in your scenario the below-normal thread would never run. In reality, there's a lot more to scheduling than that - for example, priority inversion. However, you can think of it like this: if all processors are busy with normal-priority threads, expect lower-priority threads to be starved of CPU.
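If you do decide to lower the pool threads from inside the loop, here is a minimal sketch (my own, assuming the PPL concurrency::parallel_for_each from <ppl.h>): the repeated SetThreadPriority calls are cheap, but remember that the pool threads keep that priority for later work unless you reset it.

    #include <windows.h>
    #include <ppl.h>
    #include <vector>

    int main()
    {
        std::vector<int> data(1000, 1);

        concurrency::parallel_for_each(data.begin(), data.end(), [](int& x)
        {
            // Lower whichever pool thread picked up this element; calling
            // this again on an already-lowered thread costs very little.
            ::SetThreadPriority(::GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

            x *= 2; // the actual work
        });

        return 0;
    }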

What effect does changing the process priority have in Windows?

If you go into Task Manager, right click a process, and set priority to Realtime, it often stops program crashes, or makes them run faster.
In a programming context, what does this do?
It calls SetPriorityClass().
Every thread has a base priority level determined by the thread's priority value and the priority class of its process. The system uses the base priority level of all executable threads to determine which thread gets the next slice of CPU time. The SetThreadPriority function enables setting the base priority level of a thread relative to the priority class of its process. For more information, see Scheduling Priorities.
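For reference, a small sketch of such a call (my own example): without SeIncreaseBasePriorityPrivilege, a request for REALTIME_PRIORITY_CLASS is quietly reduced to HIGH_PRIORITY_CLASS, which you can detect by reading the class back.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Ask for realtime; without the privilege, Windows grants HIGH instead.
        if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        {
            printf("SetPriorityClass failed: %lu\n", GetLastError());
            return 1;
        }

        DWORD cls = GetPriorityClass(GetCurrentProcess());
        printf("Effective priority class: 0x%lx (REALTIME is 0x%x)\n",
               cls, REALTIME_PRIORITY_CLASS);
        return 0;
    }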
It tells the Windows scheduler to be more or less greedy when allocating execution time slices to your process. Realtime execution makes it never yield execution (not even to drivers, according to MSDN), which may cause stalls in your app if it waits on external events but has no yielding of its own (like Sleep, SwitchToThread or WaitFor[Single|Multiple]Objects). As such, realtime should be avoided unless you know that the application will handle it correctly.
It works by changing the weight given to this process in the OS task scheduler. Your CPU can only execute one instruction at a time (to put it very, very simply) and the OS's job is to keep swapping instructions from each running process. By raising or lowering the priority, you're affecting how much time it's allotted in the CPU relative to other applications currently being multi-tasked.

What is the 'realtime' process priority setting for?

From what I've read in the past, you're encouraged not to change the priority of your Windows applications programmatically, and if you do, you should never change them to 'Realtime'.
What does the 'Realtime' process priority setting do, compared to 'High', and 'Above Normal'?
A realtime-priority thread can never be pre-empted by timer interrupts and runs at a higher priority than any other thread in the system. As such, a CPU-bound realtime-priority thread can totally ruin a machine.
Creating realtime priority threads requires a privilege (SeIncreaseBasePriorityPrivilege) so it can only be done by administrative users.
For Vista and beyond, one option for applications that do require that they run at realtime priorities is to use the Multimedia Class Scheduler Service (MMCSS) and let it manage your threads priority. The MMCSS will prevent your application from using too much CPU time so you don't have to worry about tanking the machine.
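A rough sketch of the MMCSS route (my own example; it assumes one of the built-in task names such as "Audio", the avrt.h header, and linking against Avrt.lib):

    #include <windows.h>
    #include <avrt.h>   // link with Avrt.lib
    #include <stdio.h>

    int main(void)
    {
        // Hand priority management of this thread over to MMCSS instead of
        // setting a realtime priority class ourselves.
        DWORD taskIndex = 0;
        HANDLE task = AvSetMmThreadCharacteristicsW(L"Audio", &taskIndex);
        if (task == NULL)
        {
            printf("MMCSS registration failed: %lu\n", GetLastError());
            return 1;
        }

        // ... latency-sensitive work goes here ...

        AvRevertMmThreadCharacteristics(task); // undo the registration
        return 0;
    }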
Simply, the "Real Time" priority class is higher than "High" priority class. I don't think there's much more to it than that. Oh yeah - you have to have the SeIncreaseBasePriorityPrivilege to put a thread into the Real Time class.
Windows will sometimes boost the priority of a thread for various reasons, but it won't boost the priority of a thread into another priority class. It also won't boost the priority of threads in the real-time priority class. So a High priority thread won't get any automatic temporary boost into the Real Time priority class.
Russinovich's "Inside Windows" chapter on how Windows handles priorities is a great resource for learning how this works:
http://web.archive.org/web/20140909124652/http://download.microsoft.com/download/5/b/3/5b38800c-ba6e-4023-9078-6e9ce2383e65/C06X1116607.pdf
Note that there's absolutely no problem with a thread having a real-time priority on a normal Windows system - they aren't necessarily for special processes running on dedicated machines. I imagine that multimedia drivers and/or processes might need threads with a real-time priority. However, such a thread should not require much CPU - it should block most of the time so that normal system events get processed.
It would be the highest available priority setting and would usually only be used on a box that was dedicated to running that specific program. It's actually high enough that it could starve the keyboard and mouse threads to the extent that they become unresponsive.
So basically, if you have to ask, don't use it :)
Real-time is the highest priority class available to a process. Therefore, it is different from 'High' in that it's one step greater, and 'Above Normal' in that it's two steps greater.
Similarly, real-time is also a thread priority level.
The process priority class raises or lowers all effective thread priorities in the process and is therefore considered the 'base priority'.
So, a process has a:
Base process priority class.
Individual thread priorities, offsets of the base priority class.
Since real-time is supposed to be reserved for applications that absolutely must pre-empt other running processes, there is a special security privilege to protect against haphazard use of it. This is defined by the security policy.
In NT6+ (Vista+), use of the Vista Multimedia Class Scheduler is the proper way to achieve real-time operations in what is not a real-time OS. It works, for the most part, though is not perfect since the OS isn't designed for real-time operations.
Microsoft considers this priority very dangerous, rightly so. No application should use it except in very specialized circumstances, and even then try to limit its use to temporary needs.
Once Windows learns that a program uses a higher-than-normal priority, it seems to limit the priority of the process.
Setting the priority from IDLE to REALTIME does NOT change the CPU usage.
I found on my multi-processor AMD CPU that if I drop one of the CPUs, such as the LAST one, the CPU usage maxes out and the last CPU remains idle. The processor speed increases to 75% on my quad AMD.
Use Task Manager -> select the process -> right-click on the process -> Set Affinity.
Check all but the last processor. The CPU usage will increase to the maximum on the remaining processors, and frame counts (if processing video) will increase.
Like the other answers say, real time gives a program the highest priority class. Nothing else is processed until that program has been processed.
On my Pentium 4 machine I set Minecraft to real time a lot, since it increases the game's performance a lot, and the system seems completely stable. So realtime isn't as bad as it seems; just, if you have a multi-core machine, set the program's affinity to a specific core or cores (just not all of them, so that everything else can still run in case the realtime program gets hung up) and set the priority to real time.
It is basically higher/greater than everything else. The keyboard is less of a priority than the realtime process. This means the process will be serviced faster than the keyboard, and if it can't handle that, your keyboard is slowed.
Realtime process priority is used mainly to make a specific process run faster at the expense of literally everything else. I personally have my Discord bot's process priority set to realtime, since the bot is lightweight and I need it to keep responding quickly. But it would be a bad idea to randomly change process priorities unless you know what you're doing. For example, if you were to set Google Chrome's priority to low, it wouldn't cause any problems, but if you were to set the registry's priority to low, it would likely cause more than enough problems. Just don't set a heavy program to realtime unless the device it's running on is completely dedicated to that program.

Win32 Thread scheduling

As I understand it, the Windows thread scheduler does not discriminate between threads belonging to two different processes, provided all of them have the same base priority. My question is: if I have two applications, one with only one thread and the other with, say, 50 threads, all with the same base priority, does it mean that the second process enjoys more CPU time than the first one?
Scheduling in Windows is at the thread granularity. The basic idea behind this approach is that processes don't run but only provide resources and a context in which their threads run. Coming back to your question, because scheduling decisions are made strictly on a thread basis, no consideration is given to what process the thread belongs to. In your example, if process A has 1 runnable thread and process B has 50 runnable threads, and all 51 threads are at the same priority, each thread would receive 1/51 of the CPU time—Windows wouldn't give 50 percent of the CPU to process A and 50 percent to process B.
To understand the thread-scheduling algorithms, you must first understand the priority levels that Windows uses. You can refer here for quick reference.
Try reading Windows Internals for in depth understanding.
All of the above is accurate, but if you're worried about the 50-thread process hogging all the CPU, there ARE techniques you can use to ensure that no single process overwhelms the CPU.
IMHO the best way to do this is to use job objects to manage the usage of a process. First call CreateJobObject, then SetInformationJobObject to limit the max CPU usage of the processes in the job object, and AssignProcessToJobObject to assign the process with 50 threads to the job object. You can then let the OS ensure that the 50-thread process doesn't consume too much CPU time.
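A rough sketch of that approach (my own; CapProcessCpu and the percentage are illustrative, and the CPU rate control requires Windows 8 or later):

    #include <windows.h>

    // Cap a process (and anything it spawns inside the job) at roughly
    // 'percent' of total CPU time. Returns the job handle, which must stay
    // open for the cap to remain in effect, or NULL on failure.
    HANDLE CapProcessCpu(HANDLE process, DWORD percent)
    {
        HANDLE job = CreateJobObjectW(NULL, NULL);
        if (job == NULL)
            return NULL;

        JOBOBJECT_CPU_RATE_CONTROL_INFORMATION rate = {};
        rate.ControlFlags = JOB_OBJECT_CPU_RATE_CONTROL_ENABLE |
                            JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP;
        rate.CpuRate = percent * 100;   // expressed in 1/100ths of a percent

        if (!SetInformationJobObject(job, JobObjectCpuRateControlInformation,
                                     &rate, sizeof(rate)) ||
            !AssignProcessToJobObject(job, process))
        {
            CloseHandle(job);
            return NULL;
        }
        return job;
    }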
The unit of scheduling is a thread, not a process, so a process with 50 threads, all in a tight loop, will get much more of the cpu than a process with only a single thread, provided all are running at the same priority. This is normally not a concern since most threads in the system are not in a runnable state and will not be up for scheduling; they are waiting on I/O, waiting for input from the user, and so on.
Windows Internals is a great book for learning more about the Windows thread scheduler.
That depends on the behavior of the threads. In general, with a 50:1 difference in thread count, yes, the application with more threads is going to get a lot more CPU time. However, Windows also uses dynamic thread prioritization, which can change this somewhat. Dynamic thread prioritization is described here:
https://web.archive.org/web/20130312225716/http://support.microsoft.com/kb/109228
Relevant excerpt:
The base priority of a thread is the base level from which these upward adjustments are made. The current priority of a thread is called its dynamic priority. Interactive threads that yield before their time slice is up will tend to be adjusted upward in priority from their base priority. Compute-bound threads that do not yield, consuming their entire time slice, will tend to have their priority decreased, but not below the base level. This arrangement is often called heuristic scheduling. It provides better interactive performance and tends to lessen the system impact of "CPU hog" threads.
There is a local 'advanced' setting that purportedly can be used to shade scheduling slightly in favor of the app with focus. With the 'services' setting, there is no preference. In previous versions of Windows, this setting used to be somewhat more granular than just 'applications with focus' (slight preference to the app with focus) and 'services' (all equal weighting).
As this can be set by the user on the target machine, it seems like asking for grief to depend on this setting...
