As I understand it, the Windows thread scheduler does not discriminate between threads belonging to two different processes, provided all of them have the same base priority. My question is: if I have two applications, one with only one thread and the other with, say, 50 threads, all with the same base priority, does it mean that the second process enjoys more CPU time than the first one?
Scheduling in Windows is at the thread granularity. The basic idea behind this approach is that processes don't run but only provide resources and a context in which their threads run. Coming back to your question, because scheduling decisions are made strictly on a thread basis, no consideration is given to what process the thread belongs to. In your example, if process A has 1 runnable thread and process B has 50 runnable threads, and all 51 threads are at the same priority, each thread would receive 1/51 of the CPU time—Windows wouldn't give 50 percent of the CPU to process A and 50 percent to process B.
To understand the thread-scheduling algorithms, you must first understand the priority levels that Windows uses; the MSDN "Scheduling Priorities" page is a good quick reference.
Try reading Windows Internals for an in-depth understanding.
All of the above are accurate, but if you're worried about the 50-thread process hogging all the CPU, there ARE techniques you can use to ensure that no single process overwhelms the CPU.
IMHO the best way to do this is to use job objects to manage the usage of a process. First call CreateJobObject, then SetInformationJobObject to limit the max CPU usage of the processes in the job object, and AssignProcessToJobObject to assign the 50-thread process to the job object. You can then let the OS ensure that the 50-thread process doesn't consume too much CPU time.
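As a minimal P/Invoke sketch of that sequence, assuming Windows 8 / Server 2012 or later (where job-object CPU rate control exists); the 25% cap and the helper name are illustrative, not part of the original answer:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class JobCpuCap
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_CPU_RATE_CONTROL_INFORMATION info, int cbInfo);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    const int JobObjectCpuRateControlInformation = 15;
    const uint JOB_OBJECT_CPU_RATE_CONTROL_ENABLE = 0x1;
    const uint JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP = 0x4;

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_CPU_RATE_CONTROL_INFORMATION
    {
        public uint ControlFlags;
        public uint CpuRate;   // units of 1/100 of a percent: 2500 == 25%
    }

    // Illustrative helper: cap `p` (e.g. the 50-thread process) at `percent` CPU.
    public static void CapProcess(Process p, uint percent)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_CPU_RATE_CONTROL_INFORMATION
        {
            ControlFlags = JOB_OBJECT_CPU_RATE_CONTROL_ENABLE |
                           JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP,
            CpuRate = percent * 100
        };
        if (!SetInformationJobObject(job, JobObjectCpuRateControlInformation,
                ref info, Marshal.SizeOf<JOBOBJECT_CPU_RATE_CONTROL_INFORMATION>()))
            throw new System.ComponentModel.Win32Exception();
        if (!AssignProcessToJobObject(job, p.Handle))
            throw new System.ComponentModel.Win32Exception();
    }
}
```

With a hard cap the scheduler simply stops running the job's threads once they have consumed their share of each measurement interval, so the single-threaded process is protected no matter how many runnable threads the other process creates.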
The unit of scheduling is a thread, not a process, so a process with 50 threads, all in a tight loop, will get much more of the CPU than a process with only a single thread, provided all are running at the same priority. This is normally not a concern, since most threads in the system are not in a runnable state and will not be up for scheduling; they are waiting on I/O, waiting for input from the user, and so on.
Windows Internals is a great book for learning more about the Windows thread scheduler.
That depends on the behavior of the threads. In general, with a 50:1 difference in thread count, yes, the application with more threads is going to get a lot more time. However, Windows also uses dynamic thread prioritization, which can change this somewhat. Dynamic thread prioritization is described here:
https://web.archive.org/web/20130312225716/http://support.microsoft.com/kb/109228
Relevant excerpt:
The base priority of a thread is the base level from which these upward adjustments are made. The current priority of a thread is called its dynamic priority. Interactive threads that yield before their time slice is up will tend to be adjusted upward in priority from their base priority. Compute-bound threads that do not yield, consuming their entire time slice, will tend to have their priority decreased, but not below the base level. This arrangement is often called heuristic scheduling. It provides better interactive performance and tends to lessen the system impact of "CPU hog" threads.
There is a local 'advanced' setting that purportedly can be used to shade scheduling slightly in favor of the app with focus. With the 'services' setting, there is no preference. In previous versions of Windows, this setting used to be somewhat more granular than just 'applications with focus' (slight preference to the app with focus) and 'services' (all equal weighting).
As this can be set by the user on the target machine, it seems like it is asking for grief to depend on this setting...
Is it possible, in a multiprocessor environment (PC), that one Windows process is configured to run only on one processor (affinity mask = 1, or SetProcessAffinityMask(GetCurrentProcess(), 1)), but its threads are spawned on other processors?
(The question came from a discussion started in one company regarding the use of synchronization objects (Events, Mutexes, Semaphores) and WinAPIs like WaitForSingleObject, etc., especially SignalObjectAndWait, for which MSDN states:
"Note that the "signal" and "wait" are not guaranteed to be performed
as an atomic operation. Threads executing on other processors can
observe the signaled state of the first object before the thread
calling SignalObjectAndWait begins its wait on the second object"
Does it mean that for single processor it's guaranteed to be atomic?
P.S. Are there any differences for Windows context switching between having multiple processors and having a single processor with multiple real cores?
P.P.S. Please be patient with this question if I didn't use exact and concrete terms - this area is still not very well known to me.
No.
The set of processor cores a thread can run on is the intersection of the process affinity mask and the thread affinity mask.
To get the behavior you describe, one would set the thread affinity mask for the main thread, and not mess with the process mask.
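To make the distinction concrete, here is a minimal C# sketch; the Win32 calls are the real API (via P/Invoke), while the scenario itself is illustrative:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class AffinityDemo
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();   // pseudo-handle for the calling thread

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern UIntPtr SetThreadAffinityMask(IntPtr hThread, UIntPtr mask);

    static void Main()
    {
        // Pin only the calling thread to CPU 0. Other threads in the process
        // keep the full process mask and may still run on any allowed core.
        SetThreadAffinityMask(GetCurrentThread(), (UIntPtr)1);

        // By contrast, this would confine EVERY thread in the process to CPU 0,
        // because each thread's effective set is (process mask & thread mask):
        // Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;
    }
}
```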
For your followup questions: If it isn't atomic, it isn't atomic. There are additional guarantees for threads sharing a core, because preemption follows certain rules, but they are very complex, since relative priority and dynamic priority are important factors in thread scheduling. Because of the complexity, it is best to use proper synchronization.
Notably, race conditions between threads of equal priority certainly still exist on a single core (or single core restricted) system, but they are far less frequent and therefore far more difficult to find and debug.
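Incidentally, .NET exposes the same primitive as WaitHandle.SignalAndWait (on Windows), with the same non-atomicity caveat. A minimal sketch of using it safely; the point is that the events latch their signaled state, so correctness doesn't depend on the signal-and-wait being atomic:

```csharp
using System.Threading;

class SignalAndWaitDemo
{
    static readonly AutoResetEvent ready = new AutoResetEvent(false);
    static readonly AutoResetEvent done  = new AutoResetEvent(false);

    static void Main()
    {
        var worker = new Thread(() =>
        {
            // Managed counterpart of SignalObjectAndWait: signal `ready`,
            // then wait on `done`. As with the native call, the main thread
            // may observe `ready` signaled before this thread starts waiting,
            // regardless of processor count.
            WaitHandle.SignalAndWait(ready, done);
        });
        worker.Start();

        ready.WaitOne();   // may run before the worker reaches its wait...
        done.Set();        // ...which is fine: the event stays signaled until consumed
        worker.Join();
    }
}
```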
Is it possible, in a multiprocessor environment (PC), that one Windows process is configured to run only on one processor (affinity mask = 1 or SetProcessAffinityMask(GetCurrentProcess(), 1)), but its threads are spawned on other processors?
If CPU affinity is not set to only one core, could one process run on multiple cores?
What's the difference between processes and threads?
Could a thread have processes, or could a process have threads?
Could a process be seen from a thread's point of view, or vice versa?
What is the notion of atomicity?
When could the number 1 be seen as a multidimensional unit?
Could we divide 1 by 0 (zero)? When could we, and when couldn't we?
Does it mean that for a single processor it's guaranteed to be atomic?
One CPU: do you remember "terminate and stay resident"? Good old times!
Then Unix: multiprocessing, multithreading, etc. :)
Note:
You couldn't ask a question without knowing the answer to that question.
Try to ask something you don't know - that's impossible! You're asking because you have an answer. Look inside your question. The answer is evident. :)
I would like to reduce the thread-priority of the threads servicing a parallel_for_each, because under heavy load conditions they consume too much processor time relative to other threads in my system.
Questions:
1) Do the servicing threads of a parallel_for_each inherit the thread-priority of the calling thread? In this case I could presumably call SetThreadPriority before and after the parallel_for_each, and everything should be fine.
2) Alternatively is it advisable to call SetThreadPriority within the parallel_for_each? This will clearly invoke the API multiple times for the same threads. Is there a large overhead of doing this?
2.b) Assuming that I do this, will it affect thread-priorities the next time that parallel_for_each is called, i.e. do I need to somehow reset the priority of each thread afterwards?
3) I'm wondering about thread-priorities in general. Would anyone like to comment: supposing that I had 2 threads contending for a single processor and one was set to "below-normal" while the other was "normal" priority. Roughly what percentage more processor time would the one thread get compared to the other?
All threads initially start at THREAD_PRIORITY_NORMAL. So you'd have to reduce the priority of each thread. Or reduce the priority of the owning process.
There is little overhead in calling SetThreadPriority. Once you have woken up a thread, the additional cost of calling SetThreadPriority is negligible. Once you set the thread's priority it will remain at that value until changed.
Suppose that you have one processor and two threads ready to run. The scheduler will always choose to run the thread with the higher priority. This means that in your scenario, the below-normal thread would never run. In reality, there's a lot more to scheduling than that, for example priority inversion. However, you can think of it like this: if all processors are busy with normal-priority threads, then expect lower-priority threads to be starved of CPU.
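Since the servicing threads are pooled and keep whatever priority you last set, the safest pattern is set-then-restore inside the loop body. As a sketch of that pattern, here is the analogous approach with .NET's Parallel.ForEach (a stand-in for PPL's parallel_for_each; the names and the BelowNormal choice are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class PriorityInLoop
{
    static void Main()
    {
        var items = new[] { 1, 2, 3, 4, 5, 6, 7, 8 };

        Parallel.ForEach(items, item =>
        {
            // Pool threads start at Normal. Lowering the current worker is
            // cheap, but the priority sticks to the pooled thread after the
            // loop ends, so restore it before handing the thread back.
            var t = Thread.CurrentThread;
            t.Priority = ThreadPriority.BelowNormal;
            try
            {
                Process(item);
            }
            finally
            {
                t.Priority = ThreadPriority.Normal;
            }
        });
    }

    static void Process(int item) { /* CPU-bound work here */ }
}
```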
If you go into Task Manager, right-click a process, and set its priority to Realtime, it often stops program crashes, or makes them run faster.
In a programming context, what does this do?
It calls SetPriorityClass().
Every thread has a base priority level determined by the thread's
priority value and the priority class of its process. The system uses
the base priority level of all executable threads to determine which
thread gets the next slice of CPU time. The SetThreadPriority function
enables setting the base priority level of a thread relative to the
priority class of its process. For more information, see Scheduling
Priorities.
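In .NET, the managed wrapper is Process.PriorityClass, which calls SetPriorityClass under the hood. A minimal example:

```csharp
using System.Diagnostics;

class Bump
{
    static void Main()
    {
        // Wraps SetPriorityClass. Note that requesting RealTime without the
        // SeIncreaseBasePriorityPrivilege (i.e. without running elevated) is
        // silently reduced to High by Windows.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    }
}
```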
It tells the Windows scheduler to be more or less greedy when allocating execution time slices to your process. Realtime execution makes it never yield execution (not even to drivers, according to MSDN), which may cause stalls in your app if it waits on external events but has no yielding of its own (like Sleep, SwitchToThread or WaitFor[Single|Multiple]Objects), so using realtime should be avoided unless you know that the application will handle it correctly.
It works by changing the weight given to this process in the OS task scheduler. Your CPU can only execute one instruction at a time (to put it very, very simply) and the OS's job is to keep swapping instructions from each running process. By raising or lowering the priority, you're affecting how much time it's allotted in the CPU relative to other applications currently being multi-tasked.
In particular, I'm looking at using TPL to start (and wait for) external processes. Does the TPL look at total machine load (both CPU and I/O) before deciding to start another task (hence -- in my case -- another external process)?
For example:
I've got about 100 media files that need to be encoded or transcoded (e.g. from WAV to FLAC or from FLAC to MP3). The encoding is done by launching an external process (e.g. FLAC.EXE or LAME.EXE). Each file takes about 30 seconds. Each process is mostly CPU-bound, but there's some I/O in there. I've got 4 cores, so the worst case (transcoding by piping the decoder into the encoder) still only uses 2 cores. I'd like to do something like:
Parallel.ForEach(sourceFiles,
sourceFile =>
TranscodeUsingPipedExternalProcesses(sourceFile));
Will this kick off 100 tasks (and hence 200 external processes competing for the CPU)? Or will it see that the CPU's busy and only do 2-3 at a time?
You're going to run into a couple of issues here. The starvation-avoidance mechanism of the scheduler will see your tasks as blocked as they wait on processes. It will find it hard to distinguish between a deadlocked thread and one simply waiting for a process to complete. As a result, it may schedule new tasks if your tasks run for a long time (see below). The hill-climbing heuristic should take into account the overall load on the system, both from your application and others. It simply tries to maximize work done, so it will add more work until the overall throughput of the system stops increasing, and then it will back off. I don't think this will affect your application, but the starvation-avoidance issue probably will.
You can find more detail on how this all works in Parallel Programming with Microsoft .NET by Colin Campbell, Ralph Johnson, Ade Miller, and Stephen Toub (an earlier draft is online):
"The .NET thread pool automatically manages the number of worker
threads in the pool. It adds and removes threads according to built-in
heuristics. The .NET thread pool has two main mechanisms for injecting
threads: a starvation-avoidance mechanism that adds worker
threads if it sees no progress being made on queued items and a hillclimbing
heuristic that tries to maximize throughput while using as
few threads as possible.
The goal of starvation avoidance is to prevent deadlock. This kind
of deadlock can occur when a worker thread waits for a synchronization
event that can only be satisfied by a work item that is still pending
in the thread pool’s global or local queues. If there were a fixed
number of worker threads, and all of those threads were similarly
blocked, the system would be unable to ever make further progress.
Adding a new worker thread resolves the problem.
A goal of the hill-climbing heuristic is to improve the utilization
of cores when threads are blocked by I/O or other wait conditions
that stall the processor. By default, the managed thread pool has one
worker thread per core. If one of these worker threads becomes
blocked, there’s a chance that a core might be underutilized, depending
on the computer’s overall workload. The thread injection logic
doesn’t distinguish between a thread that’s blocked and a thread
that’s performing a lengthy, processor-intensive operation. Therefore,
whenever the thread pool’s global or local queues contain pending
work items, active work items that take a long time to run (more than
a half second) can trigger the creation of new thread pool worker
threads.
The .NET thread pool has an opportunity to inject threads every
time a work item completes or at 500 millisecond intervals, whichever
is shorter. The thread pool uses this opportunity to try adding threads
(or taking them away), guided by feedback from previous changes in
the thread count. If adding threads seems to be helping throughput,
the thread pool adds more; otherwise, it reduces the number of
worker threads. This technique is called the hill-climbing heuristic.
Therefore, one reason to keep individual tasks short is to avoid
“starvation detection,” but another reason to keep them short is to
give the thread pool more opportunities to improve throughput by
adjusting the thread count. The shorter the duration of individual
tasks, the more often the thread pool can measure throughput and
adjust the thread count accordingly.
To make this concrete, consider an extreme example. Suppose
that you have a complex financial simulation with 500 processor-intensive
operations, each one of which takes ten minutes on average
to complete. If you create top-level tasks in the global queue for each
of these operations, you will find that after about five minutes the
thread pool will grow to 500 worker threads. The reason is that the
thread pool sees all of the tasks as blocked and begins to add new
threads at the rate of approximately two threads per second.
What’s wrong with 500 worker threads? In principle, nothing, if
you have 500 cores for them to use and vast amounts of system
memory. In fact, this is the long-term vision of parallel computing.
However, if you don’t have that many cores on your computer, you are
in a situation where many threads are competing for time slices. This
situation is known as processor oversubscription. Allowing many
processor-intensive threads to compete for time on a single core adds
context switching overhead that can severely reduce overall system
throughput. Even if you don’t run out of memory, performance in this
situation can be much, much worse than in sequential computation.
(Each context switch takes between 6,000 and 8,000 processor cycles.)
The cost of context switching is not the only source of overhead.
A managed thread in .NET consumes roughly a megabyte of stack
space, whether or not that space is used for currently executing functions.
It takes about 200,000 CPU cycles to create a new thread, and
about 100,000 cycles to retire a thread. These are expensive operations.
As long as your tasks don’t each take minutes, the thread pool’s
hill-climbing algorithm will eventually realize it has too many threads
and cut back on its own accord. However, if you do have tasks that
occupy a worker thread for many seconds or minutes or hours, that
will throw off the thread pool’s heuristics, and at that point you
should consider an alternative.
The first option is to decompose your application into shorter
tasks that complete fast enough for the thread pool to successfully
control the number of threads for optimal throughput.
A second possibility is to implement your own task scheduler
object that does not perform thread injection. If your tasks are of long
duration, you don’t need a highly optimized task scheduler because
the cost of scheduling will be negligible compared to the execution
time of the task. MSDN® developer program has an example of a
simple task scheduler implementation that limits the maximum degree
of concurrency. For more information, see the section, “Further Reading,”
at the end of this chapter.
As a last resort, you can use the SetMaxThreads method to
configure the ThreadPool class with an upper limit for the number
of worker threads, usually equal to the number of cores (this is the
Environment.ProcessorCount property). This upper limit applies for
the entire process, including all AppDomains."
The short answer is: no.
Internally, the TPL uses the standard ThreadPool to schedule its tasks. So you're actually asking whether the ThreadPool takes machine load into account and it doesn't. The only thing that limits the number of tasks simultaneously running is the number of threads in the thread pool, nothing else.
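If the goal is simply to stop the pool from spinning up task after task around blocking waits, you can cap the degree of parallelism yourself. A minimal sketch, where TranscodeUsingPipedExternalProcesses stands in for the asker's blocking transcode call:

```csharp
using System;
using System.Threading.Tasks;

class TranscodeAll
{
    static void Main(string[] args)
    {
        var sourceFiles = args;  // stand-in for the ~100 media files

        // Cap the loop at the core count so at most that many external
        // encoder processes run at once, regardless of thread injection.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(sourceFiles, options,
            sourceFile => TranscodeUsingPipedExternalProcesses(sourceFile));
    }

    // Hypothetical: launches the decoder/encoder pair and blocks until done.
    static void TranscodeUsingPipedExternalProcesses(string sourceFile) { }
}
```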
Is it possible to have the external processes report back to your application once they are ready? In that case you do not have to wait for them (keeping threads occupied).
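For example, a common pattern is to wrap the external process in a Task via its Exited event, so no pool thread sits blocked in WaitForExit. A sketch (the ProcessEx.RunAsync helper name is hypothetical):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

static class ProcessEx
{
    // Completes the returned Task when the process exits; nothing blocks.
    public static Task<int> RunAsync(ProcessStartInfo startInfo)
    {
        var tcs = new TaskCompletionSource<int>();
        var process = new Process { StartInfo = startInfo, EnableRaisingEvents = true };
        process.Exited += (sender, e) =>
        {
            tcs.TrySetResult(process.ExitCode);
            process.Dispose();
        };
        process.Start();
        return tcs.Task;
    }
}
```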
I ran a test using TPL/ThreadPool to schedule a great number of tasks doing looped spins. Using an external app, I loaded one of the cores to 100% using processor affinity. The number of active tasks never decreased.
Even better, I ran multiple instances of the same CPU-intensive, TPL-enabled .NET app. The number of threads for all the apps was the same and never went below the number of cores, even though my machine was barely usable.
So, theory aside, TPL uses the number of cores available but never checks their actual load. A very poor implementation in my opinion.
From what I've read in the past, you're encouraged not to change the priority of your Windows applications programmatically, and if you do, you should never change them to 'Realtime'.
What does the 'Realtime' process priority setting do, compared to 'High', and 'Above Normal'?
A realtime-priority thread can never be pre-empted by timer interrupts and runs at a higher priority than any other thread in the system. As such, a CPU-bound realtime-priority thread can totally ruin a machine.
Creating realtime priority threads requires a privilege (SeIncreaseBasePriorityPrivilege) so it can only be done by administrative users.
For Vista and beyond, one option for applications that do require that they run at realtime priorities is to use the Multimedia Class Scheduler Service (MMCSS) and let it manage your threads priority. The MMCSS will prevent your application from using too much CPU time so you don't have to worry about tanking the machine.
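For illustration, a minimal P/Invoke sketch of handing a thread to MMCSS; "Pro Audio" is one of the task classes registered with MMCSS, and the helper name is hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;

static class Mmcss
{
    // avrt.dll ships with Vista and later.
    [DllImport("avrt.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr AvSetMmThreadCharacteristics(string taskName, ref uint taskIndex);

    [DllImport("avrt.dll", SetLastError = true)]
    static extern bool AvRevertMmThreadCharacteristics(IntPtr handle);

    public static void DoTimeCriticalWork(Action work)
    {
        uint taskIndex = 0;
        // MMCSS boosts this thread for the duration of the work, but will
        // throttle it if it tries to monopolize the CPU, unlike a raw
        // realtime priority.
        IntPtr handle = AvSetMmThreadCharacteristics("Pro Audio", ref taskIndex);
        try { work(); }
        finally { if (handle != IntPtr.Zero) AvRevertMmThreadCharacteristics(handle); }
    }
}
```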
Simply, the "Real Time" priority class is higher than "High" priority class. I don't think there's much more to it than that. Oh yeah - you have to have the SeIncreaseBasePriorityPrivilege to put a thread into the Real Time class.
Windows will sometimes boost the priority of a thread for various reasons, but it won't boost the priority of a thread into another priority class. It also won't boost the priority of threads in the real-time priority class. So a High priority thread won't get any automatic temporary boost into the Real Time priority class.
Russinovich's "Inside Windows" chapter on how Windows handles priorities is a great resource for learning how this works:
http://web.archive.org/web/20140909124652/http://download.microsoft.com/download/5/b/3/5b38800c-ba6e-4023-9078-6e9ce2383e65/C06X1116607.pdf
Note that there's absolutely no problem with a thread having a real-time priority on a normal Windows system - they aren't necessarily for special processes running on dedicated machines. I imagine that multimedia drivers and/or processes might need threads with a real-time priority. However, such a thread should not require much CPU - it should be blocking most of the time so that normal system events can get processed.
It would be the highest available priority setting, and would usually only be used on a box that was dedicated to running that specific program. It's actually high enough that it could cause starvation of the keyboard and mouse threads to the extent that they become unresponsive.
So basically, if you have to ask, don't use it :)
Real-time is the highest priority class available to a process. Therefore, it is different from 'High' in that it's one step greater, and 'Above Normal' in that it's two steps greater.
Similarly, real-time is also a thread priority level.
The process priority class raises or lowers all effective thread priorities in the process and is therefore considered the 'base priority'.
So, a process has a:
Base process priority class.
Individual thread priorities, offsets of the base priority class.
Since real-time is supposed to be reserved for applications that absolutely must pre-empt other running processes, there is a special security privilege to protect against haphazard use of it. This is defined by the security policy.
In NT6+ (Vista+), use of the Vista Multimedia Class Scheduler is the proper way to achieve real-time operations in what is not a real-time OS. It works, for the most part, though it is not perfect, since the OS isn't designed for real-time operations.
Microsoft considers this priority very dangerous, rightly so. No application should use it except in very specialized circumstances, and even then try to limit its use to temporary needs.
Once Windows learns that a program uses a higher-than-normal priority, it seems to limit the priority of that process.
Setting the priority from IDLE to REALTIME does NOT change the CPU usage.
I found on my multi-processor AMD machine that if I drop one of the CPUs, like the LAST one, the CPU usage on the others will MAX OUT while the last CPU remains idle. The processor usage increases to 75% on my quad-core AMD.
Use Task Manager -> select process -> right-click on the process -> Select -> Set Affinity.
Check all but the last processor. The CPU usage will increase to the MAX on the remaining processors, and frame counts, if processing video, will increase.
As the other answers have said, real time gives that program the utmost priority class: nothing else is processed until that program has been serviced.
On my Pentium 4 machine I set Minecraft to real time a lot, since it increases the game performance a lot, and the system seems completely stable. So realtime isn't as bad as it seems. Just, if you have a multi-core machine, set the program's affinity to a specific core or cores (not all of them, so that everything else can still run in case the realtime program gets hung up) and then set the priority to real time.
It basically is higher/greater than everything else. The keyboard is less of a priority than the realtime process. This means the process will be attended to faster than the keyboard, and if the machine can't handle that, your keyboard input is slowed.
Realtime process priority is used mainly to make a specific process run faster at the expense of literally everything else. I personally have my Discord bot's process priority set to realtime, since the bot is lightweight and I need it to keep responding quickly. But it would be a bad idea to randomly change process priorities unless you know what you're doing. For example, if you were to set Google Chrome's priority to low, it wouldn't cause any problems, but if you were to set the Registry process's priority to low, it would likely cause more than enough problems. Just don't set a heavy program to realtime unless the device it's running on is completely dedicated to that program.