From what I've read in the past, you're encouraged not to change the priority of your Windows applications programmatically, and if you do, you should never change them to 'Realtime'.
What does the 'Realtime' process priority setting do, compared to 'High' and 'Above Normal'?
My processor is an i7-4710HQ @ 2.50 GHz.
This is explained in Scheduling Priorities on MSDN (emphasis mine):
You should almost never use REALTIME_PRIORITY_CLASS, because this interrupts system threads that manage mouse input, keyboard input, and background disk flushing. This class can be appropriate for applications that "talk" directly to hardware or that perform brief tasks that should have limited interruptions.
So that's the theory. In practice, if you need to do this, you're probably doing something wrong.
Related
I don't understand how to use SetThreadPriority and SetPriorityClass to lower and raise the priority of a thread.
My understanding is that the SetPriorityClass selects the range of priorities available to a process and the SetThreadPriority sets the relative priority within the class.
For instance, what is the result of doing this for a thread:
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
Thanks for your help.
One thing about PROCESS_MODE_BACKGROUND_BEGIN that I have observed, but that is apparently not documented, is that at least under Windows 7 it empties the working set of the process and keeps it empty, no matter how the process accesses the memory, until the background mode ends.
For example, normally without PROCESS_MODE_BACKGROUND_BEGIN, when my machine has gigabytes of free memory, and the process needs to consume and constantly process gigabytes of memory, the process working set will be about equal to the allocation size. That is, the process gets all the memory it uses in its working set. Good.
Now, with PROCESS_MODE_BACKGROUND_BEGIN the working set will be a few tens of megabytes!
The bad result is that this causes constant page faults, and the computation runs much slower! The page faults are likely not to the page file but to the Windows cache memory, yet they still slow the computation down significantly, while also consuming the CPU with meaningless load.
In conclusion, PROCESS_MODE_BACKGROUND_BEGIN is not suitable for low-priority background work. The work will be very time- and energy-inefficient.
PROCESS_MODE_BACKGROUND_BEGIN is suitable only when the process really does not intend to do anything consuming in the background.
In contrast, THREAD_MODE_BACKGROUND_BEGIN does not have such dreadful effects, even when the thread is the only thread in the process.
Note also that you need to turn off PROCESS_MODE_BACKGROUND_BEGIN explicitly, using PROCESS_MODE_BACKGROUND_END; it is not enough to call THREAD_MODE_BACKGROUND_END after PROCESS_MODE_BACKGROUND_BEGIN.
So Arno is not quite correct in claiming that THREAD_MODE_BACKGROUND_END undoes the effects of PROCESS_MODE_BACKGROUND_BEGIN, even for a single thread.
Additional note: SetProcessPriorityBoost with bDisablePriorityBoost = TRUE does not have any such effect on the working set.
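A minimal sketch of how to observe the working-set behavior described above (assuming the Windows 7 behavior holds): print the working-set size via GetProcessMemoryInfo from psapi before, during, and after background mode. The memory-touching loop is left as a placeholder.

#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

static void PrintWorkingSet(const char *label)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        printf("%s: working set = %zu KB\n", label, pmc.WorkingSetSize / 1024);
}

int main(void)
{
    PrintWorkingSet("before");

    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
    /* ... allocate and repeatedly touch a large buffer here; the working
       set should stay small while background mode is active ... */
    PrintWorkingSet("in background mode");

    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
    PrintWorkingSet("after");
    return 0;
}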
The process priority class and the thread priority together build the base priority of a thread. See Scheduling Priorities to find out how the priorities are assembled. Looking at this list makes it clear that your understanding is essentially correct: within a certain priority class the base priority can take various values, determined by the thread priority.
The PROCESS_MODE_BACKGROUND_BEGIN value for SetPriorityClass and the THREAD_MODE_BACKGROUND_END value for SetThreadPriority are not supported on all Windows versions.
PROCESS_MODE_BACKGROUND_BEGIN:
The system lowers the resource scheduling priorities of the process (and its threads) so that it can perform background work without significantly affecting activity in the foreground.
THREAD_MODE_BACKGROUND_END:
End background processing mode. The system restores the resource scheduling priorities of the thread as they were before the thread entered background processing mode.
The consequence of the scenario in question here is predictable: the SetPriorityClass call will put the process, with all of its threads, into background processing mode. The following SetThreadPriority will only release the calling thread from background processing mode; all other threads of the process will stay in background processing mode.
Note: Only the combination of process priority class and thread priority determines the base priority. Therefore neither a call to GetThreadPriority nor a call to GetPriorityClass will return the base priority; only their combination yields the base priority, which is described in the "Scheduling Priorities" link above. Unfortunately, the new background processing mode values aren't yet included in the base priority list.

But the name base priority tells what matters here: based on the base priority (derived from the process priority class and the thread priority), the scheduler is allowed to adapt the scheduling priority dynamically. Background processing mode is just another way to fine-tune the scheduling priority. Another way is priority boosts. The priority boost functionality has existed for some time; the new background processing mode values for SetThreadPriority and SetPriorityClass expose this capability directly. On Windows XP this had to be done by a call to SetProcessPriorityBoost.
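To put only a single thread into background processing mode and restore it afterwards (the pairing the question is really asking about), a minimal sketch might look like this; the entry point name is illustrative:

#include <windows.h>

DWORD WINAPI BackgroundWorker(LPVOID unused)
{
    /* Lower only this thread's resource scheduling priority; the rest of
       the process is unaffected. Must be called on the current thread. */
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    /* ... low-priority background work ... */

    /* Restore this thread's previous resource scheduling priorities. */
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    return 0;
}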
I worked on VxWorks 5.5 a long time back, and it was the best experience, working on the world's best real-time OS. Since then I never got a chance to work on it again. But a question keeps popping up for me: what makes it so fast and deterministic?
I have not been able to find many references for this question via Google.
So, I just tried thinking about what makes a regular OS non-deterministic:
1. Memory allocation/de-allocation: Wiki says RTOSes use fixed-size blocks, so that these blocks can be directly indexed, but this will cause internal fragmentation, and I am sure that is not at all desirable on mission-critical systems where memory is already limited.
2. Paging/segmentation: it's kind of linked to point 1.
3. Interrupt handling: not sure how VxWorks implements it, as this is something VxWorks handles very well.
4. Context switching: I believe in VxWorks 5.5 all the processes used to execute in the kernel address space, so context switching used to involve just saving register values and nothing about the PCB (process control block), but I am still not 100% sure.
5. Process scheduling algorithms: if Windows implements preemptive scheduling (priority/round robin), will process scheduling be as fast as in VxWorks? I don't think so. So, how does VxWorks handle scheduling?
Please correct my understanding wherever required.
I believe the following would account for lots of the difference:
No Paging/Swapping
A deterministic RTOS simply can't swap memory pages to disk. This would kill the determinism, since at any moment you could have to swap memory in or out.
vxWorks requires that your application fit entirely in RAM.
No Processes
In vxWorks 5.5 there are tasks, but no processes like in Windows or Linux. Tasks are more akin to threads, and switching context is a relatively inexpensive operation. In Linux/Windows, switching processes is quite expensive.
Note that in vxWorks 6.x, a process model was introduced, which adds some overhead, mainly related to transitioning from user mode to supervisor mode. The task switching time is not necessarily directly affected by the new model.
Fixed Priority
In vxWorks, the task priorities are set by the developer and are system-wide. The highest-priority ready task at any given time will be the one running. You can thus design your system to ensure that the tasks with the tightest deadlines always execute before others.
In Linux/Windows, generally speaking, while you have some control over the priority of processes, the scheduler will eventually let lower-priority processes run even if higher-priority processes are still active.
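As a concrete illustration of fixed-priority tasks, here is a sketch using the classic taskLib API (names and arguments are illustrative; in VxWorks, 0 is the highest priority and 255 the lowest):

#include <vxWorks.h>
#include <taskLib.h>

void workerEntry(void)  /* hypothetical task entry point */
{
    /* ... hard real-time work ... */
}

void spawnWorker(void)
{
    /* Priority 50: the scheduler always runs the highest-priority ready
       task, so this task preempts anything with a priority above 50. */
    int tid = taskSpawn("tWorker", 50, 0, 4096,
                        (FUNCPTR)workerEntry,
                        0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    (void)tid;
}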
I am looking for a way to yield the remainder of the thread execution's scheduled time slice to a different thread. There is a SwitchToThread function in WINAPI, but it doesn't let the caller specify the thread it wants to switch to. I browsed MSDN for quite some time and haven't found anything that would offer just that.
For an operating-system-internals layman like me, it seems that a yielding thread should be able to specify which thread it wants to pass execution to. Is that possible, or is it just my imagination?
The reason you can't yield processor time slices to a designated thread is that Windows features a preemptive scheduling kernel, which places the responsibility and authority for scheduling processor time in the hands of the kernel and only the kernel.
As such, threads don't have any control over when they run, if they run, and even less over which thread is switched to after their time slice is up.
However, there are a few ways you may influence context switches:
by increasing the priority of a certain thread you may force the scheduler to schedule it more often, to the detriment of other threads (obviously the reverse applies as well: you can lower the priority of other threads)
you can code your process to place threads in kernel wait mode when they don't have work to do, in order to help the scheduler do its job. When using proper kernel wait constructs such as critical sections, mutexes, semaphores, and timers, you effectively tell the kernel that a certain thread doesn't need to be scheduled until a certain condition is met (see the sketch after the note below)
Note: There is rarely a reason you should tamper with task priorities, so USE WITH CAUTION
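A minimal sketch of the kernel-wait approach, assuming two threads that hand work off through an auto-reset event (identifiers are illustrative):

#include <windows.h>

HANDLE g_workReady;  /* auto-reset event */

DWORD WINAPI Consumer(LPVOID unused)
{
    for (;;)
    {
        /* The kernel will not schedule this thread until the event is
           signaled, so it consumes no time slices while idle. */
        WaitForSingleObject(g_workReady, INFINITE);
        /* ... process the published work ... */
    }
}

int main(void)
{
    g_workReady = CreateEventW(NULL, FALSE, FALSE, NULL);
    HANDLE hConsumer = CreateThread(NULL, 0, Consumer, NULL, 0, NULL);

    /* Producer side: publish some work, then wake the consumer. */
    SetEvent(g_workReady);

    Sleep(1000);  /* let the consumer run (demo only) */
    CloseHandle(hConsumer);
    return 0;
}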
You might use 'fibers' instead of 'threads': for example there's a Win32 API named SwitchToFiber which lets you specify the fiber to be scheduled.
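For illustration, a minimal fiber sketch in which control is handed to a specific, named fiber (identifiers are illustrative):

#include <windows.h>
#include <stdio.h>

LPVOID g_mainFiber;
LPVOID g_workerFiber;

VOID CALLBACK WorkerFiberProc(LPVOID unused)
{
    printf("worker fiber running\n");
    SwitchToFiber(g_mainFiber);  /* yield control to a specific fiber */
}

int main(void)
{
    g_mainFiber = ConvertThreadToFiber(NULL);  /* current thread becomes a fiber */
    g_workerFiber = CreateFiber(0, WorkerFiberProc, NULL);

    SwitchToFiber(g_workerFiber);  /* runs WorkerFiberProc until it switches back */
    printf("back on the main fiber\n");

    DeleteFiber(g_workerFiber);
    return 0;
}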
Take a look at UMS (User-mode scheduling) threads in Windows 7
http://msdn.microsoft.com/en-us/library/dd627187(VS.85).aspx
The second thread can simply wait for the yielding thread, either by calling WaitForSingleObject() on its handle or by periodically polling GetExitCodeThread(). The other answers are correct about altering the operating system's scheduling mechanisms: it is better to design the threads properly in the first place.
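A short sketch of the first option, blocking on the other thread's handle (setup is assumed):

#include <windows.h>

DWORD WINAPI Worker(LPVOID unused)
{
    /* ... produce whatever the waiting thread depends on ... */
    return 0;
}

int main(void)
{
    HANDLE hWorker = CreateThread(NULL, 0, Worker, NULL, 0, NULL);

    /* Blocks without consuming CPU until the worker thread exits. */
    WaitForSingleObject(hWorker, INFINITE);

    CloseHandle(hWorker);
    return 0;
}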
This is not possible. Only the kernel can decide what code runs next, though you can influence it by reducing the set of non-waiting threads it has to choose from, and by setting thread priorities with SetThreadPriority.
You can use regular synchronization primitives like events, semaphores, etc. to serialize your two threads. This does not in any form prevent the kernel from scheduling other threads in between, or in parallel on another CPU core, or virtually simultaneously on the same core. This is due to the preemptive multitasking nature of modern general-purpose operating systems.
If you want to do your own scheduling under Windows, you can use fibers, which essentially are threads that you have to schedule yourself. However, given that you describe yourself as a layman to the OS internals world, that would probably be a bad idea, as fibers are something of an advanced feature.
Can I ask why you want to use SwitchToThread?
If, for example, it's some form of "thread X is computing some value that you want to wait for on thread Y", then I'd really suggest looking at the Parallel Patterns Library or the Asynchronous Agents Library in Visual Studio 2010, which let you do this either with message blocks (receive an asynchronous value) or simply via tasks: wait for a set of tasks to complete and inline their execution while waiting...
#include <ppl.h>

// i.e. on an arbitrary thread
concurrency::task_group tasks;
tasks.run([] { /* some functor */ });
A call to tasks.wait() will then wait for the tasks and inline any that are still running.
From what I've read in the past, you're encouraged not to change the priority of your Windows applications programmatically, and if you do, you should never change them to 'Realtime'.
What does the 'Realtime' process priority setting do, compared to 'High' and 'Above Normal'?
A realtime-priority thread can never be pre-empted by timer interrupts and runs at a higher priority than any other thread in the system. As such, a CPU-bound realtime-priority thread can totally ruin a machine.
Creating realtime-priority threads requires a privilege (SeIncreaseBasePriorityPrivilege), so it can only be done by administrative users.
For Vista and beyond, one option for applications that do require realtime priorities is to use the Multimedia Class Scheduler Service (MMCSS) and let it manage your threads' priority. MMCSS will prevent your application from using too much CPU time, so you don't have to worry about tanking the machine.
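A minimal sketch of registering a thread with MMCSS ("Pro Audio" is one of the standard task names; error handling is omitted):

#include <windows.h>
#include <avrt.h>
#pragma comment(lib, "avrt.lib")

void RunLatencySensitiveWork(void)
{
    DWORD taskIndex = 0;
    /* Ask MMCSS to boost this thread under the "Pro Audio" task class;
       MMCSS will throttle it if it starts hogging the CPU. */
    HANDLE hTask = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);

    /* ... latency-sensitive work ... */

    if (hTask)
        AvRevertMmThreadCharacteristics(hTask);
}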
Simply, the "Real Time" priority class is higher than "High" priority class. I don't think there's much more to it than that. Oh yeah - you have to have the SeIncreaseBasePriorityPrivilege to put a thread into the Real Time class.
Windows will sometimes boost the priority of a thread for various reasons, but it won't boost the priority of a thread into another priority class. It also won't boost the priority of threads in the real-time priority class. So a High priority thread won't get any automatic temporary boost into the Real Time priority class.
Russinovich's "Inside Windows" chapter on how Windows handles priorities is a great resource for learning how this works:
http://web.archive.org/web/20140909124652/http://download.microsoft.com/download/5/b/3/5b38800c-ba6e-4023-9078-6e9ce2383e65/C06X1116607.pdf
Note that there's absolutely no problem with a thread having a real-time priority on a normal Windows system; such threads aren't necessarily for special processes running on dedicated machines. I imagine that multimedia drivers and/or processes might need threads with a real-time priority. However, such a thread should not require much CPU: it should be blocking most of the time in order for normal system events to get processed.
It would be the highest available priority setting, and would usually only be used on a box that was dedicated to running that specific program. It's actually high enough that it could cause starvation of the keyboard and mouse threads to the extent that they become unresponsive.
So basically, if you have to ask, don't use it :)
Real-time is the highest priority class available to a process. Therefore, it is different from 'High' in that it's one step greater, and 'Above Normal' in that it's two steps greater.
Similarly, real-time is also a thread priority level.
The process priority class raises or lowers all effective thread priorities in the process and is therefore considered the 'base priority'.
So, a process has a:
Base process priority class.
Individual thread priorities, offsets of the base priority class.
Since real-time is supposed to be reserved for applications that absolutely must pre-empt other running processes, there is a special security privilege to protect against haphazard use of it. This is defined by the security policy.
In NT6+ (Vista+), use of the Vista Multimedia Class Scheduler Service is the proper way to achieve real-time operations in what is not a real-time OS. It works, for the most part, though it is not perfect, since the OS isn't designed for real-time operations.
Microsoft considers this priority very dangerous, rightly so. No application should use it except in very specialized circumstances, and even then try to limit its use to temporary needs.
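For completeness, a sketch of requesting the class and checking what was actually granted (without SeIncreaseBasePriorityPrivilege, the system quietly grants HIGH_PRIORITY_CLASS instead):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Request the Realtime class for this process. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);

    /* Verify: the request is silently downgraded without the privilege. */
    DWORD cls = GetPriorityClass(GetCurrentProcess());
    if (cls == REALTIME_PRIORITY_CLASS)
        printf("running in the Realtime priority class\n");
    else
        printf("downgraded, priority class = 0x%lx\n", cls);
    return 0;
}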
Once Windows learns that a program uses a higher-than-normal priority, it seems to limit the priority of the process.
Setting the priority from IDLE to REALTIME does NOT change the CPU usage.
I found on my multi-processor AMD CPU that if I drop one of the CPUs, say the LAST one, from the affinity mask, the CPU usage on the remaining cores will MAX OUT while the last CPU remains idle. The processor usage increases to 75% on my quad AMD.
Use Task Manager: select the process, right-click on it, and choose Set Affinity.
Check all but the last processor. CPU usage will increase to the MAX on the remaining processors, and frame counts, if processing video, will increase.
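The same affinity change can be made programmatically. A sketch that removes the last (highest-numbered) core from the process's mask; the bit manipulation assumes simple sequential core numbering:

#include <windows.h>

int main(void)
{
    DWORD_PTR processMask = 0, systemMask = 0;
    GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask);

    /* Isolate the highest set bit, i.e. the last core in the system mask. */
    DWORD_PTR highest = systemMask;
    while (highest & (highest - 1))
        highest &= highest - 1;  /* clear the lowest set bit until one remains */

    /* Run on every core except the last one. */
    DWORD_PTR mask = systemMask & ~highest;
    if (mask)
        SetProcessAffinityMask(GetCurrentProcess(), mask);
    return 0;
}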
Like all the other answers said before, realtime gives that program the utmost priority class; nothing else is processed until the realtime program has been served.
On my Pentium 4 machine I set Minecraft to realtime a lot, since it increases the game performance considerably, and the system seems completely stable. So realtime isn't as bad as it seems; just, if you have a multi-core machine, set the program's affinity to a specific core or cores (not all of them, so that everything else can still run in case the realtime program gets hung up) and then set its priority to realtime.
It basically is higher/greater than everything else. The keyboard has less priority than the realtime process. That means the process will be serviced before keyboard input, and if the machine can't keep up, your keyboard input is slowed.
Realtime process priority is used mainly to make a specific process run faster at the expense of literally everything else. I personally have my Discord bot's process priority set to realtime, since the bot is lightweight and I need it to keep responding quickly. But it would be a bad idea to randomly change process priorities unless you know what you're doing. For example, setting Google Chrome's priority to low wouldn't cause any problems, but setting the registry process's priority to low would likely cause more than enough problems. Just don't set a heavy program to realtime unless the device it runs on is completely dedicated to that program.
Can breakpoints be used in interrupt service routines (ISRs)?
Yes - in an emulator.
Otherwise, no. It's difficult to pull off, and a bad idea in any case. ISRs are (usually) supposed to work with the hardware, and hardware can easily behave very differently when you leave a gap of half a second between each instruction.
Set up some sort of logging system instead.
ISRs also ungracefully "steal" the CPU from other processes, so many operating systems recommend keeping your ISRs extremely short and doing only what is strictly necessary (such as dealing with any urgent hardware stuff, and scheduling a task that will deal with the event properly). So in theory, ISRs should be so simple that they don't need to be debugged.
If it's hardware behaviour that's the problem, use some sort of logging instead, as I've suggested. If the hardware doesn't really mind long gaps of time between instructions, then you could just write most of the driver in user space - and you can use a debugger on that!
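A minimal sketch of such a logging system: the ISR appends fixed-size records to a single-producer/single-consumer ring buffer with no locks, and normal task-level code drains it later (sizes and names are illustrative, and this simple form assumes a single-core target):

#include <stdint.h>

#define LOG_SLOTS 256  /* power of two, illustrative size */

typedef struct { uint32_t event; uint32_t data; } LogRecord;

static LogRecord g_log[LOG_SLOTS];
static volatile uint32_t g_head;  /* written only by the ISR */
static volatile uint32_t g_tail;  /* written only by the reader */

/* Called from the ISR: two stores and an increment, no locks, no blocking. */
void isr_log(uint32_t event, uint32_t data)
{
    uint32_t h = g_head;
    g_log[h % LOG_SLOTS].event = event;
    g_log[h % LOG_SLOTS].data  = data;
    g_head = h + 1;  /* single producer, so a plain store suffices */
}

/* Called from task context to drain one record; returns 0 when empty. */
int log_pop(LogRecord *out)
{
    if (g_tail == g_head)
        return 0;
    *out = g_log[g_tail % LOG_SLOTS];
    g_tail = g_tail + 1;
    return 1;
}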
Depending on your platform, you may be able to do this by accessing the debug port of your processor, typically via the JTAG interface. Keep in mind that you're drastically changing everything that has to do with timing with this method, so your debug session may be useless. But then again, many bugs can be caught this way. Also mind MMU-based memory mappings, as JTAG debuggers often don't take them into account.
In Windows, with a kernel debugger attached, you can indeed place breakpoints in interrupt handlers.