Can someone help me understand, with a clear explanation or any reference link, how the kernel handles preemption for tasks having the same priority? Suppose I have three tasks A, B and C, all assigned high priority:
TASK(A)
{
    High Priority
    Reading Asynchronous messages
}
TASK(B)
{
    High Priority
    Sending Asynchronous messages
}
TASK(C)
{
    High Priority
    Draw process
}
In this case, which task will be considered for processing, and how is it preempted?
The general scheduling order is as follows.
The kernel invokes the function schedule() either directly in kernel context, or when the TIF_NEED_RESCHED flag is set and the kernel is returning from interrupt context.
This function invokes pick_next_task() to obtain the task that will preempt the currently running one.
pick_next_task() invokes every scheduler class's pick_next_task() in order of descending priority until one of them returns a task. Note that priority here means the class's priority (e.g. soft real-time or normal), not the process's priority.
CFS (the scheduler class for normal processes) tries to give each process an equal amount of virtual run-time. Virtual run-time is a process's real run-time weighted by its priority (the process's priority). So the CFS class returns the task with the least virtual run-time.
The scheduler does not care what the processes are doing, or what messages they send or receive. So, in the general case, if your processes have equal priorities, the process with less virtual run-time will preempt the other process on the next schedule() invocation.
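To make the "least virtual run-time wins" rule concrete, here is a minimal sketch of the selection step (a toy model only; the real kernel keeps runnable tasks in a red-black tree ordered by vruntime, and the names toy_task and pick_min_vruntime are invented for illustration):

#include <stddef.h>
#include <stdint.h>

/* Toy model of a CFS runqueue entry. */
struct toy_task {
    const char *name;
    uint64_t    vruntime;   /* weighted run-time already consumed */
    int         runnable;
};

/* Return the runnable task with the smallest vruntime, or NULL if none. */
static struct toy_task *pick_min_vruntime(struct toy_task *tasks, size_t n)
{
    struct toy_task *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (best == NULL || tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    }
    return best;
}

A running task accumulates vruntime faster or slower depending on its weight (nice value), which is how equal-priority tasks such as A, B and C above end up sharing the CPU fairly.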
Related
The first-come, first-served (FCFS) scheduling algorithm is a non-preemptive algorithm: if there is a process in the running state, it cannot be preempted until it completes. But if some kernel process arrives in between, will the CPU be allocated to that kernel process?
If yes, will this be the case with any higher-priority process, irrespective of whether it is a system process or not?
As mrBen told in his answer, there is no notion of priority in FCFS. A kernel process will still be treated like any other process waiting in the ready queue. Hence, this algorithm on its own is hardly usable in practice.
However, that being said, there are certain situations which make FCFS practical. Consider the use case where the process scheduling algorithm uses priority scheduling and there are two processes having the same priority. In this situation, FCFS may be used to resolve the conflict.
In such cases, kernel processes will always have higher priority than user processes. Within kernel processing, hardware interrupts have higher priority than software interrupts, since you cannot keep a device waiting and starve it while executing signal handling.
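A minimal sketch of the priority-plus-FCFS selection just described (a toy simulation, not how any particular kernel implements it; proc_t and its field names are invented here): priority decides first, and FCFS, i.e. earliest arrival, breaks ties among equal priorities.

#include <stddef.h>

typedef struct {
    int id;
    int priority;      /* lower value = higher priority */
    int arrival_time;  /* tick at which the process became ready */
} proc_t;

/* Pick the next process: highest priority wins; among equal priorities,
 * the process that arrived first (FCFS) wins. Returns NULL if none ready. */
static const proc_t *pick_next(const proc_t *ready, size_t n)
{
    const proc_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (best == NULL ||
            ready[i].priority < best->priority ||
            (ready[i].priority == best->priority &&
             ready[i].arrival_time < best->arrival_time))
            best = &ready[i];
    }
    return best;
}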
Hope I answered your question!
I'm doing some Linux CFS analysis for my OS class and have an observation that I cannot explain.
For two otherwise identical processes, when they are executed with the SCHED_OTHER policy, I am seeing about 50% more voluntary context switches than when I execute them with the SCHED_FIFO or SCHED_RR policy.
This wouldn't surprise me a bit for involuntary switches, since SCHED_OTHER has a much lower priority, so it has to give up the CPU. But why would this be the case for voluntary switches? Why would SCHED_OTHER volunteer to give up the CPU more often than the real-time processes? It's an identical process, so it only volunteers to give up the CPU when it switches over to I/O, right? And I don't think that the choice of policy would affect the frequency of I/O attempts.
Any Linux people have an idea? Thanks!
First, understand that the scheduling policies are nothing but the scheduling algorithms that are implemented in the kernel. So SCHED_FIFO, SCHED_RR and SCHED_OTHER are different algorithms in the kernel. SCHED_FIFO and SCHED_RR belong to the real-time scheduling algorithm "class". SCHED_OTHER is nothing but the scheduling algorithm for normal processes in the system, more popularly known as the CFS (Completely Fair Scheduler) algorithm.
SCHED_OTHER has a much lower priority
To be precise, it doesn't have a "much" lower priority but "a" lower priority than the real-time scheduling class. There are three scheduling classes in the Linux scheduler - the Real-Time scheduling class, the Normal Process scheduling class and the Idle Tasks scheduling class. The priority levels are as follows:
Real Time Scheduling Class.
Normal Task Scheduling Class.
Idle Tasks Scheduling Class.
Tasks on the system belong to one of these classes. (Note that at any point in time a task can belong to only one scheduling class, although its class can be changed.) The scheduler in Linux first checks whether there is a task in the real-time class. If there is, it invokes the SCHED_FIFO or SCHED_RR algorithm, depending on which policy that task uses. If there are no real-time tasks, the scheduler checks for normal tasks and invokes the CFS algorithm if any normal task is ready to run.
Coming to the main question: why do you see more context switches when you run the same process under two different scheduling classes? There are two cases:
Generally, on a simple system there are hardly any real-time tasks and most tasks belong to the normal task class. Thus, when you run that process in the real-time class, it has the processor practically to itself (since the real-time scheduling class has a higher priority than the normal task scheduling class, and there are no, or very few, real-time tasks to share the CPU with). When you run the same process in the normal task class, it has to share the processor with various other processes, which leads to more context switches.
Even if there are many real-time tasks in the system, the nature of the real-time scheduling algorithms in question, FIFO and RR, leads to fewer context switches. In FIFO, the processor is not switched to another task until the current one completes or blocks, and in RR each process is given a fixed interval (time quantum). When you look at CFS, the timeslice a process gets depends on the number of tasks in the processor's runqueue; it is a function of the task's weight relative to the total weight of the runqueue. I assume you are well versed in FIFO and RR since you are taking OS classes. For more information on CFS I would advise you to google it, or if you are brave enough, to go through its source code.
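If you want to repeat the experiment, a minimal sketch of switching policies with the standard POSIX call sched_setscheduler() looks like this (error handling is minimal, the priority value 50 is arbitrary, and the real-time policies normally require root or CAP_SYS_NICE):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { 0 };

    /* Real-time FIFO policy: priority must be within
     * sched_get_priority_min/max for SCHED_FIFO (1..99 on Linux). */
    sp.sched_priority = 50;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler(SCHED_FIFO)");

    /* ... run the workload; voluntary/involuntary context switch counts
     * can be read from /proc/self/status afterwards ... */

    /* Back to the normal (CFS) class: priority must be 0. */
    sp.sched_priority = 0;
    if (sched_setscheduler(0, SCHED_OTHER, &sp) != 0)
        perror("sched_setscheduler(SCHED_OTHER)");

    return 0;
}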
Hope the answer is complete :)
I don't understand how to use SetThreadPriority and SetPriorityClass to lower and raise the priority of a thread.
My understanding is that the SetPriorityClass selects the range of priorities available to a process and the SetThreadPriority sets the relative priority within the class.
For instance, what is the result of doing this for a thread:
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
Thanks for the help.
One thing about PROCESS_MODE_BACKGROUND_BEGIN that I have observed, but that has apparently not been documented, is that at least under Windows 7 it empties the working set of the process and keeps it trimmed, no matter how the process accesses the memory, until the background mode ends.
For example, normally without PROCESS_MODE_BACKGROUND_BEGIN, when my machine has gigabytes of free memory, and the process needs to consume and constantly process gigabytes of memory, the process working set will be about equal to the allocation size. That is, the process gets all the memory it uses in its working set. Good.
Now, with PROCESS_MODE_BACKGROUND_BEGIN the working set will be a few tens of megabytes!
The bad result is that this causes constant page faults and the computation runs much slower! The page faults are likely not to the page file but to the Windows cache memory. Still, they slow the computation down significantly, while also consuming CPU on meaningless work.
In conclusion, PROCESS_MODE_BACKGROUND_BEGIN is not suitable for low-priority background work. The work will be very time and energy inefficient.
PROCESS_MODE_BACKGROUND_BEGIN is suitable only when the process really does not intend to do anything consuming in the background.
In contrast, THREAD_MODE_BACKGROUND_BEGIN does not have such dreadful effects, even when the thread is the only thread in the process.
Note also that you need to turn PROCESS_MODE_BACKGROUND_BEGIN off for good using only PROCESS_MODE_BACKGROUND_END. It is not enough to call THREAD_MODE_BACKGROUND_END after PROCESS_MODE_BACKGROUND_BEGIN.
So Arno is not quite correct with the claim that THREAD_MODE_BACKGROUND_END undoes the effects of PROCESS_MODE_BACKGROUND_BEGIN even for a single thread.
Additional note: SetProcessPriorityBoost with bDisablePriorityBoost = TRUE does not have any such effect on the working set.
The process priority class and the thread priority together form the base priority of a thread. See Scheduling Priorities to find out how the priorities are assembled. Looking at this list, it becomes clear that your understanding is somewhat correct: within a certain priority class, the base priority can take various values, determined by the thread priority.
The PROCESS_MODE_BACKGROUND_BEGIN value for SetPriorityClass and the THREAD_MODE_BACKGROUND_END value for SetThreadPriority are not supported on all Windows versions.
PROCESS_MODE_BACKGROUND_BEGIN:
The system lowers the resource scheduling priorities of the process (and its threads) so that it can perform background work without significantly affecting activity in the foreground.
THREAD_MODE_BACKGROUND_END:
End background processing mode. The system restores the resource scheduling priorities of the thread as they were before the thread entered background processing mode.
The consequence of the scenario in question here is predictable: SetPriorityClass will put the process, with all of its threads, into background processing mode. The following SetThreadPriority will only release the calling thread from background processing mode, but all other threads of the process will stay in background processing mode.
Note: Only the combination of process priority class and thread priority determines the base priority. Therefore neither a call to GetThreadPriority nor a call to GetPriorityClass will return the base priority; only their combination yields the base priority, which is described in the "Scheduling Priorities" link above. Unfortunately the new background processing mode values aren't yet included in the base priority list. But the name base priority tells what matters here: based on the base priority (derived from process priority class and thread priority), the scheduler is allowed to dynamically adapt the scheduling priority. The background mode is just another way to fine-tune the scheduling priority. Another way is priority boosts. The priority boost functionality has existed for some time. The new background processing mode values for SetThreadPriority and SetPriorityClass expose this capability directly; in Windows XP this had to be done by a call to SetProcessPriorityBoost.
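To illustrate the symmetric pairing described above (a sketch only, not a definitive recipe; THREAD_MODE_BACKGROUND_BEGIN/END require Windows Vista or later, and BackgroundWorker is just a made-up worker function): thread background mode is ended with THREAD_MODE_BACKGROUND_END, and process background mode with PROCESS_MODE_BACKGROUND_END.

#include <windows.h>

DWORD WINAPI BackgroundWorker(LPVOID arg)
{
    (void)arg;  /* unused */

    /* Lower the resource scheduling priority of this thread only. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN))
        return 1;   /* fails on versions without background mode support */

    /* ... do low-priority background work here ... */

    /* Leave background mode with the matching END value. */
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    return 0;
}

For process-wide background mode, the matching pair would be SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN) and, later, SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END).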
Are there any good resources (books, websites) that give a thorough comparison of the different scheduling algorithms for a finite state machine (FSM) in an embedded system without an OS?
I am designing a simple embedded web server without an OS. I would like to know what are the various methods used to schedule the processing of the different events that occur in the system.
For example, if two events arrive at the same time, how are the events prioritized? If I assign different priorities to events, how do I ensure that the higher-priority event gets processed first? If an even higher-priority event comes in while an event is being processed, how can I make sure that that event is processed immediately?
I'm planning on using an FSM to check various conditions upon an event's arrival and then to properly schedule the event for processing. Because the embedded web server does not have an OS, I am considering using a cyclic executive approach. But I would like to see a comparison of the pros and cons of different algorithms that could be used in this approach.
If I knew what the question meant, the answer would still probably be Miro Samek's Practical UML Statecharts in C/C++, Second Edition: Event-Driven Programming for Embedded Systems.
You state: "I mean for example scheduling condion in like ,if two task arrived at the same time which task need to be prioritized and simillar other situations in embedded webserver."
Which I interpret as: "What is the set of rules used to determine which task gets executed first (scheduled) when multiple tasks arrive at the same time."
I used your terminology, "task", to illustrate the similarity. But Clifford is correct. The proper term should be "event" or "message".
And when you say "scheduling condition" I think you mean "set of rules that determines a schedule of events".
The definition of algorithm is: A process or set of rules to be followed in calculations or other problem-solving operations, esp. by a computer.
From a paper entitled Scheduling Algorithms:
Consider the central processing unit of a computer that must process a sequence of jobs that arrive over time. In what order should the jobs be processed in order to minimize, on average, the time that a job is in the system from arrival to completion?
Which again, sounds like what you're calling "scheduling conditions".
I bring this up because using the right words to describe what you are looking for will help us (the SO community) give you better answers. And will help you as you research further on your own.
If my interpretation of your question still isn't what you have in mind, please let me know what, in particular, I've said is wrong and I will try again. Maybe some more examples would help me better understand.
Some further reading on scheduling (which is what you asked for):
A good starting point of course is the Wikipedia article on Scheduling Disciplines
A bit lower level than you are looking for but still full of detailed information on scheduling is Scheduling Algorithms for High-Level Synthesis (NOTE: for whatever reason the PDF has the pages in reverse order, so start at the bottom)
An example of a priority interrupt scheduler:
Take an architecture where Priority Level 0 is the highest. Two events come in simultaneously: one with Priority 2 and another with Priority 3. The scheduling algorithm starts processing the one with Priority 2 because it has the higher priority.
While the event with Priority 2 is being processed, another event with Priority 0 comes in. The scheduler interrupts the event with Priority 2 and processes the event with Priority 0.
When it's finished processing the Priority 0 event, it returns to processing the Priority 2 event. When it's finished processing the Priority 2 event, it processes the Priority 3 event.
Finally, when it's done with processing all of the priority interrupts, it returns control to the main processing task which handles events where priority doesn't matter.
An illustration (the original answer includes an image, taken from the Toppers Kernel Spec, showing prioritized interrupt handlers preempting a main task; it is not reproduced here):
In that image, the "task" is the super loop which DipSwitch mentioned, or the infinite loop in main() that occurs in a cyclic executive which you mentioned. The "events" are the various routines that are run in the super loop, or interrupts as shown in the image if they require prioritization.
Terms to search for are Priority Interrupt and Control Flow. Some good reading material is the Toppers Kernel Spec (where I got the image from), the ARM Interrupt Architecture, and a paper on the 80196 Interrupt Architecture.
I mention the Toppers Kernel Spec just because that's where I got the image from. But at the heart of any real-time OS is its scheduling algorithm and interrupt architecture.
The "on event" processing you ask about would be handled by the microprocessor/microcontroller interrupt subsystem. How you structure the priority levels and how you handle non-priority events is what makes up the totality of your scheduling algorithm.
An example of a cooperative scheduler:
#include <stdint.h>

// NUM_TASKS, Disable_Interrupts() and Enable_Interrupts() are assumed to be
// provided elsewhere; they are platform-specific.

typedef struct {
    void (*task)(void);   // Pointer to the task function.
    uint32_t period;      // Period to execute with (in timer ticks).
    uint32_t delay;       // Delay (in ticks) before the first call.
} task_type;

volatile uint32_t elapsed_ticks = 0;

task_type tasks[NUM_TASKS];

void Dispatch_Tasks(void)
{
    Disable_Interrupts();
    while (elapsed_ticks > 0) {                  // TRUE only if the ISR ran.
        for (uint32_t i = 0; i < NUM_TASKS; i++) {
            if (--tasks[i].delay == 0) {         // Task is due this tick.
                tasks[i].delay = tasks[i].period;
                Enable_Interrupts();
                tasks[i].task();                 // Execute the task!
                Disable_Interrupts();
            }
        }
        --elapsed_ticks;
    }
    Enable_Interrupts();
}

// Count the number of ticks yet to be processed.
void Timer_ISR(void)
{
    ++elapsed_ticks;
}
The above example was taken from a blog post entitled "Simple Co-Operative Scheduling".
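To show how the table might be wired up (a sketch under assumptions: Task_A, Task_B and Init_Timer are placeholders, and the original blog post may set things up differently), NUM_TASKS would be defined before the scheduler code above and the super loop just calls Dispatch_Tasks() forever:

#define NUM_TASKS 2   /* must be visible before the scheduler code above */

static void Task_A(void) { /* e.g. poll the network stack */ }
static void Task_B(void) { /* e.g. service an active connection */ }

int main(void)
{
    tasks[0] = (task_type){ Task_A, 10, 10 };  /* every 10 ticks */
    tasks[1] = (task_type){ Task_B,  2,  1 };  /* every 2 ticks, first call after 1 */

    Init_Timer();          /* platform-specific: make Timer_ISR fire each tick */
    for (;;)
        Dispatch_Tasks();  /* run whichever tasks are due */
}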
A cooperative scheduler is a combination of a super loop and a timer interrupt. From Section 2.4 in NON-BLOCKING HARDWARE CODING FOR EMBEDDED SYSTEMS:
A Cooperative scheduler is essentially a combination of the two previously discussed schedulers. One timer is set to interrupt at a regular interval, which will be the minimum time resolution for the different tasks. Each task is then assigned a period that is a multiple of the minimum resolution of the interrupt interval. A function is then constantly called to update the interrupt count for each task and run tasks that have reached their interrupt period. This results in a scheduler that has the scalability of the Superloop with the timing reliability of the Time Triggered scheduler. This is a commonly used scheduler for sensor systems. However, this type of scheduler is not without its limitations. It is still important that the task calls in a cooperative scheduler are short. If one task blocks longer than one timer interrupt period, a time-critical task might be missed.
And for a more in-depth analysis, here is a paper from the International Journal of Electrical & Computer Sciences.
Preemptive versus Cooperative:
A cooperative scheduler cannot handle asynchronous events without some sort of preemption mechanism running on top of it. An example of this would be a multilevel queue architecture. Some discussion of this can be found in this paper on CPU Scheduling. There are, of course, pros and cons to each approach, a few of which are described in this short article on RTKernel-32.
As for "any specific type preemptive scheduling scheduling process that can satisfy priority based task scheduling (like in the graph)", any priority based interrupt controller is inherently preemptive. If you schedule one task per interrupt, it will execute as shown in the graph.
If you go into Task Manager, right click a process, and set priority to Realtime, it often stops program crashes, or makes them run faster.
In a programming context, what does this do?
It calls SetPriorityClass().
Every thread has a base priority level determined by the thread's priority value and the priority class of its process. The system uses the base priority level of all executable threads to determine which thread gets the next slice of CPU time. The SetThreadPriority function enables setting the base priority level of a thread relative to the priority class of its process. For more information, see Scheduling Priorities.
It tells the Windows scheduler to be more or less greedy when allocating execution time slices to your process. Realtime execution makes it never yield execution (not even to drivers, according to MSDN), which may cause stalls in your app if it waits on external events but never yields on its own (via Sleep, SwitchToThread or WaitFor[Single|Multiple]Objects). As such, realtime should be avoided unless you know that the application will handle it correctly.
It works by changing the weight given to this process in the OS task scheduler. Your CPU can only execute one instruction at a time (to put it very, very simply), and the OS's job is to keep switching between the running processes. By raising or lowering the priority, you're affecting how much CPU time the process is allotted relative to the other applications currently being multitasked.
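For reference, doing from code what Task Manager does is a single call to SetPriorityClass (a minimal sketch; note that asking for REALTIME_PRIORITY_CLASS without administrator rights is usually downgraded silently to HIGH_PRIORITY_CLASS):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Equivalent of Task Manager's "Set priority -> Realtime" for this process. */
    if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    /* Report what the scheduler actually granted. */
    printf("Priority class now: 0x%lx\n", GetPriorityClass(GetCurrentProcess()));
    return 0;
}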