I know that the kernel scheduler is run periodically. This period is determined by a timer.
However, I have been unable to find where the IRQ handler for the timer interrupt lives, or to trace the entire flow of the scheduler code from beginning to end.
I understand that the schedule() function may have several entry and exit points, but could someone point me towards where to look for these?
From the kernel source, I've gathered that __schedule() is the main scheduling function and that it is called from schedule(). But what calls schedule()? And what calls the function that calls schedule()?
There are actually two schedulers, or rather two pieces of scheduling code, in the Linux kernel. There is the core scheduler, the schedule() function you yourself mentioned, which calls __schedule(). schedule() is called from many points in the kernel:
Explicit blocking, as in the case of semaphores, mutexes, etc. (see the sketch after this list).
The TIF_NEED_RESCHED flag is checked on return from interrupts and on return to userspace; if it is set, schedule() is called.
A process wakes up.
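To make the first case concrete, here is a minimal sketch of a task blocking itself and calling schedule() directly (kernel-style code; data_ready and wait_for_data are hypothetical, and the waker that sets the flag and wakes the task is not shown):

#include <linux/sched.h>
#include <linux/types.h>

static bool data_ready;                         /* hypothetical condition, set by the waker */

static void wait_for_data(void)                 /* hypothetical helper */
{
    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);  /* mark ourselves as not runnable */
        if (data_ready)                         /* the condition we are waiting for */
            break;
        schedule();                             /* explicit blocking: let another task run */
    }
    __set_current_state(TASK_RUNNING);          /* we were woken up; runnable again */
}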
There is another piece of scheduler code named scheduler_tick() (this too resides in core.c), which is the periodic scheduler and is called by the timer code (timer.c) via interrupt with a frequency of HZ, i.e. scheduler_tick() is called HZ times per second. HZ is architecture- and configuration-dependent, and its value typically lies between 100 and 1024. scheduler_tick() calls the task_tick() hook of the scheduling class to which the task currently running on the processor belongs.
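Heavily simplified, that periodic path looks roughly like this (a sketch only, with locking and statistics updates omitted; not the literal kernel code):

void scheduler_tick(void)
{
    int cpu = smp_processor_id();
    struct rq *rq = cpu_rq(cpu);                 /* this CPU's runqueue */
    struct task_struct *curr = rq->curr;         /* task currently running on this CPU */

    update_rq_clock(rq);
    curr->sched_class->task_tick(rq, curr, 0);   /* e.g. CFS may set TIF_NEED_RESCHED here */
}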
Is the thread in MS Windows with C++ a time slice or the execution of a function or both?
A thread is executing a function, which is a block of code inside an outer loop. If you send a signal (via a global variable) to break out of the outer loop, the function returns. But what happens to the running thread, assuming it is a time slice of execution?
Neither.
If your scheduler uses a time-slice algorithm, then the time slice determines when and for how long your thread will run.
A thread is an object that manages a block of executable code that can be scheduled. Typically, as part of thread creation you pass a function pointer to that block of code. When the "job" of the executable code is done the thread is destroyed.
In 32-bit and 64-bit Windows, every thread runs a specified function. Conceptually speaking, the initial thread of a new process runs the application's main function, and every additional thread runs a function specified by the programmer when the thread is created. See the documentation for CreateThread; the lpStartAddress argument specifies the function for the thread to run.
(In fact, each thread also runs operating system code, and usually runtime library code as well, but that's an implementation detail that doesn't matter for our purposes.)
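For concreteness, here is a minimal example of that (error handling mostly omitted; worker is just an illustrative name):

#include <windows.h>
#include <stdio.h>

/* This is the function passed as lpStartAddress; the new thread runs it. */
static DWORD WINAPI worker(LPVOID param)
{
    (void)param;              /* unused in this example */
    printf("worker thread running\n");
    Sleep(100);               /* voluntarily gives up the CPU for a while */
    return 0;                 /* returning from here ends the thread */
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    if (h == NULL)
        return 1;
    WaitForSingleObject(h, INFINITE);   /* block until the worker has exited */
    CloseHandle(h);
    return 0;
}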
Conceptually, when any particular thread is running on a particular CPU core, it might stop for either of two reasons: because the thread has stopped running altogether, or because of a context switch. In the case of a context switch, the thread will be started up again at a later time, and from the thread's point of view everything will look the same as it did when it was interrupted.
(In fact, the OS may also interrupt the thread in order to run device driver or other operating system code. This doesn't involve a context switch; the device driver code runs in the context of the interrupted thread, which is one of the reasons device drivers are hard to write.)
Here are some of the reasons the thread might stop running altogether ["exit"]:
The function the thread was created to run has exited.
The thread calls ExitThread().
Some other thread calls TerminateThread().
Here are some of the reasons there might be a context switch:
The thread's timeslice has expired.
Another thread with a higher priority has become ready to run.
The thread calls Sleep() or one of the wait functions.
It's hard to tell what you're trying to ask, so this may not have addressed it. But perhaps it will clarify things enough to allow you to ask your question in words I can understand.
I am a little confused between workqueues and kthreads when they are created as follows:
Create a kthread for each online CPU and bind each one to its own CPU:
for_each_online_cpu(cpu) {
    kthread = kthread_create(func, NULL, "worker/%u", cpu);
    kthread_bind(kthread, cpu);
    wake_up_process(kthread);
}
// Each kthread will process work in a serialized manner.
Create a BOUND workqueue for each online CPU with max_active set to 1:
for_each_online_cpu(cpu) {
    wq = alloc_workqueue("my_wq/%u", WQ_MEM_RECLAIM, 1, cpu);
}
// queue_work_on(cpu, wq, work) will ensure the work items queued on a
// particular CPU are processed in a serialized manner.
Please let me know if my understanding is correct, and what the advantages of kthreads over workqueues are, and vice versa.
Thanks in advance.
"Work" is some action that should complete in a reasonable time. Though it can sleep, it shouldn't do so for a long time, because multiple work items share the the same worker thread.
A thread is yours to run for as long as you want. It doesn't have to return to some caller in order to do other work, so you can put it in a loop (and that is usually done). The loop can contain arbitrary sleeps.
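A minimal sketch of that model (my_thread_fn and the setup shown in the trailing comment are just one common way to do it):

#include <linux/kthread.h>
#include <linux/delay.h>

/* The thread body: loop until asked to stop, sleeping as long as we like. */
static int my_thread_fn(void *data)
{
    while (!kthread_should_stop()) {
        /* ... do some work ... */
        msleep(1000);                    /* arbitrary sleeps are fine here */
    }
    return 0;
}

/* Creation (e.g. in module init):  t = kthread_run(my_thread_fn, NULL, "my_thread");
 * Teardown (e.g. in module exit):  kthread_stop(t);                                  */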
Work queues are used in situations when the caller cannot do the intended action itself, for instance because it is an interrupt service routine and the work is too long for an interrupt, or is otherwise inappropriate to run in an interrupt (because it requires a process context).
First of all, a workqueue is also a kthread. If you are just using the default queue, you declare your work function and call schedule_work(), which in turn adds your work item to the default workqueue for that processor. This default workqueue is nothing but a kthread that was created at boot time.
Now, about the advantages and disadvantages: a workqueue is used in a very specific scenario, namely when you want to defer your work to some later time. As @Kaz mentioned, one such situation is when you are in an interrupt handler and want to get out as soon as possible.
So with a workqueue you can schedule your work for some later time, whereas a normal kthread cannot be deferred in that way.
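A small sketch of that deferral pattern, assuming a hypothetical device whose interrupt handler pushes the slow part onto the default workqueue:

#include <linux/workqueue.h>
#include <linux/interrupt.h>

/* Runs later in a worker kthread, in process context; it may sleep. */
static void my_work_fn(struct work_struct *work)
{
    /* ... the slow, sleep-capable part of the processing ... */
}
static DECLARE_WORK(my_work, my_work_fn);

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    /* do only the urgent part here, then defer the rest */
    schedule_work(&my_work);     /* queue onto the default (system) workqueue */
    return IRQ_HANDLED;
}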
In my kernel configuration, CONFIG_PREEMPT is not set. Since schedule() is not allowed in an interrupt handler, how is round-robin-style scheduling implemented in the Linux kernel? That is, who calls the scheduler so often? In entry_32.S, preempt_schedule_irq is called only if CONFIG_PREEMPT is set.
What happens is that the timer on the CPU is set to interrupt the kernel every so often. But we can't just call schedule() from interrupt context, right? So what the kernel does is a neat trick: it changes the currently executing task while handling the interrupt and then returns. What this effectively does is switch out the context from underneath the handler, so the handler completes, but at the same time the next context to run is now the next task that will execute. Read up on context_switch() (IIRC that's what it's called) and you will see that it switches the stack and context from underneath the current execution and resumes the same function in another context.
And CONFIG_PREEMPT only applies to preemption of code running in kernel context; userspace tasks can always be preempted. All this means is that any kernel code that starts to execute runs to completion (unless it calls schedule() itself or blocks waiting for I/O, etc.). Normally the kernel can be preempted as long as it does not hold any locks, except in certain cases where acquiring a lock can put the thread to sleep.
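One way to picture the !CONFIG_PREEMPT case, consistent with the "TIF_NEED_RESCHED is checked on return to userspace" point in the first answer above (pseudo-code only; the real code is architecture-specific assembly such as entry_32.S):

/* Pseudo-code: on the way out of the timer interrupt, back towards user
 * mode, the kernel looks at the flag that scheduler_tick() may have set. */
void return_from_interrupt_to_user(void)
{
    if (need_resched())          /* was TIF_NEED_RESCHED set by the tick? */
        schedule();              /* safe here: the handler itself has finished */
    /* ... restore user registers and return to userspace ... */
}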
What is the difference between SetEvent() and a thread lock function? Can anyone please help me?
Events are used when you want to start/continue processing once a certain task is completed i.e. you want to wait until that event occurs. Other threads can inform the waiting thread about the completion of this task using SetEvent.
On the other hand, a critical section is used when you want only one thread to execute a block of code at a time, i.e. you want a set of instructions to be executed by one thread without any other thread changing the state in the meantime. For example, you are inserting an item into a linked list, which involves multiple steps; during that time you don't want another thread to come along and try to insert another object into the list. So you block the other threads until the first one finishes, using a critical section.
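A small sketch contrasting the two (all names here are made up; initialization is shown in the trailing comment):

#include <windows.h>

CRITICAL_SECTION list_lock;   /* protects the shared list */
HANDLE item_added;            /* auto-reset event: "an item was inserted" */

/* Producer: take the lock while mutating shared state, then signal. */
DWORD WINAPI producer(LPVOID arg)
{
    (void)arg;
    EnterCriticalSection(&list_lock);
    /* ... insert an item into the shared list ... */
    LeaveCriticalSection(&list_lock);
    SetEvent(item_added);                  /* wake a thread waiting for this */
    return 0;
}

/* Consumer: wait for the event, then take the lock to read the list. */
DWORD WINAPI consumer(LPVOID arg)
{
    (void)arg;
    WaitForSingleObject(item_added, INFINITE);
    EnterCriticalSection(&list_lock);
    /* ... remove the item ... */
    LeaveCriticalSection(&list_lock);
    return 0;
}

/* Setup, e.g. in main():
 *   InitializeCriticalSection(&list_lock);
 *   item_added = CreateEvent(NULL, FALSE, FALSE, NULL);   // auto-reset, not signalled
 */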
Events can be used for inter-process communication, i.e. synchronising activity amongst different processes. They are typically used for 'signalling' the occurrence of an activity (e.g. a file write has finished). More information on events:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686915%28v=vs.85%29.aspx
Critical sections can only be used within a process for synchronizing threads and use a basic lock/unlock concept. They are typically used to protect a resource from multi-threaded access (e.g. a variable). They are very cheap (in CPU terms) to use. The inter-process variant is called a Mutex in Windows. More info:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682530%28v=vs.85%29.aspx
Can anyone please let me know the usage of the schedule() function in the Linux kernel?
Who will schedule this scheduler thread?
Thanks in advance
Two mechanisms are available: voluntary or hardware (timer) based.
http://lwn.net/Articles/95334/
Arising from a recent patch, voluntary preemption has been introduced into the kernel:
http://kerneltrap.org/node/3440
This means the currently running code explicitly surrenders the CPU and lets the scheduler take over to select the next task from the list of runnable tasks. It has been found that this kind of voluntary preemption improves performance over involuntary preemption (which is timer-clock based).
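A minimal sketch of such a voluntary preemption point (process_one_item and the surrounding loop are hypothetical):

#include <linux/sched.h>

static void process_many_items(int n)
{
    int i;

    for (i = 0; i < n; i++) {
        process_one_item(i);   /* hypothetical helper doing the real work */
        cond_resched();        /* voluntarily offer the CPU if someone else needs it */
    }
}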
More info:
http://wiki.osdev.org/Context_Switching (software vs hardware context switching - similar to what we are talking here)
http://wiki.osdev.org/Scheduling_Algorithms
There is no scheduler thread in the Linux kernel. There are specific situations in which the schedule() function is called. For example:
1) When a process or kernel thread explicitly calls it in kernel mode. A process generally calls schedule() when it needs to wait for some event to occur, such as the availability of data from an input/output device (see the sketch after this list).
2) When a process with a higher priority than the current process was waiting for some event and that event occurs.
3) When the time slice allocated to the current process expires.
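As an illustration of cases 1 and 2, here is a small sketch of a driver-style wait, where the sleeping side ends up calling schedule() internally and the waking side makes a possibly higher-priority waiter runnable (data_ready and the function names are made up):

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(read_queue);
static int data_ready;                     /* condition set by the producer side */

/* Case 1: sleep until data is available; wait_event_interruptible()
 * calls schedule() internally while the condition is false. */
static int wait_for_input(void)
{
    return wait_event_interruptible(read_queue, data_ready);
}

/* Waker side (relates to case 2): make the condition true and wake the
 * sleepers, which may cause a higher-priority task to run next. */
static void input_arrived(void)
{
    data_ready = 1;
    wake_up_interruptible(&read_queue);
}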