I'm trying to understand context switching in OSs in general. I have a couple of questions that I could not find the answers to.
I would really appreciate any insight on these.
Do context switches happen mid-instruction? If not, is that also true for multi-step instructions (x86) like INC and XADD?
On which processor does the code responsible for context switching run? If it runs on an arbitrary processor, it could modify the registers on that processor, right? So how does the OS manage to save that particular processor's state?
First of all, please do not restrict OS to Windows :D
Do context switches happen mid-instruction? If not, is that also true for multi-step instructions (x86) like INC and XADD?
With software context switching, a context switch happens on a specific interrupt (a hardware timer, or an internal CPU tick-counter timer). Every CPU architecture (AFAIK) has a register or flag that notifies the fetch unit that there is a pending interrupt; the CPU then starts executing the ISR by setting the PC register. Note that the context switch itself is done inside an ISR. Because of how the interrupt mechanism works, an interrupt arriving while an instruction is executing causes no conflict: the current instruction executes to completion, and only then does the fetch unit load the first ISR instruction (after the hardware stack-frame operation, on most architectures).
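As a rough illustration of that sequence, here is a minimal C sketch of a hypothetical toy kernel's timer ISR hook. The structure names and the round-robin "next" pointer are invented for illustration and are not taken from any real OS:

    /* Toy-kernel sketch: the CPU has already finished the interrupted
     * instruction and pushed its hardware frame before this handler runs;
     * an assembly stub has copied the remaining registers into *saved. */
    #include <stdint.h>

    typedef struct cpu_context {
        uint64_t gp_regs[16];   /* general-purpose registers saved by the ISR stub */
        uint64_t pc;            /* program counter of the interrupted instruction */
        uint64_t flags;         /* saved flags register */
    } cpu_context;

    typedef struct task {
        cpu_context ctx;        /* where this task's register state is parked */
        struct task *next;      /* next task in a simple round-robin list */
    } task;

    static task *current;       /* task that was running on this CPU */

    cpu_context *timer_isr(cpu_context *saved)
    {
        current->ctx = *saved;      /* 1. save the outgoing task's context        */
        current = current->next;    /* 2. "scheduler" picks the next task         */
        return &current->ctx;       /* 3. the stub restores this context and      */
                                    /*    returns from the interrupt into it      */
    }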
Some CPU architectures also provide a hardware context-switching mechanism, in which the entire context switch is performed by the CPU itself. On Intel CPUs, the far versions of the CALL and JMP instructions are used to trigger such a switch and tell the CPU where to load its new state from.
On which processor does the code responsible for context switching run? If it runs on an arbitrary processor, it could modify the registers on that processor, right? So how does the OS manage to save that particular processor's state?
Each processor performs its own context switches. Each processor runs its own instance of the kernel scheduler, and the OS (observing the load balance across processors) assigns each task to one of the processors (at least in Linux).
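As a small user-space illustration of the per-CPU nature of scheduling on Linux (assuming glibc's sched_setaffinity(2) wrapper is available), the snippet below pins the calling process to CPU 0, so all of its future context switches happen under that processor's scheduler:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);               /* allow this process to run only on CPU 0 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling process */
            perror("sched_setaffinity");
            return 1;
        }
        printf("restricted to CPU 0; context switches for this task happen there\n");
        return 0;
    }

Normally you would let the kernel's load balancer choose the CPU; this is just to show that the assignment exists.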
This question is about operating systems in general. Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
For example, if my program was set to maximum priority in the OS, would it perform exactly the same when run without an OS?
Is there any necessary mechanism in the implementation of operating systems that impacts the flow of instructions my program sends to the CPU?
Not strictly necessary mechanisms (depending on how you define "OS"); but typically there are IRQs, exceptions and task switches.
IRQs are used by devices to ask the OS (their device driver) for attention, interrupting the flow of instructions your program sends to the CPU. The alternative is polling, which wastes a huge amount of CPU time checking whether a device needs attention when it probably doesn't. Because applications need to use devices (file IO, keyboard, video, etc.) and wasting CPU time is bad, IRQs significantly improve the performance of applications.
Exceptions (like IRQs) also interrupt the normal flow of instructions. They occur when the normal flow can't continue, either because your program crashed or because it needs something. The most common cause of exceptions is virtual memory (e.g. using swap space to let the application have more memory than actually exists, where the exception tells the OS that your program touched memory that has to be fetched from disk first). In general this also improves performance, for multiple reasons: "can't execute because there's not enough RAM" amounts to zero performance, and various tricks reduce RAM consumption and increase the amount of RAM that can be used for things like caching files, which improves file IO speed.
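If you want to watch this happening on Linux (assuming mmap and getrusage are available), the sketch below maps anonymous memory and counts the page-fault exceptions the kernel services transparently while the program writes to it:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void)
    {
        const size_t len = 64 * 1024 * 1024;    /* 64 MiB of anonymous memory */
        unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);
        memset(buf, 0xAA, len);                 /* first touch faults each page in */
        getrusage(RUSAGE_SELF, &after);

        printf("minor page faults taken while writing: %ld\n",
               after.ru_minflt - before.ru_minflt);
        return 0;
    }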
Task switches are the basis of multi-tasking (e.g. being able to run more than one application at a time). If more tasks want CPU time than there are CPUs, the OS (scheduler) may (depending on task priorities and scheduler design) switch between them so that all the tasks get some CPU time. However, most applications spend most of their time waiting for something to do (e.g. waiting for the user to press a key) and don't need CPU time while waiting; and if the OS is only running one task, the scheduler does nothing (no task switches, because there's no other task to switch to). In other words, if the OS supports multi-tasking but you're only running one task, it makes no difference.
Note that in some cases, IRQs and/or tasks are also used to "opportunistically" do work in the background (when hardware has nothing better to do) to improve performance (e.g. pre-fetch, pre-process and/or pre-calculate data before it's needed so that the resulting data is available instantly when it is needed).
For example, if my program was set to maximum priority in the OS, would it perform exactly the same when run without an OS?
It's best to think of it as many layers - hardware & devices (CPU, etc), with kernel and device drivers on top, with applications on top of that. If you remove any of the layers nothing works (e.g. how can an application read and write files when there's no file system and no disk device drivers?).
If you shift all of the functionality that an OS provides into the application (e.g. a statically linked library that can make an application boot on bare metal); then if the functionality is the same the performance will be the same.
You can only improve performance by reducing functionality. For example, if you get rid of security you'll improve performance (temporarily, until your application becomes part of an attacker's botnet and performance becomes significantly worse due to all the bitcoin mining it's doing). In a similar way, you can get rid of flexibility (reboot the computer when you plug in a different USB flash stick), or fault tolerance (trash all of your data without any warning when the storage devices start failing because software assumed hardware is permanently perfect).
On a single-core computer there is only one real/physical point of control. How can the process handler get that point of control back when it wants to, when the only point of control is in the hands of the current process?
A hardware interrupt from the interrupt controller. This could be from an external device, such as a hard drive notifying the CPU that a DMA operation has completed or a UART indicating data is available to be read from its registers. Most often it is from a timer/clock cycle counter. Before the OS runs user-mode code, it configures this clock to interrupt after a certain number of clock cycles and configures an interrupt handler which invokes the OS's scheduler code.
All of the above applies to a preemptible OS, which covers nearly every modern OS. In the old days, the OS could not interrupt user-mode code; the user-mode code had to call back into the OS before another process could be scheduled. Obviously this meant that one program could freeze the entire system permanently.
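On Linux you can observe this preemption from user space (assuming getrusage is available): spin in a tight loop for a couple of seconds and then ask how many times the kernel took the CPU away without the program asking for it:

    #include <stdio.h>
    #include <sys/resource.h>
    #include <time.h>

    int main(void)
    {
        volatile unsigned long sink = 0;
        time_t end = time(NULL) + 2;
        while (time(NULL) < end)        /* roughly two seconds of spinning */
            sink++;

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);    /* ru_nivcsw: involuntary context switches */
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }

Each involuntary context switch reported here is the scheduler reclaiming the point of control via an interrupt, exactly as described above.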
A debugger makes perfect sense when you're talking about an interpreted program, because instructions always pass through the interpreter for verification before execution. But how does a debugger for a compiled application work? If the instructions are already laid out in memory and run, how can I be notified that a 'breakpoint' has been reached, or that an 'exception' has occurred?
With the help of hardware and/or the operating system.
Most modern CPUs have several debug registers that can be set to trigger a CPU exception when a certain address is reached. They often also support address watchpoints, which trigger exceptions when the application reads from or writes to a specified address or address range, and single-stepping, which causes a process to execute a single instruction and throw an exception. These exceptions can be caught by a debugger attached to the program (see below).
Alternatively, some debuggers create breakpoints by temporarily replacing the instruction at the breakpoint with an interrupt or trap instruction (thereby also causing the program to raise a CPU exception). Once the breakpoint is hit, the debugger replaces it with the original instruction and single-steps the CPU past that instruction so that the program behaves normally.
As far as exceptions go, that depends on the system you're working on. On UNIX systems, debuggers generally use the ptrace() system call to attach to a process and get a first shot at handling its signals.
TL;DR - low-level magic.
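To make the ptrace() path above concrete, here is a bare-bones Linux/x86-64 sketch of the software-breakpoint technique. The address in target_addr is a made-up placeholder; a real debugger would take it from the binary's symbols and handle errors properly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced by the parent */
            execl("/bin/ls", "ls", (char *)NULL);    /* child stops with SIGTRAP at exec */
            _exit(1);
        }

        int status;
        waitpid(child, &status, 0);                  /* child is now stopped at the exec trap */

        /* Hypothetical address of the instruction to break on. */
        unsigned long target_addr = 0x401000;

        long orig = ptrace(PTRACE_PEEKTEXT, child, (void *)target_addr, NULL);
        long patched = (orig & ~0xFFL) | 0xCC;       /* overwrite the first byte with INT3 */
        ptrace(PTRACE_POKETEXT, child, (void *)target_addr, (void *)patched);

        ptrace(PTRACE_CONT, child, NULL, NULL);      /* resume until something stops the child */
        waitpid(child, &status, 0);
        if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP)
            printf("trap hit; a debugger would now restore the original byte and single-step\n");

        kill(child, SIGKILL);
        return 0;
    }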
I have intensive processing that I need to perform in a device driver, at DISPATCH_LEVEL or lower IRQL.
How do I create a kernel-thread?
What IRQL does it run at? Can I control this?
How is it scheduled? Because I am thinking from a user-mode perspective here, what priority does it run at?
What kernel functions can I use to provide locking / synchronization?
You can create a system thread with PsCreateSystemThread. One of its parameters is a start routine that holds your custom code; inside it you can use KeRaiseIrql and KeLowerIrql. By default, system threads run at PASSIVE_LEVEL. "Locks, Deadlocks, and Synchronization" is a very helpful paper on synchronization in the Windows kernel; anyone who has to tinker with the Windows kernel should read it, or at least skim it.
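A minimal sketch of that, assuming a WDM-style driver; the worker routine's name and the work it does at DISPATCH_LEVEL are placeholders:

    #include <ntddk.h>

    VOID MyWorkerThread(PVOID StartContext)
    {
        KIRQL oldIrql;
        UNREFERENCED_PARAMETER(StartContext);

        /* The thread starts at PASSIVE_LEVEL; raise only for short critical work. */
        KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);
        /* ... work that must run at DISPATCH_LEVEL ... */
        KeLowerIrql(oldIrql);

        PsTerminateSystemThread(STATUS_SUCCESS);
    }

    NTSTATUS StartWorker(void)
    {
        HANDLE threadHandle;
        NTSTATUS status = PsCreateSystemThread(&threadHandle,
                                               THREAD_ALL_ACCESS,
                                               NULL,    /* ObjectAttributes            */
                                               NULL,    /* ProcessHandle: system proc. */
                                               NULL,    /* ClientId                    */
                                               MyWorkerThread,
                                               NULL);   /* StartContext                */
        if (NT_SUCCESS(status))
            ZwClose(threadHandle);   /* the handle isn't needed after creation */
        return status;
    }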
I want to know which thread processes device interrupts. What happens when there is an interrupt while a user-mode thread is running? Also, do other user threads get a chance to run while the system is processing an interrupt?
Kindly suggest some reference material describing how interrupts are handled by Windows.
Device interrupts themselves are (usually) processed by whatever thread had the CPU that took the interrupt, but in ring 0 and at a different protection level. This limits some of the actions an interrupt handler can take, because most of the time the current thread is not related to the thread that is waiting for the event the interrupt is indicating.
The kernel itself is closed source, and only documented through its internal API. That API is exposed to device driver authors, and described in the driver development kits.
Some resources to get you started:
Any edition of Microsoft Windows Internals by Solomon and Russinovich. The current seems to be the 4th edition, but even an old edition will help.
The Windows DDK, now renamed the WDK. Its documentation is available online too. Be sure to read the Kernel Mode Design Guide...
Sysinternals has tools and articles to probe at and explain the kernel's behavior. This used to be an independent site until Microsoft got tired of Mark Russinovich seeming to know more about how the kernel worked than they did. ;-)
Note that source code for many of the common device drivers is included in the DDK samples. Although the production versions are almost certainly different, reading the sample drivers can answer some questions even if you don't want to implement a driver yourself.
Like any other operating system, Windows processes interrupts in kernel mode, at an elevated Interrupt Request Level (IRQL). Any user thread or lower-IRQL kernel code running on that processor is interrupted while the interrupt request is processed, and resumes when the interrupt processing is complete.
In order to learn more about device interrupts on Windows, you need to study device driver development. This is a niche topic; I don't think you can find many useful resources on the Web, and you may have to look for a book or a training course.
Anyway, Windows handles interrupts with Interrupt Request Levels (IRQLs) and Deferred Procedure Calls. An interrupt is handled in kernel mode, which runs at a higher priority than user mode. A proper interrupt handler needs to react very quickly: it performs only the absolutely necessary operations and queues a Deferred Procedure Call to finish the work later, when the system drops back down to DISPATCH_LEVEL.
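A hedged WDM-style sketch of that ISR/DPC split (the routine names and the device-specific work are placeholders; KeInitializeDpc(&ext->Dpc, MyDpcRoutine, ext) would be called once during device setup):

    #include <ntddk.h>

    typedef struct _MY_DEVICE_EXTENSION {
        KDPC Dpc;            /* initialized with KeInitializeDpc at setup time */
        ULONG LastStatus;    /* whatever the ISR grabbed from the hardware     */
    } MY_DEVICE_EXTENSION;

    /* Runs later at DISPATCH_LEVEL, outside the ISR. */
    VOID MyDpcRoutine(PKDPC Dpc, PVOID DeferredContext, PVOID Arg1, PVOID Arg2)
    {
        MY_DEVICE_EXTENSION *ext = (MY_DEVICE_EXTENSION *)DeferredContext;
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Arg1);
        UNREFERENCED_PARAMETER(Arg2);
        /* ... finish processing ext->LastStatus, complete requests, etc. ... */
    }

    /* Runs at the device's IRQL (above DISPATCH_LEVEL) when the device interrupts. */
    BOOLEAN MyInterruptService(PKINTERRUPT Interrupt, PVOID ServiceContext)
    {
        MY_DEVICE_EXTENSION *ext = (MY_DEVICE_EXTENSION *)ServiceContext;
        UNREFERENCED_PARAMETER(Interrupt);

        /* 1. Do the absolute minimum: acknowledge the device, grab its status. */
        ext->LastStatus = 0;    /* placeholder for reading a hardware register */

        /* 2. Defer the rest: the kernel calls MyDpcRoutine at DISPATCH_LEVEL later. */
        KeInsertQueueDpc(&ext->Dpc, NULL, NULL);
        return TRUE;            /* this device generated the interrupt */
    }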