System call vs Interrupt contexts - linux-kernel

System calls are implemented using software interrupts (interrupt vector 128). In Robert Love's book "Linux Kernel Development" it's written that interrupt handling happens in interrupt context. It's also written that a system call runs in process context, but the system call handler is in fact an "interrupt handler", so why does it run in a different context?

You're getting the implementation of your platform mixed up with the design of the Linux Kernel.
When you're talking about the Linux kernel, interrupt context is where code runs 'on its own' with no process attached to it; it's commonly used for handling interrupts from devices. Process context is entered as a result of a system call from a userland process, and code running in it is 'attached' to that process.
When you're talking about the platform implementation, an interrupt context could simply mean that the processor is in an interrupt handler mode of some sort. I don't know enough about your platform to provide anything specific.
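To make the distinction concrete, here is a minimal sketch (not from the book) of how kernel code can check which context it is running in. in_interrupt() and the current task pointer are real kernel facilities; the function and the messages are invented for the example.

    /* Minimal sketch: report whether we are in interrupt or process context. */
    #include <linux/kernel.h>
    #include <linux/hardirq.h>   /* in_interrupt() */
    #include <linux/sched.h>     /* current, struct task_struct */

    static void report_context(void)
    {
        if (in_interrupt()) {
            /* Interrupt context: no userland process is attached, `current`
             * is just whatever task happened to be running, and sleeping
             * is forbidden. */
            pr_info("interrupt context\n");
        } else {
            /* Process context: we run on behalf of the process that entered
             * the kernel, e.g. via a system call. */
            pr_info("process context, on behalf of %s (pid %d)\n",
                    current->comm, current->pid);
        }
    }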

Related

How OS protects against malicious memory access from assembly level code?

I know about the system calls that the OS provides to protect programs from accessing other programs' memory. But that only helps if I use the system call library provided by the OS. What if I write assembly code myself that sets the CPU's kernel-mode bit and executes a privileged instruction (let's say it modifies the OS's program segment in memory)? Can the OS protect against that?
P.S. This is purely out of curiosity. If any good blog or book reference can be provided, that would be helpful, as I want to study operating systems in as much detail as possible.
The processor protects against such malicious mischief by (1) requiring you to be in an elevated mode (for our example here, KERNEL); and (2) limiting access to kernel mode.
In order to enter kernel mode from user mode there either has to be an interrupt (not applicable here) or an exception. Usually both are handled the same way, but there are some bizarre processors (did anyone say Intel?) that do things a bit differently.
The operating system's exception and interrupt handlers must limit what the user-mode program can do.
What if I write a assembly code myself that sets CPU bit for kernel mode and executes a privileged instruction
You can't just set the kernel-mode bit in the processor status register to enter kernel mode.
Can OS protect against that ?
The CPU protects against that.
If any good blog or book reference can be provided, that would be helpful as I want to study OS in as much detail as possible.
The VAX/VMS Systems Internals book is old but it is cheap and shows how a real OS has been implemented.
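To see "the CPU protects against that" in action, here is a small user-mode sketch (assuming x86/x86-64 Linux): executing a privileged instruction such as hlt from user mode raises a general-protection fault, which the kernel delivers to the process as a signal instead of letting the instruction run.

    /* Sketch: a privileged instruction executed from user mode is refused by the CPU. */
    #include <stdio.h>

    int main(void)
    {
        printf("about to execute a privileged instruction from user mode...\n");
        fflush(stdout);

        /* `hlt` may only be executed in kernel mode (ring 0); here we are in
         * ring 3, so the CPU faults and the kernel kills the process
         * (typically with SIGSEGV). */
        __asm__ volatile("hlt");

        /* Never reached. */
        printf("this line is never printed\n");
        return 0;
    }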
This blog clearly explains what my confusion was.
http://minnie.tuhs.org/CompArch/Lectures/week05.html
Even though user programs can switch to kernel mode, they have to do it through an interrupt instruction (int in the case of x86), and the handler for this interrupt is written by the OS (probably installed while it was in kernel mode at boot time). So privileged instructions can only be executed by OS code.
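A small sketch of that entry path (assuming 32-bit x86 Linux conventions): the only way into kernel mode is through an entry point the kernel installed itself, here the int 0x80 software interrupt, so user code can request services but cannot choose what runs in kernel mode.

    /* Sketch: issuing the write system call directly through int 0x80. */
    int main(void)
    {
        static const char msg[] = "hello via int 0x80\n";
        long ret;

        /* eax = system call number (4 = write on 32-bit x86),
         * ebx/ecx/edx = file descriptor, buffer, length. */
        __asm__ volatile("int $0x80"
                         : "=a"(ret)
                         : "a"(4), "b"(1), "c"(msg), "d"(sizeof msg - 1)
                         : "memory");
        return 0;
    }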

What does a thread do when switching from user to kernel mode under WindowsNT (recent x86 versions, Vista and Win7)?

From what I understand, a thread that executes in user mode can eventually enter code that switches to kernel mode (using sysenter). But how can a thread that originated in user code execute kernel code?
E.g.: I'm calling CreateFile(), which delegates to NtCreateFile(), which in turn calls ZwCreateFile(), then ZiFastSystemCall()... then sysenter... profit, kernel access?
Edit
This question, "How does Windows protect transition into kernel mode", has an answer that helped me understand; see the quote:
"The user mode thread is causing an exception that's caught by the Ring 0 code. The user mode thread is halted and the CPU switches to a kernel/ring 0 thread, which can then inspect the context (e.g., call stack & registers) of the user mode thread to figure out what to do." Also see this blog post, very informative: http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
The short answer is that it can't.
What happens is that when you create a user-mode thread, the kernel creates a matching kernel mode thread. When "your" thread needs to execute some code in kernel mode, it's actually executed in the matching kernel mode thread.
Disclaimer: The last time I really looked closely at this was probably with Win2K or maybe even NT4 -- but I doubt much has changed in this respect.
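For what it's worth, from the application's point of view there is nothing special to do; the transition is hidden inside the API call. A minimal Win32 sketch (the file name and error handling are only illustrative):

    /* Sketch: an ordinary API call; the user-to-kernel transition happens
     * inside CreateFileW, in ntdll's system call stub (sysenter/syscall),
     * not in application code. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(L"example.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }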

How to call a kernel module from a user-space application

This link describes user-space and kernel-space communication: http://www.makelinux.net/ldd3/chp-2-sect-3#chp-2-ITERM-4135
Could anyone explain it with a simple user-space application program in C that links to and communicates with (sends/receives values to/from) a kernel object?
The program insmod, available on most Linux machines (but requiring sudo privileges to run), instructs the kernel to load a specified module (kernel object) through the init_module system call.
More generally, user-space programs communicate with the kernel through these system calls, which are essentially requests to the kernel from user space. Any application you write in C must use system calls in some way to interact with the system (for example, printf uses the write system call under the hood to put characters on the screen).
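As a small illustration of that last point (assuming Linux/glibc), the same text can be printed through the C library, through the write(2) wrapper, or through the raw syscall(2) interface; all three end up in the kernel's write system call.

    /* Sketch: three routes to the same system call. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "hello from user space\n";

        printf("%s", msg);                                    /* libc, buffers then calls write() */
        write(STDOUT_FILENO, msg, strlen(msg));               /* thin wrapper over the system call */
        syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));  /* raw system call by number */
        return 0;
    }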
Just open a file with open(2). The C library adds code to the application for this call which puts the function arguments in place and then deliberately "crashes" in a well-defined way (a trap; see system call). The kernel catches all such crashes and handles them.
Since this is a "good" crash, the kernel will look up which function to invoke, get the arguments from the stack and invoke the function.
The reason for this complicated approach is security: By "crashing", the application completely relinquishes control. The CPU will switch to a different mode, too. In this mode, it can access the hardware (in "application" mode, any access to the hardware leads to an "illegal access" crash which terminates your app).
The open(2) function itself can't do much. Instead, it will check which file system can handle the request and invoke the open function of the file system. File systems are implemented as kernel modules.
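To answer the "simple user space application" part directly, here is a sketch of the user-space side only. It assumes a hypothetical kernel module that has registered a character device as /dev/mydev; the device name and the exchanged text are made up. Every step is an ordinary system call, and the kernel routes each one to the module's corresponding file_operations handler.

    /* Sketch: user-space program talking to a (hypothetical) character device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char reply[128];
        ssize_t n;

        int fd = open("/dev/mydev", O_RDWR);       /* dispatched to the module's open handler */
        if (fd < 0) {
            perror("open /dev/mydev");
            return 1;
        }

        const char msg[] = "value=42";
        if (write(fd, msg, strlen(msg)) < 0)        /* module's write handler receives the bytes */
            perror("write");

        n = read(fd, reply, sizeof reply - 1);      /* module's read handler supplies the answer */
        if (n > 0) {
            reply[n] = '\0';
            printf("kernel module replied: %s\n", reply);
        }

        close(fd);
        return 0;
    }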

Spawning a kernel mode thread - Windows

I have intensive processing that I need to perform in a device driver, at DISPATCH_LEVEL or lower IRQL.
How do I create a kernel-thread?
What IRQL does it run at? Can I control this?
How is it scheduled? Because I am thinking from a user-mode perspective here, what priority does it run at?
What kernel functions can I use to provide locking / synchronization?
You can create a system thread with this. As you can see, one of its parameters is a start routine that can hold custom code; in it you can use KeRaiseIrql and KeLowerIrql. By default, the thread will run at PASSIVE_LEVEL. "Locks, Deadlocks, and Synchronization" is a very helpful paper on synchronization in the Windows kernel, and everyone who has to do some tinkering with the Windows kernel should read or at least skim it.
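Presumably the routine referred to above is PsCreateSystemThread, the documented way to create a system thread from a driver. A rough WDM sketch (error handling trimmed; the spin lock is only there to illustrate the synchronization primitives mentioned):

    /* Sketch for a WDM driver: spawn a kernel thread that starts at PASSIVE_LEVEL. */
    #include <ntddk.h>

    static KSPIN_LOCK g_Lock;   /* protects shared data */

    static VOID WorkerThread(PVOID StartContext)
    {
        KIRQL oldIrql;
        UNREFERENCED_PARAMETER(StartContext);

        /* Runs at PASSIVE_LEVEL by default and is scheduled like any other thread. */
        KeAcquireSpinLock(&g_Lock, &oldIrql);   /* raises to DISPATCH_LEVEL */
        /* ... touch shared state here ... */
        KeReleaseSpinLock(&g_Lock, oldIrql);    /* back to the previous IRQL */

        PsTerminateSystemThread(STATUS_SUCCESS);
    }

    NTSTATUS SpawnWorker(void)
    {
        HANDLE threadHandle;
        NTSTATUS status;

        KeInitializeSpinLock(&g_Lock);

        status = PsCreateSystemThread(&threadHandle, THREAD_ALL_ACCESS,
                                      NULL,              /* ObjectAttributes */
                                      NULL,              /* system process   */
                                      NULL,              /* ClientId         */
                                      WorkerThread,      /* start routine    */
                                      NULL);             /* start context    */
        if (NT_SUCCESS(status))
            ZwClose(threadHandle);   /* the handle is not needed afterwards */

        return status;
    }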

Interrupt processing in Windows

I want to know which threads process device interrupts. What happens when an interrupt arrives while a user-mode thread is running? Also, do other user threads get a chance to run while the system is processing an interrupt?
Kindly suggest some reference material describing how interrupts are handled by Windows.
Device interrupts themselves are (usually) processed by whatever thread had the CPU when the interrupt was taken, but in ring 0 and at a different protection level. This limits some of the actions an interrupt handler can take, because most of the time the current thread will not be related to the thread that is waiting for the event the interrupt is signalling.
The kernel itself is closed source, and only documented through its internal API. That API is exposed to device driver authors, and described in the driver development kits.
Some resources to get you started:
Any edition of Microsoft Windows Internals by Solomon and Russinovich. The current edition seems to be the 4th, but even an old edition will help.
The Windows DDK, now renamed the WDK. Its documentation is available online too. Be sure to read the Kernel Mode Design Guide...
Sysinternals has tools and articles to probe at and explain the kernel's behavior. This used to be an independent site until Microsoft got tired of Mark Russinovich seeming to know more about how the kernel worked than they did. ;-)
Note that source code for many of the common device drivers is included in the DDK samples. Although the production versions are almost certainly different, reading the sample drivers can answer some questions even if you don't want to implement a driver yourself.
Like any other operating system, Windows processes interrupts in kernel mode, at an elevated interrupt priority level (Windows calls these IRQLs, Interrupt Request Levels). Any user thread or lower-IRQL kernel thread running on the same processor will be interrupted while the interrupt request is processed, and will be resumed when the interrupt processing is complete.
In order to learn more about device interrupts on Windows you need to study device driver development. This is a niche topic; I don't think you can find many useful resources on the Web, and you may have to look for a book or a training course.
Anyway, Windows handles interrupts with Interrupt Request Levels (IRQLs) and Deferred Procedure Calls (DPCs). An interrupt is handled in kernel mode, which runs at a higher priority than user mode. A proper interrupt handler needs to react very quickly: it performs only the absolutely necessary operations and registers a Deferred Procedure Call to finish the rest later, once the processor's IRQL has dropped back to DISPATCH_LEVEL.
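A rough WDM sketch of that ISR/DPC split (device-specific details omitted; the routine names are illustrative): the ISR runs at the device's IRQL, does the minimum, and queues a DPC, which later runs at DISPATCH_LEVEL, below which normal thread scheduling resumes.

    /* Sketch: interrupt service routine deferring work to a DPC. */
    #include <ntddk.h>

    static KDPC g_Dpc;

    /* Runs at DISPATCH_LEVEL, after the ISR has returned. */
    static VOID DpcForIsr(PKDPC Dpc, PVOID DeferredContext,
                          PVOID SystemArgument1, PVOID SystemArgument2)
    {
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(DeferredContext);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);
        /* Finish the work the ISR deferred: complete requests, copy data, ... */
    }

    /* Interrupt service routine, registered with IoConnectInterrupt; runs at DIRQL. */
    static BOOLEAN InterruptService(PKINTERRUPT Interrupt, PVOID ServiceContext)
    {
        UNREFERENCED_PARAMETER(Interrupt);
        UNREFERENCED_PARAMETER(ServiceContext);

        /* 1. Check/acknowledge the device (omitted).            */
        /* 2. Defer everything else to DISPATCH_LEVEL via a DPC. */
        KeInsertQueueDpc(&g_Dpc, NULL, NULL);
        return TRUE;   /* the interrupt was ours */
    }

    VOID SetupDpc(void)
    {
        /* Typically done once at driver/device initialization. */
        KeInitializeDpc(&g_Dpc, DpcForIsr, NULL);
    }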
