Context switch internals - linux-kernel

I want to learn and fill gaps in my knowledge with the help of this question.
So, a user is running a thread (kernel-level) and it now calls yield (a system call I presume).
The scheduler must now save the context of the current thread in the TCB (which is stored somewhere in the kernel), choose another thread to run, load its context, and jump to its CS:EIP.
To narrow things down, I am working on Linux running on top of x86 architecture. Now, I want to get into the details:
So, first we have a system call:
1) The wrapper function for yield will push the system call arguments onto the stack, push the return address, and raise an interrupt with the system call number placed in some register (say EAX).
2) The interrupt changes the CPU mode from user to kernel and jumps to the interrupt vector table and from there to the actual system call in the kernel.
3) I guess the scheduler gets called now, and it must save the current state in the TCB. Here is my dilemma: since the scheduler will use the kernel stack and not the user stack for performing its operations (which means the SS and SP have to be changed), how does it store the state of the user thread without modifying any of its registers in the process? I have read on forums that there are special hardware instructions for saving state, but then how does the scheduler get access to them, and who runs these instructions and when?
4) The scheduler now stores the state into the TCB and loads another TCB.
5) When the scheduler runs the original thread again, control gets back to the wrapper function, which clears the stack, and the thread resumes.
Side questions: Does the scheduler run as a kernel-only thread (i.e. a thread which can run only kernel code)? Is there a separate kernel stack for each kernel-thread or each process?

At a high level, there are two separate mechanisms to understand. The first is the kernel entry/exit mechanism: this switches a single running thread from running usermode code to running kernel code in the context of that thread, and back again. The second is the context switch mechanism itself, which switches in kernel mode from running in the context of one thread to another.
So, when Thread A calls sched_yield() and is replaced by Thread B, what happens is:
Thread A enters the kernel, changing from user mode to kernel mode;
Thread A in the kernel context-switches to Thread B in the kernel;
Thread B exits the kernel, changing from kernel mode back to user mode.
Each user thread has both a user-mode stack and a kernel-mode stack. When a thread enters the kernel, the current values of the user-mode stack pointer (SS:ESP) and instruction pointer (CS:EIP) are saved to the thread's kernel-mode stack, and the CPU switches to the kernel-mode stack - with the int $0x80 syscall mechanism, this is done by the CPU itself. The remaining register values and flags are then also saved to the kernel stack.
When a thread returns from the kernel to user-mode, the register values and flags are popped from the kernel-mode stack, then the user-mode stack and instruction pointer values are restored from the saved values on the kernel-mode stack.
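For concreteness, here is a simplified C sketch of the frame that ends up on the kernel stack for a 32-bit x86 syscall entry (compare struct pt_regs in the kernel's arch/x86 headers - the field order follows the kernel's, but the real struct also carries segment-padding fields):

/* Simplified sketch of what syscall entry leaves on the kernel stack. */
struct pt_regs_sketch {
    unsigned long ebx, ecx, edx, esi, edi, ebp, eax; /* pushed by the kernel entry code */
    unsigned long ds, es, fs, gs;                    /* segment registers */
    unsigned long orig_eax;                          /* the syscall number */
    unsigned long eip, cs, eflags;                   /* pushed by the CPU itself */
    unsigned long esp, ss;                           /* user SS:ESP, pushed by the CPU on the ring change */
};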
When a thread context-switches, it calls into the scheduler (the scheduler does not run as a separate thread - it always runs in the context of the current thread). The scheduler code selects the task to run next and calls the switch_to() function. This function essentially just switches the kernel stacks - it saves the current value of the stack pointer into the TCB for the current thread (called struct task_struct in Linux), and loads a previously-saved stack pointer from the TCB for the next thread. At this point it also saves and restores some other thread state that isn't usually used by the kernel - things like the floating point/SSE registers. If the threads being switched don't share the same virtual memory space (i.e. they're in different processes), the page tables are also switched.
So you can see that the core user-mode state of a thread isn't saved and restored at context-switch time - it's saved and restored to the thread's kernel stack when you enter and leave the kernel. The context-switch code doesn't have to worry about clobbering the user-mode register values - those are already safely saved away in the kernel stack by that point.
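You can get a feel for the save-SP/load-SP trick from user space with POSIX ucontext, which is conceptually what switch_to() does to kernel stacks. This is only an analogy, not kernel code, and the names here are mine; swapcontext() saves the current registers and stack pointer into one ucontext_t and loads another:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];   /* plays the role of the other context's kernel stack */

static void other_thread(void)
{
    puts("running in the other context");
    swapcontext(&thread_ctx, &main_ctx);   /* "context switch" back to main */
}

int main(void)
{
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link = &main_ctx;
    makecontext(&thread_ctx, other_thread, 0);

    puts("switching away");
    swapcontext(&main_ctx, &thread_ctx);   /* save our SP and registers, load theirs */
    puts("switched back");
    return 0;
}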

What you missed during step 2 is that the stack gets switched from the thread's user-level stack (where you pushed args) to the thread's protected-level stack. The context of the thread interrupted by the syscall is actually saved on this protected stack. Inside the ISR, and just before entering the kernel, this protected stack is switched again to the kernel stack you are talking about. Once inside the kernel, kernel functions such as the scheduler's eventually use the kernel stack. Later on, a thread gets elected by the scheduler and the system returns to the ISR; it switches back from the kernel stack to the newly elected thread's protected-level stack (or the former thread's, if no higher-priority thread is ready), which eventually contains the new thread's context. The context is therefore restored from this stack automatically (how depends on the underlying architecture). Finally, a special instruction restores the last sensitive registers, such as the stack pointer and the instruction pointer. Back in userland...
To sum up, a thread (generally) has two stacks, and the kernel itself has one. The kernel stack gets wiped at the end of each kernel entry. It's interesting to point out that since 2.6, the kernel itself is threaded for some processing, so a kernel thread has its own protected-level stack beside the general kernel stack.
Some resources:
3.3.3 Performing the Process Switch of Understanding the Linux Kernel, O'Reilly
5.12.1 Exception- or Interrupt-Handler Procedures of Intel's manual 3A (system programming). The chapter number may vary from one edition to another, so a lookup for "Stack Usage on Transfers to Interrupt and Exception-Handling Routines" should get you to the right one.
Hope this helps!

The kernel itself has no stack at all. The same is true for a process: it has no stack either. Threads are the only system citizens that are treated as execution units; because of this, only threads can be scheduled, and only threads have stacks. But there is one point which kernel-mode code exploits heavily: at every moment in time, the system works in the context of the currently active thread. Because of this, the kernel itself can reuse the stack of the currently active thread. Note that only one of them can execute at a given moment: either kernel code or user code. So when the kernel is invoked, it just reuses the thread's stack and performs a cleanup before returning control back to the interrupted activities in the thread. The same mechanism works for interrupt handlers, and the same mechanism is exploited by signal handlers.
In turn, a thread's stack is divided into two isolated parts: one called the user stack (because it is used when the thread executes in user mode), and the other called the kernel stack (because it is used when the thread executes in kernel mode). Once a thread crosses the border between user and kernel mode, the CPU automatically switches it from one stack to the other. The two stacks are tracked by the kernel and the CPU differently. For the kernel stack, the CPU permanently keeps a pointer to the top of the thread's kernel stack (on x86, this lives in the TSS). This is easy, because that address is constant for the thread. Each time a thread enters the kernel it finds an empty kernel stack, and each time it returns to user mode it leaves the kernel stack clean. At the same time, the CPU does not keep a pointer to the top of the user stack while the thread runs in kernel mode. Instead, while entering the kernel, the CPU creates a special "interrupt" stack frame on the top of the kernel stack and stores the value of the user-mode stack pointer in that frame. When the thread exits the kernel, the CPU restores ESP from the previously created "interrupt" stack frame, immediately before cleaning it up. (On legacy x86, the int/iret instruction pair handles entering and exiting kernel mode.)
While entering kernel mode, immediately after the CPU has created the "interrupt" stack frame, the kernel pushes the contents of the rest of the CPU registers onto the kernel stack. Note that it saves values only for those registers which can be used by kernel code. For example, the kernel doesn't save the contents of the SSE registers, simply because it will never touch them. Similarly, just before asking the CPU to return control to user mode, the kernel pops the previously saved contents back into the registers.
Note that in systems such as Windows and Linux there is a notion of a system thread (frequently called a kernel thread - I know it is confusing). System threads are a kind of special thread, because they execute only in kernel mode and thus have no user part of the stack. The kernel employs them for auxiliary housekeeping tasks.
A thread switch is performed only in kernel mode. That means that both threads, outgoing and incoming, run in kernel mode, both use their own kernel stacks, and both kernel stacks have "interrupt" frames with pointers to the tops of their user stacks. The key point of the thread switch is a switch between the kernel stacks of the threads, as simple as:
pushad                              ; save context of outgoing thread on the top of the kernel stack of the outgoing thread
; here the kernel uses the kernel stack of the outgoing thread
mov [TCB_of_outgoing_thread], ESP   ; save stack pointer in the TCB of the outgoing thread
mov ESP, [TCB_of_incoming_thread]   ; load stack pointer from the TCB of the incoming thread
; here the kernel uses the kernel stack of the incoming thread
popad                               ; restore context of incoming thread from the top of the kernel stack of the incoming thread
Note that there is only one function in the kernel that performs the thread switch. Because of this, each time the kernel switches stacks it can find the context of the incoming thread on the top of the stack - because every time, just before the stack switch, the kernel pushes the context of the outgoing thread onto its own stack.
Note also that every time after the stack switch, and before returning to user mode, the kernel reloads the CPU's record of the top of the kernel stack (on x86, the ESP0 field in the TSS) with the new value. By doing this it ensures that when the newly active thread tries to enter the kernel in the future, the CPU will switch it to its own kernel stack.
Note also that not all registers are saved on the stack during a thread switch; some registers, like the FPU/MMX/SSE registers, are saved in a specially dedicated area in the TCB of the outgoing thread. The kernel employs a different strategy here for two reasons. First, not every thread in the system uses them; pushing their contents to, and popping them from, the stack for every thread would be inefficient. Second, there are special instructions for "fast" saving and loading of their contents (FXSAVE/FXRSTOR on x86), and these instructions don't use the stack.
Note also that, in fact, the kernel part of the thread stack has a fixed size and is allocated as part of the TCB. (This is true for Linux, and I believe for Windows too.)
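That "allocated as part of the TCB" detail enabled a classic Linux trick: with the TCB (struct thread_info) at the base of a fixed-size, size-aligned kernel stack, masking any stack address recovers the TCB - this is how the current macro worked for years. A minimal user-space sketch of the idea; names and sizes here are illustrative, not the kernel's:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define KSTACK_SIZE 8192u   /* THREAD_SIZE on 32-bit Linux */

struct tcb { int tid; /* ... */ };

/* Mask the stack pointer down to the aligned stack base, where the TCB lives. */
static struct tcb *tcb_from_sp(uintptr_t sp)
{
    return (struct tcb *)(sp & ~(uintptr_t)(KSTACK_SIZE - 1));
}

int main(void)
{
    void *stack = aligned_alloc(KSTACK_SIZE, KSTACK_SIZE);
    struct tcb *t = stack;                               /* TCB at the stack base */
    t->tid = 42;
    uintptr_t sp = (uintptr_t)stack + KSTACK_SIZE - 64;  /* some address inside the stack */
    printf("tid recovered from sp: %d\n", tcb_from_sp(sp)->tid);  /* prints 42 */
    free(stack);
    return 0;
}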


Is it valid to write below ESP?

For a 32-bit windows application is it valid to use stack memory below ESP for temporary swap space without explicitly decrementing ESP?
Consider a function that returns a floating point value in ST(0). If our value is currently in EAX we would, for example,
PUSH EAX
FLD DWORD PTR [ESP]
ADD ESP,4 // or POP EAX, etc
// return...
Or without modifying the ESP register, we could just :
MOV [ESP-4], EAX
FLD DWORD PTR [ESP-4]
// return...
In both cases the same thing happens except that in the first case we take care to decrement the stack pointer before using the memory, and then to increment it afterwards. In the latter case we do not.
Notwithstanding any real need to persist this value on the stack (reentrancy issues, function calls between PUSHing and reading the value back, etc) is there any fundamental reason why writing to the stack below ESP like this would be invalid?
TL;DR: no - there are some SEH corner cases that can make it unsafe in practice, as well as it being documented as unsafe. @Raymond Chen recently wrote a blog post that you should probably read instead of this answer.
His example of a code-fetch page-fault I/O error that can be "fixed" by prompting the user to insert a CD-ROM and retry is also my conclusion for the only practically-recoverable fault if there aren't any other possibly-faulting instructions between store and reload below ESP/RSP.
Or if you ask a debugger to call a function in the program being debugged, it will also use the target process's stack.
This answer has a list of some things you'd think would potentially step on memory below ESP, but actually don't, which might be interesting. It seems to be only SEH and debuggers that can be a problem in practice.
First of all, if you care about efficiency, can't you avoid x87 in your calling convention? movd xmm0, eax is a more efficient way to return a float that was in an integer register. (And you can often avoid moving FP values to integer registers in the first place, using SSE2 integer instructions to pick apart exponent / mantissa for a log(x), or integer add 1 for nextafter(x).) But if you need to support very old hardware, then you need a 32-bit x87 version of your program as well as an efficient 64-bit version.
But there are other use-cases for small amounts of scratch space on the stack where it would be nice to save a couple instructions that offset ESP/RSP.
Trying to collect up the combined wisdom of other answers and discussion in comments under them (and on this answer):
It is explicitly documented as not safe by Microsoft (for 64-bit code; I didn't find an equivalent statement for 32-bit code, but I'm sure there is one):
Stack Usage (for x64)
All memory beyond the current address of RSP is considered volatile: The OS, or a debugger, may overwrite this memory during a user debug session, or an interrupt handler.
So that's the documentation, but the interrupt reason stated doesn't make sense for the user-space stack, only the kernel stack. The important part is that they document it as not guaranteed safe, not the reasons given.
Hardware interrupts can't use the user stack; that would let user-space crash the kernel with mov esp, 0, or worse take over the kernel by having another thread in the user-space process modify return addresses while an interrupt handler was running. This is why kernels always configure things so interrupt context is pushed onto the kernel stack.
Modern debuggers run in a separate process, and are not "intrusive". Back in 16-bit DOS days, without a multi-tasking protected-memory OS to give each task its own address space, debuggers would use the same stack as the program being debugged, between any two instructions while single-stepping.
@RossRidge points out that a debugger might want to let you call a function in the context of the current thread, e.g. with SetThreadContext. This would run with ESP/RSP just below the current value. This could obviously have side effects for the process being debugged (intentional on the part of the user running the debugger), but clobbering local variables of the current function below ESP/RSP would be an undesirable and unexpected side effect. (So compilers can't put them there.)
(In a calling convention with a red-zone below ESP/RSP, a debugger could respect that red-zone by decrementing ESP/RSP before making the function call.)
There are existing programs that intentionally break when being debugged at all, and consider this a feature (to defend against efforts to reverse-engineer them).
Related: the x86-64 System V ABI (Linux, OS X, all other non-Windows systems) does define a red-zone for user-space code (64-bit only): 128 bytes below RSP that is guaranteed not to be asynchronously clobbered. Unix signal handlers can run asynchronously between any two user-space instructions, but the kernel respects the red-zone by leaving a 128 byte gap below the old user-space RSP, in case it was in use. With no signal handlers installed, you have an effectively unlimited red-zone even in 32-bit mode (where the ABI does not guarantee a red-zone). Compiler-generated code, or library code, of course can't assume that nothing else in the whole program (or in a library the program called) has installed a signal handler.
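You can watch a compiler use that red-zone. In a leaf function, gcc -O2 on x86-64 System V typically spills locals to negative offsets from RSP without adjusting it; a small sketch (compiler output may vary - inspect with gcc -O2 -S):

#include <stdio.h>

/* Leaf function with a volatile local: on x86-64 System V, gcc -O2
   typically stores it at -8(%rsp) with no sub rsp - i.e. in the red
   zone - because no call or signal handler can clobber it there.
   noinline keeps the function visible in the asm output. */
__attribute__((noinline))
static double leaf(double a, double b)
{
    volatile double t = a + b;
    return t * b;
}

int main(void)
{
    printf("%f\n", leaf(1.5, 2.0));
    return 0;
}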
So the question becomes: is there anything on Windows that can asynchronously run code using the user-space stack between two arbitrary instructions? (i.e. any equivalent to a Unix signal handler.)
As far as we can tell, SEH (Structured Exception Handling) is the only real obstacle to what you propose for user-space code on current 32 and 64-bit Windows. (But future Windows could include a new feature.)
And I guess debugging, if you happen to ask your debugger to call a function in the target process/thread, as mentioned above.
In this specific case - not touching any memory other than the stack, or doing anything else that could fault - it's probably safe even from SEH.
SEH (Structured Exception Handling) lets user-space software have hardware exceptions like divide by zero delivered somewhat similarly to C++ exceptions. These are not truly asynchronous: they're for exceptions triggered by instructions you ran, not for events that happened to come after some random instruction.
But unlike normal exceptions, one thing a SEH handler can do is resume from where the exception occurred. (@RossRidge commented: SEH handlers are initially called in the context of the unwound stack and can choose to ignore the exception and continue executing at the point where the exception occurred.)
So that's a problem even if there's no catch() clause in the current function.
Normally HW exceptions can only be triggered synchronously. e.g. by a div instruction, or by a memory access which could fault with STATUS_ACCESS_VIOLATION (the Windows equivalent of a Linux SIGSEGV segmentation fault). You control what instructions you use, so you can avoid instructions that might fault.
If you limit your code to only accessing stack memory between the store and reload, and you respect the stack-growth guard page, your program won't fault from accessing [esp-4]. (Unless you reached the max stack size (Stack Overflow), in which case push eax would fault, too, and you can't really recover from this situation because there's no stack space for SEH to use.)
So we can rule out STATUS_ACCESS_VIOLATION as a problem, because if we get that on accessing stack memory we're hosed anyway.
An SEH handler for STATUS_IN_PAGE_ERROR could run before any load instruction. Windows can page out any page it wants to, and transparently page it back in when it's needed again (virtual memory paging). But if there's an I/O error, Windows attempts to let your process handle the failure by delivering a STATUS_IN_PAGE_ERROR exception.
Again, if that happens to the current stack, we're hosed.
But code-fetch could cause STATUS_IN_PAGE_ERROR, and you could plausibly recover from that - though not by resuming execution at the place where the exception occurred (unless we can somehow remap that page to another copy in a highly fault-tolerant system?), so we might still be OK here.
An I/O error paging in the code that wants to read what we stored below ESP rules out any chance of reading it. If you weren't planning to do that anyway, you're fine. A generic SEH handler that doesn't know about this specific piece of code wouldn't be trying to do that anyway. I think usually a STATUS_IN_PAGE_ERROR would at most try to print an error message or maybe log something, not try to carry on whatever computation was happening.
Accessing other memory in between the store and reload to memory below ESP could trigger a STATUS_IN_PAGE_ERROR for that memory. In library code, you probably can't assume that some other pointer you passed isn't going to be weird and the caller is expecting to handle STATUS_ACCESS_VIOLATION or PAGE_ERROR for it.
Current compilers don't take advantage of space below ESP/RSP on Windows, even though they do take advantage of the red zone in x86-64 System V (in leaf functions that need to spill/reload something, exactly like what you're doing for int -> x87). That's because MS documents it as unsafe, and they can't know whether SEH handlers exist that could try to resume after an exception.
Things that you'd think might be a problem in current Windows, and why they're not:
The guard page stuff below ESP: as long as you don't go too far below the current ESP, you'll be touching the guard page and trigger allocation of more stack space instead of faulting. This is fine as long as the kernel doesn't check user-space ESP and find out that you're touching stack space without having "reserved" it first.
kernel reclaim of pages below ESP/RSP: apparently Windows doesn't currently do this. So using a lot of stack space once ever will keep those pages allocated for the rest of your process lifetime, unless you manually VirtualAlloc(MEM_RESET) them. (The kernel would be allowed to do this, though, because the docs say memory below RSP is volatile. The kernel could effectively zero it asynchronously if it wants to, copy-on-write mapping it to a zero page instead of writing it to the pagefile under memory pressure.)
APCs (Asynchronous Procedure Calls): They can only be delivered when the process is in an "alertable state", which means only when inside a call to a function like SleepEx(0,1). Calling a function already uses an unknown amount of space below E/RSP, so you already have to assume that every call clobbers everything below the stack pointer. Thus these "async" callbacks are not truly asynchronous with respect to normal execution the way Unix signal handlers are. (Fun fact: POSIX async I/O does use signal handlers to run callbacks.)
Console-application callbacks for ctrl-C and other events (SetConsoleCtrlHandler). This looks exactly like registering a Unix signal handler, but in Windows the handler runs in a separate thread with its own stack. (See RbMm's comment)
SetThreadContext: another thread could change our EIP/RIP asynchronously while this thread is suspended, but the whole program has to be written specially for that to make any sense. Unless it's a debugger using it. Correctness is normally not required when some other thread is messing around with your EIP unless the circumstances are very controlled.
And apparently there are no other ways that another process (or something this thread registered) can trigger execution of anything asynchronously with respect to the execution of user-space code on Windows.
If there are no SEH handlers that could try to resume, Windows more or less has a 4096 byte red-zone below ESP (or maybe more if you touch it incrementally?), but RbMm says nobody takes advantage of it in practice. This is unsurprising because MS says not to, and you can't always know if your callers might have done something with SEH.
Obviously anything that would synchronously clobber it (like a call) must also be avoided, again same as when using the red-zone in the x86-64 System V calling convention. (See https://stackoverflow.com/tags/red-zone/info for more about it.)
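To make the "resume from where the exception occurred" behavior concrete, here is a minimal MSVC-style SEH sketch (the __try/__except compiler extension, Windows only). The key point is that the filter expression itself runs on the same user-mode stack, below the ESP/RSP at the exception point, so anything stored there is clobbered before execution resumes. The exception code 0xE0000001 is an arbitrary user-defined, continuable code chosen for the demo:

#include <windows.h>
#include <stdio.h>

static int filter(DWORD code)
{
    /* This filter runs on the faulting thread's own stack, below its
       stack pointer at the exception point. */
    printf("filter saw exception 0x%08lx\n", (unsigned long)code);
    return EXCEPTION_CONTINUE_EXECUTION;   /* resume where the exception occurred */
}

int main(void)
{
    __try {
        RaiseException(0xE0000001, 0, 0, NULL);   /* arbitrary continuable exception */
        puts("resumed after the exception");
    } __except (filter(GetExceptionCode())) {
        puts("never reached: the filter resumes instead of unwinding");
    }
    return 0;
}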
In the general case (on the x86/x64 platform), an interrupt can be executed at any time, and it overwrites memory below the stack pointer if it executes on the current stack. Because of this, even temporarily saving something below the stack pointer is not valid in kernel mode - an interrupt will use the current kernel stack. But in user mode the situation is different: Windows builds the interrupt table (IDT) in such a way that when an interrupt is raised, it is always executed in kernel mode on a kernel stack. As a result, the user-mode stack (below the stack pointer) is not affected, and it is possible to temporarily use some stack space below the stack pointer, as long as you make no function calls. If an exception occurs (say, by accessing an invalid address), the space below the stack pointer will also be overwritten - the CPU exception, of course, begins executing in kernel mode on the kernel stack, but the kernel then executes a callback in user space via ntdll's KiUserExceptionDispatcher, already on the current stack space. So in general this is valid in Windows user mode (in the current implementation), but you need to understand well what you are doing. However, I think this is very rarely used.
Of course, as correctly noted in the comments, the fact that we can write below the stack pointer in Windows user mode is just current implementation behavior; it is not documented or guaranteed.
But this is very fundamental and unlikely to change: interrupts will always be executed in privileged kernel mode only, and kernel mode will use only the kernel-mode stack. The user-mode context is not trusted at all. What would happen if a user-mode program set an invalid stack pointer, say via
mov rsp, 1 or mov esp, 1, and an interrupt was raised just after this instruction? What would happen if the handler began executing on such an invalid ESP/RSP? The whole operating system would simply crash. It is exactly because of this that interrupts are executed only on the kernel stack, and do not overwrite user stack space.
Also note that the stack is a limited space (even in user mode); accessing it more than one page (4 KB) below the stack pointer is already an error (you need to do stack probing, page by page, to move the guard page down). A sketch of that probing is below.
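Here is a sketch of that page-by-page probing (conceptually what MSVC's _chkstk helper does automatically for frames larger than a page): each recursive frame is about one page, so touching it hits the guard page in order and commits more stack. Compile without optimization so the frames aren't collapsed; the function name is illustrative:

#include <stddef.h>

/* Commit `bytes` of stack by touching one page per frame, top-down. */
static void probe_stack(size_t bytes)
{
    volatile char page[4096];   /* one page of locals in this frame */
    page[0] = 0;                /* touch it: hits the guard page in order */
    if (bytes > sizeof page)
        probe_stack(bytes - sizeof page);   /* recurse one page deeper */
}

int main(void)
{
    probe_stack(64 * 1024);     /* commit 64 KB of stack */
    return 0;
}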
And finally, there is usually no real need to do MOV [ESP-4], EAX at all - what is the problem with decrementing ESP first? Even if we need to access the stack space in a loop a huge number of times, the stack pointer needs to be decremented only once - one additional instruction (outside the loop), changing nothing in performance or code size.
So although this will formally work correctly in Windows user mode, it is better not to use it (and there is no need to).
Of course, the formal documentation says:
Stack Usage
All memory beyond the current address of RSP is considered volatile
But this is for the common case, including kernel mode. I wrote about user mode, based on the current implementation.
Possibly a future Windows will add "direct" APCs or some "direct" signals - code executed via callback just after the thread enters the kernel (during a usual hardware interrupt). After that, everything below ESP would be undefined. But until that exists, this code will always work correctly (in current builds).
In general (not specifically related to any OS); it's not safe to write below ESP if:
It's possible for the code to be interrupted and the interrupt handler will run at the same privilege level. Note: This is typically very unlikely for "user-space" code, but extremely likely for kernel code.
You call any other code (where either the call or the stack used by the called routine can trash the data you stored below ESP)
Something else depends on "normal" stack use. This can include signal handling, (language based) exception unwinding, debuggers, "stack smashing protector"
It's safe to write below ESP if it's not "not safe".
Note that for 64-bit code, writing below RSP is built into the x86-64 ABI ("red zone"); and is made safe by support for it in tool chains/compilers and everything else.
When a thread gets created, Windows reserves a contiguous region of virtual memory of a configurable size (the default is 1 MB) for the thread's stack. Initially, the stack looks like this (the stack grows downwards):
--------------
| committed  |
--------------
| guard page |
--------------
|     .      |
|  reserved  |
|     .      |
|     .      |
|            |
--------------
ESP will be pointing somewhere inside the committed page. The guard page is used to support automatic stack growth. The reserved pages region ensures that the requested stack size is available in virtual memory.
Consider the two instructions from the question:
MOV [ESP-4], EAX
FLD DWORD PTR [ESP-4]
There are three possibilities:
The first instruction executes successfully. There is nothing that uses the user-mode stack that can execute between the two instructions, so the second instruction will use the correct value (@RbMm stated this in the comments under his answer, and I agree).
The first instruction raises an exception, and an exception handler does not return EXCEPTION_CONTINUE_EXECUTION. As long as the second instruction is immediately after the first one (it is not in the exception handler or placed after it), the second instruction will not execute, so you're still safe. Execution continues from the stack frame where the exception handler exists.
The first instruction raises an exception, and an exception handler returns EXCEPTION_CONTINUE_EXECUTION. Execution continues from the same instruction that raised the exception (potentially with a context modified by the handler). In this particular example, the first instruction will be re-executed to write a value below ESP - no problem. But if the second instruction raised an exception, or there were more than two instructions, the exception might occur at a point after a value has been written below ESP. When the exception handler gets called, it may overwrite the value and then return EXCEPTION_CONTINUE_EXECUTION; when execution resumes, the value written is assumed to still be there, but it no longer is. This is a situation where it's not safe to write below ESP. It applies even if all of the instructions are placed consecutively. Thanks to @RaymondChen for pointing this out.
In general, if the two instructions are not placed back-to-back and you are writing to locations beyond ESP, there is no guarantee that the written values won't get corrupted or overwritten. One case I can think of where this might happen is structured exception handling (SEH). If a hardware-defined exception (such as divide by zero) occurs, the kernel's exception dispatcher is invoked, which transfers control to the user-mode side of the dispatcher (ntdll's KiUserExceptionDispatcher), which in turn invokes RtlDispatchException. When switching from user mode to kernel mode and then back to user mode, whatever value was in ESP will be saved and restored. However, the user-mode dispatcher itself uses the user-mode stack and iterates over a registered list of exception handlers, each of which uses the user-mode stack. These functions modify ESP as required. This may lead to losing the values you've written beyond ESP. A similar situation occurs when using software-defined exceptions (throw in VC++).
I think you can deal with this by registering your own exception handler before any other exception handlers (so that it is called first). When your handler gets called, you can save your data from beyond ESP somewhere else. Later, during unwinding, you get the opportunity to restore your data to the same location (or any other location) on the stack.
You need also to similarly watch out for asynchronous procedure calls (APCs) and callbacks.
Several answers here mention APCs (Asynchronous Procedure Calls), saying that they can only be delivered when the process is in an "alertable state", and are not truly asynchronous with respect to normal execution the way Unix signal handlers are.
Windows 10 version 1809 introduces Special User APCs, which can fire at any moment just like Unix signals. See this article for low level details.
The Special User APC is a mechanism that was added in RS5 (and exposed through NtQueueApcThreadEx), but lately (in an insider build) was exposed through a new syscall - NtQueueApcThreadEx2. If this type of APC is used, the thread is signaled in the middle of the execution to execute the special APC.
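For contrast with the Special User APC, here is the classic, documented mechanism: QueueUserAPC() only delivers when the target thread performs an alertable wait, e.g. SleepEx(..., TRUE), so it is not asynchronous with respect to arbitrary instructions. A minimal sketch (the Sleep(100) is just a crude way to let the worker reach its alertable wait):

#include <windows.h>
#include <stdio.h>

static void CALLBACK apc_fn(ULONG_PTR param)
{
    printf("APC ran with param %lu\n", (unsigned long)param);
}

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    puts("worker: entering alertable sleep");
    SleepEx(INFINITE, TRUE);        /* alertable: queued APCs run here */
    puts("worker: woke up after APC");
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    Sleep(100);                     /* let the worker reach SleepEx */
    QueueUserAPC(apc_fn, h, 42);    /* delivered only at the alertable wait */
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}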

How does the kernel manage user-space threads in Linux?

I have read this: Linux - Threads and Process
I understood that every kernel thread has a unique task_struct.
But right now my question is: how does the kernel manage a user application's threads? Suppose a user application has 12 threads - how does the kernel manage them, and does every thread have a unique task_struct like kernel threads do?
The kernel manages them when it can, i.e. whenever it is entered from an 'interrupt' that changes the state of the threads.
There are two flavours of interrupt: either a syscall from a running thread, or a call from a driver that has been entered from a 'real' hardware interrupt from the keyboard, NIC, disk, timer, etc. Either can change the state of threads and initiate a scheduling algorithm run that may change the set of threads running on the available cores.
In between interrupts, the kernel manages nothing because it is not entered.
A task_struct is created when a running thread makes a syscall to create a new thread. The new thread is created ready, and will run whenever the scheduling algorithm dispatches it onto a core.
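You can observe the one-task_struct-per-thread model from user space: every pthread in a process shares the process PID but has its own kernel thread ID (the ID of its task_struct). A small sketch:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *worker(void *arg)
{
    /* Same PID for every thread; a distinct TID per task_struct. */
    printf("thread %ld: pid=%d tid=%ld\n",
           (long)arg, (int)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;   /* build with: gcc -pthread */
}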

Nested Interrupt Handling in ARM

Below is the flow mentioned in the Cortex-A Programmer's Guide; I have a few questions on the text.
A reentrant interrupt handler must therefore take the following steps after an IRQ exception is raised and control is transferred to the interrupt handler in the way previously described.
• The interrupt handler saves the context of the interrupted program (that is, it pushes onto the alternative kernel mode stack any registers which will be corrupted by the handler, including the return address and SPSR_IRQ).
Q> What is the alternative kernel mode stack here?
• It determines which interrupt source needs to be processed and clears the source in the external hardware (preventing it from immediately triggering another interrupt).
• The interrupt handler changes the processor to the other kernel mode, leaving the CPSR I bit set (interrupts are still disabled).
Q> From IRQ to SVC mode with CPSR.I = 1, right?
• The interrupt handler saves the exception return address on the stack (a stack for the new mode, located in kernel memory) and re-enables interrupts.
Q> Are there 2 stacks here?
• It calls the appropriate C handler for the original interrupt (interrupts are still disabled).
• Upon completion, the interrupt handler disables IRQ and pops the exception return address from the stack.
• It restores the context of the interrupted program directly from the alternative kernel mode stack. This includes restoring the PC, and the CPSR which switches back to the previous execution mode.
Q> How is the nesting done here? I am a bit confused here...
1) Up to you, really. The requirement is that it is a mode that cannot be asynchronously invoked. So you can use the System mode stack, which is shared with User mode - with some interesting implications. Or you can use the Supervisor mode stack, as long as you always properly store all context before executing an SVC instruction.
2) Yes.
3) Yes, you store the context on a stack for whichever mode you picked in (1).
4) While executing in the alternative mode, you re-enable interrupts (as your text states). At this point, the processor will now react to new interrupts signalled to the core - generally ones of a higher priority as configured in your interrupt controller.

How does the kernel know if the CPU is in user mode or kernel mode?

Since the CPU runs in user/kernel mode, I want to know how this is determined by the kernel. I mean, if a sys call is invoked, the kernel executes it on behalf of the process, but how does the kernel know that it is executing in kernel mode?
You can tell if you're in user mode or kernel mode from the privilege level set in the code-segment register (CS). Every instruction loaded into the CPU from the memory pointed to by the RIP or EIP register (the instruction-pointer register, depending on whether you are on x86_64 or x86 respectively) is fetched from the segment described in the global descriptor table (GDT) by the current code-segment selector. The lower two bits of that selector determine the current privilege level (CPL) the code is executing at. When a syscall is made, which is typically done through a software interrupt, the CPU checks the current privilege level and, if it's in user mode, exchanges the current code segment for a kernel-level one as determined by the syscall's software-interrupt gate descriptor, makes a stack switch, and saves the current flags, the user-level CS value, and the RIP value on this new kernel-level stack. When the syscall is complete, the user-mode CS value, flags, and instruction pointer (EIP or RIP) are restored from the kernel stack, and a stack switch is made back to the currently executing process's stack.
Broadly, if it's running kernel code, it's in kernel mode. The transition from user space to kernel mode (say, for a system call) causes a switch to occur; as part of this switch, the CPU privilege level is changed.
Kernel code only executes in kernel mode. There is no way kernel code can execute in user mode. When an application calls a system call, it generates a trap (software interrupt); the mode is switched to kernel mode, and the kernel's implementation of the system call is executed. Once it is done, the kernel switches back to user mode, and the user application continues processing in user mode.
The term is "Supervisor Mode", which applies to x86/ARM and many other processors as well.
Read this (which applies only to x86 CPUs):
http://en.wikipedia.org/wiki/Ring_(computer_security)
Rings 0 to 3 are the different privilege levels of the x86 CPU. Normally only ring 0 and ring 3 are used (kernel and user), but nowadays ring 1 finds uses (e.g., VMware used it to emulate the guest's execution of ring 0). Only ring 0 has the full privilege to run certain privileged instructions (like lgdt or lidt), so a good test at the assembly level is, of course, to execute such an instruction and see whether your program encounters an exception or not.
Read this to really identify your current privilege level (look for CPL, which is a pictorialization of Jason's answer):
http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
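A quick way to see your current privilege level from user space is to read CS and mask its low two bits (the CPL); this uses GCC/Clang inline asm and prints 3 when run as a normal user program:

#include <stdio.h>

int main(void)
{
    unsigned long cs;
    __asm__("mov %%cs, %0" : "=r"(cs));   /* low 2 bits of the selector = CPL */
    printf("CS = 0x%lx, CPL = %lu\n", cs, cs & 3);
    return 0;
}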
The question is how the CPU comes to know whether it is in kernel mode or user mode.
The answer, on many architectures, is a "mode bit": a bit in a status register of the CPU's register set.
When the mode bit is 0, the CPU is in kernel mode (also called monitor mode, privileged mode, supervisor mode, and many other names).
When the mode bit is 1, the CPU is in user mode, and user applications run without any special kernel privileges.
(On x86 specifically there is no single mode bit; the CPL in the low two bits of CS plays this role, as described above.)

Stack Pointer Register values

When a thread is executing in kernel mode, will the stack pointer point to its kernel-mode stack? Similarly, will it point to the user-mode stack when the thread is running in user mode?
Thanks.
It depends on what kind of task is executing: a user process itself runs in kernel mode whenever it is executing a system call or exception on recent Linux kernels. There are also multiple stacks per process and per mode, so your reference to "the stack pointer" is a little vague, IMHO. But in general, yes: while a thread executes in kernel mode, the stack pointer points into its kernel-mode stack, and while it executes in user mode, it points into its user-mode stack (see the first answer at the top - the user SS:ESP is saved on the kernel stack at kernel entry and restored at exit).
