Nested Interrupt Handling in ARM - linux-kernel

Below is the flow described in the Cortex-A Programmer's Guide; I have a few questions on the text.
A reentrant interrupt handler must therefore take the following steps after an IRQ exception is raised and control is transferred to the interrupt handler in the way previously described.
• The interrupt handler saves the context of the interrupted program (that is, it pushes onto the alternative kernel mode stack any registers which will be corrupted by the handler, including the return address and SPSR_IRQ).
Q> What is the alternative kernel mode stack here?
• It determines which interrupt source needs to be processed and clears the source in the external hardware (preventing it from immediately triggering another interrupt).
• The interrupt handler changes the processor to the other kernel mode, leaving the CPSR I bit set (interrupts are still disabled).
Q> From IRQ to SVC mode with CPSR.I = 1. Right?
• The interrupt handler saves the exception return address on the stack (a stack for the new mode, located in kernel memory) and re-enables interrupts.
Q> Are there 2 stacks here?
• It calls the appropriate C handler for the original interrupt (interrupts are still disabled).
• Upon completion, the interrupt handler disables IRQ and pops the exception return address from the stack.
• It restores the context of the interrupted program directly from the alternative kernel mode stack. This includes restoring the PC, and the CPSR which switches back to the previous execution mode.
Q> How is the nesting done here? I am a bit confused here...

1) Up to you, really. The requirement is that it belongs to a mode which cannot be entered asynchronously. So you can use the System mode stack, which is shared with User mode (with some interesting implications), or you can use the Supervisor mode stack, as long as you always properly save all context before executing an SVC instruction.
2) Yes.
3) Yes, you store the context on the stack of whichever mode you picked in (1).
4) While executing in the alternative mode, you re-enable interrupts (as your text states). At this point the processor will react to new interrupts signalled to the core - generally ones of a higher priority, as configured in your interrupt controller. This is where the nesting happens: because you have already left IRQ mode and saved the original return address and SPSR_IRQ, a new IRQ exception can safely bank its own LR_irq and SPSR_IRQ without corrupting anything, and the same handler sequence simply runs again, one level deeper, on the alternative mode's stack.

Related

When we use irq_set_chained_handler, will the IRQ line be disabled or not?

When we use irq_set_chained_handler, is the IRQ line disabled or not while we are servicing the associated handler, as it is in the case of request_irq?
It doesn't matter how the interrupt was set up. When any interrupt occurs, all interrupts (for this CPU) are disabled during the interrupt handler. For example, on the ARM architecture, the first place in C code where interrupt handling appears is the asm_do_IRQ() function (defined in arch/arm/kernel/irq.c), which is called from assembler code. For any interrupt (whether it was requested by request_irq() or by irq_set_chained_handler()) the same asm_do_IRQ() function is called, and interrupts are disabled automatically by the ARM CPU. See this answer for details.
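For reference, here is a rough sketch of the two registration paths being compared (the foo_* names and IRQ numbers are hypothetical); in both cases the handler itself runs in hard-IRQ context with interrupts disabled on the local CPU:
#include <linux/interrupt.h>
#include <linux/irq.h>

/* Path 1: an ordinary device handler registered with request_irq(). */
static irqreturn_t foo_device_isr(int irq, void *dev_id)
{
	/* short, non-sleeping work only; local interrupts are off here */
	return IRQ_HANDLED;
}

/* Path 2: a chained flow handler registered with irq_set_chained_handler()
 * (flow-handler signature as in recent kernels). */
static void foo_demux_handler(struct irq_desc *desc)
{
	/* demultiplex and dispatch child interrupts here */
}

static int foo_setup(void *foo_dev)
{
	int err;

	err = request_irq(42 /* hypothetical IRQ */, foo_device_isr, 0,
			  "foo-device", foo_dev);
	if (err)
		return err;

	irq_set_chained_handler(43 /* hypothetical parent IRQ */, foo_demux_handler);
	return 0;
}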
Historical notes
Also, it is worth mentioning that some time ago the Linux kernel provided two types of interrupts: "fast" and "slow" ones. Fast interrupts (registered with the IRQF_DISABLED or SA_INTERRUPT flag) ran with interrupts disabled, and their handlers were supposed to be very short and quick. Slow interrupts, on the other hand, ran with interrupts re-enabled, because their handlers might take much more time.
On modern versions of the Linux kernel, all interrupts are considered "fast" and run with interrupts disabled. Interrupts with heavy handlers should be implemented as threaded handlers (or re-enable interrupts manually in the ISR using local_irq_enable_in_hardirq()).
That behavior was changed in Linux kernel v2.6.35 by this commit. You can find more details about this here.
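For illustration, a minimal sketch of the threaded style mentioned above, assuming hypothetical foo_* names and IRQ number and using request_threaded_irq():
#include <linux/interrupt.h>

/* Quick hard-IRQ part: runs with interrupts disabled, just acknowledges the
 * device and wakes the handler thread. */
static irqreturn_t foo_quick_check(int irq, void *dev_id)
{
	return IRQ_WAKE_THREAD;
}

/* Threaded part: runs in a kernel thread with interrupts enabled, so heavy
 * work is fine here. */
static irqreturn_t foo_thread_fn(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int foo_init_irq(void *dev)
{
	return request_threaded_irq(42 /* hypothetical */, foo_quick_check,
				    foo_thread_fn, 0, "foo", dev);
}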
Refer to https://www.kernel.org/doc/Documentation/gpio/driver.txt:
This means the GPIO irqchip is registered using
irq_set_chained_handler() or the corresponding
gpiochip_set_chained_irqchip() helper function, and the GPIO irqchip
handler will be called immediately from the parent irqchip, while
holding the IRQs disabled. The GPIO irqchip will then end up calling
something like this sequence in its interrupt handler:
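(The snippet itself is not quoted above; what follows is a sketch of the sequence the documentation describes, with hypothetical foo_* names and register layout, built around the real chained_irq_enter()/generic_handle_irq()/chained_irq_exit() helpers.)
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>

/* Hypothetical per-bank driver state and register offset. */
struct foo_gpio {
	void __iomem *base;
	struct irq_domain *domain;
	unsigned int ngpio;
};
#define FOO_IRQ_STATUS 0x10	/* hypothetical status register */

/* Chained flow handler (signature as in recent kernels): called directly from
 * the parent irqchip with IRQs disabled, it demultiplexes the bank's status
 * register and hands each pending pin off to its virtual IRQ. */
static void foo_gpio_irq_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct foo_gpio *fg = irq_desc_get_handler_data(desc);
	unsigned long pending;
	unsigned int offset;

	chained_irq_enter(chip, desc);		/* ack/mask at the parent */

	pending = readl(fg->base + FOO_IRQ_STATUS);
	for_each_set_bit(offset, &pending, fg->ngpio)
		generic_handle_irq(irq_find_mapping(fg->domain, offset));

	chained_irq_exit(chip, desc);		/* eoi/unmask at the parent */
}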

Should my interrupt handler disable interrupts or does the ARM processor do it automatically?

Our group is using a custom driver to interface four MAX3107 UARTs on a shared I2C bus. The interrupts of the four MAX3107s are connected (i.e. a shared interrupt via logical OR-ing) to a GPIO pin on the ARM9 processor (LPC3180 module). When one or more of these devices interrupt, they pull the GPIO line, which is configured as a level-sensitive interrupt, low. My question concerns the need, or not, to disable the specific interrupt line in the handler code. (I should add that we are running Linux 2.6.10.)
Based on my reading of several ARM-specific app notes on interrupts, it seems that when the ARM processor receives an interrupt, it automatically disables (masks?) the corresponding interrupt line (in our case this would seem to be the line corresponding to the GPIO pin we selected). If this is true, then it seems that we should not have to disable interrupts for this GPIO pin in our interrupt handler code as doing so would seem redundant (though it seems to work okay). Stated differently, it seems to me that if the ARM processor automatically disables the GPIO interrupt upon an interrupt occurring, then if anything, our interrupt handler code should only have to re-enable the interrupt once the device is serviced.
The interrupt handler code that we are using includes disable_irq_nosync(irqno); at the very beginning of the handler and a corresponding enable_irq() at the end of the handler. If the ARM processor has already disabled the interrupt line (in hardware), what is the effect of these calls (i.e. a call to disable_irq_nosync() followed by a call to enable_irq())?
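For concreteness, the handler shape being described looks roughly like this (hypothetical foo_* name; the MAX3107 servicing itself is elided):
#include <linux/interrupt.h>

static irqreturn_t foo_shared_gpio_isr(int irq, void *dev_id)
{
	disable_irq_nosync(irq);	/* the question: is this needed at all? */

	/* poll each MAX3107 on the shared line and service the ones that
	 * are asserting the interrupt */

	enable_irq(irq);		/* balances the disable above */
	return IRQ_HANDLED;
}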
From the Arm Information Center Documentation:
On entry to an exception (interrupt):
interrupt requests (IRQs) are disabled for all exceptions
fast interrupt requests (FIQs) are disabled for FIQ and Reset exceptions.
It then goes on to say:
Handling an FIQ causes IRQs and subsequent FIQs to be disabled,
preventing them from being handled until after the FIQ handler enables
them. This is usually done by restoring the CPSR from the SPSR at the
end of the handler.
So you do not have to worry about disabling them, but you do have to worry about re-enabling them.
You will need to include enable_irq() at the end of your routine, but you shouldn't need to disable anything at the beginning. I wouldn't think that calling disable_irq_nosync(irqno) in software after it has already happened in hardware would affect anything, since the hardware masking certainly takes place before the software call has a chance to take effect. But it's probably better to remove it from the code to follow convention and not confuse the next programmer who takes a look at it.
More info here:
Arm Information Center

Context switch internals

I want to learn and fill gaps in my knowledge with the help of this question.
So, a user is running a thread (kernel-level) and it now calls yield (a system call I presume).
The scheduler must now save the context of the current thread in the TCB (which is stored somewhere in the kernel), choose another thread to run, load its context and jump to its CS:EIP.
To narrow things down, I am working on Linux running on top of x86 architecture. Now, I want to get into the details:
So, first we have a system call:
1) The wrapper function for yield will push the system call arguments onto the stack, push the return address, and raise an interrupt with the system call number placed in some register (say EAX).
2) The interrupt changes the CPU mode from user to kernel and jumps through the interrupt vector table to the actual system call handler in the kernel.
3) I guess the scheduler gets called now, and it must save the current state in the TCB. Here is my dilemma: since the scheduler will use the kernel stack and not the user stack for performing its operation (which means the SS and SP have to be changed), how does it store the state of the user without modifying any registers in the process? I have read on forums that there are special hardware instructions for saving state, but then how does the scheduler get access to them, and who runs these instructions and when?
4) The scheduler now stores the state into the TCB and loads another TCB.
5) When the scheduler runs the original thread again, control gets back to the wrapper function, which clears the stack, and the thread resumes.
Side questions: Does the scheduler run as a kernel-only thread (i.e. a thread which can run only kernel code)? Is there a separate kernel stack for each kernel-thread or each process?
At a high level, there are two separate mechanisms to understand. The first is the kernel entry/exit mechanism: this switches a single running thread from running usermode code to running kernel code in the context of that thread, and back again. The second is the context switch mechanism itself, which switches in kernel mode from running in the context of one thread to another.
So, when Thread A calls sched_yield() and is replaced by Thread B, what happens is:
Thread A enters the kernel, changing from user mode to kernel mode;
Thread A in the kernel context-switches to Thread B in the kernel;
Thread B exits the kernel, changing from kernel mode back to user mode.
Each user thread has both a user-mode stack and a kernel-mode stack. When a thread enters the kernel, the current value of the user-mode stack (SS:ESP) and instruction pointer (CS:EIP) are saved to the thread's kernel-mode stack, and the CPU switches to the kernel-mode stack; with the int $0x80 syscall mechanism, this is done by the CPU itself. The remaining register values and flags are then also saved to the kernel stack.
When a thread returns from the kernel to user-mode, the register values and flags are popped from the kernel-mode stack, then the user-mode stack and instruction pointer values are restored from the saved values on the kernel-mode stack.
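As a small user-space illustration of that entry/exit path (assuming glibc on Linux), the same kernel entry can be exercised through the libc wrapper or through an explicit raw system call:
/* Both calls trap into the kernel (via int $0x80, sysenter or syscall,
 * depending on the architecture and libc) and return to user mode as
 * described above. Build with: cc -o yield yield.c */
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	sched_yield();             /* libc wrapper around the system call */
	syscall(SYS_sched_yield);  /* raw trap with the syscall number passed explicitly */
	printf("back in user mode\n");
	return 0;
}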
When a thread context-switches, it calls into the scheduler (the scheduler does not run as a separate thread - it always runs in the context of the current thread). The scheduler code selects a process to run next, and calls the switch_to() function. This function essentially just switches the kernel stacks - it saves the current value of the stack pointer into the TCB for the current thread (called struct task_struct in Linux), and loads a previously-saved stack pointer from the TCB for the next thread. At this point it also saves and restores some other thread state that isn't usually used by the kernel - things like floating point/SSE registers. If the threads being switched don't share the same virtual memory space (i.e. they're in different processes), the page tables are also switched.
So you can see that the core user-mode state of a thread isn't saved and restored at context-switch time - it's saved and restored to the thread's kernel stack when you enter and leave the kernel. The context-switch code doesn't have to worry about clobbering the user-mode register values - those are already safely saved away in the kernel stack by that point.
What you missed during step 2 is that the stack gets switched from the thread's user-level stack (where you pushed the args) to the thread's protected-level stack. The current context of the thread interrupted by the syscall is actually saved on this protected stack. Inside the ISR, and just before entering the kernel, this protected stack is again switched to the kernel stack you are talking about. Once inside the kernel, kernel functions such as the scheduler's functions eventually use the kernel stack. Later on, a thread gets elected by the scheduler and the system returns to the ISR; it switches back from the kernel stack to the newly elected (or the former, if no higher-priority thread is active) thread's protected-level stack, which eventually contains the new thread's context. Therefore the context is restored from this stack by code automatically (depending on the underlying architecture). Finally, a special instruction restores the last sensitive registers, such as the stack pointer and the instruction pointer. Back in userland...
To sum up, a thread (generally) has two stacks, and the kernel itself has one. The kernel stack gets wiped at the end of each kernel entry. It's interesting to point out that since 2.6, the kernel itself is threaded for some processing; therefore a kernel thread has its own protected-level stack besides the general kernel stack.
Some resources:
Section 3.3.3, "Performing the Process Switch", of Understanding the Linux Kernel, O'Reilly
Section 5.12.1, "Exception- or Interrupt-Handler Procedures", of Intel's manual 3A (system programming). The chapter number may vary from edition to edition, so a lookup on "Stack Usage on Transfers to Interrupt and Exception-Handling Routines" should get you to the right one.
Hope this helps!
The kernel itself has no stack at all. The same is true for a process: it also has no stack. Threads are the only system citizens that are considered execution units; because of this, only threads can be scheduled and only threads have stacks. But there is one point that kernel-mode code exploits heavily: at every moment the system works in the context of the currently active thread. Because of this, the kernel itself can reuse the stack of the currently active thread. Note that only one of them can execute at any given moment: either kernel code or user code. So when the kernel is invoked, it simply reuses the thread's stack and performs a cleanup before returning control to the interrupted activities in the thread. The same mechanism works for interrupt handlers, and the same mechanism is exploited by signal handlers.
In turn, the thread stack is divided into two isolated parts: one called the user stack (because it is used when the thread executes in user mode), and the other called the kernel stack (because it is used when the thread executes in kernel mode). Once a thread crosses the border between user and kernel mode, the CPU automatically switches it from one stack to the other. Both stacks are tracked differently by the kernel and the CPU. For the kernel stack, the CPU permanently keeps a pointer to the top of the thread's kernel stack. That is easy, because this address is constant for the thread. Each time the thread enters the kernel it finds an empty kernel stack, and each time it returns to user mode it leaves the kernel stack clean. At the same time, the CPU does not keep a pointer to the top of the user stack while the thread runs in kernel mode. Instead, during entry to the kernel, the CPU creates a special "interrupt" stack frame on top of the kernel stack and stores the value of the user-mode stack pointer in that frame. When the thread exits the kernel, the CPU restores ESP from the previously created "interrupt" stack frame, immediately before cleaning it up. (On legacy x86, the int/iret instruction pair handles entry to and exit from kernel mode.)
During entry to kernel mode, immediately after the CPU has created the "interrupt" stack frame, the kernel pushes the contents of the rest of the CPU registers onto the kernel stack. Note that it saves values only for those registers which can be used by kernel code. For example, the kernel doesn't save the contents of the SSE registers, simply because it will never touch them. Similarly, just before asking the CPU to return control back to user mode, the kernel pops the previously saved contents back into the registers.
Note that in systems such as Windows and Linux there is a notion of a system thread (frequently called a kernel thread; I know it is confusing). System threads are a special kind of thread, because they execute only in kernel mode and therefore have no user part of the stack. The kernel employs them for auxiliary housekeeping tasks, as in the sketch below.
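A sketch of what creating such a system thread looks like on Linux (hypothetical foo_* names, using the real kthread API):
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>

/* The thread function runs only in kernel mode, so this thread has no
 * user-mode part of the stack; it just does periodic housekeeping. */
static int foo_housekeeper(void *data)
{
	while (!kthread_should_stop())
		msleep(1000);	/* placeholder for the real housekeeping work */
	return 0;
}

static struct task_struct *foo_task;

static int foo_start_housekeeper(void)
{
	foo_task = kthread_run(foo_housekeeper, NULL, "foo_housekeeper");
	return IS_ERR(foo_task) ? PTR_ERR(foo_task) : 0;
}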
A thread switch is performed only in kernel mode. That means that both the outgoing and the incoming thread run in kernel mode, both use their own kernel stacks, and both kernel stacks have "interrupt" frames with pointers to the tops of the user stacks. The key part of a thread switch is a switch between the kernel stacks of the threads, as simple as:
pushad; // save context of outgoing thread on the top of the kernel stack of the outgoing thread
; here the kernel uses the kernel stack of the outgoing thread
mov [TCB_of_outgoing_thread], ESP;
mov ESP, [TCB_of_incoming_thread];
; here the kernel uses the kernel stack of the incoming thread
popad; // restore context of incoming thread from the top of the kernel stack of the incoming thread
Note that there is only one function in the kernel that performs the thread switch. Because of this, each time the kernel switches stacks it can find the context of the incoming thread on the top of that stack, simply because every time before a stack switch the kernel pushes the context of the outgoing thread onto its own stack.
Note also that every time after a stack switch, and before returning to user mode, the kernel reloads the CPU's record of the top of the kernel stack with the new value. By doing this it ensures that when the newly active thread tries to enter the kernel in the future, the CPU will switch it to its own kernel stack.
Note also that not all registers are saved on the stack during a thread switch; some registers, like the FPU/MMX/SSE registers, are saved in a specially dedicated area in the TCB of the outgoing thread. The kernel employs a different strategy here for two reasons. First, not every thread in the system uses them, so pushing their contents onto the stack and popping them off again for every thread would be inefficient. Second, there are special instructions for "fast" saving and loading of their contents, and these instructions don't use the stack.
Note also that the kernel part of the thread stack in fact has a fixed size and is allocated as part of the TCB (true for Linux, and I believe for Windows too).
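As a user-space analogy of the stack-switch idea in the pushad/mov/popad sequence above (not kernel code, just POSIX ucontext standing in for the two TCBs and kernel stacks):
/* Each context owns its own stack; swapcontext() saves the current register
 * state (including the stack pointer) into one context and loads another,
 * which is the same trick the kernel plays when it switches kernel stacks.
 * Build with: cc -o ctx ctx.c */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;   /* stand-ins for two TCBs */

static void fake_thread(void)
{
	puts("fake thread: running on its own stack");
	swapcontext(&thread_ctx, &main_ctx);  /* "yield" back to main */
	puts("fake thread: resumed, finishing");
}

int main(void)
{
	char *stack = malloc(64 * 1024);      /* the other "thread's" stack */

	getcontext(&thread_ctx);
	thread_ctx.uc_stack.ss_sp = stack;
	thread_ctx.uc_stack.ss_size = 64 * 1024;
	thread_ctx.uc_link = &main_ctx;       /* where to continue when it returns */
	makecontext(&thread_ctx, fake_thread, 0);

	puts("main: switching to fake thread");
	swapcontext(&main_ctx, &thread_ctx);  /* save main's state, load the thread's */
	puts("main: back, switching once more");
	swapcontext(&main_ctx, &thread_ctx);
	puts("main: done");
	free(stack);
	return 0;
}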

Can an interrupt handler be preempted by the same interrupt handler?

Does the CPU disable all interrupts on local CPU before calling the interrupt handler?
Or does it only disable that particular interrupt line, which is being served?
x86 disables all local interrupts (except NMI, of course) before jumping to the interrupt vector. Linux normally masks the specific interrupt and re-enables the rest of the interrupts (which aren't masked), unless a specific flag is passed when the interrupt handler is registered.
Note that while this means your interrupt handler will not race with itself on the same CPU, it can and will race with itself running on other CPUs in an SMP / SMT system.
Normally (at least in x86), an interrupt disables interrupts.
When an interrupt is received, the hardware does these things:
1. Save all registers in a predetermined place.
2. Set the instruction pointer (AKA program counter) to the interrupt handler's address.
3. Set the register that controls interrupts to a value that disables all (or most) interrupts. This prevents another interrupt from interrupting this one.
An exception is NMI (non maskable interrupt) which can't be disabled.
Yes, that's fine.
I'd like to also add what I think might be relevant.
In many real-world drivers and kernel code, "bottom-half" (bh) handlers are used pretty often: tasklets and softirqs. These bh's run in interrupt context and can run in parallel with their top-half (th) handlers on SMP (especially softirqs).
Of course, there has recently been a move towards mainline (mainly code migrated from the PREEMPT_RT project) that essentially gets rid of the 'bh' mechanism: all interrupt handlers will run with all interrupts disabled. Not only that, handlers can be converted to kernel threads; these are the so-called "threaded" interrupt handlers.
As of today, the choice is still left to the developer: you can use the 'traditional' th/bh style or the threaded style, as sketched below.
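A minimal sketch of the traditional th/bh split, using the classic (pre-5.9) tasklet API and hypothetical foo_* names:
#include <linux/interrupt.h>

/* Bottom half: runs later in softirq context, after the top half returns. */
static void foo_bottom_half(unsigned long data)
{
	/* the time-consuming processing goes here, outside hard-IRQ context */
}

static DECLARE_TASKLET(foo_tasklet, foo_bottom_half, 0);

/* Top half: runs in hard-IRQ context; just acknowledge the device and defer. */
static irqreturn_t foo_top_half(int irq, void *dev_id)
{
	tasklet_schedule(&foo_tasklet);
	return IRQ_HANDLED;
}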
Ref and Details:
http://lwn.net/Articles/380931/
http://lwn.net/Articles/302043/
Quoting Intel's own, surprisingly well-written "Intel® 64 and IA-32 Architectures Software Developer's Manual", Volume 1, pages 6-10:
If an interrupt or exception handler is called
through an interrupt gate, the processor clears the interrupt enable (IF) flag in the EFLAGS register to prevent
subsequent interrupts from interfering with the execution of the handler. When a handler is called through a trap
gate, the state of the IF flag is not changed.
So just to be clear: yes, effectively the CPU "disables" all interrupts before calling the interrupt handler. Described more precisely, the processor simply clears a flag, which makes it ignore all maskable interrupt requests, except for non-maskable interrupts and/or its own software exceptions (please, someone correct me on this; not verified).
We want the ISR to be atomic, and no one should be able to preempt it.
Therefore, an ISR runs with local interrupts disabled (i.e. interrupts on the current processor), and once the ISR calls the ret_from_intr() function (i.e. we have finished the ISR), interrupts are enabled again on the current processor.
If an interrupt occurs in the meantime, it will be served by another processor (in an SMP system) and the ISR related to that interrupt will start running.
In an SMP system, we also need to include a proper synchronization mechanism (a spinlock) in the ISR, as in the sketch below.
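For example, a sketch of an ISR protecting state shared across CPUs with a spinlock (hypothetical foo_* names; spin_lock_irqsave() is the safe default if the same data is also touched from process context):
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);
static unsigned long foo_events;	/* shared with other CPUs / process context */

static irqreturn_t foo_isr(int irq, void *dev_id)
{
	unsigned long flags;

	/* the same handler may be running on another CPU, so take the lock */
	spin_lock_irqsave(&foo_lock, flags);
	foo_events++;
	spin_unlock_irqrestore(&foo_lock, flags);

	return IRQ_HANDLED;
}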

Trap Dispatching on Windows

I am currently reading Windows Internals, 5th edition, and I am enjoying it, although it isn't an easy book to read and understand.
I am confused about IRQLs and the IDT.
I read that Windows implements custom prioritization levels with IRQLs, and that the Plug and Play Manager maps IRQs from devices to IRQLs.
Alright, so IRQLs are used for software and hardware interrupts, while exceptions are handled by the exception dispatcher.
When a device generates an interrupt, the interrupt controller passes this information to the CPU along with the IRQ.
So Windows takes this IRQ and translates it to an IRQL to schedule when to execute the routine (the routine that IDT[IRQ_VALUE] is pointing to)?
Is that what is happening?
Yes, on a very high level.
Everything starts with a kernel trap. The kernel trap handler handles interrupts, exceptions, system service calls and the virtual memory pager.
When an interrupt happens (line-based, using a dedicated pin, or message-based, writing to an address), Windows uses the IRQL to determine the priority of the interrupt and to decide whether the interrupt can be served at that time. The HAL does the job of translating the IRQ to an IRQL.
It then uses the IRQ as an index into the IDT to find the appropriate ISR routine to invoke. Note that there can be multiple ISRs associated with a given IRQ; all of them execute in order.
Each processor has its own IDT, so you could potentially have multiple ISRs running at the same time.
Exception dispatch, as I mentioned before, is also handled by the kernel trap handler, but the procedure for it is different. It usually starts by checking for any exception handlers via stack unwinding, then checking for a debugger port, and so on.
