What is the disadvantage of using a mutex in interrupt context? Why is a spinlock preferred here?
A mutex will force the caller to sleep if it is contended, and sleeping is illegal when preemption is disabled or in interrupt context.
Many functions in the kernel sleep (i.e., call schedule()) directly or indirectly: you can never call them while holding a spinlock, or with preemption disabled. This also means you need to be in user context: calling them from an interrupt is illegal.
The following is worth reading: https://www.kernel.org/pub/linux/kernel/people/rusty/kernel-locking/c557.html
There's a ton of information in that doc.
When a thread tries to acquire a mutex and does not succeed, because another thread has already acquired it, the thread goes to sleep until it is woken up. That is a problem if the mutex is used in an ISR.
When a thread fails to acquire a spinlock, by contrast, it keeps retrying (spinning) until it finally succeeds, so it never sleeps. Using spin_lock in the "top half" is common practice when writing Linux device driver interrupt handlers.
Hence you should use a spinlock instead!
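To make that concrete, here is a minimal sketch of a top half using a spinlock; the device register, the offset and the shared variable are hypothetical, not from the question. spin_lock() busy-waits instead of sleeping, whereas mutex_lock() in the same spot could sleep and would be a bug:

#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/io.h>

static DEFINE_SPINLOCK(sample_lock);
static u32 latest_sample;              /* shared with process context */
static void __iomem *dev_regs;         /* hypothetical, mapped by the driver */
#define DATA_REG 0x04                  /* hypothetical register offset */

static irqreturn_t sample_isr(int irq, void *dev_id)
{
        spin_lock(&sample_lock);       /* never sleeps, so it is legal here */
        latest_sample = readl(dev_regs + DATA_REG);
        spin_unlock(&sample_lock);
        return IRQ_HANDLED;
}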
First of all, sorry for a bit of ambiguity in the question... What I want to understand is the scenario below.
Suppose a process is running and it holds a lock. After the lock is acquired, a hardware interrupt is generated. How will the kernel handle this situation: will it wait for the lock? If yes, what happens if the interrupt handler needs that lock, or the shared data protected by that lock in the process?
The Linux kernel has a few functions for acquiring spinlocks, to deal with issues like the one you're raising here. In particular, there is spin_lock_irq(), which disables interrupts (on the CPU the process is running on) and acquires the spinlock. This can be used when the code knows interrupts are enabled before the spinlock is acquired; in case the function might be called in different contexts, there is also spin_lock_irqsave(), which stashes away the current state of interrupts before disabling them, so that they can be reenabled by spin_unlock_irqrestore().
In any case, if a lock is used in both process and interrupt context (which is a good and very common design if there is data that needs to be shared between the contexts), then process context must disable interrupts (locally on the CPU it's running on) when acquiring the spinlock to avoid deadlocks. In fact, lockdep ("CONFIG_PROVE_LOCKING") will verify this and warn if a spinlock is used in a way that is susceptible to the "interrupt while process context holds a lock" deadlock.
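For a rough sketch of that pattern (the lock name and the shared field are hypothetical), process context uses spin_lock_irqsave() so the interrupt cannot come in on the same CPU while the lock is held, and the handler itself uses plain spin_lock():

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(dev_lock);
static int pending_request;            /* hypothetical data shared with the ISR */

/* Process context: disable local interrupts while holding the lock,
 * otherwise the ISR below could deadlock trying to take dev_lock. */
void submit_request(int value)
{
        unsigned long flags;

        spin_lock_irqsave(&dev_lock, flags);    /* saves and disables local IRQs */
        pending_request = value;
        spin_unlock_irqrestore(&dev_lock, flags);
}

/* Interrupt context: local interrupts are already off, plain spin_lock suffices. */
static irqreturn_t dev_isr(int irq, void *dev_id)
{
        spin_lock(&dev_lock);
        pending_request = 0;                    /* consume the request */
        spin_unlock(&dev_lock);
        return IRQ_HANDLED;
}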
Let me explain some basic properties of an interrupt handler or bottom half.
A handler can't transfer data to or from user space, because it doesn't execute in the context of a process.
Handlers also cannot do anything that would sleep, such as calling wait_event, allocating memory with anything other than GFP_ATOMIC, or locking a semaphore.
Handlers cannot call schedule().
What I am trying to say is that interrupt handlers run in atomic context. They cannot sleep, because they cannot be rescheduled: interrupts do not have a backing process context.
The above is by design. You can do whatever you want in code; just be prepared for the consequences.
Let us assume that you acquire a lock in an interrupt handler (bad design).
When an interrupt occurs, the interrupted process's registers are saved on the stack and the ISR starts. If the ISR now blocks on a lock that the interrupted process already holds, you are in a deadlock, as there is no way for the ISR to know what the process was doing.
The process will not be able to resume execution until the ISR is done.
In a preemptive kernel the ISR and the process can be preempted, but in a non-preemptive kernel you are dead.
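As a concrete instance of the "nothing that would sleep" rule above, here is a minimal sketch (hypothetical driver and struct) of how a handler must allocate memory: GFP_KERNEL may block waiting for free pages, so only GFP_ATOMIC is usable, and the handler has to cope with failure instead of sleeping:

#include <linux/interrupt.h>
#include <linux/slab.h>

struct sample {                        /* hypothetical per-interrupt record */
        u32 value;
};

static irqreturn_t sample_alloc_isr(int irq, void *dev_id)
{
        struct sample *s;

        /* GFP_ATOMIC either succeeds immediately or returns NULL;
         * it never sleeps, unlike GFP_KERNEL. */
        s = kmalloc(sizeof(*s), GFP_ATOMIC);
        if (!s)
                return IRQ_HANDLED;    /* drop this event rather than sleep */

        s->value = 0;                  /* a real driver would read the device here */
        kfree(s);                      /* placeholder; a real driver would queue it */

        return IRQ_HANDLED;
}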
I read this article, http://www.linuxjournal.com/article/5833, to learn about spinlocks, and tried to use them in my kernel driver.
Here is what my driver code needs to do:
In f1(), the driver takes the spinlock, so a caller that then calls f2() will wait for the lock, since the spinlock has not been unlocked. The spinlock will be unlocked in my interrupt handler (triggered by the HW).
void f1() {
    spin_lock(&mylock);
    // write hardware
    REG_ADDR += FLAG_A;
}

void f2() {
    spin_lock(&mylock);
    //...
}
The hardware will send the application an interrupt and my interrupt handler will call spin_unlock(&mylock);
My question is: if I call
f1()
f2() // I want this to block until the interrupt returns, saying that setting REG_ADDR is done.
then when I run this, I get a warning in the kernel reporting a possible deadlock: "INFO: possible recursive locking detected".
How can I rewrite my code so that the kernel does not think I have a deadlock?
I want my driver code to wait until the HW sends me an interrupt saying that setting REG_ADDR is done.
Thank you.
First, since you'll be expecting to block while waiting for the interrupt, you shouldn't be using spinlocks to lock the hardware as you'll probably be holding the lock for a long time. Using a spinlock in this case will waste a lot of CPU cycles if that function is called frequently.
I would first use a mutex to lock access to the hardware register in question so other kernel threads can't simultaneously modify the register. A mutex is allowed to sleep so if it can't acquire the lock, the thread is able to go to sleep until it can.
Then, I'd use a wait queue to block the thread until the interrupt arrives and signals that the bit has finished setting.
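A rough sketch of that combination (names are placeholders; REG_ADDR and FLAG_A are taken from the question and assumed to be an ioremap()ed register and a bit mask): a mutex serializes access to the register, while a wait queue and a done flag let the caller sleep until the interrupt handler reports completion:

#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/wait.h>
#include <linux/io.h>

static void __iomem *REG_ADDR;           /* assumed: set up via ioremap(), as in the question */
#define FLAG_A 0x01                      /* assumed bit mask from the question */

static DEFINE_MUTEX(reg_mutex);
static DECLARE_WAIT_QUEUE_HEAD(reg_wq);
static bool reg_done;

int f1(void)
{
        int ret;

        mutex_lock(&reg_mutex);          /* may sleep: fine in process context */
        reg_done = false;

        writel(readl(REG_ADDR) | FLAG_A, REG_ADDR);   /* kick off the operation */

        /* sleep until the ISR sets reg_done (or a signal arrives) */
        ret = wait_event_interruptible(reg_wq, reg_done);

        mutex_unlock(&reg_mutex);
        return ret;
}

static irqreturn_t reg_isr(int irq, void *dev_id)
{
        reg_done = true;                 /* the hardware says REG_ADDR is set */
        wake_up(&reg_wq);                /* wake the sleeper in f1() */
        return IRQ_HANDLED;
}

Note that wait_event_interruptible() returns -ERESTARTSYS if a signal arrives before the flag is set; wait_event() can be used instead if that is not wanted.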
Also, as an aside, I noticed you're trying to access your peripheral by using the following expression REG_ADDR += FLAG_A;. In the kernel, that's not the correct way to do it. It may seem to work but will break on some architectures. You should be using the read{b,w,l} and write{b,w,l} macros like
u32 reg;

reg = readl(REG_ADDR);   /* read the current register value */
reg |= FLAG_A;           /* set the flag bit */
writel(reg, REG_ADDR);   /* write it back */
where REG_ADDR is an address you obtained from ioremap.
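For completeness, a minimal sketch of how that address might be obtained (the physical base and size here are made-up placeholders; a real driver would take them from its platform/PCI resources):

#include <linux/io.h>
#include <linux/errno.h>

#define MY_DEV_PHYS_BASE 0x10000000UL   /* hypothetical physical address */
#define MY_DEV_REG_SIZE  0x100

static void __iomem *REG_ADDR;

static int my_dev_map_regs(void)
{
        REG_ADDR = ioremap(MY_DEV_PHYS_BASE, MY_DEV_REG_SIZE);
        if (!REG_ADDR)
                return -ENOMEM;
        return 0;
}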
I will agree with Michael that spinlocks, semaphores, mutexes (or any other locking mechanism) must be used when a resource (memory, a variable, a piece of code) may be shared among kernel/user threads.
Instead of the available locking primitives, though, I would suggest using the sleeping facilities available in the kernel, like wait_event_interruptible and wake_up. They are simple and easy to work into your code; you can find the details online.
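For reference, the bare pairing looks something like this sketch (made-up names): one side sleeps until the condition becomes true, the other side publishes the condition and wakes it up:

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(data_wq);
static bool data_ready;

/* waiter (process context): sleeps until data_ready becomes true */
int wait_for_data(void)
{
        return wait_event_interruptible(data_wq, data_ready);
}

/* waker (another thread or a bottom half): set the condition, then wake */
void signal_data(void)
{
        data_ready = true;
        wake_up(&data_wq);
}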
I understand that the kernel can synchronize processes via the spinlock method. However, when it comes down to one processor how does it do so? How does it use a synchronization object to ensure mutual exclusion?
Is a semaphore at the level of the executive? How does the kernel come into play here?
Are mutexes only implemented at the level of the kernel? They do not give off a signal or message when the resource is free.
You've got several questions here:
I understand that the kernel can synchronize processes via the
spinlock method. However, when it comes down to one processor how does
it do so? How does it use a synchronization object to ensure mutual
exclusion?
On uni-processor machines, acquiring a spinlock simply raises the IRQL to DISPATCH_LEVEL or above; a thread at such an elevated IRQL cannot be pre-empted, so synchronization is guaranteed.
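A bare kernel-mode sketch of that behaviour (WDM-style code; the lock and counter are hypothetical): KeAcquireSpinLock raises the IRQL to DISPATCH_LEVEL before taking the lock, and KeReleaseSpinLock drops it back afterwards:

#include <ntddk.h>

static KSPIN_LOCK g_Lock;       /* initialize once with KeInitializeSpinLock(&g_Lock) */
static ULONG g_SharedCounter;   /* hypothetical shared data */

VOID IncrementSharedCounter(VOID)
{
    KIRQL oldIrql;

    /* Raises IRQL to DISPATCH_LEVEL (no pre-emption possible), then acquires. */
    KeAcquireSpinLock(&g_Lock, &oldIrql);
    g_SharedCounter++;
    KeReleaseSpinLock(&g_Lock, oldIrql);   /* releases and restores the previous IRQL */
}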
Is a semaphore at the level of the executive? How does the kernel come
into play here?
Semaphores, mutexes, (and most waitable objects, for that matter) are Kernel Dispatch Objects. Such objects are implemented by the kernel, and are made available to user mode applications via various functions exported by KERNEL32.DLL (CreateEvent/Mutex/Semaphore, et.al.). In addition, the "kernel comes into play" by scheduling thread waits, and awakening threads that are waiting on synchronization objects.
Are mutexes only implemented at the level of the kernel?
Mutex objects are indeed kernel dispatcher objects (KMUTEX). A mutex object is signalled when it is un-owned. When a thread acquires a mutex, its state goes to non-signalled, which means that any other thread that attempts to acquire it will be put into a wait state until either the mutex is acquired or the wait times out.
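From user mode, via KERNEL32, the same mechanics look roughly like this sketch (error handling trimmed; names are placeholders): WaitForSingleObject puts the calling thread into a wait state while the mutex is non-signalled, and ReleaseMutex makes it signalled again:

#include <windows.h>

HANDLE g_hMutex;   /* create once, e.g. at program start */

void InitLock(void)
{
    g_hMutex = CreateMutex(NULL, FALSE, NULL);   /* initially un-owned, i.e. signalled */
}

void UseSharedResource(void)
{
    /* Blocks (wait state) while another thread owns the mutex. */
    if (WaitForSingleObject(g_hMutex, INFINITE) == WAIT_OBJECT_0) {
        /* ... touch the protected resource ... */
        ReleaseMutex(g_hMutex);   /* back to signalled: one waiter is made ready */
    }
}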
For more detailed explanations on kernel dispatcher objects, as well as Windows synchronization in general, have a peek at the latest version of "Windows Internals" - every Windows developer should have a copy of this on their desk, IMHO.
'They do not give off a signal or message when the resource is free' - sure they do: they are an inter-thread signaling mechanism! A thread waiting on the mutex is signaled and made ready when the protected resource is released, thereby acquiring the mutex.
Spinlocks are generally not used on single-core processors - there is no point. TBH, spinlocks need great care on multi-core and clustered systems too if their use is not to be counter-productive.
I am reading the following article by Robert Love,
http://www.linuxjournal.com/article/6916
that says
"...Let's discuss the fact that work queues run in process context. This is in contrast to the other bottom-half mechanisms, which all run in interrupt context. Code running in interrupt context is unable to sleep, or block, because interrupt context does not have a backing process with which to reschedule. Therefore, because interrupt handlers are not associated with a process, there is nothing for the scheduler to put to sleep and, more importantly, nothing for the scheduler to wake up..."
I don't get it. AFAIK, the scheduler in the kernel is O(1), implemented with a bitmap. So what stops the scheduler from putting the interrupt context to sleep, taking the next schedulable process and passing it control?
So what stops the scheduler from putting the interrupt context to sleep, taking the next schedulable process and passing it control?
The problem is that the interrupt context is not a process, and therefore cannot be put to sleep.
When an interrupt occurs, the processor saves the registers onto the stack and jumps to the start of the interrupt service routine. This means that when the interrupt handler is running, it is running in the context of the process that was executing when the interrupt occurred. The interrupt is executing on that process's stack, and when the interrupt handler completes, that process will resume executing.
If you tried to sleep or block inside an interrupt handler, you would wind up not only stopping the interrupt handler, but also the process it interrupted. This could be dangerous, as the interrupt handler has no way of knowing what the interrupted process was doing, or even if it is safe for that process to be suspended.
A simple scenario where things could go wrong would be a deadlock between the interrupt handler and the process it interrupts.
Process1 enters kernel mode.
Process1 acquires LockA.
Interrupt occurs.
ISR starts executing using Process1's stack.
ISR tries to acquire LockA.
ISR calls sleep to wait for LockA to be released.
At this point, you have a deadlock. Process1 can't resume execution until the ISR is done with its stack. But the ISR is blocked waiting for Process1 to release LockA.
I think it's a design decision.
Sure, you could design a system in which you can sleep in an interrupt, but apart from making the system hard to comprehend and complicated (there are many, many situations you would have to take into account), it does not help anything. So from a design point of view, declaring that interrupt handlers cannot sleep is very clear and easy to implement.
From Robert Love (a kernel hacker):
http://permalink.gmane.org/gmane.linux.kernel.kernelnewbies/1791
You cannot sleep in an interrupt handler because interrupts do not have
a backing process context, and thus there is nothing to reschedule back
into. In other words, interrupt handlers are not associated with a task,
so there is nothing to "put to sleep" and (more importantly) "nothing to
wake up". They must run atomically.
This is not unlike other operating systems. In most operating systems,
interrupts are not threaded. Bottom halves often are, however.
The reason the page fault handler can sleep is that it is invoked only
by code that is running in process context. Because the kernel's own
memory is not pagable, only user-space memory accesses can result in a
page fault. Thus, only a few certain places (such as calls to
copy_{to,from}_user()) can cause a page fault within the kernel. Those
places must all be made by code that can sleep (i.e., process context,
no locks, et cetera).
Because the thread switching infrastructure is unusable at that point. When servicing an interrupt, only stuff of higher priority can execute; see the Intel Software Developer's Manual on interrupt, task and processor priority. If you did allow another thread to execute (which you imply in your question would be easy to do), you wouldn't be able to let it do anything: if it caused a page fault, you'd have to use services in the kernel that are unusable while the interrupt is being serviced (see below for why).
Typically, your only goal in an interrupt routine is to get the device to stop interrupting and to queue something at a lower interrupt level (in Unix this is typically a non-interrupt level, but on Windows it's dispatch, APC or passive level) to do the heavy lifting, where you have access to more features of the kernel/OS. See: Implementing a handler.
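In Linux terms, that top-half/bottom-half split can be sketched with a work queue (the register offsets and names here are hypothetical): the handler acknowledges the device and defers the heavy lifting to process context, where sleeping is allowed:

#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/io.h>

static void __iomem *dev_regs;         /* hypothetical, mapped with ioremap() */
#define IRQ_ACK_REG 0x08               /* hypothetical "acknowledge" register */

static void heavy_lifting_fn(struct work_struct *work)
{
        /* Process context: sleeping, mutexes, copy_to_user(), etc. are allowed here. */
}
static DECLARE_WORK(heavy_lifting, heavy_lifting_fn);

static irqreturn_t top_half(int irq, void *dev_id)
{
        writel(1, dev_regs + IRQ_ACK_REG);   /* get the device to stop interrupting */
        schedule_work(&heavy_lifting);       /* defer the rest to process context */
        return IRQ_HANDLED;
}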
It's a property of how O/S's have to work, not something inherent in Linux. An interrupt routine can execute at any point so the state of what you interrupted is inconsistent. If you interrupted the thread scheduling code, its state is inconsistent so you can't be sure you can "sleep" and switch threads. Even if you protect the thread switching code from being interrupted, thread switching is a very high level feature of the O/S and if you protected everything it relies on, an interrupt becomes more of a suggestion than the imperative implied by its name.
So what stops the scheduler from putting the interrupt context to sleep, taking the next schedulable process and passing it control?
Scheduling happens on timer interrupts. The basic rule is that only one interrupt can be open at a time, so if you go to sleep in the "got data from device X" interrupt, the timer interrupt cannot run to schedule it out.
Interrupts also happen many times and overlap. If you put the "got data" interrupt to sleep, and then get more data, what happens? It's confusing (and fragile) enough that the catch-all rule is: no sleeping in interrupts. You will do it wrong.
Disallowing an interrupt handler from blocking is a design choice. When some data arrives at the device, the interrupt handler interrupts the current process, prepares the transfer of the data and re-enables the interrupt; until the handler re-enables that interrupt, the device has to wait. We want to keep our I/O busy and our system responsive, so we had better not block the interrupt handler.
I don't think the "unstable state" argument is the essential reason. Processes, no matter whether they are in user mode or kernel mode, should be aware that they may be interrupted. If some kernel-mode data structure is accessed by both the interrupt handler and the current process, and a race condition exists, then the current process should disable local interrupts; moreover, on multi-processor architectures, spinlocks should be used for the critical sections.
I also don't think that, if the interrupt handler were blocked, it could not be woken up. When we say "block", it basically means that the blocked process is waiting for some event/resource, so it links itself into a wait queue for that event/resource. Whenever the resource is released, the releasing process is responsible for waking up the waiting process(es).
However, the really annoying thing is that the blocked process can do nothing while it is blocked; it did nothing to deserve this punishment, which is unfair. And nobody can reliably predict the blocking time, so the innocent process has to wait for no clear reason and for an unbounded time.
Even if you could put an ISR to sleep, you wouldn't want to do it. You want your ISRs to be as fast as possible to reduce the risk of missing subsequent interrupts.
The Linux kernel has two ways to allocate the interrupt stack. One is on the kernel stack of the interrupted process; the other is a dedicated per-CPU interrupt stack. If the interrupt context is saved on the dedicated per-CPU interrupt stack, then the interrupt context really is not associated with any process. The "current" macro will produce an invalid pointer to the current running process, because on some architectures "current" is computed from the stack pointer, and in interrupt context the stack pointer may point to the dedicated interrupt stack rather than to the kernel stack of some process.
Fundamentally, the question is whether you can get a valid "current" (a pointer to the current process's task_struct) in an interrupt handler. If yes, it would in principle be possible to modify its contents to put the task into a "sleep" state, which the scheduler could bring back later if the state were changed somehow. The answer may be hardware-dependent.
But on ARM it's impossible, since 'current' is unrelated to the interrupted process in interrupt mode. See the code below:
// linux/arch/arm/include/asm/thread_info.h
static inline struct thread_info *current_thread_info(void)
{
        register unsigned long sp asm ("sp");
        return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
The sp in USER mode and the sp in SVC mode are the "same" ("same" here does not mean they are equal: the user-mode sp points to the user-space stack, while the SVC-mode sp, r13_svc, points to the kernel stack, where the user process's task structure was set up at the previous task switch. When a system call occurs, the process enters kernel space again while sp (sp_svc) is still unchanged, so the two stack pointers are associated with each other; in this sense they are the 'same'). So under SVC mode, kernel code can get a valid 'current'. But under other privileged modes, say interrupt mode, sp is 'different': it points to a dedicated address set up in cpu_init(). The 'current' computed under these modes is unrelated to the interrupted process, and accessing it will result in unexpected behavior. That is why it is always said that a system call can sleep but an interrupt handler can't: a system call runs in process context, but an interrupt does not.
High-level interrupt handlers mask the operations of all lower-priority interrupts, including those of the system timer interrupt. Consequently, the interrupt handler must avoid involving itself in an activity that might cause it to sleep. If the handler sleeps, then the system may hang because the timer is masked and incapable of scheduling the sleeping thread.
Does this make sense?
If a higher-level interrupt routine gets to the point where the next thing it must do has to happen after a period of time, then it needs to put a request into the timer queue, asking that another interrupt routine be run (at lower priority level) some time later.
When that interrupt routine runs, it would then raise priority level back to the level of the original interrupt routine, and continue execution. This has the same effect as a sleep.
It is just a design/implementation choice in the Linux OS. The advantage of this design is simplicity, but it may not be good for real-time OS requirements.
Other OSes have other designs/implementations.
For example, in Solaris, interrupts can have different priorities, which allows most device interrupts to be handled in interrupt threads. Interrupt threads are allowed to sleep, because each interrupt thread has a separate stack in the context of that thread.
The interrupt-thread design is good for real-time threads, which should have higher priorities than interrupts.