Triggering a software event from an interrupt (XMEGA, GCC)

I want to run a periodic "housekeeping" event, triggered regularly by a timer interrupt. The interrupt fires frequently (kHz+), while the function may take a long time to finish, so I can't simply execute it inline.
In the past, I've done this on an ATMEGA, where an ISR can simply permit other interrupts to fire (including itself again) with sei(). By wrapping the event in a "still executing" flag, it won't pile up on the stack and cause a... you know:
if (!inFunction) { inFunction = true; doFunction(); inFunction = false; }
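For reference, a fleshed-out sketch of that guard on an ATmega (assuming a Timer0 overflow vector; do_function() is a placeholder for the housekeeping work):

#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint8_t in_function;   /* re-entrancy guard */

static void do_function(void)
{
    /* long-running housekeeping goes here */
}

ISR(TIMER0_OVF_vect)
{
    if (!in_function) {
        in_function = 1;
        sei();               /* let other interrupts (including this one) nest */
        do_function();       /* nested invocations see the flag and skip */
        in_function = 0;
    }
}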
I don't think this can be done -- at least as easily -- on the XMEGA, due to the PMIC interrupt controller. It appears the interrupt flags can only be reset by executing RETI.
So, I was thinking, it would be convenient if I could convince GCC to produce a tail call out of an interrupt. That would immediately execute the event, while clearing interrupts.
This would be easy enough to do in assembler: just push the address and RETI. (Well, some stack-mangling because ISR, but, yeah.) But I'm guessing it'll be a hack in GCC, possibly a custom ASM wrapper around a "naked" function?
Alternatively, I would love to simply trigger a low-priority software interrupt, but I don't see an intended way to do this.
I could use software to trigger an interrupt from an otherwise unused peripheral. That's fine as a special case, but then, if I ever need to use that device, I have to find another. It's bad for code reuse, too.
Really, this is an X-Y problem and I know it. I think I want to do X, but really I need method Y that I just don't know about.
One better method is to set a flag, then let main() deal with it when it gets around to it. Unfortunately, I have blocking functions in main() (handling user input via serial), so that would take work and be a mess.
The only "proper" method I know of offhand, is to do a full task switch -- but damned if I'm going to effectively implement an RTOS, or pull one in, just for this. There's got to be a better way.
Have I actually covered all the possibilities, and painted myself into a corner? Do I have to compromise and choose one of these? Am I missing anything better?

There are a few possibilities to solve this.
1. Enable your timer interrupt at low priority. That way, the medium- and high-priority interrupts will be able to interrupt this low-priority interrupt and run unaffected.
This is similar to using sei(); in your interrupt handler on older processors (without a PMIC).
2.a Set a flag (variable) in the interrupt handler. Poll the flag in the main loop; if it is set, clear it and do your stuff.
2.b Set up the timer but don't enable its interrupt. Poll the OVF interrupt flag of your timer in the main loop; if it is set, clear it and do your stuff.
The polled variants are timed less accurately, depending on what else the main loop does, so it comes down to your accuracy requirements. For handling more tasks in the main loop without an OS, look up cooperative multitasking and state machines. Both approaches are sketched below.
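A minimal sketch of these suggestions, assuming an XMEGA timer/counter TCC0 and avr-libc register names; housekeeping() and the period value are placeholders:

#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint8_t tick;          /* used by option 2a */

static void housekeeping(void)
{
    /* long-running work goes here */
}

/* Option 1: do the work in a LOW-level ISR; MED/HI interrupts still run */
ISR(TCC0_OVF_vect)
{
    housekeeping();                    /* for option 2a, just set: tick = 1; */
}

int main(void)
{
    TCC0.CTRLA    = TC_CLKSEL_DIV64_gc;   /* clock the timer */
    TCC0.PER      = 12499;                /* period: placeholder value */
    TCC0.INTCTRLA = TC_OVFINTLVL_LO_gc;   /* overflow interrupt, LOW level */
    PMIC.CTRL     = PMIC_LOLVLEN_bm | PMIC_MEDLVLEN_bm | PMIC_HILVLEN_bm;
    sei();

    for (;;) {
        if (tick) {                       /* option 2a: poll the flag */
            tick = 0;
            housekeeping();
        }
        /* option 2b: leave INTCTRLA at zero and instead poll
           TCC0.INTFLAGS & TC0_OVFIF_bm, clearing the flag by
           writing the bit back */
    }
}

Note that the PMIC will not let a LOW-level ISR interrupt itself, so in option 1 a slow housekeeping() merely delays the next run instead of piling up on the stack.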

Related

How to use delays in kernel driver ISR

I am aware that I certainly can't use msleep or usleep or any such function for introducing delays in a kernel ISR routine.
I have a kernel driver which has certain ISRs defined inside it. In one of the ISR blocks I have to insert a delay on the order of milliseconds. Let's say:
{
//A
//here I need sleep
//B
}
Can I use something like:
{
//A
for(i=0;i<1000;i++);
//B
}
Let's say my processor executes at 1 GHz; will the above for loop give me a delay of 1000 µs, i.e. 1 ms?
You must not sleep inside an interrupt handler.
Furthermore, you should not wait for a long time inside an interrupt handler; this would block all processes and all other interrupts on the same CPU.
If your driver needs to do two things at different times, it should use a second interrupt or a timer to do the second thing.
I would be interested to hear about the reasons for having an intentional delay in an ISR. Generally speaking, it's a no-no. If you think you need one, then most probably it means that you need to rethink your code design.
As for introducing microscopic delays, one thing that I have used is cpu_relax(). This function is also used in the kernel to implement udelay() and ndelay() for some CPU architectures. I would advise you to take a look at where and how this function is used in the Linux kernel; that might give you some ideas for your specific situation.
The functions udelay() and ndelay() implement busy-waiting delays, so you may use them in an ISR, as suggested by Tsyvarev.
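A minimal sketch of a busy-wait delay inside a handler (my_isr and the surrounding driver are hypothetical; mdelay()/udelay() spin the CPU, so keep such delays as short as possible):

#include <linux/delay.h>
#include <linux/interrupt.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* A: acknowledge the device, start the operation ... */

    mdelay(1);          /* busy-wait ~1 ms; blocks this CPU entirely */

    /* B: finish the operation ... */
    return IRQ_HANDLED;
}

If the delay can be moved out of the hard-IRQ path at all, a timer or a threaded handler remains the better design, as noted above.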

What is channel event system?

I am working on a project where I have to deal with the ATxmega128A1 microcontroller. Being a beginner with microcontrollers, I want to know what this channel event system is.
I have read http://www.atmel.com/Images/doc8071.pdf but I'm not getting it.
The traditional way to do things the channel system can do is to use interrupts.
In the interrupt model, the CPU runs the code starting with main(), and usually continues with some loop. When a particular event occurs, such as a button being pressed, the CPU is "interrupted". The current processing is stopped, some registers are saved, and execution jumps to the code pointed to by an interrupt vector, called an interrupt handler. This code usually has instructions to save register values, which are added automatically by the compiler.
When the interrupting code is finished, the CPU restores the values that the registers previously had and execution jumps back to the point in the main code where it was interrupted.
But this approach takes valuable CPU cycles. And some interrupt handlers don't do very much except trigger some peripheral to take an action. Wouldn't it be great if these kinds of interrupt handlers could be avoided, and the peripherals could talk directly to each other without pausing the CPU?
This is what the event channel system does. It allows peripherals to trigger each other directly without involving the CPU. The CPU continues to execute instructions while the channel system operates in parallel. This doesn't mean you can replace all interrupt handlers, though. If complicated processing is involved, you still need a handler to act. But the channel system does allow you to avoid using very simple interrupt handlers.
The document you reference describes this in a little more detail (but assumes a lot of knowledge on the reader's part). You have to read the actual datasheet of your microcontroller to find the exact details.
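To make this concrete, here is a minimal sketch (assuming avr-libc register names for the ATxmega128A1): timer TCC0's overflow is routed onto event channel 0, and TCC1 uses that channel as its clock source, so TCC1 counts TCC0 overflows entirely in hardware, with no interrupt handler and no CPU involvement:

#include <avr/io.h>

int main(void)
{
    TCC0.CTRLA = TC_CLKSEL_DIV64_gc;         /* free-running timer */
    TCC0.PER   = 999;                        /* placeholder period */

    EVSYS.CH0MUX = EVSYS_CHMUX_TCC0_OVF_gc;  /* TCC0 overflow -> event channel 0 */

    TCC1.CTRLA = TC_CLKSEL_EVCH0_gc;         /* TCC1 clocked by event channel 0 */

    for (;;) {
        /* the CPU is free; TCC1.CNT advances in hardware */
    }
}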

Avoiding sleep while holding a spinlock

I've recently read section 5.5.2 (Spinlocks and Atomic Context) of LDDv3 book:
Avoiding sleep while holding a lock can be more difficult; many kernel functions can sleep, and this behavior is not always well documented. Copying data to or from user space is an obvious example: the required user-space page may need to be swapped in from the disk before the copy can proceed, and that operation clearly requires a sleep. Just about any operation that must allocate memory can sleep; kmalloc can decide to give up the processor, and wait for more memory to become available unless it is explicitly told not to. Sleeps can happen in surprising places; writing code that will execute under a spinlock requires paying attention to every function that you call.
It's clear to me that spinlocks must always be held for the minimum time possible and I think that it's relatively easy to write correct spinlock-using code from scratch.
Suppose, however, that we have a big project where spinlocks are widely used.
How can we make sure that functions called from critical sections protected by spinlocks will never sleep?
Thanks in advance!
What about enabling "sleep-inside-spinlock checking" for your kernel? It is usually found under Kernel Debugging when you run make config (CONFIG_DEBUG_ATOMIC_SLEEP in recent kernels). You might also try to duplicate its behavior in your code.
One thing I noticed on a lot of projects is that people seem to misuse spinlocks: they get used instead of the other locking primitives that should have been used.
A Linux spinlock only exists in multiprocessor builds (in single-processor builds the spinlock preprocessor defines are empty); spinlocks are for short-duration locks on a multiprocessor platform.
If code fails to acquire a spinlock, it just spins the processor until the lock is free. So either another process running on a different processor must free the lock, or possibly it could be freed by an interrupt handler; but the wait-event mechanism is a much better way of waiting on an interrupt.
The irqsave spinlock primitive is a tidy way of disabling/enabling interrupts so a driver can lock out an interrupt handler, but it should only be held long enough for the process to update some variables shared with the interrupt handler; if you disable interrupts, you are not going to be scheduled.
If you need to lock out an interrupt handler use a spinlock with irqsave.
For general kernel locking, you should be using the mutex/semaphore API, which will sleep on the lock if it needs to.
To lock against code running in other processes, use a mutex/semaphore.
To lock against code running in an interrupt context, use local_irq_save/local_irq_restore or spin_lock_irqsave/spin_unlock_irqrestore.
To lock against code running on other processors, use spinlocks, and avoid holding the lock for long.
I hope this helps
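To illustrate the irqsave pattern for data shared with an interrupt handler (all names here are placeholders; a sketch rather than a complete driver):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static unsigned int shared_count;     /* shared with the interrupt handler */

static irqreturn_t my_isr(int irq, void *dev_id)
{
    spin_lock(&my_lock);              /* interrupts are already off here */
    shared_count++;
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}

void reset_count(void)                /* called from process context */
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);    /* locks out my_isr on this CPU */
    shared_count = 0;                      /* quick updates only, no sleeping */
    spin_unlock_irqrestore(&my_lock, flags);
}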

How to wait for one second on an 8051 microcontroller?

I'm supposed to write a program that will send some values to registers, then wait one second, then change the values. The thing is, I'm unable to find the instruction that will halt operations for one second.
How about setting up a timer interrupt?
Some useful hints and code snippets in this Keil 8051 application note.
There is no such 'instruction'. There is, however, no doubt at least one hardware timer peripheral (the exact peripheral set depends on the exact part you are using). Get out the datasheet/user manual for the part you are using and figure out how to program the timer; you can then poll it or use interrupts. Typically you'd configure the timer to generate a periodic interrupt that then increments a counter variable.
Two things you must know about timer interrupts: Firstly, if your counter variable is greater than 8-bit, access to it will not be atomic, so outside of the interrupt context you must either temporarily disable interrupts to read it, or read it twice in succession with the same value to validate it. Secondly, the timer counter variable must be declared volatile to prevent the compiler optimising out access to it; this is true of all variables shared between interrupts and threads.
Another alternative is to use a low-power 'sleep' mode if supported; you set up a timer to wake the processor after the desired period and issue the necessary sleep instruction (this may be provided as an 'intrinsic' by your compiler, or it may be controlled by a peripheral register). This is general advice, not 8051 specific; I don't know if your part even supports a sleep mode.
Either way you need to wade through the part specific documentation. If you could tell us the exact part, you may get help with that.
A third solution is to use an 8051 specific RTOS kernel which will provide exactly the periodic delay function you are looking for, as well as multi-threading and IPC.
I would setup a timer so that it interrupts every 10ms. In that interrupt, increment a variable.
You will also need to write a function to disable interrupts and read that variable.
In your main program, you will read the timer variable and then wait until it is 100 more than it was when you started (100 ticks x 10 ms = 1 s).
Don't forget to watch out for the timer variable rolling over.
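Putting the pieces together, a Keil-style sketch for a classic 8051 at 11.0592 MHz (register names from reg51.h; the reload value 0xDC00 gives roughly a 10 ms tick at 12 clocks per machine cycle):

#include <reg51.h>

static volatile unsigned char ticks;      /* 10 ms ticks, shared with the ISR */

void timer0_isr(void) interrupt 1         /* Timer 0 overflow vector */
{
    TH0 = 0xDC;                           /* reload for ~10 ms */
    TL0 = 0x00;
    ticks++;
}

static unsigned char read_ticks(void)
{
    unsigned char t;
    EA = 0;                               /* disable interrupts for a safe read */
    t = ticks;
    EA = 1;
    return t;
}

void timer0_init(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;          /* Timer 0, mode 1 (16-bit) */
    TH0 = 0xDC;
    TL0 = 0x00;
    ET0 = 1;                              /* enable Timer 0 interrupt */
    EA  = 1;                              /* global interrupt enable */
    TR0 = 1;                              /* start Timer 0 */
}

void wait_one_second(void)
{
    unsigned char start = read_ticks();
    /* 100 ticks of 10 ms = 1 s; unsigned subtraction handles rollover */
    while ((unsigned char)(read_ticks() - start) < 100)
        ;
}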

Why kernel code/thread executing in interrupt context cannot sleep?

I am reading the following article by Robert Love:
http://www.linuxjournal.com/article/6916
which says:
"...Let's discuss the fact that work queues run in process context. This is in contrast to the other bottom-half mechanisms, which all run in interrupt context. Code running in interrupt context is unable to sleep, or block, because interrupt context does not have a backing process with which to reschedule. Therefore, because interrupt handlers are not associated with a process, there is nothing for the scheduler to put to sleep and, more importantly, nothing for the scheduler to wake up..."
I don't get it. AFAIK, the scheduler in the kernel is O(1), implemented through a bitmap. So what stops the scheduler from putting the interrupt context to sleep and taking the next schedulable process and passing it control?
So what stops the scheduler from putting the interrupt context to sleep and taking the next schedulable process and passing it control?
The problem is that the interrupt context is not a process, and therefore cannot be put to sleep.
When an interrupt occurs, the processor saves the registers onto the stack and jumps to the start of the interrupt service routine. This means that when the interrupt handler is running, it is running in the context of the process that was executing when the interrupt occurred. The interrupt is executing on that process's stack, and when the interrupt handler completes, that process will resume executing.
If you tried to sleep or block inside an interrupt handler, you would wind up not only stopping the interrupt handler, but also the process it interrupted. This could be dangerous, as the interrupt handler has no way of knowing what the interrupted process was doing, or even if it is safe for that process to be suspended.
A simple scenario where things could go wrong would be a deadlock between the interrupt handler and the process it interrupts.
1. Process1 enters kernel mode.
2. Process1 acquires LockA.
3. Interrupt occurs.
4. ISR starts executing using Process1's stack.
5. ISR tries to acquire LockA.
6. ISR calls sleep to wait for LockA to be released.
At this point, you have a deadlock. Process1 can't resume execution until the ISR is done with its stack. But the ISR is blocked waiting for Process1 to release LockA.
I think it's a design decision.
Sure, you could design a system in which interrupts can sleep, but beyond making the system hard to comprehend and complicated (there are many, many situations you would have to take into account), it doesn't buy you anything. So from a design point of view, declaring that interrupt handlers cannot sleep is very clear and easy to implement.
From Robert Love (a kernel hacker):
http://permalink.gmane.org/gmane.linux.kernel.kernelnewbies/1791
You cannot sleep in an interrupt handler because interrupts do not have
a backing process context, and thus there is nothing to reschedule back
into. In other words, interrupt handlers are not associated with a task,
so there is nothing to "put to sleep" and (more importantly) "nothing to
wake up". They must run atomically.
This is not unlike other operating systems. In most operating systems,
interrupts are not threaded. Bottom halves often are, however.
The reason the page fault handler can sleep is that it is invoked only
by code that is running in process context. Because the kernel's own
memory is not pagable, only user-space memory accesses can result in a
page fault. Thus, only a few certain places (such as calls to
copy_{to,from}_user()) can cause a page fault within the kernel. Those
places must all be made by code that can sleep (i.e., process context,
no locks, et cetera).
Because the thread-switching infrastructure is unusable at that point. When servicing an interrupt, only stuff of higher priority can execute; see the Intel Software Developer's Manual on interrupt, task, and processor priority. If you did allow another thread to execute (which you imply in your question would be easy to do), you wouldn't be able to let it do anything: if it caused a page fault, you'd have to use services in the kernel that are unusable while the interrupt is being serviced (see below for why).
Typically, your only goal in an interrupt routine is to get the device to stop interrupting and to queue something at a lower interrupt level (in Unix this is typically a non-interrupt level, but for Windows it's DISPATCH, APC, or PASSIVE level) to do the heavy lifting, where you have access to more features of the kernel/OS. See: Implementing a handler.
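In Linux terms, that usually means deferring to a bottom half; a minimal sketch using a workqueue (my_isr, heavy_work, and heavy_work_fn are placeholder names):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct work_struct heavy_work;

static void heavy_work_fn(struct work_struct *work)
{
    /* process context: sleeping, kmalloc(GFP_KERNEL), etc. are allowed */
}

static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* quiesce the device here, then defer the heavy lifting */
    schedule_work(&heavy_work);
    return IRQ_HANDLED;
}

/* in driver init: INIT_WORK(&heavy_work, heavy_work_fn); */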
It's a property of how O/S's have to work, not something inherent in Linux. An interrupt routine can execute at any point so the state of what you interrupted is inconsistent. If you interrupted the thread scheduling code, its state is inconsistent so you can't be sure you can "sleep" and switch threads. Even if you protect the thread switching code from being interrupted, thread switching is a very high level feature of the O/S and if you protected everything it relies on, an interrupt becomes more of a suggestion than the imperative implied by its name.
So what stops the scheduler from putting the interrupt context to sleep and taking the next schedulable process and passing it control?
Scheduling happens on timer interrupts. The basic rule is that only one interrupt can be open at a time, so if you go to sleep in the "got data from device X" interrupt, the timer interrupt cannot run to schedule it out.
Interrupts also happen many times and overlap. If you put the "got data" interrupt to sleep, and then get more data, what happens? It's confusing (and fragile) enough that the catch-all rule is: no sleeping in interrupts. You will do it wrong.
Disallowing an interrupt handler to block is a design choice. When some data arrives at the device, the interrupt handler intercepts the current process, prepares the transfer of the data, and re-enables the interrupt; until the handler re-enables the interrupt, the device is stalled. If we want to keep our I/O busy and our system responsive, we had better not block the interrupt handler.
I don't think the "unstable states" are an essential reason. Processes, whether in user mode or kernel mode, should be aware that they may be interrupted. If some kernel-mode data structure will be accessed by both an interrupt handler and the current process, and a race condition exists, then the current process should disable local interrupts; moreover, on multiprocessor architectures, spinlocks should be used during the critical sections.
I also don't think that if the interrupt handler were blocked, it could not be woken up. When we say "block", it basically means that the blocked process is waiting for some event/resource, so it links itself into some wait queue for that event/resource. Whenever the resource is released, the releasing process is responsible for waking up the waiting process(es).
However, the really annoying thing is that the blocked process can do nothing during the blocking time; it did nothing wrong to deserve this punishment, which is unfair. And nobody can surely predict the blocking time, so the innocent process would have to wait for an unclear reason and for an unbounded time.
Even if you could put an ISR to sleep, you wouldn't want to do it. You want your ISRs to be as fast as possible to reduce the risk of missing subsequent interrupts.
The Linux kernel has two ways to allocate an interrupt stack. One is on the kernel stack of the interrupted process; the other is a dedicated per-CPU interrupt stack. If the interrupt context is saved on the dedicated per-CPU interrupt stack, then the interrupt context is indeed not associated with any process. The "current" macro will produce an invalid pointer to the current running process, since on some architectures "current" is computed from the stack pointer, and the stack pointer in interrupt context may point to the dedicated interrupt stack, not to the kernel stack of some process.
By nature, the question is whether in an interrupt handler you can get a valid "current" (a pointer to the current process's task_struct). If yes, it would be possible to modify its contents accordingly to put it into the "sleep" state, from which the scheduler could later bring it back if the state were changed somehow. The answer may be hardware-dependent.
But on ARM it's impossible, since "current" is unrelated to the process when in interrupt mode. See the code below:
/* linux/arch/arm/include/asm/thread_info.h */
static inline struct thread_info *current_thread_info(void)
{
    register unsigned long sp asm ("sp");
    return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
sp in USER mode and SVC mode are "associated" with each other, though not equal: user mode's sp points to the user-space stack, while SVC mode's sp (r13_svc) points to the kernel stack, where the process's thread_info was set up at the previous task switch. When a system call occurs, the process enters kernel space with sp (sp_svc) unchanged, so under SVC mode kernel code can compute a valid "current". But under other privileged modes, such as interrupt mode, sp is different: it points to a dedicated stack defined in cpu_init(). The "current" computed under these modes is unrelated to the interrupted process, and accessing it will result in unexpected behavior. That's why it is always said that a system call can sleep but an interrupt handler can't: a system call works in process context, but an interrupt does not.
High-level interrupt handlers mask the operations of all lower-priority interrupts, including those of the system timer interrupt. Consequently, the interrupt handler must avoid involving itself in an activity that might cause it to sleep. If the handler sleeps, then the system may hang because the timer is masked and incapable of scheduling the sleeping thread.
Does this make sense?
If a higher-level interrupt routine gets to the point where the next thing it must do has to happen after a period of time, then it needs to put a request into the timer queue, asking that another interrupt routine be run (at lower priority level) some time later.
When that interrupt routine runs, it would then raise priority level back to the level of the original interrupt routine, and continue execution. This has the same effect as a sleep.
It is just a design/implementation choice in the Linux OS. The advantage of this design is simplicity, but it may not be good for real-time OS requirements.
Other OSes have other designs/implementations.
For example, in Solaris, interrupts can have different priorities, which allows most device interrupts to be handled in interrupt threads. Interrupt threads are allowed to sleep because each of them has a separate stack and its own thread context.
The interrupt-thread design is good for real-time threads, which should have higher priorities than interrupts.
