Calling XSpi_Transfer from within gpio interrupt context - fpga

In a MicroBlaze environment on a Virtex-5:
I have a situation where I need to do an SPI transaction (XSpi_Transfer) to read from an external chip (an MCP2515) in response to an interrupt. The interrupt is triggered using the GPIO interrupt feature of the XGpio interface.
The problem I'm having is that XSpi_Transfer hangs when called from interrupt context. Any hints on whether using XSpi from an interrupt is possible or not? I presume the issue is that XSpi itself uses interrupts for its low-level FIFO handling, and those interrupts cannot be delivered while my GPIO handler is running.
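One common workaround (not from the original thread) is to defer the SPI transaction out of the ISR entirely: the GPIO handler only acknowledges the interrupt and sets a flag, and the main loop performs the transfer. A minimal sketch, assuming GPIO channel 1; the instance names and the MCP2515 register choice (READ opcode 0x03, CANINTF at 0x2C) are illustrative assumptions:

#include "xspi.h"
#include "xgpio.h"

static volatile int Mcp2515Pending = 0;  /* set in the ISR, consumed in the main loop */

/* GPIO ISR: keep it short. Just acknowledge the interrupt and set a flag;
 * do NOT start the SPI transfer here. */
static void GpioHandler(void *CallbackRef)
{
    XGpio *Gpio = (XGpio *)CallbackRef;

    XGpio_InterruptClear(Gpio, XGPIO_IR_CH1_MASK);
    Mcp2515Pending = 1;
}

/* Main loop: the transfer now runs outside interrupt context, so the XSpi
 * driver's own FIFO/interrupt handling can make progress. */
void MainLoop(XSpi *Spi)
{
    u8 SendBuf[3] = { 0x03, 0x2C, 0x00 };  /* READ opcode, CANINTF address, dummy byte */
    u8 RecvBuf[3];

    while (1) {
        if (Mcp2515Pending) {
            Mcp2515Pending = 0;
            XSpi_Transfer(Spi, SendBuf, RecvBuf, 3);
            /* RecvBuf[2] now holds the CANINTF contents. */
        }
    }
}

If the transfer really must stay inside the ISR, the XSpi instance would have to be operated in polled mode (the core's interrupt disabled), since the interrupt-driven XSpi_Transfer blocks waiting for SPI interrupts that cannot fire from inside another handler.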

Related

Why are MSI interrupts not shared?

Can anybody tell me why MSI interrupts are not shareable in Linux?
Pin-based interrupts can be shared by devices, but MSI interrupts are not: each device gets its own MSI IRQ number. Why can't MSI interrupts be shared?
The old INTx interrupts have two problematic properties:
Each INTx signal requires a separate signal line in hardware; and
the interrupt signal is independent of the other data signals and is sent asynchronously.
The consequences are that
multiple devices and drivers need to be able to share interrupts (the interrupt handler needs to check if its device actually raised the interrupt); and
when a driver receives an interrupt, it needs to do a read of some device register to ensure that any previous DMA writes made by the device are visible on the CPU.
Typically, both cases are handled by the driver reading its device's interrupt status register.
Message-Signaled Interrupts do not require a separate signal line but are sent as a message over the data bus. This means that the same hardware can support many more interrupts (so sharing is not necessary), and that the interrupt message is automatically synchronized with any DMA accesses. As a consequence, the interrupt handler does not need to do anything: the interrupt is guaranteed to come from its device, and DMA'd data is guaranteed to have already arrived.
If some drivers were written to share some MSI, the interrupt handler would again have to check whether the interrupt actually came from its own device, and there would be no advantage over INTx interrupts.
MSIs are not shared not because it would be impossible, but because it is not necessary.
Please note that sharing an MSI is actually possible: as seen in this excerpt from /proc/interrupts, the Advanced Error Reporting, Power Management Events, and hotplugging drivers share one interrupt:
64: 0 0 PCI-MSI-edge aerdrv, PCIe PME, pciehp
These drivers are actually attached to the same device, but they still behave similarly to INTx drivers, i.e., they register their interrupt with IRQF_SHARED, and the interrupt handlers check whether it was their own function that raised the interrupt. A sketch of that convention follows below.
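For illustration (not part of the original answer), the shared-IRQ convention in a Linux driver looks roughly like this; struct my_dev and my_device_raised_irq() are hypothetical stand-ins for the driver's data and its interrupt-status-register read:

#include <linux/interrupt.h>

struct my_dev;                                        /* hypothetical device struct */
static int my_device_raised_irq(struct my_dev *dev);  /* hypothetical status-register read */

static irqreturn_t my_handler(int irq, void *dev_id)
{
    struct my_dev *dev = dev_id;

    /* Shared line: first check whether our device actually interrupted. */
    if (!my_device_raised_irq(dev))
        return IRQ_NONE;   /* not ours; let the other handlers on this line run */

    /* ... service the device ... */
    return IRQ_HANDLED;
}

/* Registration: IRQF_SHARED lets other drivers attach to the same line.
 * err = request_irq(irq, my_handler, IRQF_SHARED, "mydev", dev);
 */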
Interrupt sharing is a hack due to resource constraints, like not having enough physical IRQ lines for each device that wants attention. If interrupts are represented by messages that have a large ID space, why would you do that?
"That" meaning: giving them the same identity so that devices then have to be probed to figure out which of the ones clashing to the same ID actually interrupted.
In fact, we would sometimes like to have multiple interrupts for one device. For instance, it's useful if the interrupt ID tells us not only which device interrupted by also why: like is it due to the arrival of input, or the draining of an output buffer? If interrupt lines are "cheap" because they are just software ID's with lots of bits, we can have that.
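As a sketch of that "one vector per cause" idea (the handler names and vector count are illustrative, not from the answer), a Linux PCI driver can request separate MSI vectors like this:

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t rx_handler(int irq, void *dev_id)
{
    /* input arrived */
    return IRQ_HANDLED;
}

static irqreturn_t tx_handler(int irq, void *dev_id)
{
    /* output buffer drained */
    return IRQ_HANDLED;
}

static int setup_msi(struct pci_dev *pdev)
{
    int nvec, err;

    /* Ask for two MSI/MSI-X vectors: the ID space is large enough that
     * each interrupt cause gets its own vector, so no sharing and no
     * status-register probing is needed to tell RX from TX. */
    nvec = pci_alloc_irq_vectors(pdev, 2, 2, PCI_IRQ_MSI | PCI_IRQ_MSIX);
    if (nvec < 0)
        return nvec;

    err = request_irq(pci_irq_vector(pdev, 0), rx_handler, 0, "mydev-rx", pdev);
    if (err)
        return err;
    return request_irq(pci_irq_vector(pdev, 1), tx_handler, 0, "mydev-tx", pdev);
}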

How do I2C read and write operations in the handler function of request_threaded_irq affect the driver as a whole?

I have driver code with a handler function and a thread function for request_threaded_irq, similar to this:

irq_handler_fn()
{
    /* disable device interrupt */
    i2c read from register;
    set disable bit for client device interrupt;
    i2c write back;
    return IRQ_WAKE_THREAD;
}

irq_thread_fn()
{
    i2c read from register;
    ....
    ....
    /* enable device interrupt */
    i2c read from register;
    set enable bit for client device interrupt;
    i2c write back;
    /* rest of the operation */
    ..........
    ..........
    return IRQ_HANDLED;
}
I have a few questions with respect to the above implementation:
1. Will the two I2C operations in the handler function take a considerable amount of time?
2. Do I need to make the bit manipulation in the handler function atomic?
3. Should I move the operations performed up to "enable device interrupt" from the thread function into the handler function (this would cost exactly four more I2C operations and one more bit manipulation)? The reason being that, with the implementation above, there is a chance I could miss an interrupt.
4. If I do so (as per question 3), how does it affect the other device interrupts? (I have a basic doubt as to whether the handler function, in hard-IRQ context, runs with interrupts disabled.)
Please suggest a good, optimal solution for the above scenario.
Thanks in advance.
I2C read/write transfers are NOT deterministic.
The protocol allows peripheral (slave) ICs to perform clock stretching, thereby allowing them to "hold" the master until they are ready.
This is NOT a common scenario, though, so each I2C transfer usually completes within a predictable interval. But it is NOT guaranteed, and hence it is NOT a good idea to perform several I2C transfers within an ISR.
This link contains a nice explanation about the fundamentals of threaded irqs and their proper usage.
Optimal design for the above scenario?
Using the threaded-interrupt-handler approach as written will not gain you much, as attempting to enable/disable the interrupts on the device adds to the latency.
In your current scenario (a single interrupt from a single device), you can stick with the regular request_irq() to register an interrupt service routine (ISR).
ISR code:
1. In the ISR, call disable_irq_nosync() to prevent further interrupts (the _nosync variant, because plain disable_irq() would wait for the currently running handler, i.e. itself).
2. Schedule a bottom-half handler function (a workqueue is a good choice).
3. Return IRQ_HANDLED from the ISR.
Bottom-half handler code:
4. Perform the I2C transfers.
5. Call enable_irq() and exit. (A sketch of this design follows below.)
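A minimal sketch of this ISR-plus-workqueue design (the struct layout and the register address are illustrative assumptions; the pattern itself is the one described above):

#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/i2c.h>

struct mydev {
    struct i2c_client *client;
    struct work_struct irq_work;
    int irq;
};

/* Hard-IRQ handler: no I2C traffic here; just mask the line and defer. */
static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    struct mydev *dev = dev_id;

    disable_irq_nosync(dev->irq);   /* step 1: prevent further interrupts */
    schedule_work(&dev->irq_work);  /* step 2: defer to process context */
    return IRQ_HANDLED;             /* step 3 */
}

/* Bottom half: runs in process context and may sleep on the I2C bus. */
static void mydev_work(struct work_struct *work)
{
    struct mydev *dev = container_of(work, struct mydev, irq_work);

    i2c_smbus_read_byte_data(dev->client, 0x00);  /* step 4: I2C transfers (illustrative register) */
    /* ... service the device, re-enable its interrupt bit ... */
    enable_irq(dev->irq);                         /* step 5 */
}

The work item would be set up once at probe time with INIT_WORK(&dev->irq_work, mydev_work), before request_irq() is called.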
NOTE:
Another way to implement the same design is to use a threaded irq without an ISR. This achieves the same result as the above design and eliminates the need to define/initialise/clean up the bottom-half handler separately in your code.
In this approach, one puts all the I2C read/write code within the IRQ thread function and passes it to request_threaded_irq() along with handler = NULL, i.e. an empty ISR. (A sketch follows below.)
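A hedged sketch of that variant (the device register and names are illustrative); note that IRQF_ONESHOT is required when handler is NULL, so the line stays masked until the thread function returns:

#include <linux/interrupt.h>
#include <linux/i2c.h>

/* All I2C traffic lives in the thread function, which runs in process
 * context and may sleep. */
static irqreturn_t mydev_irq_thread(int irq, void *dev_id)
{
    struct i2c_client *client = dev_id;
    int status;

    status = i2c_smbus_read_byte_data(client, 0x00);  /* illustrative register */
    if (status >= 0) {
        /* ... act on status, re-enable the device-side interrupt ... */
    }
    return IRQ_HANDLED;
}

static int mydev_setup_irq(struct i2c_client *client)
{
    /* handler == NULL: the core installs a default hard-IRQ handler.
     * IRQF_ONESHOT keeps the interrupt masked until the thread finishes. */
    return request_threaded_irq(client->irq, NULL, mydev_irq_thread,
                                IRQF_ONESHOT, "mydev", client);
}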

Should my interrupt handler disable interrupts or does the ARM processor do it automatically?

Our group is using a custom driver to interface four MAX3107 UARTs on a shared I2C bus. The interrupts of the four MAX3107s are connected (i.e. a shared interrupt via logical OR-ing) to a GPIO pin on the ARM9 processor (an LPC3180 module). When one or more of these devices interrupt, they pull the GPIO line, which is configured as a level-sensitive interrupt, low. My question concerns whether or not we need to disable the specific interrupt line in the handler code. (I should add that we are running Linux 2.6.10.)
Based on my reading of several ARM-specific app notes on interrupts, it seems that when the ARM processor receives an interrupt, it automatically disables (masks?) the corresponding interrupt line (in our case, the line corresponding to the GPIO pin we selected). If this is true, then it seems we should not have to disable interrupts for this GPIO pin in our interrupt handler code, as doing so would be redundant (though it seems to work okay). Stated differently, if the ARM processor automatically disables the GPIO interrupt when an interrupt occurs, then at most our interrupt handler code should only have to re-enable the interrupt once the device is serviced.
The interrupt handler code that we are using calls disable_irq_nosync(irqno) at the very beginning of the handler and a corresponding enable_irq() at the end. If the ARM processor has already disabled the interrupt line in hardware, what is the effect of these calls (i.e. a call to disable_irq_nosync() followed by a call to enable_irq())?
From the Arm Information Center Documentation:
On entry to an exception (interrupt):
interrupt requests (IRQs) are disabled for all exceptions
fast interrupt requests (FIQs) are disabled for FIQ and Reset exceptions.
It then goes on to say:
Handling an FIQ causes IRQs and subsequent FIQs to be disabled,
preventing them from being handled until after the FIQ handler enables
them. This is usually done by restoring the CPSR from the SPSR at the
end of the handler.
So you do not have to worry about disabling them, but you do have to worry about re-enabling them.
You will need to include enable_irq() at the end of your routine, but you shouldn't need to disable anything at the beginning. I wouldn't expect calling disable_irq_nosync(irqno) in software, after the hardware has already masked the line, to affect anything, since the hardware masking definitely takes effect before the software call has a chance to run. But it's probably better to remove it from the code, to follow convention and not confuse the next programmer who looks at it.
More info here:
Arm Information Center

Can an interrupt handler be preempted by the same interrupt handler?

Does the CPU disable all interrupts on the local CPU before calling the interrupt handler?
Or does it only disable the particular interrupt line being served?
x86 disables all local interrupts (except NMI, of course) before jumping to the interrupt vector. Linux normally masks the specific interrupt and re-enables the rest of the interrupts (which aren't masked), unless a specific flag is passed to the interrupt handler registration.
Note that while this means your interrupt handler will not race with itself on the same CPU, it can and will race with itself running on other CPUs in an SMP / SMT system.
Normally (at least in x86), an interrupt disables interrupts.
When an interrupt is received, the hardware does these things:
1. Save all registers in a predetermined place.
2. Set the instruction pointer (AKA program counter) to the interrupt handler's address.
3. Set the register that controls interrupts to a value that disables all (or most) interrupts. This prevents another interrupt from interrupting this one.
An exception is the NMI (non-maskable interrupt), which can't be disabled.
I'd like to also add what I think might be relevant.
In many real-world drivers and kernel code, "bottom-half" (BH) handlers are used pretty often: tasklets and softirqs. These BHs run in interrupt context and can run in parallel with their top-half (TH) handlers on an SMP/SMT system (especially softirqs).
Of course, there has recently been a move towards mainline (mainly code migrated from the PREEMPT_RT project) that essentially gets rid of the BH mechanism: all interrupt handlers run with all interrupts disabled. Not only that, handlers can be converted to kernel threads: these are the so-called "threaded" interrupt handlers.
As of today, the choice is still left to the developer: you can use the 'traditional' TH/BH style or the threaded style.
Ref and Details:
http://lwn.net/Articles/380931/
http://lwn.net/Articles/302043/
Quoting Intel's own, surprisingly well-written "Intel® 64 and IA-32 Architectures Software Developer's Manual", Volume 1, page 6-10:
If an interrupt or exception handler is called through an interrupt gate, the processor clears the interrupt enable (IF) flag in the EFLAGS register to prevent subsequent interrupts from interfering with the execution of the handler. When a handler is called through a trap gate, the state of the IF flag is not changed.
So, just to be clear: yes, effectively the CPU "disables" all interrupts before calling the interrupt handler. Properly described, the processor simply sets a flag that makes it ignore all maskable interrupt requests. The exceptions are non-maskable interrupts and possibly the processor's own software exceptions (please someone correct me on this, not verified).
We want the ISR to be atomic, and no one should be able to preempt the ISR.
Therefore, an ISR disables local interrupts (i.e. interrupts on the current processor), and once the ISR calls the ret_from_intr() function (i.e. we have finished the ISR), interrupts are again enabled on the current processor.
If an interrupt occurs in the meantime, it will be served by another processor (in an SMP system), and the ISR related to that interrupt will start running there.
In an SMP system, we therefore also need to include a proper synchronization mechanism (a spinlock) in the ISR, as sketched below.
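To make that last point concrete, a minimal sketch (the names are illustrative) of protecting data that an ISR shares with process context on SMP:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(stats_lock);
static unsigned long event_count;

/* ISR: local interrupts are disabled here, but another CPU can touch
 * event_count concurrently, so the spinlock is still required. */
static irqreturn_t mydev_isr(int irq, void *dev_id)
{
    spin_lock(&stats_lock);
    event_count++;
    spin_unlock(&stats_lock);
    return IRQ_HANDLED;
}

/* Process context: must also disable local interrupts while holding the
 * lock, otherwise the ISR could deadlock against us on this CPU. */
unsigned long read_event_count(void)
{
    unsigned long flags, count;

    spin_lock_irqsave(&stats_lock, flags);
    count = event_count;
    spin_unlock_irqrestore(&stats_lock, flags);
    return count;
}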

Trap Dispatching on Windows

I am currently reading Windows Internals, 5th edition, and I am enjoying it, although it isn't an easy book to read and understand.
I am confused about IRQLs and the IDT.
I read that Windows implements custom prioritization levels with IRQLs, and that the Plug and Play Manager maps each device's IRQ to an IRQL.
Alright, so IRQLs are used for software and hardware interrupts, while exceptions are handled by the exception dispatcher.
When a device generates an interrupt, the interrupt controller passes this information to the CPU along with the IRQ.
So Windows takes this IRQ and translates it to an IRQL to schedule when to execute the routine (the routine that IDT[IRQ_VALUE] points to)?
Is that what is happening?
Yes, on a very high level.
Everything starts with a kernel trap. The kernel trap handler handles interrupts, exceptions, system service calls, and the virtual-memory pager.
When an interrupt happens (line-based, using a dedicated pin, or message-based, writing to an address), Windows uses the IRQL to determine the priority of the interrupt and to decide whether it can be served at that time. The HAL does the job of translating the IRQ to an IRQL (a small IRQL sketch follows below).
It then uses the IRQ as an index into the IDT to find the appropriate ISR routine to invoke. Note that there can be multiple ISRs associated with a given IRQ; all of them execute in order.
Each processor has its own IDT, so you could potentially have multiple ISRs running at the same time.
Exception dispatch, as I mentioned before, is also handled by the kernel trap handler, but the procedure is different. It usually starts by checking for any exception handlers by stack unwinding, then checking for a debugger port, etc.
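To make the IRQL idea a bit more concrete, here is a minimal hedged sketch using the documented kernel routines (kernel-mode driver context assumed; this is not from the book's text):

#include <wdm.h>

static void DoCriticalWork(void)
{
    KIRQL oldIrql;

    /* Raising to DISPATCH_LEVEL masks all interrupts at or below that
     * IRQL on this CPU; device interrupts (DIRQLs) sit higher and can
     * still preempt this code. */
    KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);

    /* ... work that must not be preempted by DPCs or the scheduler ... */

    KeLowerIrql(oldIrql);
}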
