Can't access /proc/interrupts after free_irq - linux-kernel

I am writing a kernel v4l2 driver for an ov7670 CMOS sensor attached to GPIO pins on a Raspberry Pi. I set up three IRQ lines (Pixel, Href, and Vsync).
Here is how I am requesting them:
ret = request_irq(PX_IRQ,
                  ov7670rpi_pixel_interrupt,
                  irq_flags,
                  "ov7670rpi_px",
                  ov7670rpi_pixel_interrupt);
ret = request_irq(HREF_IRQ,
                  ov7670rpi_href_interrupt,
                  irq_flags,
                  "ov7670rpi_href",
                  ov7670rpi_href_interrupt);
ret = request_irq(VSYNC_IRQ,
                  ov7670rpi_vsync_interrupt,
                  irq_flags,
                  "ov7670rpi_vsync",
                  ov7670rpi_vsync_interrupt);
Now that goes fine:
# cat /proc/interrupts
           CPU0
  3:       4168  ARMCTRL  BCM2708 Timer Tick
  9:          0  ARMCTRL  ov7670rpi_px
 10:          0  ARMCTRL  ov7670rpi_href
 11:          0  ARMCTRL  ov7670rpi_vsync
 32:      68523  ARMCTRL  dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1
 52:          0  ARMCTRL  BCM2708 GPIO catchall handler
 65:        543  ARMCTRL  ARM Mailbox IRQ
 66:          2  ARMCTRL  VCHIQ doorbell
 75:          1  ARMCTRL
 77:       3439  ARMCTRL  bcm2708_sdhci (dma)
 79:          0  ARMCTRL  bcm2708_i2c.0, bcm2708_i2c.1
 80:          0  ARMCTRL  bcm2708_spi.0
 83:         21  ARMCTRL  uart-pl011
 84:       7436  ARMCTRL  mmc0
FIQ:             usb_fiq
Err:          0
Looks good.
This is how I disable the IRQs:
/* Disable Interrupts */
free_irq(PX_IRQ, ov7670rpi_pixel_interrupt);
free_irq(HREF_IRQ, ov7670rpi_href_interrupt);
free_irq(VSYNC_IRQ, ov7670rpi_vsync_interrupt);
I have also tried:
/* Disable Interrupts */
free_irq(PX_IRQ, NULL);
free_irq(HREF_IRQ, NULL);
free_irq(VSYNC_IRQ, NULL);
Both ways result in /proc/interrupts becoming inaccessible once the module is unloaded. When I try to cat /proc/interrupts, the system locks up.

The correct way to do this is to use gpio_request() with your GPIO numbers.
#define PX_GPIO 9
#define HREF_GPIO 10
#define VSYNC_GPIO 11
gpio_request(PX_GPIO, "v4l_rpi_px");
gpio_direction_input(PX_GPIO);
gpio_request(HREF_GPIO, "v4l_rpi_href");
gpio_direction_input(HREF_GPIO);
gpio_request(VSYNC_GPIO, "v4l_rpi_vsync");
gpio_direction_input(VSYNC_GPIO);
/* Now, gpio_to_irq() can be used. */
ret = request_irq(gpio_to_irq(PX_GPIO),
                  ov7670rpi_pixel_interrupt,
                  irq_flags,
                  "ov7670rpi_px",
                  ov7670rpi_pixel_interrupt);
/* etc. */
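On teardown, the matching calls would look like this (a sketch; note the dev_id cookie passed to free_irq() must be the same pointer that was given to request_irq(), here the handler pointer):

/* Sketch of the matching teardown. */
free_irq(gpio_to_irq(PX_GPIO), ov7670rpi_pixel_interrupt);
gpio_free(PX_GPIO);
/* ... likewise for HREF_GPIO and VSYNC_GPIO. */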
bcm2708_gpio.c provides a GPIO interrupt controller as detailed in this question. See also the GPIO documentation.
Your primary interrupt controller, ARMCTRL, lists interrupt 52 as the BCM2708 GPIO catchall handler. This IRQ is chained and supports an interrupt on each GPIO line. The controller for the GPIOs will be listed separately in /proc/interrupts as GPIO.

Related

Mach-O - LINKEDIT section endianness

I'm a little confused about the __LINKEDIT section.
Let me set the background:
What I understand about __LINKEDIT
In theory (http://www.newosxbook.com/articles/DYLD.html) the first "section" will be LC_DYLD_INFO.
If I check mach-o/loader.h I get:
#define LC_DYLD_INFO 0x22
...
struct dyld_info_command {
uint32_t cmd; /* LC_DYLD_INFO or LC_DYLD_INFO_ONLY */
uint32_t cmdsize; /* sizeof(struct dyld_info_command) */
...
If I check a mach-o file with otool I get:
$ otool -l MyBinary | grep -B3 -A8 LINKEDIT
Load command 3
cmd LC_SEGMENT_64
cmdsize 72
segname __LINKEDIT
vmaddr 0x0000000100038000
vmsize 0x0000000000040000
fileoff 229376
filesize 254720
maxprot 0x00000001
initprot 0x00000001
nsects 0
flags 0x0
If I check the hex using xxd:
$ xxd -s 229376 -l 4 MyBinary
00038000: 1122 9002 ."..
I know that the endianness of my binary is little:
$ rabin2 -I MyBinary
arch arm
baddr 0x100000000
binsz 484096
bintype mach0
bits 64
canary false
class MACH064
crypto false
endian little
havecode true
intrp /usr/lib/dyld
laddr 0x0
lang swift
linenum false
lsyms false
machine all
maxopsz 16
minopsz 1
nx false
os darwin
pcalign 0
pic true
relocs false
sanitiz false
static false
stripped true
subsys darwin
va true
I can corroborate that the first section in __LINKEDIT is LC_DYLD_INFO by getting its offset from otool:
$ otool -l MyBinary | grep -B1 -A11 LC_DYLD_INFO
Load command 4
cmd LC_DYLD_INFO_ONLY
cmdsize 48
rebase_off 229376
rebase_size 976
bind_off 230352
bind_size 3616
weak_bind_off 0
weak_bind_size 0
lazy_bind_off 233968
lazy_bind_size 6568
export_off 240536
export_size 9744
If we compare the offset of __LINKEDIT with the rebase_off from LC_DYLD_INFO, we get the same value: 229376.
Everything is fine so far; it kind of makes sense.
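(As an aside, here is a small stand-alone C sketch of the same comparison done programmatically with the mach-o/loader.h definitions quoted above; it assumes a thin, 64-bit, little-endian file that fits in memory:)

#include <mach-o/loader.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Walk the load commands of a thin 64-bit Mach-O file and print
 * __LINKEDIT's fileoff next to LC_DYLD_INFO's rebase_off. */
int main(int argc, char **argv)
{
    if (argc != 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    uint8_t *image = malloc(size);
    if (!image || fread(image, 1, size, f) != (size_t)size) return 1;
    fclose(f);

    const struct mach_header_64 *mh = (const void *)image;
    if (mh->magic != MH_MAGIC_64) return 1;   /* not a thin 64-bit image */

    const uint8_t *p = image + sizeof *mh;
    for (uint32_t i = 0; i < mh->ncmds; i++) {
        const struct load_command *lc = (const void *)p;
        if (lc->cmd == LC_SEGMENT_64) {
            const struct segment_command_64 *seg = (const void *)p;
            if (strcmp(seg->segname, "__LINKEDIT") == 0)
                printf("__LINKEDIT fileoff:      %llu\n",
                       (unsigned long long)seg->fileoff);
        } else if (lc->cmd == LC_DYLD_INFO || lc->cmd == LC_DYLD_INFO_ONLY) {
            const struct dyld_info_command *di = (const void *)p;
            printf("LC_DYLD_INFO rebase_off: %u\n", di->rebase_off);
        }
        p += lc->cmdsize;
    }
    free(image);
    return 0;
}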
My confusion
Now, when I'm in lldb, I want to make sense of the memory.
First I look up where the section is loaded:
(lldb) image dump sections MyBinary
...
0x00000400 container [0x0000000100ca0000-0x0000000100ce0000) r-- 0x00038000 0x0003e300 0x00000000 MyBinary.__LINKEDIT
Ok, let's read that memory:
(lldb) x/x 0x00000100ca0000
0x100ca0000: 0x02902211
So this is my problem:
0x02902211
Let's assume I don't know whether it's little or big endian. I should find 0x22 at the beginning or at the end of the bytes, but it's in the middle? (This confuses me.)
The 0x11, I guess, is the size 17 (in decimal), which might correspond to what I can see from the structure in loader.h (12 bytes + 5 bytes of padding?):
struct dyld_info_command {
uint32_t cmd; /* LC_DYLD_INFO or LC_DYLD_INFO_ONLY */
uint32_t cmdsize; /* sizeof(struct dyld_info_command) */
uint32_t rebase_off; /* file offset to rebase info */
uint32_t rebase_size; /* size of rebase info */
uint32_t bind_off; /* file offset to binding info */
uint32_t bind_size; /* size of binding info */
uint32_t weak_bind_off; /* file offset to weak binding info */
uint32_t weak_bind_size; /* size of weak binding info */
uint32_t lazy_bind_off; /* file offset to lazy binding info */
uint32_t lazy_bind_size; /* size of lazy binding infs */
uint32_t export_off; /* file offset to lazy binding info */
uint32_t export_size; /* size of lazy binding infs */
};
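(A minimal C sketch of the byte-order mechanics alone: how a little-endian 32-bit read of the on-disk bytes 11 22 90 02 yields the 0x02902211 that lldb prints.)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The four bytes exactly as xxd shows them in the file. */
    uint8_t raw[4] = {0x11, 0x22, 0x90, 0x02};
    uint32_t word;
    memcpy(&word, raw, sizeof word);
    /* On a little-endian host (x86_64, arm64) the lowest-addressed
     * byte (0x11) becomes the least-significant byte of the word,
     * so this prints 0x02902211 -- the value lldb's x/x displays. */
    printf("0x%08" PRIx32 "\n", word);
    return 0;
}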
My questions
1.) Why is the 0x22 not at the end (or beginning)? Or am I reading the offset incorrectly?
2.) otool says that the command size is 48 (that's 0x30 in hex), but I can't find it in the bytes next to 0x22. Where do I get the size from?
Thanks for taking the time to read all the way here, and thanks for any help.

Mapping device tree interrupt flags to devm_request_irq

I am currently writing a Linux device driver for use on PowerPC.
The device tree entry is as follows:
// PPS Interrupt client
pps_hwirq {
	compatible = "pps-hwirq";
	interrupts = <17 0x02>;	// IPIC 17 = IRQ1, 0x02 = falling edge
	interrupt-parent = <&ipic>;
};
The 0x02 flag is quite important - the PPS is aligned with the falling edge, but this is not universal on GPS receivers and therefore should be configurable.
In the probe() function of the driver, obtaining the IRQ number is straightforward:
hwirq = irq_of_parse_and_map(np, 0);
if (hwirq == NO_IRQ) {
	dev_err(&pdev->dev, "No interrupt found in the device tree\n");
	return -EINVAL;
}
But how does one map the IRQ flags from the device tree to the driver?
/* ****TODO****: Get the interrupt flags from the device tree.
 * For now, hard code to suit my problem, but since this differs
 * by GPS receiver, it should be configurable.
 */
flags = IRQF_TRIGGER_FALLING;

/* register IRQ interrupt handler */
ret = devm_request_irq(&pdev->dev, data->irq, pps_hwint_irq_handler,
                       flags, data->info.name, data);
Unfortunately, there are few - if any - examples in the tree that actually do this job; most leave the flags as 0 (leave as-is). Here's a snippet of the grep results for devm_request_irq, noting the values of the flags:
./drivers/crypto/mxs-dcp.c: ret = devm_request_irq(dev, dcp_vmi_irq, mxs_dcp_irq, 0,
./drivers/crypto/mxs-dcp.c: ret = devm_request_irq(dev, dcp_irq, mxs_dcp_irq, 0,
./drivers/crypto/omap-sham.c: err = devm_request_irq(dev, dd->irq, dd->pdata->intr_hdlr,
./drivers/crypto/omap-aes.c: err = devm_request_irq(dev, irq, omap_aes_irq, 0,
./drivers/crypto/picoxcell_crypto.c: if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0,
Or they hard-code it to what the hardware actually asserts:
./drivers/crypto/tegra-aes.c: err = devm_request_irq(dev, dd->irq, aes_irq, IRQF_TRIGGER_HIGH |
So how does one cleanly carry this property from the device tree into the actual driver?
Below I'll show how to obtain the IRQ number and IRQ flags from the Device Tree in some common cases:
in I2C drivers
in platform drivers
manually
In I2C drivers
In short
If you're writing an I2C driver, you don't need to read IRQ parameters from DT manually. You can rely on I2C core to populate IRQ parameters for you:
in your probe() function, client->irq will contain the IRQ number
devm_request_irq() will use IRQ flags from DT automatically (just don't pass any IRQ trigger flags to that function).
Details
Let's look at the i2c_device_probe() function (it's where your driver's probe() function is being called from):
static int i2c_device_probe(struct device *dev)
{
	...
	if (dev->of_node) {
		...
		irq = of_irq_get(dev->of_node, 0);
	}
	...
	client->irq = irq;
	...
	status = driver->probe(client, i2c_match_id(driver->id_table, client));
}
So, client->irq will already contain the IRQ number in your driver's probe function.
As for the IRQ flags: of_irq_get() (in the code above) eventually calls irqd_set_trigger_type(), which internally stores the IRQ flags (read from the device tree) for your interrupt number. So, when you call devm_request_irq(), it eventually ends up in __setup_irq(), which does the following:
/*
* If the trigger type is not specified by the caller,
* then use the default for this interrupt.
*/
if (!(new->flags & IRQF_TRIGGER_MASK))
	new->flags |= irqd_get_trigger_type(&desc->irq_data);
where:
new->flags contains flags you provided to devm_request_irq()
irqd_get_trigger_type() returns flags obtained from DT
In other words, if you don't pass IRQ flags to devm_request_irq() (e.g. pass 0), it will use IRQ flags obtained from device tree.
See also this question for details.
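A minimal sketch of an I2C probe relying on this behaviour (the foo_* names are hypothetical, not from any real driver):

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int foo_probe(struct i2c_client *client,
		     const struct i2c_device_id *id)
{
	/* client->irq was filled in by the I2C core from the DT node.
	 * flags = 0: the trigger type recorded from the device tree
	 * is applied by __setup_irq(). */
	return devm_request_irq(&client->dev, client->irq, foo_irq_handler,
				0, client->name, client);
}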
In platform drivers
You can use platform_get_irq() to obtain IRQ number. It also stores (internally) IRQ flags obtained from DT, so if you pass flags=0 to devm_request_irq(), flags from DT will be used.
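For instance, a sketch for the pps-hwirq device from the question, assuming it is bound as a platform driver:

static int pps_hwirq_probe(struct platform_device *pdev)
{
	int irq, ret;

	/* platform_get_irq() parses the "interrupts" property and
	 * records the DT trigger type for this IRQ internally. */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	/* Pass 0 for the flags so the recorded trigger type is used. */
	ret = devm_request_irq(&pdev->dev, irq, pps_hwint_irq_handler,
			       0, dev_name(&pdev->dev), pdev);
	if (ret)
		dev_err(&pdev->dev, "failed to request IRQ %d: %d\n", irq, ret);

	return ret;
}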
Manually
If your driver doesn't rely on kernel frameworks, you have to obtain IRQ values manually:
The IRQ number can be obtained (as you mentioned) via irq_of_parse_and_map(); this function not only returns the IRQ number, but also stores the IRQ flags for it (by eventually calling irqd_set_trigger_type()); the stored flags will be used automatically in devm_request_irq() if you don't pass a trigger type to it (e.g. you can pass flags=0)
The IRQ flags can be obtained via irq_get_trigger_type(), but only after executing irq_of_parse_and_map()
So you probably only need to run irq_of_parse_and_map() and let devm_request_irq() handle the flags for you (just make sure you don't pass trigger flags to it); see the sketch below.
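A minimal sketch of the manual path, reusing np, pdev, data, and the handler from the question:

/* np is the device_node for the pps_hwirq node, as in the question. */
unsigned int hwirq = irq_of_parse_and_map(np, 0);
if (!hwirq)
	return -EINVAL;

/* Valid only after irq_of_parse_and_map() has stored the type;
 * for the DT above this yields IRQ_TYPE_EDGE_FALLING. */
dev_info(&pdev->dev, "trigger type: 0x%x\n", irq_get_trigger_type(hwirq));

/* flags = 0 lets the stored DT trigger type take effect. */
ret = devm_request_irq(&pdev->dev, hwirq, pps_hwint_irq_handler,
		       0, "pps-hwirq", data);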

USB driver using WriteFile and ReadFile not working in Windows

Background: I am a Linux expert and have very little experience with Windows. I am working on a Windows driver for a USB device with two interfaces. My driver should open the second interface, shown below, and communicate with the interrupt OUT and IN endpoints.
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 1
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 0 No Subclass
bInterfaceProtocol 0 None
iInterface 3
HID Device Descriptor:
bLength 9
bDescriptorType 33
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 1
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 1
I am able to identify this interface using its string descriptor. I then use the following method to communicate.
I open a write handle and a read handle:
// Get a handle for writing Output reports.
WriteHandle=CreateFile(detailData->DevicePath,
GENERIC_WRITE, FILE_SHARE_READ|FILE_SHARE_WRITE,
(LPSECURITY_ATTRIBUTES)NULL,OPEN_EXISTING,0,NULL);
//Get a handle to the device for the overlapped ReadFiles.
ReadHandle=CreateFile(detailData->DevicePath,
GENERIC_READ,FILE_SHARE_READ|FILE_SHARE_WRITE,(LPSECURITY_ATTRIBUTES)NULL,
OPEN_EXISTING,FILE_FLAG_OVERLAPPED,NULL);
And I try to push data like this
if (WriteFile(WriteHandle, buf, n, &BytesWritten, NULL) == FALSE)
    printf("Error sending output report\n");
if (ReadFile(ReadHandle, bufI, n, &NumberOfBytesRead, (LPOVERLAPPED)&HIDOverlapped) == FALSE)
    printf("Error receiving input report\n");
Question: This is not working. The problem I see is that if the data in the buf array passed to WriteFile is all zeros, I do see the data transmitted. I am surprised, because if I initialise buf as
buf[0] = 0x80;
buf[1] = 0x00;
buf[2] = 0xDB;
or with any data other than zeros, there is no transmission and the error message is printed. Also note that n=65 and BytesWritten=0 when the function is invoked. I have confirmed the transmission of zeros using a hardware USB sniffer.
Could anybody point out where I have gone wrong? Is there a better method for communicating with interrupt endpoints in Windows than using file reads and writes?

Why is the Java 7 Bytecode Verifier complaining about this Stack Frame?

I have a method which I have altered in a Java 7 (major version 51) class. Using javap, I've looked at the bytecode and the Stack Frame Map. Everything looks fine:
public int addOne(int);
flags: ACC_PUBLIC
Code:
stack=2, locals=2, args_size=2
0: iload_1
1: iconst_0
2: invokestatic #50 // Method isSomething:(I)Z
5: ifeq 12
8: iconst_0
9: goto 13
12: iconst_1
13: iadd
14: ireturn
StackMapTable: number_of_entries = 2
frame_type = 255 /* full_frame */
offset_delta = 12
locals = [ class test/Target, int ]
stack = [ int ]
frame_type = 255 /* full_frame */
offset_delta = 0
locals = [ class test/Target, int ]
stack = [ int, int ]
The verifier throws this exception:
java.lang.VerifyError: Expecting a stackmap frame at branch target 12
Exception Details:
Location:
test/Target.addOne(I)I #5: ifeq
Reason:
Expected stackmap frame at this location.
Bytecode:
0000000: 1b03 b800 3299 0007 03a7 0004 0460 ac
What's driving me crazy is that I had the compiler generate the same code from Java source, and it looks like this:
public int addOne(int);
flags: ACC_PUBLIC
Code:
stack=2, locals=2, args_size=2
0: iload_1
1: iconst_0
2: invokestatic #16 // Method isSomething:(I)Z
5: ifeq 12
8: iconst_0
9: goto 13
12: iconst_1
13: iadd
14: ireturn
StackMapTable: number_of_entries = 2
frame_type = 76 /* same_locals_1_stack_item */
stack = [ int ]
frame_type = 255 /* full_frame */
offset_delta = 0
locals = [ class test/Target, int ]
stack = [ int, int ]
Notice that the only difference in the stack frame map is that my synthetic map uses all full frames -- but that shouldn't make a difference. Does anyone know why the verifier might not like my synthetic map?
I am unable to reproduce this problem. Perhaps you are creating the stack frames in a way that javap will still read but that isn't actually valid? My edited class has the same javap output, but it verifies just fine. If you post the actual class file, I can see if I can find the problem; I don't think there's anything further I can do with just the javap output.
Source:
public class ArrayTest {
    public int addOne(int x) {
        return x + (isSomething(0) ? 0 : 1);
    }

    public static boolean isSomething(int z) { return true; }
}
Javap output of method in original class
public int addOne(int);
flags: ACC_PUBLIC
Code:
stack=2, locals=2, args_size=2
0: iload_1
1: iconst_0
2: invokestatic #2 // Method isSomething:(I)Z
5: ifeq 12
8: iconst_0
9: goto 13
12: iconst_1
13: iadd
14: ireturn
StackMapTable: number_of_entries = 2
frame_type = 76 /* same_locals_1_stack_item */
stack = [ int ]
frame_type = 255 /* full_frame */
offset_delta = 0
locals = [ class ArrayTest, int ]
stack = [ int, int ]
Javap output of method in edited class
public int addOne(int);
flags: ACC_PUBLIC
Code:
stack=2, locals=2, args_size=2
0: iload_1
1: iconst_0
2: invokestatic #15 // Method ArrayTest.isSomething:(I)Z
5: ifeq 12
8: iconst_0
9: goto 13
12: iconst_1
13: iadd
14: ireturn
StackMapTable: number_of_entries = 2
frame_type = 255 /* full_frame */
offset_delta = 12
locals = [ class ArrayTest2, int ]
stack = [ int ]
frame_type = 255 /* full_frame */
offset_delta = 0
locals = [ class ArrayTest2, int ]
stack = [ int, int ]
As you can see, I have the same Javap output, but my class works just fine.
The answer is that javassist sucks and I deeply regret using it.
The StackMapTable attribute is obtained from a call to CodeAttribute.getAttribute(String tag). But even though this is how you access it, there is no API to add one back unless it is of type StackMapTable; the only API that accepts a vanilla AttributeInfo as a parameter is on the MethodInfo class.
In cases where the method did not need (or have) a stack frame map already, you get null. If you create an AttributeInfo structure for a new stack frame map, you shouldn't add it to the MethodInfo (where the addAttribute API is), but to the CodeAttribute, where it belongs.
This is what I was doing:
MethodInfo mi = ...;        // the method being edited
AttributeInfo attr = ...;   // the new StackMapTable attribute
mi.addAttribute(attr);      // wrong: attaches it to the method, not the code
This is what I needed to do:
CodeAttribute ca = mi.getCodeAttribute();
ca.getAttributes().add(attr);  // the StackMapTable belongs on the Code attribute
(Of course, ca.getAttributes() returns an untyped List, because we all miss 2004.)
I dug into the method that allows you to add a typed StackMapTable to the CodeAttribute and figured out this workaround for the lack of a generic API.
The result of using the first construct is that javap makes it appear that you have a proper StackMapTable. You do, but it's attached to the wrong object, and you can't see that from the javap output.
I didn't use ASM for my project because I found its obsessive use of the Visitor Pattern to be annoying. I now admit that this was a bad decision. Since javassist hasn't had an update since 2012, I'm wondering if the project is dead. I certainly have a truckload of revisions I'd push. It is a mess.
EDIT Oh wow. Javassist internal code assumes that any StackMapTable attribute is its own internal StackMapTable type (because how else would you add a StackMapTable attribute?). I guess I could create my own StackMapTable instance, except its constructors are package-protected for no apparent reason. It just gets worse and worse...
http://www.csg.ci.i.u-tokyo.ac.jp/~chiba/javassist/html/javassist/bytecode/MethodInfo.html
Now you can call:
mi.rebuildStackMapIf6(pool, cf);
It will rebuild the stack map, so you don't need -XX:-UseSplitVerifier.

Segment definitions for linux on x86

Linux 3.4.6 defines the following macros in arch/x86/include/asm/segment.h. Can anybody explain why the __USER macros add 3 to the defined constant and why this is not done for __KERNEL macros?
#define __KERNEL_CS (GDT_ENTRY_KERNEL_CS*8)
#define __KERNEL_DS (GDT_ENTRY_KERNEL_DS*8)
#define __USER_DS (GDT_ENTRY_DEFAULT_USER_DS*8+3)
#define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS*8+3)
These four symbols represent segment selectors. The two least-significant bits of a selector contain its requested privilege level, and the third least-significant bit contains the descriptor table type (GDT or LDT). This is made clearer by code occurring a little later:
/* User mode is privilege level 3 */
#define USER_RPL 0x3
/* LDT segment has TI set, GDT has it cleared */
#define SEGMENT_LDT 0x4
#define SEGMENT_GDT 0x0
/* Bottom two bits of selector give the ring privilege level */
#define SEGMENT_RPL_MASK 0x3
/* Bit 2 is table indicator (LDT/GDT) */
#define SEGMENT_TI_MASK 0x4
To build a selector, the descriptor table index is multiplied by 8 (shifting it three bits to the left) and then ORed with the table type and privilege level (the OR is written as addition):
/* GDT, ring 0 (kernel mode) */
#define __KERNEL_CS (GDT_ENTRY_KERNEL_CS*8)
/* GDT, ring 3 (user mode) */
#define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS*8+3)
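To make the selector layout concrete, here is a small stand-alone sketch that decomposes selectors built this way (the GDT indices below are illustrative placeholders, not the actual GDT_ENTRY_* values from segment.h):

#include <stdio.h>

#define SEGMENT_RPL_MASK 0x3  /* bits 0-1: requested privilege level */
#define SEGMENT_TI_MASK  0x4  /* bit 2: table indicator (0 = GDT, 1 = LDT) */

static void dump_selector(const char *name, unsigned int sel)
{
	printf("%-12s index=%-2u table=%s rpl=%u\n", name,
	       sel >> 3,                                /* descriptor table index */
	       (sel & SEGMENT_TI_MASK) ? "LDT" : "GDT",
	       sel & SEGMENT_RPL_MASK);                 /* ring 0..3 */
}

int main(void)
{
	/* Illustrative indices only; the real values live in
	 * arch/x86/include/asm/segment.h. */
	unsigned int gdt_entry_kernel_cs = 2;
	unsigned int gdt_entry_default_user_cs = 6;

	dump_selector("__KERNEL_CS", gdt_entry_kernel_cs * 8);          /* rpl=0 */
	dump_selector("__USER_CS", gdt_entry_default_user_cs * 8 + 3);  /* rpl=3 */
	return 0;
}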
