For Ethernet traffic, in the kernel stmmac driver, when an Ethernet frame is received, the function stmmac_rx is called:
static int stmmac_rx(struct stmmac_priv *priv, int limit)
I would like to access the content of each frame in the kernel. How can I access the frame content?
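To be concrete, this is roughly the kind of inspection I have in mind inside stmmac_rx() (sketch only: the exact local variable names differ between kernel versions, and "skb" is assumed to be the sk_buff that stmmac_rx() has just filled from the DMA buffer, before handing it to the stack):

#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/printk.h>

/* Sketch: dump the Ethernet header and the first bytes of the frame.
 * "skb" is assumed to be the freshly filled receive sk_buff. */
static void dump_frame(struct sk_buff *skb)
{
	struct ethhdr *eth = (struct ethhdr *)skb->data;	/* frame starts at skb->data */

	pr_info("dst %pM src %pM proto 0x%04x len %u\n",
		eth->h_dest, eth->h_source, ntohs(eth->h_proto), skb->len);

	/* hex dump of up to the first 64 bytes of the frame */
	print_hex_dump(KERN_INFO, "frame: ", DUMP_PREFIX_OFFSET, 16, 1,
		       skb->data, min_t(unsigned int, skb->len, 64), false);
}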
I have device memory mapped to a kernel virtual address via ioremap(). Userspace needs to access a page at offset x from this device memory.
The way I can achieve this right now is by using mmap in userspace and writing a small memory-mapping handler on the driver side.
Is there any way to use the offset (let's assume the kernel passes the offset to userspace) and achieve the same thing without implementing any mapping on the driver side?
Can ioremap()ed kernel virtual addresses be used here?
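For reference, a minimal sketch of the kind of driver-side mmap handler described above, assuming the device's physical base address and size are known to the driver (my_dev_phys and MY_DEV_SIZE are placeholders, not real symbols). Note that the mapping has to be built from the physical address/PFN; the ioremap()ed kernel virtual address itself cannot be handed to userspace:

#include <linux/fs.h>
#include <linux/mm.h>

/* Sketch: userspace calls mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
 * fd, x) on the device file; the driver maps the corresponding pages of the
 * device's physical region. my_dev_phys / MY_DEV_SIZE are placeholders. */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long off  = vma->vm_pgoff << PAGE_SHIFT;	/* offset x from userspace */

	if (off + size > MY_DEV_SIZE)
		return -EINVAL;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	/* map the device's physical pages, not the ioremap()ed virtual address */
	return io_remap_pfn_range(vma, vma->vm_start,
				  (my_dev_phys + off) >> PAGE_SHIFT,
				  size, vma->vm_page_prot);
}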
I want to access the registers of the PLIC interrupt controller from my driver.
In my driver I tried to map the address of the PLIC, which I took from the device tree, and then I access the registers with readl()/writel(). Is this the right way, knowing that the PLIC and my device share the same APB bus?
void __iomem *plic_regs = ioremap(plic_phys_add, size);
pr_info("PLIC_ENABLE_PER_HART: %#x\n", readl(plic_regs + PLIC_ENABLE_PER_HART));
When checking the Linux kernel USB driver code, I saw that the hub driver resets the port at the beginning of the port-init stage. The call stack is: hub_port_init -> hub_port_reset -> set_port_feature, and set_port_feature submits a URB to send the Reset request.
But I want to know: what is the transfer speed of this URB message? Is it the same as the controller's speed? For example, will xhci use 480 Mbps (High Speed) to transfer it?
If so, will a 1.5 Mbps (Low Speed) device still have the URB transferred at 480 Mbps?
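For reference, the USB core records each device's negotiated speed in udev->speed; a minimal sketch of inspecting it (udev is assumed to point to the usb_device in question, e.g. inside a driver's probe):

#include <linux/usb.h>

/* Sketch: print the speed the USB core negotiated for this device. */
static void show_speed(struct usb_device *udev)
{
	switch (udev->speed) {
	case USB_SPEED_LOW:
		dev_info(&udev->dev, "low speed (1.5 Mbps)\n");
		break;
	case USB_SPEED_FULL:
		dev_info(&udev->dev, "full speed (12 Mbps)\n");
		break;
	case USB_SPEED_HIGH:
		dev_info(&udev->dev, "high speed (480 Mbps)\n");
		break;
	default:
		dev_info(&udev->dev, "other speed (%d)\n", udev->speed);
		break;
	}
}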
Thanks!
I'm using the PCIe bus on a Freescale MPC8308 (as root complex) and the endpoint device is an ASIC with just one 256 MB memory region and just one BAR register. The device configuration space registers are readily accessible through the "pciutils" package. At first I tried to access the memory region by using mmap(), but it didn't work. So as the next step, I prepared a device driver for the PCIe endpoint device, a kernel module that I load into the kernel after Linux boots.
In my driver the endpoint device is identified from the device ID table, but when I try to enable the device with pci_enable_device(), I see this error:
driver-pci 0000:00:00.0: device not available because of BAR 0 [0x000000-0xfffffff] collisions
Also, when I try to claim the memory region for the PCIe device with pci_request_region(), it fails.
Here is the part of the driver code that is not working:
/* enable the device (wakes it up and makes its resources usable) */
pci_enable_result = pci_enable_device(pdev);
if (pci_enable_result) {
	printk(KERN_INFO "PCI enable encountered a problem\n");
	return pci_enable_result;
} else {
	printk(KERN_INFO "PCI enable was successful\n");
}
And here is the output in dmesg:
driver-pci 0000:00:00.0: device not available because of BAR 0 [0x000000-0xfffffff] collisions
PCI enable encountered a problem
driver-pci: probe of 0000:00:00.0 failed with error -22
It is worth noting that in the driver I can read and write configuration registers correctly by using functions like pci_read_config_dword() and pci_write_config_dword().
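For illustration, since config space access works, one quick sanity check is to read BAR0 back and print the resource the kernel tried to assign; a sketch, run inside probe() where pdev is the struct pci_dev (PCI_BASE_ADDRESS_0 and the %pR resource format specifier come with the core PCI headers):

u32 bar0;

/* Sketch: read the raw BAR0 config register and print the kernel's view of it. */
pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);
printk(KERN_INFO "BAR0 raw config value: 0x%08x\n", bar0);
printk(KERN_INFO "BAR0 resource: %pR\n", &pdev->resource[0]);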
What do you think the problem is? Is it possible that the problem appears because the kernel initializes the device before my kernel module loads? What should I do to prevent this from happening?
BAR register accesses are generally used for small regions. Your BAR0 size seems to be too large. Try with less memory (less than 1 MB); it should work.
I am testing a netlink filter application on a 1 Gbit/s network: I have a userspace function sending verdicts to a netlink socket, and another userspace routine that performs asynchronous reads of marked packets from the netlink socket and runs a custom filter function. For bitrates above 300 Mbps I see netlink socket read errors: "No buffer space available". I take this to be a netlink buffer overflow.
Can someone recommend an approach to improve netlink throughput on a high-speed network? My kernel version is 2.6.38.
There is a socket between the kernel and user space; the packets are uploaded to user space via that socket. The socket buffer is full, so you get the error.
In C you can set the socket buffer size and increase it (this works for netlink sockets too).
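For example, something along these lines (a sketch; nl_fd is assumed to be the already-open netlink socket descriptor; SO_RCVBUF is capped by the net.core.rmem_max sysctl, while SO_RCVBUFFORCE ignores the cap but needs CAP_NET_ADMIN, and both exist on 2.6.38):

#include <sys/socket.h>
#include <stdio.h>

/* Sketch: enlarge the receive buffer of an existing netlink socket. */
static int grow_netlink_rcvbuf(int nl_fd, int bytes)
{
	/* privileged variant first: not limited by net.core.rmem_max */
	if (setsockopt(nl_fd, SOL_SOCKET, SO_RCVBUFFORCE, &bytes, sizeof(bytes)) == 0)
		return 0;

	/* fall back to the capped variant if we lack CAP_NET_ADMIN */
	if (setsockopt(nl_fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == 0)
		return 0;

	perror("setsockopt");
	return -1;
}

/* e.g. grow_netlink_rcvbuf(nl_fd, 8 * 1024 * 1024); */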