Offloaded XDP program to Netronome SmartNIC: unsupported function

I'm trying to offload a small eBPF program that uses a map to the NIC. I can look up elements in the hash map, but as soon as I add a call to bpf_map_update_elem I get an error when I attempt to load the program:
14: (85) call bpf_map_update_elem#2
[nfp] map_update: not supported by FW
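The relevant part of the program looks roughly like this (the map layout, names and counter logic are simplified for illustration, not my exact code):
// Simplified sketch, not the exact program: a hash map plus the
// update call that the nfp firmware rejects at load time.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 256);
        __type(key, __u32);
        __type(value, __u64);
} counters SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
        __u32 key = 0;
        __u64 one = 1;

        if (!bpf_map_lookup_elem(&counters, &key))
                /* adding this call is what makes the offloaded load fail */
                bpf_map_update_elem(&counters, &key, &one, BPF_ANY);

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";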
The driver I'm running:
$ ethtool -i $ETHNAME
driver: nfp
version: 5.15.0-27-generic
firmware-version: 0.0.3.5 0.31 bpf-2.0.6.124 ebpf
expansion-rom-version:
bus-info: 0000:06:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
According to https://www.netronome.com/media/documents/UG_Getting_Started_with_eBPF_Offload.pdf this function should be supported.
Has anybody found a solution?

The document you link states:
Since Kernel 4.17, map updates are supported by our driver. As of this writing, our public firmware does not contain map update support from the datapath, but this is available on request.
You should contact Netronome's customer support service to get the version of the firmware which supports map updates.
(I worked on this guide and can confirm that, to my knowledge, the firmware with map updates has not been publicly released.)

Related

Where to find device-tree?

Coming from this question yesterday, I decided to port this library to my board. I was aware that I would need to change something, so I compiled the library, called it from a small program and watched what happened. The first problem is here:
// Check for GPIO and peripheral addresses from device tree.
// Adapted from code in the RPi.GPIO library at:
// http://sourceforge.net/p/raspberry-gpio-python/
FILE *fp = fopen("/proc/device-tree/soc/ranges", "rb");
if (fp == NULL) {
    return MMIO_ERROR_OFFSET;
}
This lib is aimed at the RPi, so the structure of the system on my board is not the same. So I was wondering if somebody could tell me where I could find this file, or what it looks like, so I can find it myself and proceed with the job.
Thanks.
You don't necessarily want that "file" (or more precisely /proc node).
The code this appears in is setting up to do direct memory-mapped I/O, using what appears to be a Pi-specific, GPIO-flavored version of the /dev/mem style of device driver for exposing hardware special-function registers to userspace.
To port this to your board, you would first need to determine whether there is a /dev/mem or similar capability in your kernel which you can activate. Then you would need to determine the appropriate I/O registers for the GPIO pins. The Pi-specific code reads the Device Tree to figure this out, but there are other ways; for example, you can look the addresses up in the programmer's manual of the SoC you are running on.
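To make that concrete, here is a minimal sketch of mapping a GPIO register block through /dev/mem. The base address and length below are placeholders; you would take the real values from your SoC's reference manual (the same information the Pi code derives from /proc/device-tree/soc/ranges):
/* Minimal /dev/mem mapping sketch.  GPIO_BASE and GPIO_LEN are
 * placeholders for your SoC's actual GPIO register block. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x20200000UL   /* placeholder physical address */
#define GPIO_LEN  0x1000UL       /* placeholder block size */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    volatile uint32_t *gpio = mmap(NULL, GPIO_LEN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Registers can now be read/written through the pointer, e.g.: */
    printf("first register: 0x%08x\n", (unsigned)gpio[0]);

    munmap((void *)gpio, GPIO_LEN);
    close(fd);
    return 0;
}
Whether /dev/mem is available at all (and whether it is restricted by CONFIG_STRICT_DEVMEM) depends on how your kernel was configured.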
Another approach you can consider is adding a small microcontroller (or yes, a bare-bones ***duino) to the system, and using that to collect information from the various sensors and peripherals. This can then be forwarded to the SoC over a UART link, or queried out via I2C or similar. It adds a small amount of cost and some degree of bottleneck, but it also means that the software on the SoC becomes very portable: to a different comparable chip, or perhaps even to run on a desktop PC during development.

My attributes are way too racy, what should I do?

In a Linux device driver, creating sysfs attributes in probe is way too racy -- specifically, it races with userspace. The recommended workaround is to add your attributes to the various default attribute groups so they can be created automatically before probe. For a device driver, struct device_driver contains const struct attribute_group **groups for this purpose.
However, struct attribute_group only got a field for binary attributes in Linux 3.11. With older kernels (specifically, 3.4), how should a device driver create sysfs binary attributes before probe?
Quoting (emphasis mine) Greg Kroah-Hartman from his pull request (which was merged by Linus as part of the 3.11 development cycle):
Here are some driver core patches for 3.11-rc2. They aren't really
bugfixes, but a bunch of new helper macros for drivers to properly
create attribute groups, which drivers and subsystems need to fix up a
ton of race issues with incorrectly creating sysfs files (binary and
normal) after userspace has been told that the device is present.
Also here is the ability to create binary files as attribute groups, to solve that race condition, which was impossible to do before this, so that's my fault the drivers were broken.
So it looks like there really is no way to solve this problem on old kernels.
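For contrast, on 3.11 and later kernels the race-free pattern looks roughly like the sketch below. The attribute name and size are illustrative, and hanging the group off struct device_driver (as the question mentions) is just one option; the right place for the group depends on your subsystem:
/* Sketch for 3.11+: a binary attribute declared in a default attribute
 * group, so the core creates it before userspace is told about the
 * object.  Names and sizes are illustrative. */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/sysfs.h>

static ssize_t eeprom_read(struct file *filp, struct kobject *kobj,
                           struct bin_attribute *attr,
                           char *buf, loff_t off, size_t count)
{
        /* copy device data for userspace into buf here */
        return 0;
}

static struct bin_attribute eeprom_attr = {
        .attr = { .name = "eeprom", .mode = 0444 },
        .size = 256,
        .read = eeprom_read,
};

static struct bin_attribute *my_bin_attrs[] = {
        &eeprom_attr,
        NULL,
};

static const struct attribute_group my_group = {
        .bin_attrs = my_bin_attrs,      /* this field only exists since 3.11 */
};

static const struct attribute_group *my_groups[] = {
        &my_group,
        NULL,
};

static struct platform_driver my_driver = {
        .driver = {
                .name   = "my-device",
                .owner  = THIS_MODULE,
                .groups = my_groups,    /* created by the core at registration */
        },
};
module_platform_driver(my_driver);
MODULE_LICENSE("GPL");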

Difference between uart_register_driver and platform_driver_register?

I am studying the UART driver code in the kernel and want to know which comes into the picture first: the device_register() or the driver_register() call?
For the difference between them, follow this.
In UART probing, we call
uart_register_driver(struct uart_driver *drv)
and after successful registration,
uart_add_one_port(struct uart_driver *drv, struct uart_port *uport)
Please explain this in detail.
That's actually two questions, but I'll try to address both of them.
which comes into the picture first: the device_register() or the driver_register() call?
As stated in Documentation/driver-model/binding.txt, it doesn't matter in which particular order you call device_register() and driver_register():
device_register() adds the device to the device list and iterates over the driver list to find a match
driver_register() adds the driver to the driver list and iterates over the device list to find a match
Once a match is found, the matched device and driver are bound and the corresponding probe function in the driver code is called.
If you are still curious which is usually called first (even though it doesn't matter): it's usually device_register(), because devices are typically registered in initcalls from core_initcall to arch_initcall, while drivers are typically registered in device_initcall, which runs later.
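To illustrate the symmetry with a toy platform-bus example (the names and the trivial probe body are made up; either registration order below would work):
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
        dev_info(&pdev->dev, "matched and bound\n");
        return 0;
}

static struct platform_driver demo_driver = {
        .probe  = demo_probe,
        .driver = { .name = "demo-dev" },
};

static struct platform_device demo_device = {
        .name = "demo-dev",             /* same name => platform bus match */
        .id   = PLATFORM_DEVID_NONE,
};

static int __init demo_init(void)
{
        /* Whichever call happens second completes the match and
         * triggers demo_probe(). */
        platform_device_register(&demo_device);
        return platform_driver_register(&demo_driver);
}
module_init(demo_init);

static void __exit demo_exit(void)
{
        platform_driver_unregister(&demo_driver);
        platform_device_unregister(&demo_device);
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");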
See also:
[1] From where platform device gets it name
[2] Who calls the probe() of driver
[3] module_init() vs. core_initcall() vs. early_initcall()
Difference between uart_register_driver and platform_driver_register?
As you noticed, there are two drivers (a platform driver and a UART driver) for one device. But don't let this confuse you: they are just two driver APIs used in what is, in fact, one driver. The explanation is simple: the UART driver API lacks some functionality we need, and that functionality is provided by the platform driver API. Here is the responsibility of each API in a regular TTY driver:
The platform driver API is used for three things:
Matching the device (described in the device tree) with the driver; this way the probe function will be executed for us by the platform driver framework
Obtaining device information (reading it from the device tree)
Handling Power Management (PM) operations (suspend/resume)
The UART driver API handles the actual UART functionality: read, write, etc.
Let's use drivers/tty/serial/omap-serial.c as the driver reference and arch/arm/boot/dts/omap5.dtsi as the device reference. Let's say, for example, we have the following device described in the device tree:
uart1: serial@4806a000 {
        compatible = "ti,omap4-uart";
        reg = <0x4806a000 0x100>;
        interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
        ti,hwmods = "uart1";
        clock-frequency = <48000000>;
};
It will be matched with the platform driver in omap-serial.c by the "ti,omap4-uart" string (you can find it in the driver code). Then, using that platform driver, we can read the properties from the device tree node above and use them for platform-level work (setting up clocks, handling the UART interrupt, etc.).
But in order to expose our device as a standard TTY device, we need to use the UART framework (all those uart_* functions). Hence the two different APIs: platform driver and UART driver.
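To make the split concrete, here is a stripped-down skeleton showing how the two APIs meet in one driver. It is only loosely modelled on omap-serial.c; the names, the compatible string and the port count are illustrative:
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/serial_core.h>

static struct uart_driver my_uart_driver = {
        .owner       = THIS_MODULE,
        .driver_name = "my_uart",
        .dev_name    = "ttyMY",         /* /dev/ttyMY0 */
        .nr          = 1,
};

/* A real driver also fills in .ops (a struct uart_ops with startup,
 * shutdown, start_tx, ...), .iotype, .membase/.mapbase, .uartclk, etc. */
static struct uart_port my_port;

static int my_uart_probe(struct platform_device *pdev)
{
        /* Platform driver side: pull resources from the matched DT node. */
        my_port.dev  = &pdev->dev;
        my_port.irq  = platform_get_irq(pdev, 0);
        my_port.line = 0;

        /* UART driver side: expose the port as a TTY device. */
        return uart_add_one_port(&my_uart_driver, &my_port);
}

static const struct of_device_id my_uart_of_match[] = {
        { .compatible = "acme,my-uart" },       /* matches the DT node */
        { }
};

static struct platform_driver my_uart_platform_driver = {
        .probe  = my_uart_probe,
        .driver = {
                .name           = "my_uart",
                .of_match_table = my_uart_of_match,
        },
};

static int __init my_uart_init(void)
{
        int ret = uart_register_driver(&my_uart_driver);    /* TTY layer */
        if (ret)
                return ret;
        return platform_driver_register(&my_uart_platform_driver); /* DT matching */
}
module_init(my_uart_init);

static void __exit my_uart_exit(void)
{
        platform_driver_unregister(&my_uart_platform_driver);
        uart_unregister_driver(&my_uart_driver);
}
module_exit(my_uart_exit);

MODULE_LICENSE("GPL");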

Using SetupDiSetDeviceRegistryProperty with SPDRP_HARDWAREID

The docs for the SetupDiSetDeviceRegistryProperty function say,
The following values are reserved for use by the operating system and
cannot be used in the Property parameter ...
SPDRP_HARDWAREID
However, there are lots of code examples out there, including Microsoft's DevCon utility, that use this function with the SPDRP_HARDWAREID parameter, for example:
SetupDiSetDeviceRegistryProperty(DeviceInfoSet,
                                 &DeviceInfoData,
                                 SPDRP_HARDWAREID,
                                 (LPBYTE)hwIdList,
                                 (lstrlen(hwIdList)+1+1)*sizeof(TCHAR))
They also have an article which suggests doing so:
If an installer detects a non-PnP device, the installer should select a driver for the device as follows: create a device information element (SetupDiCreateDeviceInfo), set the SPDRP_HARDWAREID property by calling SetupDiSetDeviceRegistryProperty
I'd like to (and do) use this function to set the hardware ID for my virtual device. The question is: is this a typo in the documentation, or is it some sort of unsupported behavior that could therefore stop working at any time?
TL;DR: If you're creating a root-enumerated device node, you're free to set SPDRP_HARDWAREID/SPDRP_COMPATIBLEIDS yourself by calling SetupDiSetDeviceRegistryProperty. Otherwise you're not allowed to do so.
This was an error in the docs that was fixed at some point.
Today the docs of SetupDiSetDeviceRegistryProperty read:
SPDRP_HARDWAREID or SPDRP_COMPATIBLEIDS can only be used when DeviceInfoData represents a root-enumerated device. For other devices, the bus driver reports hardware and compatible IDs when enumerating a child device after receiving IRP_MN_QUERY_ID. [Emphasis mine]
... which is exactly what DevCon does.
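For reference, here is a sketch of that DevCon-style sequence for a root-enumerated (software-only) device node. The function name and the way the class name and hardware ID are passed in are placeholders, and you would link against setupapi.lib:
#include <windows.h>
#include <setupapi.h>
#include <tchar.h>

/* Creates a root-enumerated devnode and sets its hardware ID,
 * roughly as DevCon's "install" command does. */
BOOL InstallVirtualDevice(const GUID *classGuid, PCTSTR className, PCTSTR hwId)
{
    /* SPDRP_HARDWAREID expects a REG_MULTI_SZ: double-NUL-terminated. */
    TCHAR hwIdList[LINE_LEN + 2] = { 0 };
    lstrcpyn(hwIdList, hwId, LINE_LEN);

    HDEVINFO devInfo = SetupDiCreateDeviceInfoList(classGuid, NULL);
    if (devInfo == INVALID_HANDLE_VALUE)
        return FALSE;

    SP_DEVINFO_DATA devInfoData = { sizeof(SP_DEVINFO_DATA) };
    BOOL ok = SetupDiCreateDeviceInfo(devInfo, className, classGuid, NULL,
                                      NULL, DICD_GENERATE_ID, &devInfoData)
           /* Allowed here because the node is root-enumerated. */
           && SetupDiSetDeviceRegistryProperty(devInfo, &devInfoData,
                                               SPDRP_HARDWAREID,
                                               (LPBYTE)hwIdList,
                                               (lstrlen(hwIdList) + 2) * sizeof(TCHAR))
           /* Turn the in-memory element into a real registered devnode. */
           && SetupDiCallClassInstaller(DIF_REGISTERDEVICE, devInfo, &devInfoData);

    SetupDiDestroyDeviceInfoList(devInfo);
    return ok;
}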

U-Boot 1.6.6: I am trying to edit the source code to enable PCI for an MPC8640 custom board

I have gone through the source code. CONFIG_PCI is defined in the u-boot/include/configs/MPC8641HPCN.h file [General]. This macro is enabled in that .h file, which means that if I compile the configuration for this board, PCI will be enumerated and the devices located on the bus will be found.
These are the functions defined in cmd_pci.c (u-boot/common/cmd_pci.c) that are involved (a simplified sketch of how they fit together follows the list):
void pci_header_show(pci_dev_t dev);
void pci_header_show_brief(pci_dev_t dev);
void pciinfo(int BusNum, int ShortPCIListing)
PCI_BDF(BusNum, Device, Function);
pci_read_config_word(dev, PCI_VENDOR_ID, &VendorID);
pci_read_config_byte(dev, PCI_HEADER_TYPE, &HeaderType);
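From reading cmd_pci.c, my understanding is that they combine roughly like this (a simplified sketch of the scan loop; details may well be off):
/* Simplified paraphrase of the pciinfo() scan loop in common/cmd_pci.c. */
#include <common.h>
#include <pci.h>

void scan_bus(int busnum)
{
        pci_dev_t dev;
        u16 vendor;
        u8 header_type;
        int device, function;

        for (device = 0; device < PCI_MAX_PCI_DEVICES; device++) {
                header_type = 0;
                for (function = 0; function < PCI_MAX_PCI_FUNCTIONS; function++) {
                        /* Only multi-function devices have functions > 0. */
                        if (function && !(header_type & 0x80))
                                break;

                        dev = PCI_BDF(busnum, device, function);

                        pci_read_config_word(dev, PCI_VENDOR_ID, &vendor);
                        if (vendor == 0xffff || vendor == 0x0000)
                                continue;

                        if (!function)
                                pci_read_config_byte(dev, PCI_HEADER_TYPE,
                                                     &header_type);

                        pci_header_show_brief(dev);
                }
        }
}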
Now I am stuck on a concept: memory mapping. I have DDR interfaced to the processor, and I am also not able to find which code handles the memory mapping in the current U-Boot.
Also, if anybody could outline the flow of the enumeration functions, the relevant files, and the memory mapping, a few hints would be beneficial. I have been going through books such as "PCI System Architecture", but they are general and do not cover how to enumerate PCI when the MPC8640 processor is acting as the host.
Thanks and regards,
Hrishikesh
U-Boot 1.6.6 doesn't exist. See the list of U-Boot releases at ftp://ftp.denx.de/pub/u-boot/. If you're talking about 1.1.6, then that's a prehistoric release, and you will hardly get any sort of support from the open-source community if you stick with such an old version.
