Where to find device-tree? - linux-kernel

Coming from this question yesterday, I decided to port this library to my board. I was aware that I would need to change a few things, so I compiled the library, called it from a small program, and watched what happened. The first problem is here:
// Check for GPIO and peripheral addresses from device tree.
// Adapted from code in the RPi.GPIO library at:
// http://sourceforge.net/p/raspberry-gpio-python/
FILE *fp = fopen("/proc/device-tree/soc/ranges", "rb");
if (fp == NULL) {
    return MMIO_ERROR_OFFSET;
}
This lib is aimed at the RPi, so the structure of the system on my board is not the same. So I was wondering if somebody could tell me where I could find this file, or what it looks like, so I can find it by myself and proceed with the job.
Thanks.

You don't necessarily want that "file" (or more precisely /proc node).
The code this is found in is setting up to do direct memory-mapped I/O, using what appears to be a Pi-specific, GPIO-flavored version of the /dev/mem type of device driver for exposing hardware special-function registers to userspace.
To port this to your board, you would first need to determine whether there is a /dev/mem or similar capability in your kernel which you can activate. Then you would need to determine the appropriate I/O registers for the GPIO pins. The Pi-specific code reads the Device Tree to figure this out, but there are other ways; for example, you can manually read the programmer's manual of the SoC on which you are running.
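For illustration, here is a minimal sketch of that /dev/mem approach. It assumes your kernel exposes /dev/mem, and GPIO_BASE is a placeholder you would replace with the physical address from your SoC's programmer's manual:

/* Sketch: map a SoC's GPIO register block through /dev/mem.
 * GPIO_BASE is a placeholder, not the value for any real SoC. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x01C20800u   /* placeholder physical address */
#define MAP_SIZE  4096u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return EXIT_FAILURE;
    }

    volatile uint32_t *gpio = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    /* Read the first 32-bit register of the block as a smoke test. */
    printf("reg[0] = 0x%08x\n", (unsigned)gpio[0]);

    munmap((void *)gpio, MAP_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}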
Another approach to consider is adding a small microcontroller (or yes, a barebones ***duino) to the system and using it to collect information from various sensors and peripherals. This data can then be forwarded to the SoC over a UART link, or queried out via I2C or similar. This adds a small amount of cost and some degree of bottleneck, but it also means that the software on the SoC becomes very portable: to a different comparable chip, or perhaps even to a desktop PC during development.

Related

Device drivers to read and write to a virtual memory region on Linux

I am working with a Cyclone V SoC board. I want to exchange data between the HPS and the FPGA. They share a common RAM, whose address can be seen in Qsys. I would like to read and write data in this shared memory, but I don't want to use devmem2 every time I do it. I understand that a driver would be much safer. I was thinking of writing a char driver, as it is one of the easier drivers to write for basic read and write operations.
Is there a way to specify the address to be used by the char driver when we build and insert it?
If not, what driver can be written for this function (to be able to read and write float values on a specific range of virtual addresses)?
I have found that UIO (userspace I/O) drivers or block drivers could be good options. But I am new to this area of development and don't know if these are the only options, or whether there are more.
I could really use some help in deciding which driver is appropriate; it would be better if it were a char driver where the address can be specified.
Thank you.
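For what it's worth, one common pattern that matches the first question is to pass the physical address as a module parameter at insmod time and map it with ioremap(). This is only a sketch, not a complete answer; the names, default address and size below are hypothetical:

/* Sketch: char-driver backbone whose shared-RAM address is set at
 * insmod time. The shm_base/shm_size defaults are hypothetical. */
#include <linux/io.h>
#include <linux/module.h>

static unsigned long shm_base = 0x38000000; /* hypothetical default */
static unsigned long shm_size = 0x1000;
module_param(shm_base, ulong, 0444);
module_param(shm_size, ulong, 0444);

static void __iomem *shm;

static int __init shm_init(void)
{
    shm = ioremap(shm_base, shm_size);   /* map the shared RAM */
    if (!shm)
        return -ENOMEM;
    pr_info("shm: mapped 0x%lx (+0x%lx)\n", shm_base, shm_size);
    return 0;
}

static void __exit shm_exit(void)
{
    iounmap(shm);
}

module_init(shm_init);
module_exit(shm_exit);
MODULE_LICENSE("GPL");

It would be loaded with e.g. insmod shm.ko shm_base=0xC0000000 shm_size=0x1000, and the char driver's read()/write() handlers would then copy between user buffers and this mapping with copy_to_user()/copy_from_user().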

How to pin an interrupt to a CPU in a driver

Is it possible to pin a softirq, or any other bottom half, to a processor? I wonder whether this could be done from within the softirq code.
But then, inside a driver, is it possible to pin a particular IRQ to a core?
From user mode, you can easily do this by writing to /proc/irq/N/smp_affinity to control which processor(s) an interrupt is directed to. The symbols for the code implementing this are not exported though, so it's difficult to do from the kernel (at least for a loadable module which is how most drivers are structured).
The fact that the implementing function symbols aren't exported is a sign that the kernel developers don't want to encourage this, presumably because it takes control away from the user, and also embeds assumptions about the number of processors and so forth into the driver.
So, to answer your question: yes, it's possible, but it's discouraged, and you would need to do one of several "ugly" things to implement it: (a) patch the kernel to export the symbols, (b) link your driver statically into the main kernel, or (c) open and write to the proc file from kernel mode.
The usual way to achieve this is by writing a user-mode program (can even be a shell script) that programs core numbers/masks into the appropriate proc file. See Documentation/IRQ-affinity.txt in the kernel source directory for details.
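As a sketch of that user-mode approach in C (must run as root; the IRQ number and mask are passed on the command line, e.g. mask 2 directs the IRQ to CPU 1 only):

/* Sketch: set the CPU affinity of an IRQ from user space. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char path[64];
    FILE *fp;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <irq> <hexmask>\n", argv[0]);
        return EXIT_FAILURE;
    }

    snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);
    fp = fopen(path, "w");
    if (!fp) {
        perror(path);
        return EXIT_FAILURE;
    }

    fprintf(fp, "%s\n", argv[2]);  /* e.g. "2" = CPU 1 only */
    fclose(fp);
    return EXIT_SUCCESS;
}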

What is the use of the flattened device tree? - Linux Kernel

I am going through the U-Boot & kernel startup process. What exactly is the use of the FDT (flattened device tree)?
Many links I have read state that U-Boot passes the board & SoC configuration information to the kernel in the form of an FDT:
https://wiki.freebsd.org/FlattenedDeviceTree
Why does the kernel need the board configuration information?
I am asking this question because whenever we write a device driver in Linux, we initialize the device in the probe() or module_init() call, use the request_mem_region() and ioremap() functions to get the range of addresses, and then set the clock and the other registers of the device.
What does request_mem_region() actually do, and when is it needed?
Now, if my device drivers for on-chip & off-chip devices are doing the full board initialisation, then what is the use of the flattened device tree for the kernel?
You are right in assuming that the board files and device-trees are required for initialisation of on-chip blocks and off-chip peripherals.
While booting up, the respective drivers for all the on-chip blocks of an SoC and the off-chip peripherals interfaced to it need to be "probed", i.e. loaded and called. On buses like USB and PCI, peripherals can be physically detected and enumerated, and their corresponding drivers probed. In general, however, no such facility is available for peripherals on the rest of the buses, like I2C, SPI etc.
In addition to the above, when a device driver is probed, one also needs to provide it with some information about the way in which we intend to configure and utilise the hardware. This varies depending upon the use case, for example the baud rate at which we would like to operate a UART port.
Both the above classes of information, i.e.
Physical Topology of the hardware.
Configuration options of the hardware.
were usually defined as structs within the "board" file.
However, using the board-file approach required one to re-build the kernel simply to change a configurable option to a different value during initialisation. Also, when several physical boards exist that differ slightly in their topology or configuration, the "board" file approach becomes too cumbersome to maintain.
Hence the interest in maintaining this information separately in a device-tree. Any device-driver can parse the relevant branches and leaves of the device-tree to obtain the information it requires.
When developing your own device driver, if your platform supports the device tree, then you are encouraged to utilise the device tree to store the "platform data" required by your driver. This should help you clearly separate:
the generic driver code for your device in the <driver.c> file and
the device's config options specific to this platform into the device-tree.
A step-by-step approach to porting the Linux kernel to a board/SoC should help you appreciate the nuances involved and the advantages of using a device-tree.
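As a small illustration, a platform driver's probe() can pull a configurable option (the UART baud-rate example above) out of the device tree. This is a sketch; the compatible string and driver names are hypothetical:

/* Sketch: reading platform data from the device tree in probe(). */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int mydev_probe(struct platform_device *pdev)
{
    struct device_node *np = pdev->dev.of_node;
    u32 baud;

    /* A configuration option stored in the DT instead of a board file. */
    if (of_property_read_u32(np, "current-speed", &baud))
        baud = 115200;  /* fallback default */

    dev_info(&pdev->dev, "configured for %u baud\n", baud);
    return 0;
}

static const struct of_device_id mydev_of_match[] = {
    { .compatible = "vendor,mydev" },  /* hypothetical */
    { }
};
MODULE_DEVICE_TABLE(of, mydev_of_match);

static struct platform_driver mydev_driver = {
    .probe  = mydev_probe,
    .driver = {
        .name           = "mydev",
        .of_match_table = mydev_of_match,
    },
};
module_platform_driver(mydev_driver);
MODULE_LICENSE("GPL");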

Callback from userspace to kernel space

I am looking into an FPGA driver whose code writes some value to the FPGA device at a low level. At the top level, in user space, the value is written to /dev/fpga; I guess this is how the driver gets its value from user space, with "/dev/fpga" being the file exposed to user space.
But how does this value actually reach the FPGA device? There must be some callback maintained.
I really could not figure out how this actually happens. Is there a standard way?
Can anybody help me find this userspace-to-kernel-space link?
It's probably a character device. You can create one in your kernel module, and your callback functions will be called in the kernel when it is opened, when something is written to it, etc. See:
http://linux.die.net/lkmpg/x569.html
for an explanation of how it works and sample code.
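To make the callback idea concrete, here is a minimal sketch of such a character device (the device name and behaviour are illustrative; a real FPGA driver would poke the hardware where this one just logs):

/* Sketch: the kernel invokes these callbacks when userspace
 * open()s or write()s the corresponding /dev node. */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/uaccess.h>

static int major;

static ssize_t demo_write(struct file *f, const char __user *buf,
                          size_t len, loff_t *off)
{
    char kbuf[16];
    size_t n = min(len, sizeof(kbuf) - 1);

    if (copy_from_user(kbuf, buf, n))
        return -EFAULT;
    kbuf[n] = '\0';

    /* A real FPGA driver would decode kbuf and write to the device here. */
    pr_info("demo: got \"%s\" from userspace\n", kbuf);
    return len;
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .write = demo_write,
};

static int __init demo_init(void)
{
    major = register_chrdev(0, "demo", &demo_fops);
    return major < 0 ? major : 0;
}

static void __exit demo_exit(void)
{
    unregister_chrdev(major, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

After loading, creating the node with mknod /dev/demo c <major> 0 and then doing echo hi > /dev/demo lands in demo_write() in kernel space.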

How does Linux drive many network cards with the same driver?

I have been learning about Linux network drivers recently, and I wonder: if I have many network cards of the same type on my board, how does the kernel drive them? Does the kernel need to load the same driver many times? I think that's not possible; insmod won't do that. So how can I make all the cards of the same kind work at the same time?
Regards
The state of every card (I/O addresses, IRQs, ...) is stored in a driver-specific structure that is passed (directly or indirectly) to every entry point of the driver, which can in this way differentiate between the cards. Thus the very same code can control different cards (which means that, yes, the kernel keeps only one instance of a driver's module no matter how many devices it controls).
For instance, have a look at drivers/video/backlight/platform_lcd.c, which is a very simple LCD power driver. It contains a structure called platform_lcd that is private to this file and stores the state of the LCD (whether it is powered, and whether it is suspended). One instance of this structure is allocated in the probe function of the driver through kzalloc - that is, one per LCD device - and stored into the platform device representing the LCD using platform_set_drvdata. The instance that has been allocated for this device is then fetched back at the beginning of all other driver functions so that it knows which instance it is working on:
struct platform_lcd *plcd = to_our_lcd(lcd);
to_our_lcd expands to lcd_get_data, which itself expands to dev_get_drvdata (the counterpart of platform_set_drvdata) if you look at include/linux/lcd.h. The function can then know the state of the device it has been invoked for.
This is a very simple example, and the platform_lcd driver does not directly control any device (this is deferred to a function pointer in the platform data), but add hardware-specific parameters (IRQ, I/O base, etc.) and you get how 99% of the drivers in Linux work.
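As a sketch of that pattern (names are illustrative): allocate one context per device in probe(), attach it with platform_set_drvdata(), and fetch it back in every other entry point:

/* Sketch: one private context per device instance, not per driver. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct my_ctx {
    int powered;  /* per-device state */
};

static int my_probe(struct platform_device *pdev)
{
    struct my_ctx *ctx;

    ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
    if (!ctx)
        return -ENOMEM;

    platform_set_drvdata(pdev, ctx);  /* attach state to this device */
    return 0;
}

static int my_remove(struct platform_device *pdev)
{
    struct my_ctx *ctx = platform_get_drvdata(pdev);  /* fetch it back */

    ctx->powered = 0;
    return 0;
}

static struct platform_driver my_driver = {
    .probe  = my_probe,
    .remove = my_remove,
    .driver = { .name = "mydev" },
};
module_platform_driver(my_driver);
MODULE_LICENSE("GPL");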
The driver code is only loaded once, but it allocates a separate context structure for each card. Typically you will see a struct pci_driver with a .probe function pointer. The probe function is called once for each card by the PCI support code, and it calls alloc_etherdev to allocate a network interface with space for whatever private context it needs.
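As a sketch of that PCI pattern (vendor/device IDs and names are placeholders; error unwinding and the remove() callback are omitted for brevity):

/* Sketch: probe() runs once per matching card; each card gets its
 * own netdev with room for a private context. */
#include <linux/etherdevice.h>
#include <linux/module.h>
#include <linux/pci.h>

struct mynic_priv {
    void __iomem *regs;  /* this card's register window */
};

static int mynic_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    struct net_device *ndev;
    struct mynic_priv *priv;
    int err;

    err = pci_enable_device(pdev);
    if (err)
        return err;

    ndev = alloc_etherdev(sizeof(*priv));  /* netdev + private context */
    if (!ndev)
        return -ENOMEM;

    priv = netdev_priv(ndev);
    priv->regs = pci_iomap(pdev, 0, 0);    /* BAR 0 of this card */

    pci_set_drvdata(pdev, ndev);
    return register_netdev(ndev);
}

static const struct pci_device_id mynic_ids[] = {
    { PCI_DEVICE(0x1234, 0x5678) },  /* placeholder vendor/device */
    { }
};
MODULE_DEVICE_TABLE(pci, mynic_ids);

static struct pci_driver mynic_driver = {
    .name     = "mynic",
    .id_table = mynic_ids,
    .probe    = mynic_probe,
};
module_pci_driver(mynic_driver);
MODULE_LICENSE("GPL");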
