Device tree writing for Raspberry Pi - linux-kernel

I'm looking for an appropriate way to write a device tree on the Raspberry Pi.
I have already worked through:
basic device driver module loading and unloading
adding a kernel module to the kernel source tree so that it can be loaded automatically, just like the predefined kernel modules.
But now I am unsure how to write a device tree entry on the Raspberry Pi so that a particular driver or module is bound to it at boot time.
I have researched lots of the available resources on the Internet, but unfortunately I could not find a precise solution that suits my need.

You just need to add a node to your device tree and set its "compatible" property to match your driver. Check the link below for reference; a minimal sketch follows it.
https://github.com/saiyamd/skeleton-dt-binding/blob/master/skeleton.c
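
As a minimal sketch of that pairing (the "acme,mydevice" compatible string, node layout and probe body are placeholders, not taken from the linked repo), the node might look like:

/ {
        mydevice {
                compatible = "acme,mydevice";
                status = "okay";
        };
};

and the driver matches it through its of_match_table:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Probe runs when the kernel matches the node's compatible string. */
static int mydevice_probe(struct platform_device *pdev)
{
        dev_info(&pdev->dev, "bound via devicetree\n");
        return 0;
}

static const struct of_device_id mydevice_of_match[] = {
        { .compatible = "acme,mydevice" },   /* must match the dts node */
        { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mydevice_of_match);

static struct platform_driver mydevice_driver = {
        .probe = mydevice_probe,
        .driver = {
                .name           = "mydevice",
                .of_match_table = mydevice_of_match,
        },
};
module_platform_driver(mydevice_driver);
MODULE_LICENSE("GPL");

On the Raspberry Pi specifically, such a node usually goes into an overlay compiled with dtc and enabled with a dtoverlay= line in config.txt, rather than by editing the base dtb.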


configfs does not mount device-tree/overlays

I'm working on a Cyclone V SoC FPGA from Altera with a dual Cortex-A9 processor. The embedded system (Linux 4.15.7) is created with Buildroot-2018.02. U-Boot is used to load the system, i.e. the FPGA .rbf file, the device tree blob and the zImage, and everything works fine.
I now want to integrate the RBF file into my Linux system and program the FPGA from Linux. I found several methods, and the one I understand to be the most common is to use configfs with a device tree overlay.
So I changed my device tree to integrate the overlay, changed the U-Boot boot script to disable the FPGA load, and set the following options in the Linux ".config" file with make linux-xconfig:
+CONFIG_OF_OVERLAY=y
+CONFIG_ALTERA_STAPL=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLE_CONFIGFS=m
These options reflect the state I am at now, after several tries.
After a make and a reboot, once the kernel is loaded, I enter the following commands in the console:
mkdir /config
mount -t configfs none /config
At this point I expected to see some device tree entries in the /config folder, but there weren't any, only a single rdma_cm folder:
# ls /config
rdma_cm
I continued my reading on this topic and found that I must enable the CONFIG_OF_CONFIGFS option in my Linux kernel.
PROBLEM: This option is not available in my kernel, and the file drivers/of/configfs.c is not there either. I have searched in vain for how to enable device tree overlays for my kernel version.
How can I configure my kernel to make the device tree available in configfs?
I had the same problem as you, so I had to write a device driver myself.
This device driver is tentative, and I expect the Linux mainline to officially support Device Tree Overlay via configfs eventually.
The device driver I made is available at the following URL.
https://github.com/ikwzm/dtbocfg
If you are using Debian, you can build a Debian package of the device driver with the following URL.
https://github.com/ikwzm/dtbocfg-kmod-dpkg
If you want to try a Device Tree Overlay using this device driver, please refer to the following URLs.
https://github.com/ikwzm/FPGA-SoC-Linux
https://github.com/ikwzm/FPGA-SoC-Linux-Example-1-DE10-Nano
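
For a rough idea of the workflow with dtbocfg (the directory and file names here follow my reading of its README; treat them as an assumption and check the repo):

insmod dtbocfg.ko
mkdir /config/device-tree/overlays/myoverlay
cp myoverlay.dtbo /config/device-tree/overlays/myoverlay/dtbo
echo 1 > /config/device-tree/overlays/myoverlay/status    # apply the overlay
echo 0 > /config/device-tree/overlays/myoverlay/status    # revert it
rmdir /config/device-tree/overlays/myoverlay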

Linux kernel dtb vs dtbo

I am developing a device driver for a device. Besides writing the device driver itself, I wanted to know what else is necessary and when: a device tree blob (dtb) or a device tree overlay (dtbo)?
Is it possible to dynamically insert the dtb (after compiling it with the dtc compiler) and test the (dynamically loadable) driver?
For statically building the dtb, is there any Kconfig for the dtb files that I have to take care of, apart from the device driver's Kconfig?
You don't mention what platform this is, but I'm assuming it is one of the architectures that extensively use devicetrees for HW description, e.g. ARM or PPC, and that you actually need a devicetree.
Device tree overlays require support from userspace, in the form of an overlay manager that knows which overlays to load at runtime. Unless your device is in a very dynamic environment where it might go away, for most cases you want a simple hardcoded device tree.
After writing your driver, you need to define the compatible property to tell the kernel when to load the driver, and then add a node to the devicetree (.dts/.dtsi) file under arch/<foo>/boot/dts/*/* that best describes your board.
e.g. See this compatible registration and the corresponding HW description in a bunch of devicetrees 1, 2, 3 that are SoC-specific. This one driver works on all those SoCs by gating SoC-specific functionality behind compatible flags.
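
On the Kconfig question: statically built dtbs normally don't get their own Kconfig symbol; they are listed in the dts Makefile of your architecture, keyed on an existing SoC/board config. A sketch, where CONFIG_SOC_FOO and my-board.dtb are placeholders:

# arch/arm/boot/dts/Makefile (or the vendor subdirectory on newer trees)
dtb-$(CONFIG_SOC_FOO) += my-board.dtb

# then build all configured blobs with:
make dtbs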

Detect SPI device from another driver

I have a Freescale i.MX6Q (ARM) based board.
The hardware is configured with a devicetree.
The board had a major, incompatible change to the timings and voltage of an onboard FPGA, but these changes are invisible to the kernel.
The EEs tell us we shouldn't load the old FPGA firmware for fear of damaging the part. I would like to support both hardware revisions from the same code (the split is already causing confusion).
The solution I have thought of is this:
There are several new SPI temperature sensors on the board. If I can read from one of those devices, I can infer that I need the new firmware.
How can I (in one driver) grab an SPI device and then release it?
I suspect that I might be able to do something like this with the device tree,
but I don't want to make the device unavailable.
Any ideas or examples of something like this being done?
After reading the question, I think your concern is how to add software support for more than one hardware revision.
If that is the case, I think we can write two drivers supporting both pieces of hardware, with different configurations such as IRQ, voltage, register set, etc.
So I would enable both drivers in the Makefile and config file.
Then, at boot time, when each driver's probe gets called, we can check the hardware ID using an spi_read command from the driver (see the sketch below).
If the hardware ID matches, the driver's probe succeeds and the driver can be used to interact with the hardware.
If the spi_read fails, the driver's probe itself will fail.
I think this will do the trick.
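
A minimal sketch of that probe-time check (the register address, expected ID and compatible string are invented for illustration; I also use spi_write_then_read(), a standard helper, rather than a raw spi_read()):

#include <linux/module.h>
#include <linux/of.h>
#include <linux/spi/spi.h>

#define SENSOR_REG_ID  0x0f   /* hypothetical WHO_AM_I register */
#define SENSOR_ID_VAL  0x3c   /* hypothetical expected ID */

static int sensor_probe(struct spi_device *spi)
{
        u8 cmd = SENSOR_REG_ID | 0x80;   /* read flag; device-specific */
        u8 id;
        int ret;

        ret = spi_write_then_read(spi, &cmd, 1, &id, 1);
        if (ret)
                return ret;              /* bus error: probe fails */
        if (id != SENSOR_ID_VAL)
                return -ENODEV;          /* wrong or absent chip: probe fails */

        dev_info(&spi->dev, "sensor found, id=0x%02x\n", id);
        return 0;                        /* matching hardware: stay bound */
}

static const struct of_device_id sensor_of_match[] = {
        { .compatible = "acme,temp-sensor" },   /* hypothetical */
        { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, sensor_of_match);

static struct spi_driver sensor_driver = {
        .driver = {
                .name           = "acme-temp",
                .of_match_table = sensor_of_match,
        },
        .probe = sensor_probe,
};
module_spi_driver(sensor_driver);
MODULE_LICENSE("GPL");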
EDIT (answering the question)
To detect or use an SPI device from another driver, use a reference to the device in the devicetree structure.
Short answer: add a reference to the SPI device in your device's dts entry.
Slightly longer answer:
When adding SPI to another device driver, you are effectively adding a subdevice, which may want its own driver. I have an FPGA which loads its firmware over (something close enough to be considered) SPI. I started with the idea of just treating the SPI device as part of the larger driver, but the more work that went into it, the more obvious it was that it is its own device, with a purpose and function distinct from the rest of the driver. I separated that code into its own driver.
Now, instead of a reference to an SPI device, my driver just has a reference to an FPGA manager device.
See lines 98 and 370 of https://github.com/d4ddi0/linux/blob/v4.12evi/arch/arm/boot/dts/imx6q-evi.dts
and
make sure the SPI driver is loaded before your driver finishes loading.
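
A minimal sketch of what the devicetree side of such a reference can look like (labels, compatibles and the property name are invented for illustration, not copied from the linked file):

&ecspi1 {
        status = "okay";

        fpga_spi: fpga@0 {                      /* the SPI subdevice, chip select 0 */
                compatible = "acme,fpga-cfg";   /* hypothetical */
                reg = <0>;
                spi-max-frequency = <10000000>;
        };
};

my_device {
        compatible = "acme,my-device";          /* hypothetical */
        fpga-spi = <&fpga_spi>;                 /* the phandle your driver looks up */
};

In the consuming driver, of_parse_phandle(np, "fpga-spi", 0) then resolves the node so the underlying device can be found.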
My original answer to my question (kept for historical purposes):
What I ended up doing was using different devicetree files. The difference is known at initial install time (based on the serial number), and the bootloader knows which dts filename to load.
There are multiple FPGA firmware versions, and the right one is chosen based on the description in the dts.
This way, I can still update the driver and/or dts without breakage.
This works well in practice, even though it does not detect anything at runtime.
One problem still exists: if I take an SD card from a new revision and put it into an old one, the incorrect firmware will be loaded. To really solve this last problem, we've talked about adding an EEPROM to uniquely identify the hardware revision on future boards.
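
On the bootloader side, the selection can be as simple as a few lines of U-Boot script; the serial cutoff, variable and file names below are invented for illustration:

# hypothetical U-Boot logic: pick the dtb by serial number
if test ${serial#} -ge 2000; then
        setenv fdtfile imx6q-board-rev2.dtb
else
        setenv fdtfile imx6q-board-rev1.dtb
fi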

Linux kernel modules

It's not clear to me what the difference is between drivers that can be "embedded" inside a monolithic kernel and drivers available only as external modules.
What kind of effort is required to "port" a driver (provided as an "external module" only) to a monolithic kernel?
I would like to be able to run VMware Tools while disabling loadable module support and getting rid of the initrd bazaar.
Though the driver more or less remains the same in both cases, there are definite benefits to using drivers embedded in the monolithic kernel.
I'll try to explain the "effort in porting" part that you asked about.
Depending on the kind of driver you have, you essentially have to figure out how it fits into the current kernel source tree, how it gets compiled (so that your driver ends up inside the uImage), and how it gets loaded while the kernel boots. Let's illustrate each step a bit:
a.) Locate the folder (in the kernel source tree) where you think your driver code is best kept.
b.) Make sure your driver code gets compiled [i.e. that it ultimately becomes part of the monolithic kernel image (uImage, or whatever you call it)]. In this context, you have to work on the Makefile for your driver, and you might have to introduce some CONFIG flags to compile your driver code (see the sketch after this list). There are tons of Makefiles and driver code lying around in the source tree; roam around and you will get a good sense of how it is done.
c.) Make sure that your driver code is independent of any loadable kernel module (i.e. of modules which are not part of the "monolithic" kernel image). If your driver code (which is now built in and resident in memory) depends on loadable module code, it may cause a kernel panic or a segmentation-fault kind of error.
d.) Make sure that your driver is registered with a higher-level subsystem, which will initialize all of its registered drivers during boot-up (for example, an i2c driver, once registered with the i2c driver framework, will be loaded automatically when the i2c subsystem is initialized during system startup). This step might not be required if you can figure out another way of invoking your driver's __init and __exit functions.
e.) Now, your driver's __init (and __exit) sections "should" be called when it is loaded, either by a device driver framework or directly (i.e. while the kernel is booting up).
f.) In the case of h/w drivers, we have a .probe implementation in the driver, which will be invoked once the kernel finds a corresponding device. In the case of s/w drivers, I guess __init and __exit are all you have.
g.) Once it is loaded, you can use it just as you were using it earlier as a loadable kernel module.
h.) I recommend reading the source code of similar device drivers in the Linux kernel tree and seeing how they operate.
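
As a sketch of the wiring mentioned in step b (the symbol, file names and location are placeholders): a Kconfig entry declared as bool, so the driver can only ever be built in, plus the matching Makefile rule.

# drivers/misc/Kconfig (hypothetical location)
config MY_DRIVER
        bool "My example driver"
        help
          Build my example driver into the kernel image.

# drivers/misc/Makefile
obj-$(CONFIG_MY_DRIVER) += my_driver.o

Declaring it tristate instead would also allow =m, i.e. building it as a loadable module.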
Hope this helps.

View Linux kernel drivers built into the kernel, and how they get bound/mounted/started

I'm having a bit of a hard time fully understanding how the kernel starts in Linux. I'm a WinCE developer, and our company has decided to switch to Linux.
We outsourced all of the board bring-up, and the package I received for our prototype board is quite a bit different from the Nitrogen6X we have been using.
Before I start listing the differences between the distros we created: the kernels are identical. The distro we have been using is a busybox system; the one we received from the vendor is sysvinit. I removed mdev from busybox, and we are only using udev.
When I use the kernel on our busybox build, the touch screen driver breaks, or doesn't run, or does something totally magical, I'm not quite sure... There is a /dev/input/event0 device which, when run on the sysvinit side, is a touch device. Is the kernel not the mechanism that binds the built-in drivers to a device node? I thought udev was for more dynamic events in the system.
On the other hand, I can't really tell what has been loaded on my device. Is there a way to list running drivers that were built into the kernel? Is my touchpad up? On WinCE, this is a fairly simple process of looking at the registry to see which devices were loaded.
I guess what I'm really trying to discover isn't so much how to add a driver to the kernel, it's how the whole thing gets plumbed together. I've found plenty of documents on creating kernel modules, but I haven't found a good resource on how to pull everything together from scratch so you can actually use said modules. Going back to the example of the touchscreen driver: it's built into the kernel, so how does it get plugged into /dev/input/event0?
I'm having a difficult time finding good resources, mostly because searching Google for variations of linux/drivers/device nodes piles in tons of random crap from everywhere.
What you probably want to use now is evtest. It will let you know which input devices are present and ready to use on your system.
To get more information on the input subsystem, and more general information on how the kernel works, I can direct you to our training materials. The materials are free to download, use and redistribute.
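
For example, a quick check from the shell (the event number depends on your board):

cat /proc/bus/input/devices   # input devices the kernel knows about, built-in drivers included
evtest /dev/input/event0      # prints the device's name and capabilities, then dumps events as you touch the screen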
The general answer is: there is no single, easy place to look to discover which drivers have been compiled into the kernel. Of course, lsmod will display any drivers that were dynamically loaded after kernel boot.
The kernel does not create device nodes. That is, to quote your question, the kernel does not "bind" the driver to the device node. The association between a kernel driver and a device node is contained in the major and minor numbers registered when the driver is initialized. You can have a device node on your file system for which there is no corresponding driver (common especially on older systems, where device nodes were statically created on the file system), and you can also have a driver installed for which there is no device node.
Modern Linux distros have device nodes created dynamically on a mount point called /dev, and this is usually a tmpfs file system, meaning it is volatile: it gets destroyed on every shutdown and recreated dynamically on each new boot.
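
You can see that association concretely from the shell (major 13, minor 64 is the conventional numbering for the first input event device):

cat /proc/devices            # major numbers registered by drivers, built-in or loaded
ls -l /dev/input/event0      # shows something like "c 13, 64": char device, major 13, minor 64
mknod /dev/myinput c 13 64   # a statically created node reaching the same driver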
udev is the magic that creates most device nodes, based on events that it receives from the kernel when a new device is discovered (this can be after boot, when a device is plugged in, like a USB disk) or on startup, when udev reads the queued events and acts on them. As you noted, busybox has a limited udev implementation called mdev.
Study udev and you will get a much better understanding of the process. Hope this helps a little.
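
One way to watch that event flow is udevadm, which ships with udev (run the first command, then plug a device in):

udevadm monitor --kernel --udev   # prints kernel uevents and the udev events that follow them
udevadm info /dev/input/event0    # shows the properties udev recorded for an existing node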
