I have a PocketBeagle board and I was trying to build an OS image for it using Buildroot.
I downloaded the latest Buildroot, but there was no defconfig for the PocketBeagle, so I decided to use beaglebone_defconfig. The build was successful, but when I try to boot the PocketBeagle with this image it continuously prints this message on the UART console:
Could not initialize timer (err -19)
Could not initialize timer (err -19)
Could not initialize timer (err -19)
Could not initialize timer (err -19)
Could not initialize timer (err -19)
Could not initialize timer (err -19)
I think this message is coming from U-Boot.
These are the steps I used to build the image:
cd buildroot-2021.02.10
make beaglebone_defconfig
make
Do I need to apply some patch, or is this caused by some other issue? I also tried with the Buildroot available in the BeagleBoard GitHub repo.
Obviously there are differences between the two boards. Your build succeeds, but the image won't run on the PocketBeagle because it is not meant to run on it.
Specification   PocketBeagle                  BeagleBone
SoC             OSD3358-SM                    AM3358/9
CPU             Sitara AM3358 ARM Cortex-A8   Cortex-A8 + dual PRU (200 MHz)
Freq (MHz)      1000                          720
To build a BSP for the PocketBeagle, follow the steps below:
In Target options
– Change Target architecture to ARM (little endian)
– Change Target architecture variant to Cortex-A8
In Build options, set global patch directories to board/e-ale/pocketbeagle/patches/.
This will allow you to put patches for Linux, U-Boot and other packages in subdirectories of board/e-ale/pocketbeagle/patches/.
In Toolchain, you can use either an external or the internal toolchain.
In Kernel
– Enable the Linux kernel, obviously!
– Choose Custom version as the Kernel version
– Choose 4.14.24 as Kernel version
– Patches will already be applied to the kernel, thanks to us having defined a global patch directory above.
– Choose omap2plus as the Defconfig name
– We’ll need the Device Tree of the PocketBeagle, so enable Build a Device Tree Blob (DTB)
– And use am335x-pocketbeagle as the Device Tree Source file names
In Target packages, select packages as per your requirements.
In Filesystem images, enable ext2/3/4 root filesystem, select the ext4 variant.
In Bootloaders, enable U-Boot, and in U-Boot:
– Switch the Build system option to Kconfig (the configuration system used by modern U-Boot)
– Use a Custom version of value 2018.01.
– Use am335x_pocketbeagle as the Board defconfig
As you have noticed, in the configuration you have referenced board/e-ale/pocketbeagle/patches as a directory containing patches for various packages. We now need to add the U-Boot and Linux patches that add support for the PocketBeagle, which are not upstream. Take the provided patches and copy them to board/e-ale/pocketbeagle so that you get the following directory hierarchy (one subdirectory per package name):
board/e-ale/pocketbeagle/patches/linux/
board/e-ale/pocketbeagle/patches/uboot/
Then build the BSP and run it on your device; a rough command sequence is sketched below.
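As a recap, and only as a sketch (the exact image file names depend on your configuration and post-image scripts), the overall build boils down to:

cd buildroot-2021.02.10
make menuconfig     # apply the Target, Kernel, and U-Boot settings listed above
make
ls output/images/   # e.g. zImage, am335x-pocketbeagle.dtb, MLO, u-boot.img, rootfs.ext4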
Related
I am trying to read the LDD book by Jonathan Corbet, Greg Kroah-Hartman, and Alessandro Rubini and to implement the sample modules. To begin with, I tried setting up a development system: I installed Ubuntu 16.04 (Xenial), created a directory, wrote the hello_world module with a Makefile, built it, ran it, and verified the dmesg logs.
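For reference, this is roughly the flow I used; the module name "hello" here is just an example, and the Makefile contains only the line obj-m := hello.o:

make -C /lib/modules/$(uname -r)/build M=$(pwd) modules   # build against the running kernel's headers
sudo insmod hello.ko   # load the module
dmesg | tail           # verify the module's printk output
sudo rmmod hello       # unload it again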
Is that all the development setup? I searched online and found articles where they ask you to download and compile the kernel and use a VM to boot it. What is the reason? Or what am I missing?
Is there a better article that clarifies this?
Thanks
hago
You can try one more way:
If you are running native Windows, install virtual machine software such as VirtualBox, get your favourite Linux distribution (no bias, just an example: Ubuntu), and install it in VirtualBox.
Get the latest kernel (or one of your choice) from kernel.org.
Choose the platform you want to build this kernel for, e.g. arm64 or x86.
If you do not have a real board (e.g. an RPi for the ARM variant), you can use QEMU (qemu-system-aarch64 or qemu-system-x86_64) to run your compiled kernel. This is also a good option when users do not have boards.
Another good reason for newbie kernel developers to use QEMU is that if they write a module that crashes, only the QEMU instance crashes, so no harm is done.
I think QEMU is a good option for people who are starting to learn kernel programming, want to try writing their own modules, and do not intend to purchase hardware at this point in time.
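For example, something along these lines boots a freshly built arm64 kernel under QEMU. Treat it as a sketch: the cross toolchain prefix and the rootfs.cpio.gz initramfs are assumptions you would adapt to your setup.

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig Image
qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
  -kernel arch/arm64/boot/Image \
  -initrd rootfs.cpio.gz \
  -append "console=ttyAMA0 rdinit=/bin/sh"

If a module you are testing crashes the kernel, you simply kill and restart the QEMU instance.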
It depends on your target. In your case, you have made a kernel driver for your own computer (which runs the Linux kernel).
But if you want to develop a kernel driver for another target, such as a Raspberry Pi, an ARM board, or an x86-64 board, you must learn to compile the kernel, edit the kernel config, boot the kernel image, and so on, because each target may run a different kernel version.
You can refer to this training for more detail: https://bootlin.com/training/embedded-linux/
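As a sketch of what such a cross-build looks like for a 32-bit ARM board (the toolchain prefix is an assumption; on Ubuntu it comes from the gcc-arm-linux-gnueabihf package):

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- multi_v7_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j"$(nproc)" zImage dtbs modules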
When trying to build the boost_log library [only] for the RPi3, the build runs out of memory.
I use:
./b2 --with-log
And the help text for the builder states:
--with-<library> Build and install the specified <library>. If this
option is used, only libraries specified using this
option will be built.
After quite some time building, I see:
virtual memory exhausted: Cannot allocate memory
Do I have any options aside from trying to cross-compile on a larger system? (The RPi3 has 1 GB of RAM and a small 100 MB swap partition.)
You really only have two options I can think of, given your Pi's physical constraints: 1) figure out whether you can attach an external device (SSD, flash drive, etc.) and have the system swap to it, or 2) set up a cross-compilation environment on your more powerful rig.
I would personally recommend #2, as it is going to be faster and more flexible. The internet is full of guides on how to cross-compile for the Pi on a multitude of hosts.
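If you do go with option 1, the sketch below adds a temporary swap file on an attached drive; the mount point /mnt/usb is an assumption:

sudo dd if=/dev/zero of=/mnt/usb/swapfile bs=1M count=2048   # 2 GB swap file
sudo chmod 600 /mnt/usb/swapfile
sudo mkswap /mnt/usb/swapfile
sudo swapon /mnt/usb/swapfile
./b2 --with-log   # retry the build with the extra swap available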
I'm working on a Cyclone V SoC FPGA from Altera with a dual Cortex-A9 processor. The embedded system (Linux 4.15.7) is created with Buildroot 2018.02. U-Boot is used to load the system, i.e. the FPGA .rbf file, device tree blob and zImage, and everything works fine.
I now want to integrate the RBF file into my Linux system and program the FPGA from Linux. I found several methods, and the one I understand to be the most common is to use configfs with a device tree overlay.
So I changed my device tree to integrate the overlay, changed the U-Boot boot script to disable the FPGA load, and also enabled the following options in the Linux .config file with make linux-xconfig:
+CONFIG_OF_OVERLAY=y
+CONFIG_ALTERA_STAPL=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLE_CONFIGFS=m
These options reflect the state I am in now, after several tries.
After a make and a reboot, once the kernel is loaded, I enter the following commands in the console:
mkdir /config
mount -t configfs none /config
At this point, I expected to see some device tree files in the /config folder, but there weren't any, only a single rdma_cm folder:
# ls /config
rdma_cm
I continued my reading on this topic and found that I must enable the CONFIG_OF_CONFIGFS option in my Linux kernel.
PROBLEM: this option is not available in my kernel. The file drivers/of/configfs.c is not there either. I have searched in vain for how to enable device tree overlays for my kernel version.
How can I configure my kernel to make device tree overlays available through configfs?
I had the same problem as you, so I had to write a device driver myself.
This device driver is tentative; I expect mainline Linux to eventually support Device Tree Overlay via configfs officially.
The device driver I made is available at the following URL:
https://github.com/ikwzm/dtbocfg
If you are using Debian, you can build a Debian package of the device driver from the following URL:
https://github.com/ikwzm/dtbocfg-kmod-dpkg
If you want to try a Device Tree Overlay using this device driver, please refer to the following URLs:
https://github.com/ikwzm/FPGA-SoC-Linux
https://github.com/ikwzm/FPGA-SoC-Linux-Example-1-DE10-Nano
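As a rough sketch of how such a configfs-based overlay driver is used once its module is loaded and configfs is mounted on /config (the overlay name "fpga" and the .dtbo file name are assumptions; check the dtbocfg README for the exact interface):

mkdir /config/device-tree/overlays/fpga
cp fpga.dtbo /config/device-tree/overlays/fpga/dtbo
echo 1 > /config/device-tree/overlays/fpga/status   # apply the overlay
echo 0 > /config/device-tree/overlays/fpga/status   # remove it again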
So I am on a quest to learn embedded Linux and have a few questions that I cannot seem to find an answer to.
1) Does the kernel depend on the dtb/dts files when compiling? I thought that the kernel only needs to know the chip architecture (i.e. arm), and the dtb file is loaded by the boot loader (uBoot), so therefore the kernel only needs to load its drivers, which are configured by the dtb file.
2) Mixing and matching: I'm under the impression that I can mix and match any combination of boot loader, dtb, kernel, rootfs, and modules given the following
kernel: must know which chip it is compiled for
dtb: must know the board details and chip, i.e. how much ram, configure a GPIO for SPI
boot loader: must know the chip and uEnv.txt must have params for the kernel and dtb location
rootfs: completely independent
modules: must be compiled with the specific version of kernel
3) Drivers: If I want to load an SPI driver do I need anything specific, or will the kernel know how to operate this because the dtb file set up the required registers?
4) Modules: Are these just dependent on the kernel, or do they need to know something about the chip and board (when I say chip, what I mean is: do they have to know more than a simple arm or x86 architecture)?
Thank you in advance, I know these are some basic questions but any help is appreciated.
1) Does the kernel depend on the dtb/dts files when compiling? I thought that the kernel only needs to know the chip architecture (i.e. arm) and the dtb file is loaded by the boot loader (uBoot [sic]) so therefore the kernel only needs to load its drivers which are configured by the dtb file.
The Linux kernel is compiled without any dependency on the Device Tree.
The compilation of the kernel does depend on the chip architecture, but which code modules that are compiled depends on the board configuration(s) and feature selection.
BTW it's U-Boot for Universal Boot, not microBoot.
2) Mixing and matching: I'm under the impression that I can mix and match any combination of boot loader, dtb, kernel, rootfs, and modules given the following
kernel: must know which chip it is compiled for
dtb: must know the board details and chip, i.e. how much ram, configure a GPIO for SPI
boot loader: must know the chip and uEnv.txt must have params for the kernel and dtb location
rootfs: completely independent
modules: must be compiled with the specific version of kernel
Essentially correct, but typically one doesn't go overboard in trying to "mix-n-match". There are often optimal or preferred (or at least appropriate) choices.
By "rootfs" I'm assuming you mean type of filesystem for the rootfs, rather some image of a rootfs. (See Addendum below.)
3) Drivers: If I want to load an SPI driver do I need anything specific or
There are two types of "SPI driver": the master (controller) driver and the protocol driver.
The SPI master driver is for the SPI controller chip that serves as the bus master for the interface. This is usually a platform driver, and it does not have a device node in /dev.
For each SPI slave device there must be a protocol driver. This driver will typically have a device node in /dev.
will the kernel know how to operate this because the dtb file set up the required registers?
The Device Tree must specify which driver is for which device and any/all resources allocated/assigned to each device.
The dtb file does not "set up" anything. It is only configuration data; there is no executable code. A device driver, typically during its probe or initialization phase, is responsible for acquiring and allocating its resources.
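As an illustration of the master/protocol split (the device names are assumptions), the master driver only registers a controller, while a slave bound to the generic spidev protocol driver is what actually shows up under /dev:

ls /sys/class/spi_master/   # controllers registered by the master driver, e.g. spi0
ls /dev/spidev*             # e.g. /dev/spidev0.0 = bus 0, chip select 0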
4) Modules: Are these just dependent on the kernel or do they need to know something about the chip and board?
Your use of "modules" is ambiguous. Source code files are sometimes referred to as "modules". Presumably you really mean loadable kernel modules.
Although most people associate kernel modules (only) with device drivers, other kernel services such as filesystems and network protocol handlers can also be built as modules.
The primary rationale for a kernel module versus static linkage (i.e. built into the kernel) is runtime configurability (which in turn improves memory efficiency). Optional features, services and drivers can be left out of the kernel that is booted, but can still be loaded later when needed.
Loadable modules are "dependent" on the kernel simply because of linking requirements for proper execution. The degree of "chip and board knowledge" obviously depends on the functionality of the module, just like any other piece of kernel code.
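One concrete way to see that dependency (the module name is assumed): every loadable module records the exact kernel build it was compiled against, and the kernel refuses to load it on a mismatch.

modinfo hello.ko | grep vermagic   # e.g. "vermagic: 4.15.7 SMP mod_unload ARMv7"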
Addendum
when I say rootfs I am referring to a prebuilt rootfs
A kernel image and (prebuilt) rootfs image are not "completely independent".
The executable binaries and the shared libraries in the rootfs image must be compatible with the kernel features. More significantly, since kernel loadable modules are installed in the rootfs and not with the kernel image, and these modules can be strictly tied to a specific build of a kernel version, it makes sense to pair a kernel image with a rootfs image.
I am trying to bring up the kernel and rootfs generated by Buildroot on a Raspberry Pi board. I am able to bring up the minimal kernel and access a shell via a serial cable.
I can see some .ko files that look like peripheral drivers in the rpi-firmware package that Buildroot downloads. Is it possible to integrate those into the kernel image? If so, how?
Figured it out. I just have to enable the required drivers from the Linux configuration menu (make linux-menuconfig).
If I enable them as modules, they are copied into a folder under /lib. Otherwise, they are integrated into the zImage.
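A sketch of that workflow from the Buildroot top-level directory (the install path is the usual Buildroot convention):

make linux-menuconfig   # mark a driver <*> (built into the zImage) or <M> (module)
make                    # rebuild; <M> drivers land under /lib/modules/<kernel-version>/ in the target rootfs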