Building a kernel for an ARM processor - linux-kernel

I am working on an Odroid XU3 running Ubuntu. For the DS-5 software to cross-compile for profiling, I need to build a Linux kernel with a specific configuration. I am new to this, but I have created a uImage of the kernel on the host machine for the ARM processor. How can I get that kernel onto the target platform, i.e. the Odroid?
For profiling I need gatord and a kernel with a specific configuration installed on the target machine. I am done with gatord and have built the kernel on the host; I just need to copy it to the target. But copying it via the Odroid's SD card is not working. Please let me know.

Since you have created a uImage, it seems you have U-Boot as the bootloader on your target board. U-Boot can in turn download kernel uImages via TFTP. I haven't worked with a device like yours, but if it has an Ethernet port, you could use that.
You also have to know the U-Boot commands (fortunately, a lot of information can be found on the Internet; just ask Google).
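As a rough sketch, a TFTP boot from the U-Boot prompt might look like the following (the IP addresses and the load address are placeholders; use values suitable for your network and board):

# board's own address and the TFTP server's address
setenv ipaddr 192.168.0.10
setenv serverip 192.168.0.1
# fetch the uImage into RAM, then boot it
tftpboot 0x40008000 uImage
bootm 0x40008000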

Related

Setting up a development environment for Linux device drivers

I am trying to read the LDD book by Jonathan Corbet, Greg Kroah-Hartman, and Alessandro Rubini and to implement the sample modules. To begin with, I set up a development system: I installed Ubuntu 16.04 Xenial, created a directory, and wrote the hello_world module with a Makefile. I got it built, ran it, and verified the dmesg logs.
Is that all there is to the development setup? I searched online and found articles that ask you to download and compile the kernel and to boot the kernel in a VM. What is the reason for that? Or what am I missing?
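For reference, here is roughly the cycle I use to build and test it (module name and paths may differ on your system):

# build against the headers of the running kernel
make -C /lib/modules/$(uname -r)/build M=$PWD modules
# load the module and check its printk output, then unload it
sudo insmod hello_world.ko
dmesg | tail
sudo rmmod hello_world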
Is there any better article which clarifies this?
Thanks
hago
You can try one more way:
If you are on native Windows, install virtual machine software such as VirtualBox. Get your favourite Linux distribution (no bias, just an example: Ubuntu) and install it in VirtualBox.
Get the latest kernel (or one of your choice) from kernel.org.
Choose the platform you want to build this kernel for, e.g. arm64 or x86.
If you do not have a real board (e.g. a Raspberry Pi for the ARM variant), you can use QEMU (qemu-system-aarch64 or qemu-system-x86_64) to run your compiled kernel; see the sketch below.
Another advantage of QEMU for newbie kernel developers: if they write a module that crashes, only the QEMU instance crashes, so no harm is done.
I think QEMU is a good option for people who are starting to learn kernel programming, want to try writing their own modules, and do not intend to purchase hardware at this point.
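A minimal sketch of that flow, assuming an arm64 cross-compiler is installed and a small initramfs image named initramfs.cpio.gz (both are placeholders for whatever you actually use):

# cross-compile the kernel for arm64
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) Image
# boot it on QEMU's generic 'virt' machine, console on the serial port
qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic \
  -kernel arch/arm64/boot/Image \
  -initrd initramfs.cpio.gz \
  -append "console=ttyAMA0"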
It depends on your target. In your case, you have made a kernel driver for your own computer (which already runs a Linux kernel).
But if you want to develop a kernel driver for another target, like a Raspberry Pi, an ARM board, or an x86-64 board, you must learn to compile the kernel, edit the kernel config, boot the kernel image, and so on, because each target has a different kernel version.
You can refer to this training for more detail: https://bootlin.com/training/embedded-linux/

configfs does not mount device-tree/overlays

I'm working on a Cyclone V SoC FPGA from Altera with a dual Cortex-A9 processor. The embedded system (Linux 4.15.7) is created with Buildroot 2018.02. U-Boot is used to load the system, i.e. the FPGA .rbf file, the device tree blob, and the zImage, and everything works fine.
I now want to integrate the RBF file into my Linux system and program the FPGA from Linux. I found several methods, and the one I understand to be the most common is to use configfs with a device-tree overlay.
So I changed my device tree to integrate the overlay, changed the U-Boot boot script to disable the FPGA load, and enabled the following options in the Linux ".config" file with make linux-xconfig:
+CONFIG_OF_OVERLAY=y
+CONFIG_ALTERA_STAPL=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLE_CONFIGFS=m
These options reflect where I am now after several tries.
After a make and a reboot, once the kernel is loaded, I enter the following commands in the console:
mkdir /config
mount -t configfs none /config
At this point I expected to see some device-tree entries in the /config folder, but there weren't any, only a single rdma_cm folder:
# ls /config
rdma_cm
I continued reading on this topic and found that I must enable the CONFIG_OF_CONFIGFS option in my kernel.
PROBLEM: this option is not available in my kernel, and the file drivers/of/configfs.c is not there either. I have searched in vain for a way to enable device-tree overlays for my kernel version.
How can I configure my kernel to make the device tree available in configfs?
I had the same problem as you, so I had to write a device driver myself.
This device driver is tentative; I expect Linux mainline to officially support Device Tree Overlay via configfs eventually.
The device driver I made is available at the following URL.
https://github.com/ikwzm/dtbocfg
If you are using Debian, you can build a Debian package of the device driver using the following repository:
https://github.com/ikwzm/dtbocfg-kmod-dpkg
If you want to try Device Tree Overlay using this device driver, please refer to the following URLs:
https://github.com/ikwzm/FPGA-SoC-Linux
https://github.com/ikwzm/FPGA-SoC-Linux-Example-1-DE10-Nano
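Roughly, applying an overlay with the driver looks like this (the overlay name fpga and the file fpga.dtbo are examples; check the repository README for the exact interface):

insmod dtbocfg.ko
# each overlay gets its own directory under configfs
mkdir /config/device-tree/overlays/fpga
# load the compiled overlay blob, then activate it
cp fpga.dtbo /config/device-tree/overlays/fpga/dtbo
echo 1 > /config/device-tree/overlays/fpga/status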

Cross-compile for PowerPC Linux in Windows environment

Because of some constraints, I will have to build Linux user-space applications for PowerPC QorIQ (P1020) in a Windows environment.
Has anyone tried this and could recommend a toolchain (working natively on Windows 7)?
Using virtual machines for building the applications is not acceptable. A VM may only be used for kernel and rootfs preparation.
There are toolchains like http://gnutoolchains.com/powerpc-eabi/, but they do not provide the Linux libraries. I guess I need something like powerpc-fsl-linux-gcc.exe.
I have also tried attaching the meta-mingw layer to Yocto and bitbaking meta-toolchain, but I'm not sure if that's the right path; I got many build errors.

Simple bootloader for running Linux kernel on a simulator

We have built a simple instruction-set simulator for the SPARC V8 processor. The model consists of a V8 processor, main memory, a character input device, and a character output device. Currently I am able to run simple user-level programs on this simulator; they are built with a cross-compiler and placed directly in the modelled main memory.
I am trying to get a Linux kernel to run on this simulator by building the simplest possible bootloader. (I'm considering uClinux, which is made for MMU-less systems.) The uncompressed kernel and the filesystem are both assumed to be present in main memory, and all my bootloader has to do is pass the relevant information to the kernel and jump to the start of the kernel code. I have no experience in OS development or in porting Linux.
I have the following questions :
What is the bare minimum information that a bootloader has to supply to the kernel?
How do I pass this information?
How do I point the kernel at my custom input/output devices?
There is some documentation available on porting Linux to ARM boards, and from it, it seems that the bootloader passes information such as the RAM size via a data structure called ATAGS. How is this done in the case of a SPARC processor? I could not find much SPARC documentation on the Internet. There is a Linux bootloader for the LEON3 implementation of SPARC V8, but I could not find the specific information I was looking for in its code.
I would be grateful for any links that explain the bare minimum information to be passed to a kernel and how to pass it.
Thanks,
-neha

Raspberry Pi rpi-firmware and .ko files in the Buildroot package

I am trying to bring up the kernel and root filesystem generated by Buildroot on a Raspberry Pi board. I am able to bring up the minimal kernel and access a shell via a serial cable.
I can see some .ko files that look like peripheral drivers in the rpi-firmware package that Buildroot downloads. Is it possible to integrate those into the kernel image? If so, how?
Figured it out. I just have to enable the required drivers from the Linux configuration menu (make linux-menuconfig).
If I enable them as modules, they are copied into a folder under /lib. Otherwise, they are integrated into the zImage.
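As a sketch of that flow from the Buildroot top-level directory (pick the actual drivers inside the menu):

# open the kernel configuration menu
make linux-menuconfig
#   <M> builds a driver as a module (installed under /lib/modules in the rootfs)
#   <*> links it directly into the zImage
# rebuild the kernel and regenerate the images
make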
