Yocto kernel config propagation - linux-kernel

In my Yocto system, I have a layer defining a bunch of patches on the linux kernel, as well as a file "defconfig" containing kernel configuration. When I modify this file, changes are reflected in the image I build.
However, a few changes are being overruled and I have a hard time figuring out how or where. I do find a bunch of defconfig files in other layers, but is there any easy way to figure out which ones are applied and in what order?
Thanks

It is not other defconfigs that overrule your configuration (at least not in any even remotely sane setup), but configuration fragments. You can find out exactly what happens like this:
bitbake -e virtual/kernel | less
(you can of course choose another pager, or redirect to a file for additional processing)
And look for:
KERNEL_FEATURES
--> here you can find a list of kernel configuration fragments in the form of .scc files that are applied to your build
SRC_URI
--> this should mention the path to your defconfig file, and there should be no second one.
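If you prefer not to page through the whole output, you can also filter for just these two variables (a minimal sketch; bitbake -e prints each variable as VAR="value"):
bitbake -e virtual/kernel | grep -E '^(KERNEL_FEATURES|SRC_URI)='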
Please note that this description only holds entirely true for setups that include a kernel defconfig. If you are working without one, things can be different.
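For context, such a fragment is usually an .scc file pointing at a .cfg file containing the actual options. A minimal sketch following the kernel-yocto conventions (all names here are made up):
# disable-example.scc
define KFEATURE_DESCRIPTION "Example configuration fragment"
kconf non-hardware example.cfg
# example.cfg
CONFIG_EXAMPLE=y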

Related

Can a Linux Kernel defconfig file be sorted?

So I use defconfig files to keep the kernel settings/configuration for an embedded project.
Whenever I make changes to the actual config through means like make menuconfig and then run make savedefconfig afterwards, the resulting defconfig file looks completely different from the preexisting one. The order of the contained config options seems random, or at least follows some order that is not obviously understandable.
The problem with this, for me, is that the defconfig files are no longer diff-able.
To work around this I want to sort the defconfig file before checking it in.
This way I can perfectly track any changes using any diff-tool.
Question is:
Is there any inherent order necessary in defconfig files for the Linux kernel? Will this mess with the kernel build system in any way?
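For reference, a minimal sketch of the sorting workaround described above (assuming plain lexical sorting turns out to be acceptable; the target path is hypothetical):
make savedefconfig                                  # writes a minimal config to ./defconfig
LC_ALL=C sort defconfig > defconfig.sorted          # stable, diff-able order
mv defconfig.sorted arch/arm/configs/my_defconfig   # hypothetical target path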

Integrating changes in the kernel using Yocto using patches

If you want to modify the Linux kernel such that it excludes certain modules, you usually go to /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, which has a bunch of Kconfig symbols, and comment out the ones you want to exclude, as shown below.
CONFIG_PPP=y
#CONFIG_PPPOL2TP=y
CONFIG_PPP_ASYNC=y
Then I build the Linux image by running bitbake virtual/kernel, which should ideally have my changes integrated, but when I boot the image, I still see some log output from the module I commented out.
I checked the Yocto documentation, and it looks like they create a patch of the file they want to modify and then reference that patch in the .bbappend file like:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
So in my case, if I were to modify /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, I would:
create a copy of this original file
paste it into poky/meta-bsp/recipes-kernel/linux-msm/files
rename it
include this file in the .bbappend (as shown above)
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
You are right, source modifications are done with the help of patches in Yocto. Answering your question:
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/_defconfig that I planned on modifying by this approach?
it is done automagically:)
That is, the build system (bitbake) automatically detects files with the .patch extension among the SRC_URI content, copies those files from the meta-layer directory to the build directory somewhere under build/tmp/work/..., and automatically applies those patches to the source code (the sources directory is defined by the S variable in the recipe).
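For completeness, a common way to create such a patch in the first place is with git, from the unpacked kernel source tree (a sketch; the paths and the commit message are illustrative):
cd build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-msm/<version>/git
git add arch/arm/configs/vendor/<machine-name>_defconfig
git commit -m "Disable PPPOL2TP"
git format-patch -1 -o <path-to-your-layer>/recipes-kernel/linux-msm/files/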
But there are some recommendations about the other parts of your question. Some concepts come first; skip ahead to the right way or even the fast way below if bored))
One of the main ideas behind Yocto is reproducible and extensible builds. That means that all metadata is split into different repositories (that is, layers) that are maintained by different teams. Basically, teams do not have access to each other's repos, and none has access to some external project's (the kernel's, in your case) source code. Of course, one may create git forks of all this, but that is way more complicated than writing metadata. That's why Yocto was created - to take external metadata (e.g., the poky layer) and source code (the kernel) and make it all work only by writing your own metadata (your own layer), with no git forks.
So, adding your patch and .bbappend to the poky layer makes no sense, because you wouldn't be able to commit the poky layer and distribute it to your customers - the poky layer does not belong to you) You may create a git fork, of course, but that complicates the process of distribution.
That's why the right way is:
create your own BSP-layer
create .bbappend in your layer
add the patch to the kernel in your layer (actually the right way is to use .scc files, but a plain patch will also work; see the sketch below)
This shall make your build
reproducible - rerunning the kernel compilation, or even a fresh clone of Yocto plus your BSP layer, would produce the same kernel
distributable (extensible) - customers need only your BSP layer; the rest of the Yocto stuff can be found on the internet
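A minimal sketch of what this could look like in your own BSP layer (the layer, recipe and patch names are hypothetical):
# meta-yourbsp/recipes-kernel/linux-msm/linux-msm_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-disable-pppol2tp.patch"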
And finally, the fast way. If you don't need all this reproducibility and extensibility, but just want to build some stuff for your home project and then send Yocto to the trash, hack the build like this:
clean all the kernel stuff you have: bitbake -c cleansstate virtual/kernel
get the kernel sources (clone and patch): bitbake -c patch virtual/kernel
manually make every change you need in the kernel sources, found somewhere under a working path like <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
build the kernel: bitbake -c package virtual/kernel (cloning and patching will be skipped, as bitbake remembers these tasks are done; the kernel build artifacts won't be removed, because the rm_work task runs after package, and kernel stuff actually is never removed)
But in this case, if this kernel stuff is removed, or breaks somehow, you'll have to walk through this algorithm again (remember - not reproducible).
To easily rebuild the kernel manually (not through bitbake), you can run the script <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I want) have to change the include paths or global defines. This triggers a complete recompile of all code because it needs to 'check' for changed behaviour caused by these changes. The problem is: I didn't necessarily change parameters relevant to the HAL, and as such it isn't needed (as far as I understand) for these files to be completely recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL: it is the standardized interface through which the program interacts with the underlying hardware. It is supplied as .c and .h files instead of in the precompiled form of the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx and DEBUG are useful for configuring the HAL library, and thus there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. Therefore it seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the don't-ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, and therefore this question at least qualifies as unique. It is also meant to invite a rich answer which elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is.. it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I was to express my opinion and experience - personally I wouldn't do it, as it may lead to lower reliability or side effects that will be very hard to diagnose and will only get worse as you add more source files and more libraries like this. Not to mention adding more people to work on the project and explaining to them how they "need to remember to rebuild X library when they change given set of header files or project-wide definitions".
In fact, we've run into the same dilemma for the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I was to give any general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration in one place, create pretty comments and sections for each software module in this one file. It's easier to manage this way (until this file becomes too big), but because this file is so common, it means that any change made to it will cause a full rebuild. Split such files to separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and such a header is then included elsewhere. In this case, making a change to any of those "smaller" headers will force recompiling all source files that include the larger file. If such a file didn't exist, then only the sources explicitly including that one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either, e.g. they may not be meant to be included at all, being internal library files which may change at any time.
Prefer including headers in source files over including them in other headers. If you have a pair of your own *.c (*.cpp) and *.h files - let's say temp_logger.c/.h - and you need ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header file in your temp_logger.c file. Later on, all files making use of the temp_logger functions won't have to be recompiled when the HAL gets rebuilt again.
My opinion is yes, build the HAL into a library. The benefit of faster build times outweighs the risk of the library getting out of date. After some point early in the project, it's unusual for me to change something that would affect the HAL, and the faster build time pays off many times over.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

How can I build bootimage without going through all Makefiles

I am currently working on the Linux kernel for an Android phone. My workflow is:
Make changes to kernel code
Build with make bootimage
Flash with fastboot flash boot
This works fine. However, building takes unnecessarily long because make bootimage first goes through the entire tree and includes all Android.mk files. This takes longer than actually compiling the kernel and creating a boot image. Including these files should not be necessary since there are no changes to them. To decrease the turnaround time in my workflow, I would like to speed up the build step.
When building other projects, there are ways to not build dependencies and thus skip reading all Android.mk files (for instance mm).
There is a make target bootimage-nodeps which seems to do the right thing: it makes a new boot image without going through all Android.mk files. Unfortunately, the dependencies also include the kernel itself (which therefore does not get built although there are changes).
My question is: is there a way to build the kernel and create a boot image without having to read all Android.mk files?
In case you're still looking into it, try using the showcommands goal with make, for example:
make bootimage showcommands
The showcommands goal will show all commands needed to build the kernel and the bootimage. Some of the commands, including the one that creates the bootimage, have $(hide) in front and are normally not shown.
Once you know the commands and the arguments, the next time you need to make the bootimage you can run the commands manually (without using make bootimage and without including all the makefiles).
I have exactly the same problem and this is the only working solution I've found.
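For illustration, the bootimage step captured this way typically boils down to an mkbootimg invocation along these lines (a sketch; take the exact arguments from the showcommands output, the values below are made up):
out/host/linux-x86/bin/mkbootimg --kernel out/target/product/<device>/kernel --ramdisk out/target/product/<device>/ramdisk.img --cmdline "console=ttyHSL0" --base 0x80000000 --output out/target/product/<device>/boot.img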
I am not sure if you can save time this way (since this solution needs to zip/unzip multiple times, which takes more time than searching for all the Android.mk files on my machine), but since your question is:
Is there a way to build the kernel and create a boot image without
having to read all Android.mk files.
you could give this a try:
Preparations:
call make dist once
unzip the target_files.zip in out/dist/
now create a script that does the following for you (a sketch follows after this list):
overwrite the kernel in your unpacked target_files with your newly built kernel
zip your target_files with your new kernel
use the python script img_from_target_files from build/tools/releasetools/ with the extra parameter -z. Example: img_from_target_files -z out/dist/target_files.zip new_imgs.zip
inside the newly created new_imgs.zip you will find your new boot.img with your new kernel
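A sketch of what such a script could look like (all paths and the kernel's location inside target_files are assumptions and may differ per device):
#!/bin/sh
# Swap the freshly built kernel into the unpacked target files
cp out/target/product/<device>/kernel out/dist/target_files/BOOT/kernel
# Re-zip the target files
(cd out/dist/target_files && zip -qr ../target_files.zip .)
# Regenerate the images; boot.img with the new kernel ends up inside new_imgs.zip
build/tools/releasetools/img_from_target_files -z out/dist/target_files.zip new_imgs.zip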
You can try the ONE_SHOT_MAKEFILE approach, if you know the path to your Android.mk:
m -j8 ONE_SHOT_MAKEFILE=build/target/board/Android.mk bootimage
This worked for me pretty well in the Android L/M/N releases.

Buildroot speed up

For a project (ARM i.MX6) I use Buildroot to create my setup.
This works very well.
My problem is that I want to re-use the build to create an installer,
without using the make clean option.
What I am currently doing:
1]. compile buildroot with file system overlay (A)
2]. make clean
3]. compile buildroot with file system overlay (B)
This takes a lot of time.
I have tried:
1]. compile buildroot with file system overlay (A)
2]. compile buildroot with file system overlay (B)
But then the files from system overlay (A) are inserted in system overlay (B).
I need A and B because in B I have the installer scripts and the tar.gz result of A.
What I do not want in B are the scripts of A, because A has network settings which I do not need in B.
There is at the moment not really a good way to achieve this in Buildroot. It is possible, however, to build your toolchain separately and use it for both build configurations. In other words, you would have three configurations:
toolchain_defconfig
overlay_a_defconfig
overlay_b_defconfig
toolchain_defconfig should set BR2_HOST_DIR to point to a permanent location, e.g. /opt/toolchain. The other configs should use a custom external toolchain based at that location.
This option is only possible for the toolchain itself (so compiler and libc). It is not possible to extend the toolchain with additional libraries and use these in the build.
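As an illustration, the relevant bits of the three configs could look roughly like this (a sketch; /opt/toolchain is just an example location):
# toolchain_defconfig (fragment)
BR2_HOST_DIR="/opt/toolchain"
# overlay_a_defconfig and overlay_b_defconfig (fragment)
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/toolchain"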
A second possibility is to enable BR2_CCACHE. ccache is able to avoid most of the actual compilation steps. This is particularly helpful for big packages (Qt, Webkit, Node, ...).
Finally, if the only difference between your two configurations is the overlay, you also have the option to build everything except the overlay with Buildroot and write a custom script that combines the tarball generated by Buildroot with your overlay and creates the final rootfs from that.
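A sketch of such a combine step (directory and file names are made up):
# Unpack the Buildroot rootfs, lay overlay B on top, and repack
mkdir -p combined && tar -C combined -xf output/images/rootfs.tar
cp -a overlay_b/. combined/
tar -C combined -cf output/images/rootfs-installer.tar .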
As far as I understand, you just need two "independent" root file systems. You can create two folders and use them with two different configs:
cd buildroot
mkdir rootfsa
mkdir rootfsb
make O=rootfsa menuconfig
make O=rootfsa
make O=rootfsb menuconfig
make O=rootfsb
Now you have two different rootfs.tar files: one in rootfsa/images and another one in rootfsb/images.
