Fastest way to rebuild the Linux kernel using rpmbuild - linux-kernel

I just built the linux kernel for CentOS using the instructions that can be found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now, I made my changes and I would like to rebuild the kernel and test it with my changes. How do I do that but:
1. Without having to recompile everything. The build process should reuse whatever object files were generated by the first build that won't need to be modified.
2. Without having to build the other packages that are built along with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.

You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.
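For reference, the rebuild command from the linked CentOS instructions looks roughly like this (the spec file name can differ between releases); rpmbuild reruns %prep and recompiles from scratch every time:
cd ~/rpmbuild/SPECS
rpmbuild -bb --target=$(uname -m) kernel.spec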

Related

Integrating changes into the kernel using Yocto patches

If you want to modify the linux kernel so that it excludes certain modules, you usually go to /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, which has a bunch of Kconfig symbols; the ones I want to exclude are commented out, as shown below.
CONFIG_PPP=y
#CONFIG_PPPOL2TP=y
CONFIG_PPP_ASYNC=y
Then I build the linux image by running bitbake virtual/kernel which should ideally have my changes integrated, but when I boot the image, I still see some of the logs of the commented module showing up.
I checked the Yocto documentation, and it looks like they create a patch of the file they want to modify, and then add that patch to the .bbappend file like:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
So in my case, if I were to modify /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, I would:
create a copy of this original file
paste it into poky/meta-bsp/recipes-kernel/linux-msm/files
rename it
include this file in the .bbappend (as shown above)
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
You are right, source modifications are done with the help of patches in Yocto. Answering your question:
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
it is done automagically:)
That is, the build system (bitbake) automatically detects files with the .patch extension in the SRC_URI content, copies them from the meta-layer directory to the build directory somewhere under build/tmp/work/..., and automatically applies those patches to the source code (the source directory is defined by the S variable in the recipe).
But there are some recommendations about the other parts of your question. A few concepts follow - skip ahead to "the right way" or even "the fast way" below if you get bored.
One of the main ideas behind Yocto is reproducible and extensible builds. That means that all metadata is split into different repositories (that is, layers) that are maintained by different teams. Basically, teams do not have access to each other's repos, and nobody has access to the source code of some external project (the kernel in your case). Of course, one could create git forks of all of this, but that is way more complicated than writing metadata. That's why Yocto was created - to take external metadata (e.g., the poky layer) and source code (the kernel) and make them work together just by writing your own metadata (your own layer), without any git forks.
So, adding your patch and .bbappend to the poky layer makes no sense, because you wouldn't be able to commit to the poky layer and distribute it to your customers - the poky layer does not belong to you. You could create a git fork, of course, but that complicates distribution.
That's why the right way is:
create your own BSP-layer
create .bbappend in your layer
add the patch to the kernel in your layer (actually the right way is to use .scc files, but a plain patch will also work; see the sketch below)
This will make your build:
reproducible - rerunning the kernel compilation, or even a fresh clone of Yocto plus your BSP layer, will produce the same kernel
distributable (extensible) - customers only need your BSP layer; the rest of the Yocto stuff can be found on the internet
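For illustration, a minimal .bbappend in your own BSP layer might look like the following; the layer name and patch file name here are made up (the recipe name is taken from the linux-msm path in the question), so adjust them to your setup:
# meta-mybsp/recipes-kernel/linux-msm/linux-msm_%.bbappend   (hypothetical layer and file names)
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-disable-pppol2tp-defconfig.patch"
bitbake will pick the patch up from your layer via FILESEXTRAPATHS and apply it during do_patch, exactly as described above.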
And finally, the fast way. If you don't need all this reproducibility and extensibility, but just want to build some stuff for a home project and then send Yocto to the trash, hack the build like this:
clean all the kernel stuff you have: bitbake -c cleansstate virtual/kernel
get the kernel sources (clone and patch): bitbake -c patch virtual/kernel
manually make whatever changes you need in the kernel sources, found somewhere like <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
build the kernel: bitbake -c package virtual/kernel (cloning and patching will be skipped, as bitbake remembers these tasks are done; the kernel build artifacts won't be removed either, because the rm_work task runs after package, and the kernel stuff is actually never removed)
But in this case, if the kernel stuff is removed or breaks somehow, you'll have to walk through this algorithm again (remember - not reproducible).
To easily rebuild the kernel manually (not through bitbake), you can run the script <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile
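Put together, the fast-way loop looks something like this (the work directory path is an example; fill in your machine name and kernel version):
bitbake -c cleansstate virtual/kernel
bitbake -c patch virtual/kernel
# edit the sources under <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
bitbake -c package virtual/kernel
# or, to just recompile manually:
<TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile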

I stopped the make command mid-way, will previous build be affected?

I built a very big project, which had a number of sub-projects, using make command. It took me 3 hours. Then by mistake (without cleaning the previous build) I re-executed the make command for a few minutes and then stopped it.
Have I ruined my previous build? How does make actually work behind the scenes? Are the object files built in an atomic and safe manner?
Note: I cannot really run any of my binary files to see if they are broken since that's another lengthy process. I just want to know if I am fine or I have to re-run the make and let it finish.
If you want to publish this binary as a production version of your commercial product, then I would not rely on it; always be 100% sure that you are using a successfully built version of a fully saved and committed code base.
On the other hand, if you need this for debugging purposes, then you can use it! Why? Because make overwrites the output binary only once it finishes compiling all the object files, and only if it detects changes that require a relink of the binary:
After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.
From GNU make: How Make Works
So if you haven't changed your code base, the linker will not relink the binary, leaving it as it was created by the successful build.
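The (abridged) makefile behind that quote, from the GNU make manual's example, makes this concrete (recipe lines must start with a tab):
edit : main.o kbd.o command.o display.o
	cc -o edit main.o kbd.o command.o display.o

main.o : main.c defs.h
	cc -c main.c
Until the link rule for edit actually runs, the previously linked edit on disk is left alone, so stopping make while it is still recompiling object files does not touch it.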

How can I build bootimage without going through all Makefiles

I am currently working on the Linux kernel for an Android phone. My workflow is:
Make changes to kernel code
Build with make bootimage
Flash with fastboot flash boot
This works fine. However, building takes unnecessarily long because make bootimage first goes through the entire tree and includes all Android.mk files. This takes longer than actually compiling the kernel and creating a boot image. Including these files should not be necessary since there are no changes to them. To decrease turnaround time in my workflow, I would like to speed up the build step.
When building other projects, there are ways to not build dependencies and thus skip reading all Android.mk files (for instance mm).
There is a make target bootimage-nodeps which seems to do the right thing: It makes a new boot image without going through all Android.mk files. Unfortunately dependencies also include the kernel itself (which therefore does not get built although there are changes).
My question is: Is there a way to build the kernel and create a boot image without having to read all Android.mk files?
In case you're still looking into it, try using the showcommands goal with make, for example:
make bootimage showcommands
The showcommands goal will show all the commands needed to build the kernel and the bootimage. Some of the commands, including the one that creates the bootimage, have $(hide) in front and are normally not shown.
Once you know the commands and the arguments, next time you need to make the bootimage you can run the commands manually (without using make bootimage and without including all the makefiles).
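The command you end up extracting is typically a mkbootimg invocation; its exact shape depends on the device, but it looks roughly like this (every path and value below is a placeholder - copy the real ones from the showcommands output):
out/host/linux-x86/bin/mkbootimg --kernel out/target/product/<device>/kernel \
    --ramdisk out/target/product/<device>/ramdisk.img \
    --cmdline "<kernel command line>" \
    --base <base-address> \
    --output out/target/product/<device>/boot.img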
I have exactly the same problem and this is the only working solution I've found.
I am not sure if you can save time this way (since this solution needs to zip/unzip multiple times, which takes more time than searching for all the Android.mk files on my machine), but since your question is:
Is there a way to build the kernel and create a boot image without having to read all Android.mk files.
you could give this a try:
Preparations:
call make dist once
unzip the target_files.zip in out/dist/
now create a script that does the following for you:
overwrite the kernel in your unpacked target_files with your newly built kernel
zip your target_files with your new kernel
use the python script img_from_target_files from build/tools/releasetools/ with the extra parameter -z. Example: img_from_target_files -z out/dist/target_files.zip new_imgs.zip
inside the newly created new_imgs.zip you will find your new boot.img with your new kernel
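A rough sketch of such a script, following the steps above; the unpack directory name, the BOOT/kernel location inside the target-files package, and the product paths are assumptions, so verify them against your own tree:
#!/bin/sh
set -e
cd out/dist
# 1. overwrite the kernel in the unpacked target_files with the freshly built one
cp ../target/product/<device>/kernel target_files/BOOT/kernel
# 2. update the zip with the new kernel
(cd target_files && zip -q ../target_files.zip BOOT/kernel)
# 3. repack only the images; boot.img ends up inside new_imgs.zip
../../build/tools/releasetools/img_from_target_files -z target_files.zip new_imgs.zip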
You can try the ONE_SHOT_MAKEFILE approach - if you know the path to your Android.mk:
m -j8 ONE_SHOT_MAKEFILE=build/target/board/Android.mk bootimage
This worked for me pretty well in Android L/M/N releases

Does 'make'-ing something from source make it self-contained?

Forgive me before I start, as I'm not a C/C++ programmer, merely a PHP one :) but I've been working on projects that use some others sourced from online open source repos, such as svn and git. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then re-distribute (according to licenses etc).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc", etc.) - once the binaries are on my build machine (in /usr/lib for example), are they self-contained to the point that I can merely copy those /usr/lib binary files that the build created and place them in the same folder on the virtual machines that I distribute, without those machines having all the build components installed on them?
With all the dependencies that I needed to build the source in the first place, would the final built binary contain them all in itself, or would I have to install them on the distributed servers as well?
Would that work? Is the question a little too general and perhaps it would all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with installed software that I don't actually need, since the compiled executables I will place on the distributed VMs will live in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it depends on. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
Theoretically, you can link everything statically, which means that the library code is compiled into the executable. The executable would then really be self-contained, but linking statically is not always possible. This depends on the actual libraries you are using and will probably require playing with ./configure arguments when building them.
Finally, there are some libraries that are always linked dynamically, such as libc. The good thing is that the machine you are distributing to will surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go that way. If not, read about AppImage and Docker.
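For example, to see what a dynamically linked binary would drag along, and one common (but not guaranteed) way to ask an autotools-based build for static linking - the binary name here is made up:
# list the shared libraries the executable needs at run time
ldd /usr/local/bin/mytool

# request static linking; whether this actually works depends on the libraries involved
./configure LDFLAGS="-static"
make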
The distribution of built libraries and headers (binary distribution) is a possible way and should work. (I always do it in my projects.)
It is not necessary for all of the libraries you build to be installed into /usr/lib. To keep your target machine clean, you can install them into a different folder instead, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files are within one subfolder, no files are missing, no mixing with other libs
Disadvantage:
The loader must know the extra search path
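Two common ways to make the loader aware of that extra path (using the example layout above):
# per user / per session:
export LD_LIBRARY_PATH=/usr/local/MYLIB/lib:$LD_LIBRARY_PATH

# or system-wide, via the dynamic loader cache:
echo /usr/local/MYLIB/lib | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig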

Can GNU make create broken binaries when building in parallel?

I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce broken binaries when building in parallel, even though the link succeeds? If that is the case, what is the common cause and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
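A minimal illustration of that kind of mistake (target and file names are made up): built serially this happens to work, because make walks the prerequisites of all from left to right, but with -j both rules can run at once and prog may be linked against a missing or stale libfoo.a (recipe lines start with a tab):
all: libfoo.a prog

libfoo.a: foo.o
	ar rcs libfoo.a foo.o

prog: main.o            # bug: libfoo.a is not declared as a prerequisite
	cc -o prog main.o -L. -lfoo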
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: that is, the compiler or linker will almost always fail if things are built in the wrong order. It's hard for me to imagine how the build could succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you have given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
