How can I build bootimage without going through all Makefiles

I am currently working on the Linux kernel for an Android phone. My workflow is:
Make changes to kernel code
Build with make bootimage
Flash with fastboot flash boot
This works fine. However, building takes unnecessarily long because make bootimage first goes through the entire tree and includes all Android.mk files. This takes longer than actually compiling the kernel and creating the boot image. Including these files should not be necessary since there are no changes to them. To decrease the turnaround time of my workflow, I would like to speed up the build step.
When building other projects, there are ways to skip building dependencies and thus avoid reading all Android.mk files (for instance mm).
There is a make target bootimage-nodeps which seems to do the right thing: it makes a new boot image without going through all Android.mk files. Unfortunately, the skipped dependencies include the kernel itself, which therefore does not get rebuilt even though it has changed.
My question is: Is there a way to build the kernel and create a boot image without having to read all Android.mk files?

In case you're still looking into it, try using the showcommands goal with make, for example:
make bootimage showcommands
The showcommands goal will print all commands needed to build the kernel and the boot image. Some of the commands, including the one that creates the boot image, have $(hide) in front and are normally not shown; showcommands makes them visible.
Once you know the commands and the arguments, next time you need to make the bootimage you can run the commands manually (without using make bootimage and without including all the makefiles).
I have exactly the same problem and this is the only working solution I've found.
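For illustration, the manual replay typically boils down to two commands like the ones below. Every path, architecture, and mkbootimg argument here is a placeholder; copy the exact values that showcommands prints for your device:
make -C kernel O=../out/target/product/<device>/obj/KERNEL_OBJ ARCH=arm zImage
out/host/linux-x86/bin/mkbootimg --kernel out/target/product/<device>/obj/KERNEL_OBJ/arch/arm/boot/zImage \
    --ramdisk out/target/product/<device>/ramdisk.img \
    --cmdline "console=ttyHSL0,115200,n8" --base 0x80000000 \
    --output out/target/product/<device>/boot.img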

I am not sure you can save time this way (this solution needs to zip/unzip multiple times, which on my machine takes longer than searching for all the Android.mk files), but since your question is:
Is there a way to build the kernel and create a boot image without having to read all Android.mk files?
you could give this a try:
Preparations:
call make dist once
unzip the target_files.zip in out/dist/
now create a script that does the following for you:
overwrite the kernel in your unpacked target_files with your newly built kernel
zip your target_files with your new kernel
use the Python script img_from_target_files from build/tools/releasetools/ with the extra parameter -z. Example: img_from_target_files -z out/dist/target_files.zip new_imgs.zip
inside the newly created new_imgs.zip you will find your new boot.img with your new kernel
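A minimal sketch of such a script, following the steps above (the kernel image path and the BOOT/kernel location inside target_files are assumptions; verify them against your unpacked zip):
#!/bin/sh
# overwrite the kernel inside the unpacked target_files
cp out/target/product/<device>/obj/KERNEL_OBJ/arch/arm/boot/zImage out/dist/target_files/BOOT/kernel
# update only that entry inside the zip
(cd out/dist/target_files && zip ../target_files.zip BOOT/kernel)
# rebuild the images from the updated target_files
build/tools/releasetools/img_from_target_files -z out/dist/target_files.zip out/dist/new_imgs.zip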

You can try the ONE_SHOT_MAKEFILE option, if you know the path to your Android.mk:
m -j8 ONE_SHOT_MAKEFILE=build/target/board/Android.mk bootimage
This worked pretty well for me in the Android L/M/N releases.

Related

Integrating changes in the kernel using patches in Yocto

If you want to modify the Linux kernel so that it excludes certain modules, you usually go to /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, which has a bunch of Kconfig symbols, and comment out the ones you want to exclude, as shown below.
CONFIG_PPP=y
#CONFIG_PPPOL2TP=y
CONFIG_PPP_ASYNC=y
Then I build the Linux image by running bitbake virtual/kernel, which should ideally integrate my changes, but when I boot the image, I still see some logs of the commented-out module showing up.
I checked the Yocto documentation, and it looks like they create a patch of the file they want to modify and then reference that patch in the .bbappend file like:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
So in my case, if I were to modify /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, I would:
create a copy of this original file
paste it into poky/meta-bsp/recipes-kernel/linux-msm/files
rename it
include this file in the .bbappend (as shown above)
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
You are right, source modifications in Yocto are done with the help of patches. Answering your question
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
it is done automagically :)
That is, the build system (bitbake) automatically detects files with the .patch extension among the SRC_URI content, copies them from the meta-layer directory to the build directory somewhere under build/tmp/work/..., and applies those patches to the source code (the source directory is defined by the S variable in the recipe).
But there are some recommendations about the other stuff in your question. A few concepts first - skip ahead to the right way or even the fast way below if you get bored :)
One of the main ideas behind Yocto is reproducible and extensible builds. That means that all metadata is split into different repositories (that is, layers) maintained by different teams. Basically, teams do not have access to each other's repos, and none of them controls the external project's source code (the kernel, in your case). Of course, one could create git forks of all this, but that is far more complicated than writing metadata. That's why Yocto was created - to take external metadata (e.g., the poky layer) and external source code (the kernel) and make it all work together just by writing your own metadata (your own layer), with no git forks at all.
So, adding your patch and .bbappend to the poky layer makes no sense, because you wouldn't be able to commit to the poky layer and distribute it to your customers - the poky layer does not belong to you :) You could create a git fork, of course, but that complicates the process of distribution.
That's why the right way is:
create your own BSP-layer
create .bbappend in your layer
add the patch to the kernel in your layer (actually the right way is to use .scc files, but a plain patch will also work); a sketch follows after this list
This shall make your build
reproducible - rerunning the kernel compilation, or even a fresh clone of Yocto plus your BSP-layer, will produce the same kernel
distributable (extensible) - customers need only your BSP-layer; the rest of the Yocto stuff can be found on the internet
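A minimal sketch of such a layer, continuing the defconfig example from the question (the layer name, recipe name, and patch name here are made up; the .bbappend file name must match the kernel recipe your BSP actually uses):
meta-mybsp/recipes-kernel/linux-msm/linux-msm_%.bbappend
meta-mybsp/recipes-kernel/linux-msm/files/0001-defconfig-disable-PPPOL2TP.patch
# contents of linux-msm_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://0001-defconfig-disable-PPPOL2TP.patch"
The patch itself is an ordinary git-format patch against the defconfig, created with git diff or git format-patch in the kernel source tree.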
And finally, the fast way. If you don't need all this reproducibility and extensibility, but just want to build something for a home project and then send Yocto to the trash, hack the build like this:
clean all the kernel stuff you have: bitbake -c cleansstate virtual/kernel
get the kernel sources (clone and patch them): bitbake -c patch virtual/kernel
manually make every change you need in the kernel sources, found somewhere under a path like <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
build the kernel with bitbake -c package virtual/kernel (cloning and patching will be skipped, as bitbake remembers these tasks are already done; the kernel build artifacts won't be removed either, because the rm_work task runs after package, and the kernel's artifacts are actually never removed)
But in this case, if this kernel stuff gets removed or breaks somehow, you'll have to walk through this algorithm again (remember - not reproducible).
To easily rebuild the kernel manually (not through bitbake), you can run the script <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile
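Put together, the whole hack looks something like this (the work path is an assumption; substitute your machine name and kernel version):
bitbake -c cleansstate virtual/kernel
bitbake -c patch virtual/kernel
# now edit the sources by hand under a path like:
#   build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
bitbake -c package virtual/kernel
# later, to recompile quickly without bitbake:
build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile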

Fastest way to rebuild the Linux kernel using rpmbuild

I just built the Linux kernel for CentOS using the instructions that can be found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now, I made my changes and I would like to rebuild the kernel and test it with my changes. How do I do that but:
1. Without having to recompile everything. The build process should reuse whatever object files the first build generated that don't need to be rebuilt.
2. Without having to build the other packages that are built with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.
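For reference, the clean-slate cycle from instructions like the CentOS HowTo linked above is essentially this (the spec file name is an assumption; adjust to your setup):
cd ~/rpmbuild/SPECS
rpmbuild -bb --target=`uname -m` kernel.spec
Every run repeats %prep, %build, and %install from scratch, which is exactly why object files from the previous run cannot be reused.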

Recommended tool to automate complicated build procedure

I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I am able to compile Java applications to bytecode(ish) and flash that onto, for instance, an ATmega1284P.
Now I've added support for C applications: I compile and process them using several tools, and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy, and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
Roughly, building a C application consists of the following consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) convert this .class file to a Jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) manually edit this Jasmin file in a text editor, replacing/adjusting some strings
(4) convert the modified Jasmin file to a .class file again using jasmin
(5) put this .class file in a folder where the rest of my makefiles (the ones that already make and deploy the OS and class files from Java apps) can take over.
The current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering what tools or what approach would be recommended for complicated builds like this.
Hopefully this helps a bit, but the question as such could probably be the subject of a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps, starting with your .c file up to the point where you can integrate it with the rest of your system.
There is generally nothing wrong with make, and you would not win much by switching to SCons. You would get more ways to express what you want to do; among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though should that be a concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, and with it the need to write the necessary automation for those .c-to-integration steps.
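For illustration, here is a rough sketch of how the five manual steps could become ordinary make rules. Every tool invocation below is a placeholder - substitute the exact lljvm, ClassFileAnalyzer, and jasmin command lines you use today; the sed step stands in for the manual string edits. (Recipe lines must be indented with tabs.)
APP := myapp

# (1) C -> raw .class via lljvm inside Docker (image/script names are made up)
$(APP).raw.class: $(APP).c
	docker run --rm -v $(CURDIR):/work lljvm-image /work/c2class.sh $<

# (2) raw .class -> Jasmin assembly via ClassFileAnalyzer (invocation is a guess)
$(APP).j: $(APP).raw.class
	java -jar ClassFileAnalyzer.jar $< > $@

# (3) the former manual edits, captured as sed substitutions
$(APP).fixed.j: $(APP).j
	sed -e 's/OLD_STRING/NEW_STRING/' $< > $@

# (4) Jasmin assembly -> final .class
$(APP).class: $(APP).fixed.j
	java -jar jasmin.jar -d . $<

# (5) hand the result over to the existing OS makefiles
deploy: $(APP).class
	cp $< ../os/apps/
Once the recipe for step (3) reliably reproduces your manual edits, the whole pipeline runs with a single make deploy.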
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort of writing the rules to fit your needs could be costly and, depending on the size of your project, might just be too much. On the other hand, once that is done, it would be very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to fetch the tools and bits you need for your build... all within a single system for build description.

I stopped the make command midway; will the previous build be affected?

I built a very big project with a number of sub-projects using the make command. It took 3 hours. Then, by mistake (without cleaning the previous build), I re-executed the make command for a few minutes and then stopped it.
Have I ruined my previous build? How does make actually work behind the scenes? Is building the object files done in an atomic and safe manner?
Note: I cannot really run any of my binary files to see if they are broken, since that's another lengthy process. I just want to know whether I am fine or whether I have to re-run make and let it finish.
If you want to publish this binary as a production version of your commercial product, then I would not rely on it: always be 100% sure that you are using a successfully built version of a fully saved and committed code base.
On the other hand, if you need this for debugging purposes, then you can use it! Why? Because make overwrites the output binary only once it finishes compiling all the object files, and only if it detects changes that require a relink of the binary:
After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.
From GNU make: How Make Works
So if you haven't changed your code base, the linker will not relink the binary, leaving it as it was created by the successful build.
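If you want to reassure yourself, GNU make can report what it still considers out of date without building anything:
make -n     # dry run: print the recipes make would execute, without running them
make -q; echo $?     # question mode: exit status is 1 if some target is out of date
If the dry run lists only the work you expected from the interrupted run, the artifacts of the earlier successful build are still in place.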

How to Debug the Following Fortran Program

I am trying to compile the following software so that I can step through and debug it. I am only a novice programmer, and I am trying to understand how this whole makefile business works with Fortran. I know there is a ton of literature on makefiles, but I just need to insert a simple debug flag, and I think having the answer to this question would be the best way for me to learn.
The program I am trying to compile, TINKER, is actually made up of several packages and is located at http://dasher.wustl.edu/tinkerwiki/index.php/Main_Page. I would like to compile and debug JUST ONE specific executable, "analyze". I contacted the developer and received the following reply, but I am still stuck...
Since TINKER has lots of small source code files, what we do is
compile each of the small files to an object file using the "-c" flag.
Then we put all of these object code files (ie, the ".o" files) into
an object library. Finally, we link each of the TINKER top level
programs, such as "analyze", against the object library. There is a
Makefile supplied with TINKER that does this. We also supply
individual scripts called "compile.make", "library.make" and
"link.make" for various CPU/compiler combinations that can be run in
order to perform the steps I describe above. To build a "debuggable"
executable, you just need to include the appropriate debug flags
(usually "-g") as part of the compile and link stages.
I am currently running OS X 10.6.8. If someone could show me which folders to cd into and which commands to enter, that would be so great!
Thanks!
My follow-up question (once I can figure out how to do the above via the command line) will concern how to carry out the same procedure using the Photran IDE - http://wiki.eclipse.org/PTP/photran/documentation/photran5#Starting_a_Project_with_a_Hand-Written_Makefile
The directions are at http://dasher.wustl.edu/tinkerwiki/index.php/Main_Page#Installing_TINKER_on_your_Computer
Maybe out of date? g77 is obsolete -- it would be better to use gfortran.
The key steps: "The first step in building TINKER using the script files is to run the appropriate compile.make script for your operating system and compiler version. Next you must use a library.make script to create an archive of object code modules. Finally, run a link.make script to produce the complete set of TINKER executables. The executables can be renamed and moved to wherever you like by editing and running the ‘‘rename’’ script."
So cd to the directory for the Mac -- per "we also provide machine-specific directories with three separate shell scripts to compile the source, build an object library, and link binary executables." Then run the command scripts, probably starting with ./compile.make. Look around for the directories; you can probably figure it out from the names, or search for the file "compile.make". A sketch follows below.
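As a sketch, the session could look like this (the directory name is a guess; locate the real one with find . -name compile.make, and add -g, ideally together with -O0, to the compiler lines in compile.make and link.make first so the build is debuggable):
cd tinker/macosx/gfortran   # hypothetical machine-specific directory
./compile.make   # compiles each source file with -c (add -g here)
./library.make   # archives the .o files into the object library
./link.make      # links analyze and the other executables (add -g here too)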
Or find someone local to you who knows more about programming.
