How does the kernel Makefile magically know what to compile?

I'm new to writing Linux device drivers, and I'm wondering how the kernel Makefile magically knows what to compile. To illustrate what I don't understand, consider the following case:
I did a #include <linux/irq.h> in my driver code, and I'm able to find the header file irq.h in the kernel directory KDIR/include/linux. However, this is only the header file, so I figured the irq.c source code must be out there somewhere. Hence, I looked into KDIR/arch/arm searching for irq.c (since I'm using the ARM architecture). My confusion began when I found many files named irq.c inside KDIR/arch/arm. To list just a few:
KDIR/arch/arm/mach-at91/irq.c
KDIR/arch/arm/mach-davinci/irq.c
KDIR/arch/arm/mach-omap1/irq.c
KDIR/arch/arm/mach-orion5x/irq.c
many more...
In my Makefile, I have a line like this:
$(MAKE) -C $(KDIR) M=$(PWD) CROSS_COMPILE=arm-none-linux-gnueabi- ARCH=arm modules
So I understand that the kernel Makefile knows that I'm using the ARM architecture, but under KDIR/arch/arm/ there are many files named irq.c. I'm guessing that mach-davinci/irq.c is the one compiled, since DaVinci is the CPU I'm using. But then, how does the kernel Makefile know this is the one to compile? And if I wanted to look at the irq.c that I'm actually using, which one should I refer to?
I believe there must be a way to know this besides reading the long kernel Makefile. Thanks for any help!

Beyond the ARCH variable, you can also choose the system type (mach) from the configuration menu (there is a sub-menu called "System type" when you run make menuconfig, for instance). This selection causes all files under KDIR/arch/$ARCH/mach-$MACH to be included and compiled, and in your case this is how only one irq.c gets compiled.
That aside, it is interesting to understand how the kernel chooses which files to compile. The mechanism behind this is called Kconfig, and it is what allows you to precisely configure your kernel using make menuconfig and friends, to compile a module out of tree as you are doing, and to select the right files to compile from simple Makefiles. While it is simple to use, its internals are fairly complex, but this article, although old, explains it well:
http://www.linuxjournal.com/article/6568
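
To give a feel for the mechanism, here is a simplified sketch of the two pieces involved; the exact lines differ between kernel versions, so treat the file contents as illustrative rather than authoritative:

# arch/arm/Makefile: the "System type" choice sets exactly one
# CONFIG_ARCH_* symbol, which selects the machine directory to build
machine-$(CONFIG_ARCH_DAVINCI) := davinci
machine-$(CONFIG_ARCH_OMAP1)   := omap1

# arch/arm/mach-davinci/Makefile: inside the selected directory,
# irq.o is built unconditionally for that machine
obj-y := time.o irq.o psc.o

So only the irq.c in the machine directory matching your CONFIG_ARCH_* selection is ever compiled, and that is the copy you should read.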

To make a very long story short: there's a make config target, which you can trace. It generates .config, which is the main guideline for resolving dependencies and controlling what gets compiled, what gets left out, what is built as a module, and what is built into the kernel.
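For illustration, .config is a flat list of symbol assignments; an excerpt could look like this (the symbols are examples):

CONFIG_ARCH_DAVINCI=y
CONFIG_USB_STORAGE=m
# CONFIG_ARCH_OMAP is not set

Here y means built into the kernel, m means built as a loadable module, and the "is not set" comment excludes the option entirely. The per-directory Makefiles consume these symbols through lines such as obj-$(CONFIG_USB_STORAGE) += usb-storage.o, which is how a symbol's value decides whether, and how, a file is compiled.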
This guide should give you a basic understanding of building a kernel module (and I assume that's where you want to start with your driver).

Related

Methods for Targeting Multiple Embedded Hardware Platforms with GNU Make

How can I ensure that object files compiled for one hardware target will not be used for a different hardware target that needs to be compiled differently?
I am using the GNU ARM Embedded Toolchain while I am learning about embedded development. I already have a couple of development boards (with STM32F0 and STM32F4 processors), and plan to make my own boards in the future. I want to have several iterations of hardware using a common software repository.
Obviously I will have multiple targets in my Makefile, invoking the appropriate defines and compiler flags for each platform, and perhaps a make all to build for all platforms at once. As I understand it, make is an incremental build system that only re-compiles an object (*.o) file if its source file has changed; it won't recompile just because I use different defines and options, and so the wrong object code will be passed to the linker.
It seems that I could diligently run make clean when switching between targets, but that relies on human action and could produce bad builds if I forgot, and it could not be used for a make all that produces multiple binaries for their respective hardware.
Edit Notes: Per feedback comments, I have shortened and rearranged the question to make it clearer and more objective. I'm not asking generically how to use Make, but rather how to prevent, say, mylib.o being compiled for an STM32F0 and then later being re-used in a build for an STM32F4.
I am curious about alternative tools, and welcome discussion in the comments, but this question is specific to GNU Make.
To avoid the need for a clean build between targets, each target must have its own build directory, so that each target's dependencies are independent and its objects are generated with the appropriate toolchain and build switches.
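A minimal sketch of that pattern follows; the source list, flags, and toolchain prefix are illustrative, not taken from the question:

SRCS := main.c mylib.c
CC   := arm-none-eabi-gcc

f0_CFLAGS := -mthumb -mcpu=cortex-m0 -DSTM32F0
f4_CFLAGS := -mthumb -mcpu=cortex-m4 -DSTM32F4

.PHONY: all f0 f4
all: f0 f4

# Objects land in separate directories, so an STM32F0 object can
# never end up in an STM32F4 image.
build/f0/%.o: %.c | build/f0
	$(CC) $(f0_CFLAGS) -c $< -o $@

build/f4/%.o: %.c | build/f4
	$(CC) $(f4_CFLAGS) -c $< -o $@

build/f0 build/f4:
	mkdir -p $@

f0: $(addprefix build/f0/,$(SRCS:.c=.o))
	$(CC) $(f0_CFLAGS) $^ -o build/f0/app.elf

f4: $(addprefix build/f4/,$(SRCS:.c=.o))
	$(CC) $(f4_CFLAGS) $^ -o build/f4/app.elf

Because the object paths differ per platform, make's timestamp checks are independent for each target, and make all can build every platform in one invocation with no risk of mixing flags. A real project would also add a linker script and the remaining Cortex-M options, omitted here for brevity.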

setting up autotools to link a single system library statically

I have a project where I want to link one of the system libraries statically. The project uses the GNU build system.
In configure.ac I have:
AC_CHECK_LIB(foobar, foobar_init)
On the development machine this library is installed in /usr/lib/x86_64-linux-gnu. It is detected, but it is linked dynamically, which causes issues, as it is not present on some machines. Linking it statically (-Wl,-Bstatic etc.) works fine, but I don't know how to set this up in autotools. I tried forcing this into the Makefile.am link flags for the project, but it still gives preference to the dynamic library.
I also tried using --enable-static with ./configure, but it seems to have no effect on system libraries.
If you want to link the whole program statically, then you should pass the --disable-shared option to configure. You might or might not need also to pass --enable-static, depending on the default value for that option (which you can influence via your configure.ac file). You really should consider doing this.
You should also consider making this the installer's problem, not the build system's. Let it be the installer's responsibility to ensure that all the shared libraries needed by the program are provided by the systems where it is installed. This is very common; in fact, it is one of the inspirations for package-management systems such as yum / dnf and apt, and their underlying packaging formats.
If you insist on linking only one library statically, while linking everything else dynamically, then you'll need to jump through a few more hoops. The objective will be to emit link options that cause just that library to be linked statically, without changing other libraries' linking. With the GNU toolchain, and supposing that the program is otherwise being linked dynamically, that would be this combination of options:
-Wl,-Bstatic -lfoobar -Wl,-Bdynamic
Now consider the documentation of the AC_CHECK_LIB() macro:
Macro: AC_CHECK_LIB (library, function, [action-if-found], [action-if-not-found], [other-libraries])

[...] action-if-found is a list of shell commands to run if the link with the library succeeds; action-if-not-found is a list of shell commands to run if the link fails. If action-if-found is not specified, the default action prepends -llibrary to LIBS and defines 'HAVE_LIBlibrary' (in all capitals). [...]
Note in particular the default behavior when the optional arguments are not provided (your present case): that's not quite what you want, at least not by itself. I suggest providing at least an alternative behavior for the action-if-found case, and you could also consider making configure fail in the action-if-not-found case. The latter is left as an exercise; implementing just the former might look like this:
AC_CHECK_LIB([foobar], [foobar_init], [
  LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
  AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])
])
You should also pay attention to the order of your AC_CHECK_LIB() invocations. As its docs go on to say:
This macro is intended to support building LIBS in a right-to-left (least-dependent to most-dependent) fashion such that library dependencies are satisfied as a natural side effect of consecutive tests. Linkers are sensitive to library ordering so the order in which LIBS is generated is important to reliable detection of libraries.
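
As a hypothetical illustration (the library names here are invented): if libfoobar itself depends on libbaz, check for libbaz first, so that the prepending behavior leaves LIBS in the right order:

dnl libbaz is checked first, so it ends up rightmost in LIBS
AC_CHECK_LIB([baz], [baz_init])
dnl libfoobar is checked second and prepended, giving LIBS="-lfoobar -lbaz ..."
AC_CHECK_LIB([foobar], [foobar_init])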
If you find that you still aren't getting what you want, then have a look at the link commands that make actually executes. You need to understand what's wrong about them before you can determine how to fix the problem.
With all that said, I observe that the above treatment is basically a hack, and it makes your build system much less resilient. It introduces dependencies on GNU toolchain options (which some other toolchains may nevertheless accept), and it assumes dynamic linking is being performed overall. It may be possible to resolve those issues with additional Autoconf code, but I urge you to instead go with one of the first two alternatives I described.

sdcc Makefile for 8051 microcontrollers

Hey, is there anybody who uses SDCC to build projects for the 8051 microcontroller series on a MacBook? If so, could you please post a working makefile, especially the part that loads the program into the device? I am confused about what to write for the programming target in the Makefile.
It is not necessarily the case that a makefile includes loading the code onto the part, so just any old example will not help. Makefiles are very simple in essence: you have a target and its dependencies; if the target does not exist, or any dependency is newer than the target, the target is rebuilt by executing the associated commands.
In your case you need a phony target (one that never exists) that depends on the binary image or hex file (or whatever format your load file is); the command to execute would launch whatever flash-programming or bootloader tool your toolchain uses to load the image:
.PHONY: loadimage
loadimage : myprogram.hex
	loader.exe myprogram.hex   # loader.exe stands in for your flash/bootloader tool
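
Putting it together, a minimal SDCC makefile for an 8051 part might look like the sketch below. The compile and packihx steps are standard SDCC usage; "myloader" is a placeholder you must replace with your actual programming tool:

# Compile for the MCS-51 family; SDCC emits an Intel-hex-like .ihx file
main.ihx: main.c
	sdcc -mmcs51 main.c

# Convert .ihx to a plain .hex file with SDCC's packihx utility
myprogram.hex: main.ihx
	packihx main.ihx > myprogram.hex

# Phony target: flash the image onto the device
.PHONY: loadimage
loadimage: myprogram.hex
	myloader myprogram.hex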

Changing the GCC code: how to test newly added features?

I am learning compilers and want to make changes of my own to the GCC parser and lexer. Is there any testing tool, or some other way, that lets me change the GCC code and test it accordingly?
I tried changing the lexical analysis file, but now I am stuck because I don't know how to compile these files. I tried compiling them with another GCC compiler, but that produced errors. I even tried configure and make, but doing this with every change does not seem efficient.
The purpose of these changes is just learning and I have to consider GCC only as this is the only compiler my instructor allowed.
I even tried configure and make but doing that with every change is not at all efficient.
That is exactly what you should be doing. (Well, you don't need to re-configure after every change, just run make again.) However, by default GCC configures itself in bootstrap mode, which means not only does your host compiler compile GCC, that compiled GCC then compiles GCC again (and again). That is overkill for your purposes, and you can prevent that from happening by adding --disable-bootstrap to the configuration options.
Another option that can help significantly reduce build times is enabling only the languages you're interested in. Since you're experimenting, you'll likely be very happy if you create something that works for C or for C++, even if for some obscure reason Java happens to break. Testing other languages becomes relevant when you make your changes available for a larger audience, but that isn't the case just yet. The configuration option that covers this is --enable-languages=c,c++.
Most of the configuration options are documented on the Installing GCC: Configuration page. Thoroughly testing your changes is documented on the Contributing to GCC page, but that's likely something for later: you should first be able to make your own simpler tests pass, simply by trying code that makes use of your new feature.
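
Concretely, a build using those two options might look like this (the directory names are arbitrary; GCC upstream recommends building in a separate directory from the sources):

# configure once, outside the source tree
mkdir build && cd build
../gcc/configure --disable-bootstrap --enable-languages=c,c++

# after that, each edit-and-test cycle only needs an incremental make
make -j4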
You make changes (which are made "permanent" by saving the files you modify), compile the code, and run the test suite.
You typically write additional tests or remove those that are invalidated by your changes and that's it.
If your changes don't contribute anything "positive" to the compiler, upstream will probably never accept them, and the only "permanence" you can get is the modifications in your local copy.

how to manually include .config when compiling an external kernel module?

I'm afraid my question is a bit complex. Appreciate anyone who can help.
Some background:
I have a 3rd-party SW package that compiles both kernel modules and user-space applications.
Unfortunately, this 3rd-party package is very complex and doesn't use Kbuild for building its kernel modules (I tried to make it do so, without success).
When compiling the kernel modules, I add -I{path to kernel headers}, but I can see the .config file is not being parsed during compilation, which, of course, causes many errors.
I tried to manually add all the flags from .config to gcc on the command line (using a script to generate the command line), but that produced a very, very long line that gcc couldn't handle.
So my question would be: Is there a way to force all these flags to gcc somehow?
Appreciate your ideas :)
Clarification:
The 3rd-party SW compiles on older kernels (2.4, 2.6); I'm trying to compile it for 3.2.
Maybe if someone can explain how the original kernel Makefile manages the .config file, I can mimic that behavior.
After digging in the kernel sources, I found the answer. Here it is in case someone needs it.
There's an automatically generated header file called autoconf.h which contains all the relevant definitions as C preprocessor macros; you just need to include it manually when compiling the module. (In recent kernels it lives at include/generated/autoconf.h; older 2.6 kernels kept it at include/linux/autoconf.h.)
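As a sketch, the compile command could then look like this, assuming a 3.x kernel tree at $KDIR; the module name is illustrative, and a real Kbuild-driven command line carries many more flags (architecture include paths, -Os, -fno-strict-aliasing, and so on):

# pull in the generated config before anything else is parsed
gcc -D__KERNEL__ -DMODULE \
    -I$KDIR/include \
    -include $KDIR/include/generated/autoconf.h \
    -c mymodule.c -o mymodule.o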
Also in theory, I could use my script to create such a file and include it from the sources.
Hope this helps someone. Now on to the next problem :)
