Multiarch Build System - gcc

I am trying to understand the multiarch, Makefile-based build system found in open-source projects such as the Linux kernel, glibc and so on. In all of these projects the source code is organized as C/C++ modules (e.g. filesystem, networking, logging), generic headers such as include/, and an arch-specific directory tree (arch/{x86,mips,arm}, etc.). I want to understand how the layout of such projects is formed and what specific language constructs and techniques are used to map the high-level C routines to their arch-specific implementations. I have tried looking around on the internet and didn't find information covering exactly this subject, or introducing how to lay out and design the Makefiles and organize the C and assembly sources so that the build system correctly creates the build.
For a concrete example, let's consider glibc, which uses a macro HP_TIMING_NOW whose implementation differs between architectures (x86_64, powerpc, ...).
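To illustrate the kind of mapping I mean, here is my rough sketch of how I imagine it could work (hypothetical file names, not glibc's actual sysdeps machinery): the architecture-specific directory is simply placed first on the header and source search paths, so its files shadow the generic fallbacks.

    # Hypothetical layout:
    #   include/hp-timing.h               generic fallback implementation
    #   arch/x86_64/include/hp-timing.h   x86_64-specific implementation
    #   src/timing.c                      generic C implementation
    #   arch/x86_64/src/timing.c          optional arch-specific override
    # Recipe lines below must start with a tab.

    ARCH ?= x86_64

    # The arch directory comes first, so its headers/sources win if present.
    CPPFLAGS += -Iarch/$(ARCH)/include -Iinclude
    VPATH    := arch/$(ARCH)/src:src

    OBJS := main.o timing.o

    prog: $(OBJS)
            $(CC) $(CFLAGS) -o $@ $^

    %.o: %.c
            $(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<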
I do understand that the whole build system works through invoking make recursively, but my understanding of this process is superficial, and I am looking for some concrete explanation, in the form of resources or answers, that covers this subject.
Thanks in advance

Related

Does the Linux kernel project use some build automation software to create their makefile?

Does the Linux Kernel Project use any build automation software such as autotools to generate their makefile?
Do they create the makefile manually? Browsing their project's GitHub page, it seems so to me, or am I missing something? But given the complexity of the project, isn't using some build automation software more convenient?
Do they use some tools to manage the complexity of their makefile?
The Makefiles are managed manually, but most of the complexity is confined to a few common Makefiles. See the kbuild makefile documentation for details on the Makefiles used by kbuild.
Configuration of the kernel is rather complex, as many drivers or features depend on the presence of others. The source tree includes Kconfig files and several utilities for creating a valid kernel build configuration either interactively or from text files. See the kbuild documentation for more details.
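As a rough illustration, a per-directory kbuild Makefile is usually only a few lines, because the shared build logic lives in the top-level Makefile and in scripts/ (the driver names here are hypothetical):

    # drivers/foo/Makefile (hypothetical driver)
    # obj-y objects are always built in; obj-$(CONFIG_...) objects are built only
    # when the corresponding Kconfig option is set (y = built-in, m = module).
    obj-y                 += foo_core.o
    obj-$(CONFIG_FOO_PCI) += foo_pci.o

    # A single object can be assembled from several source files:
    foo_pci-y := foo_pci_main.o foo_pci_dma.o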

Methods for Targeting Multiple Embedded Hardware Platforms with GNU Make

How can I ensure that object files compiled for one hardware target will not be used for a different hardware target that needs to be compiled differently?
I am using the GNU ARM Embedded Toolchain while I am learning about embedded development. I already have a couple of development boards (with STM32F0 and STM32F4 processors), and plan to make my own boards in the future. I want to have several iterations of hardware using a common software repository.
Obviously I will have multiple targets in my Makefile, invoking the appropriate defines and compiler flags for each platform, and perhaps a make all to build for all platforms at once. As I understand it, make is an incremental build system that only re-compiles object code (*.o) files if the source file has changed; it won't recompile them if I use different defines and options, and the wrong object code will be passed to the linker.
It seems that I could diligently make clean when switching between different targets, but that would rely on human action and could produce bad builds if I forgot, and it could not be used for a make all that produces multiple binaries for their respective hardware.
Edit Notes: Per feedback comments, I have shortened and rearranged the question to make it clearer and more objective. I'm not asking generically how to use Make, but rather how to prevent, say, mylib.o being compiled for an STM32F0 and then later being re-used in a build for an STM32F4.
I am curious about alternative tools, and welcome discussion in the comments, but this question is specific to GNU Make.
To avoid the need for a clean build between targets, each target needs its own build directory, so that the per-target dependencies are independent and are generated with the appropriate toolchain, build switches and so on.
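A minimal sketch of that idea with per-target object directories (board names, flags, and file names are hypothetical):

    # make TARGET=stm32f0  ->  build/stm32f0/firmware.elf
    # make TARGET=stm32f4  ->  build/stm32f4/firmware.elf
    # Objects compiled with one set of flags can never leak into another target,
    # because each target writes into its own directory.

    TARGET   ?= stm32f0
    BUILDDIR := build/$(TARGET)

    CC             := arm-none-eabi-gcc
    CFLAGS_stm32f0 := -mcpu=cortex-m0 -DSTM32F0
    CFLAGS_stm32f4 := -mcpu=cortex-m4 -DSTM32F4
    CFLAGS         := $(CFLAGS_$(TARGET))

    SRCS := main.c mylib.c
    OBJS := $(addprefix $(BUILDDIR)/,$(SRCS:.c=.o))

    $(BUILDDIR)/firmware.elf: $(OBJS)
            $(CC) $(CFLAGS) -o $@ $^

    $(BUILDDIR)/%.o: %.c | $(BUILDDIR)
            $(CC) $(CFLAGS) -c -o $@ $<

    $(BUILDDIR):
            mkdir -p $@

    # "make all" builds every board by re-invoking make once per target.
    all:
            $(MAKE) TARGET=stm32f0
            $(MAKE) TARGET=stm32f4
    .PHONY: all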

Fortran dependency management and dynamic config-/document-file merges using a "maven equivalent"

I have a large project written mostly in FORTRAN90, consisting of a core and numerous add-on modules also written in FORTRAN90. At any given time I'd like to be able to:
package the core module together with any number of the modules
create a new config-file merging the core-config and module-configs
merge the various latex-files from the core and modules
The code+configs+documentation lives in SVN...
Can/should Maven be used for this use case?
******* UPDATE *******
#haraldkl:
Ok, I'll try to elaborate as it definitely is in my interest to gather as much info as possible on this - I really appreciate the comments I get!
My project contains a core module which is mandatory. In order to add additional functionality you may select an arbitrary number of add-on modules. The core and each module reside in their own directories and are under SVN control. For a given delivery I would like to be able to select the "core" and an arbitrary number of modules and calculate the dependency chain, in order to build the modules in the correct order, as they sometimes, unfortunately, have cross-dependencies. When the build order has been set I need to be able to merge the property-files from the selected modules with the property-file for the "core", so I end up with an assembled/aggregated property-file containing the aggregated properties from the "core" and all the selected modules. The same goes for the latex-files: I'd like to get an assembled document based on the "core" plus the selected modules' latex-files, thus ending up with one latex-file.
So, the bottom line: a solution something like:
tick selected modules to go with the delivery (core is mandatory so no need to tick)
click "Assemble" (code is gathered from SVN)
The solution calculates correct build order
The solution merges property-files -> "package.property"
The solution merges latex-files -> "document.latex"
Currently we use make under UNIX, but I'm a little uncertain as to what extent make is able to handle steps 4 and 5 above.
DONE!
Here is my take on it:
I believe steps 1 to 3 are completely achievable with commonly used configuration tools. Steps 4 and 5 present, as far as I can see, just another build task, and there is no reason why Make should not be able to do that. I regularly generate files for LaTeX processing via Make and then assemble them, mostly with latexmk. The main point is how to select what to merge and how it has to be merged; you are a little unclear on the "how", while the "what" should be handled by the configuration system. Your choice probably also depends on what should be done at the end of step 3: should the code actually be compiled, or do you need to have some written-out version of the dependencies?
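For instance, steps 4 and 5 can be expressed as ordinary Make rules; here is a minimal sketch with hypothetical file names, where cat stands in for whatever merge script you actually need:

    # MODULES would be set by whatever performs the module selection in step 1.
    MODULES ?= moduleA moduleB

    package.property: core/core.property $(foreach m,$(MODULES),$(m)/$(m).property)
            cat $^ > $@

    document.latex: core/core.latex $(foreach m,$(MODULES),$(m)/$(m).latex)
            cat $^ > $@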
The traditional configure system on Unix is the autotools suite. However, as far as I know, it does not support the identification of Fortran dependencies out of the box, and you would need to extend it in that direction.
A popular replacement for the autotools is CMake, which does include Fortran dependency resolution. It might suit your needs best, as pointed out by casey, since it offers various generators; for example, you could have it generate an appropriate Makefile for your selection of files.
Waf gives you a great deal of flexibility to handle steps 4 and 5 in your list, and it is also capable of identifying Fortran dependencies, but I think it is not as straightforward to generate, for example, Makefiles from it as it is with CMake. The flexibility here comes from the fact that your Waf scripts are just ordinary Python scripts, so you can easily use any Python tools in your workflow and describe steps 4 and 5 in any complicated manner you desire.
Maven can compile Fortran code, though I do not have any experience with it; I doubt that it also gives you automatic Fortran dependency resolution. In my understanding, it is not as good a fit for your setup as the other tools.
The Fortran Wiki lists some more tools; for example, you could come up with your own Makefile-based environment and use makedepf90 to generate the dependencies.
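For the makedepf90 route, a rough sketch of wiring it into a Makefile (assuming makedepf90 is installed and on the PATH; its exact options may vary):

    # Regenerate Fortran module dependencies whenever a source file changes.
    SRCS := $(wildcard */*.f90)

    .depend: $(SRCS)
            makedepf90 $(SRCS) > $@

    -include .depend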

What is happening when you set a compilation path?

I understand it somehow makes a connection so that the compiler, when invoked, can connect the source code to whatever libraries it needs.
But what is going on at a more technical level? Or, better put, what do I need to know in order to confidently compile code?
I'm working with C++ and MinGW, and have started to look into build files and such for Sublime Text 2 (I have learned mostly under Unix, or with Java + Eclipse, so far). But what I don't understand is: what does adding a compiler to your path do for you?
Do I need to add it for every folder I want to compile from, or is it system-wide? I'm really learning this stuff for the first time; we were never shown how to set up development environments or even deploy code on other systems.
You probably mean include paths and library paths in the compiler:
include paths: where the compiler will look for headers; and
library paths: where the linker, invoked by the compiler, will look for binary libraries to finish building your project.
If that is the case, look here for a gentle explanation.
Basically, what is happening is that the compiler looks in certain places for symbols defined by the operating system and other libraries installed system-wide.
In addition to those paths, you need to tell the compiler where to find the symbols defined in your own project.
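For example, with MinGW's g++ these paths are passed as -I (headers) and -L plus -l (libraries); a small sketch with hypothetical directories and library names:

    # Hypothetical layout: your own headers in include/, a third-party library
    # installed under C:/libs/foo.
    CXX      := g++
    CXXFLAGS := -Wall -Iinclude -IC:/libs/foo/include
    LDFLAGS  := -LC:/libs/foo/lib
    LDLIBS   := -lfoo

    app.exe: main.o util.o
            $(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)

    %.o: %.cpp
            $(CXX) $(CXXFLAGS) -c -o $@ $<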
You may also mean something related to installing the compiler itself or configuring the editor to use it.
In that case, what is happening is that you need to tell the build system where to find the executable for the compiler.
Basically, what is probably happening is that your editor wants to know where the compiler is so that it can provide real time feedback on your code. Adding the compiler to the system path will usually, but not always, solve your problem.
In more detail:
A C++ build is a rather complex tool chain, involving determining dependencies, preprocessing, compiling, and linking. There are tools that automate that tool chain, and those tools are in turn wrapped into the functionality of modern IDEs like Eclipse, Visual C++, or Sublime Text 2. You may need to tell your editor where to find the tools it uses to provide you with those services.
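To make those stages visible, here is a small sketch that runs them one at a time (normally a single g++ invocation drives all of them; file names are hypothetical):

    app: main.o
            g++ main.o -o app           # link

    main.o: main.ii
            g++ -c main.ii -o main.o    # compile + assemble the preprocessed source

    main.ii: main.cpp
            g++ -E main.cpp -o main.ii  # preprocess only (expand #include and macros)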

What are the major differences between makefile and CMakeLists

I've searched for the major differences between a makefile and CMakeLists, but only found minor differences, such as that CMake automates dependency resolution whereas with Make it is manual.
I'm seeking the major differences. What are some pros and cons of migrating to CMake?
Comparing CMake with Autotools makes more sense! If you do that, you can see the shortcomings of make that were the reason for the creation of Autotools, and the advantages of CMake over make become obvious.
Autoconf solves an important problem—reliable discovery of system-specific build and runtime information—but this is only one piece of the puzzle for the development of portable software. To this end, the GNU project has developed a suite of integrated utilities to finish the job Autoconf started: the GNU build system, whose most important components are Autoconf, Automake, and Libtool.
Make can't do that. Not out of the box anyway. You can make it do it but it would take a lot of time maintaining it across platforms.
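For example, with plain make you end up hand-rolling the platform detection that the GNU Build System or CMake would do for you; a rough sketch:

    # Hand-rolled platform switch in plain GNU Make; every new platform,
    # compiler, or library location means another branch maintained by hand.
    UNAME_S := $(shell uname -s)

    ifeq ($(UNAME_S),Linux)
      CFLAGS      += -fPIC
      SHARED_FLAG := -shared
      SO_EXT      := so
      LDLIBS      += -ldl -lpthread
    endif
    ifeq ($(UNAME_S),Darwin)
      SHARED_FLAG := -dynamiclib
      SO_EXT      := dylib
    endif

    libfoo.$(SO_EXT): foo.o
            $(CC) $(SHARED_FLAG) -o $@ $^ $(LDLIBS)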
CMake solves the same problem (and more), but has a few advantages over the GNU Build System.
The language used to write CMakeLists.txt files is readable and easier to understand.
It doesn't only rely on make to build the project. It supports multiple generators like Visual Studio, Xcode, Eclipse etc.
When comparing CMake with make there are several more advantages of using CMake:
Cross platform discovery of system libraries.
Automatic discovery and configuration of the toolchain.
Easier to compile your files into a shared library in a platform-agnostic way, and in general easier to use than make.
Overall CMake is clearly the choice when compared to make but you should be aware of a few things.
CMake does more than just make so it can be more complex. In the long run it pays to learn how to use it but if you have just a small project on only one platform, then maybe make can do a better job.
The documentation of CMake can seem terse at first. There are tutorials out there, but there are a lot of aspects to cover and they don't do a really good job of covering them all, so you'll mostly find only introductory material. You'll have to figure out the rest from the documentation and real-life examples: there are a lot of open source projects using CMake, so you can study them.

Resources