Yocto: remove autotools from a (userspace) package's build process in favor of a plain makefile

A userspace package built for, and along with, the root file system image of an embedded Linux-based system (using the Yocto Project) apparently uses autotools: Makefile.am files and a configure.ac are present in the package's sources. pkg-config or its successor seems to be used too (a .pc.in is present), but that is out of scope here.
The package in question involves autotools only because, early in its development, the path of least resistance was apparently to copy and adapt the build scripts of a similar, already-existing package.
Autotools actually seem dispensable when building with Yocto, since the Yocto build system's metadata already specifies each target precisely enough. For good reason, the standard build flow in Yocto is download, unpack, patch, configure, build, ...; a scan-and-detect-the-target-environment step is not part of this chain.
Now I wonder whether it would be worthwhile to streamline the package's build process by removing the autotools stage. I intend to proceed in a few steps, starting by replacing the .am file with a real makefile. Is it sufficient to find the environment variables defined and used in the .am and .ac files and transfer them to the makefile? The remaining target-device specification should come from the Yocto build system metadata. This may well work in this straightforward way when the package is built as part of the root file system image build. But how do I ensure that the build environment provides the complete target-device specification when building only this package with bitbake package-name?
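Sketching what I have in mind (all names hypothetical, and assuming the recipe inherits Yocto's usual toolchain environment, which exports CC, CFLAGS and LDFLAGS to the build): the replacement makefile takes everything target-specific from the environment instead of detecting it, with ?= fallbacks only for standalone builds on the host.
# Replacement for Makefile.am/configure.ac: take the toolchain from the
# environment that bitbake sets up; never probe for it. The ?= assignments
# are fallbacks for building outside Yocto; bitbake's values win.
CC      ?= gcc
CFLAGS  ?= -O2 -Wall
PREFIX  ?= /usr

SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:.c=.o)

all: mypackage

mypackage: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

install: mypackage
	install -D -m 0755 mypackage $(DESTDIR)$(PREFIX)/bin/mypackage

clean:
	rm -f $(OBJS) mypackage

.PHONY: all install clean
If it is true that bitbake sets up the same per-recipe environment for bitbake package-name as during an image build (which I believe it does), a recipe whose do_compile calls oe_runmake should see the same toolchain variables either way.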

Replacing autotools with a bare makefile isn't a trivial operation, as https://nibblestew.blogspot.co.uk/2017/12/a-simple-makefile-is-unicorn.html demonstrates nicely.
If you don't want to use autotools in your packages, alternatives such as Meson are generally faster.

Related

Does the Linux kernel project use some build automation software to create their makefile?

Does the Linux Kernel Project use any build automation software such as autotools to generate their makefile?
Do they create the makefile manually? Browsing their GitHub project page, it seems so to me, unless I am missing something. But given the complexity of the project, wouldn't using some build automation software be more convenient?
Do they use some tools to manage the complexity of their makefile?
The Makefiles are managed manually, but most of the complexity is confined to a few common Makefiles. See the kbuild makefile documentation for details on the Makefiles used by kbuild.
Configuration of the kernel is rather complex, as many drivers or features depend on the presence of others. The source tree includes Kconfig files and several utilities for creating a valid kernel build configuration either interactively or from text files. See the kbuild documentation for more details.
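To give a flavor of how that confinement works: a per-directory kbuild Makefile typically contains only object lists keyed to Kconfig symbols, with all actual build rules supplied by the shared kbuild infrastructure. A representative fragment (hypothetical driver names):
# drivers/foo/Makefile -- kbuild style: object lists only, no rules.
obj-y                   += foo_core.o    # always built into the kernel image
obj-$(CONFIG_FOO_DEBUG) += foo_debug.o   # built in if =y, a module if =m
obj-$(CONFIG_FOO_NET)   += foo_net.o     # a module built from several files:
foo_net-y := foo_net_main.o foo_net_rx.o foo_net_tx.o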

Why do we need cmake?

I don't understand why we need CMake to build libraries. I am sorry if my question is stupid, but I need to use some libraries on Windows, and whatever library I choose, I need to build and/or compile it with CMake. What is it for? Why can't I just #include "path" the things I need into my project, so that it is compiled/built at the same time as my project?
Also, sometimes I have needed to install Ruby, Perl, and Python, each at some specific version, so that CMake could build the libraries. Why do I need those programs, and will I need them only to build the library, or later in my project too? (Concretely: can I uninstall those programs after building the libraries?)
Building things in C++ on different platforms is currently a mess.
There are several different build systems out there, and there is no standard way to do this. Just providing a Visual Studio solution won't help compilation on Linux or Mac.
If you add a makefile for Linux or Mac, you need to repeat configuration between the solution and the makefiles, which can result in a lot of maintenance overhead. Also, makefiles are not really a good build tool compared to the newer ones out there.
That you have encountered only CMake-based libraries is mostly a coincidence, though CMake is currently a popular choice.
There are several solutions out there for unifying builds. CMake is a build tool in a special sense: it can create makefiles and build them, but you can also tell CMake to create a Visual Studio solution instead if you like.
The same goes for the external programs: they are the choice of the maintainer of the library you use, and there are no standards for things like code generation.
While CMake may not be "the" solution (although the upcoming Visual Studio 2015 is integrating CMake support), the trend for cross-platform build systems is moving more and more in this direction.
As to your question of why you cannot just include the header:
Few libraries are header-only; most need to be compiled. Either you get precompiled libs/DLLs and just include the header and add the linker path (this is easier on Linux, where -dev packages install a prebuilt library and its headers via the package manager; Windows has nothing like that natively), or you have to build the library yourself with whatever build tool it uses.
The short answer is that you don't, but it would probably be difficult to build the project without it.
CMake does not build code, but is instead a build-file generator. It was developed by Kitware (during the ITK project, around 2000) to make building code across multiple platforms "simpler". It is not an easy language to use (which Kitware openly admits), but it unifies several things that Windows, Mac, and Linux do differently when building code.
On Linux, autoconf is typically used to generate build files, and the code is then compiled with gcc/g++ (and/or clang).
On Windows, you would typically use the Visual Studio IDE and create what it calls a "solution", which is then compiled by MSVC (the Microsoft Visual C++ compiler).
On Mac, I admit I am not familiar with the toolchain used, but I believe it involves Xcode.
CMake lets you write a single script you can use to build on multiple machines and specify different options for each.
Like C++, CMake is divided between traditional/old-style CMake (versions before 3.0) and modern CMake (version 3.0 and later). Use modern CMake. The following are excellent tutorials:
Effective CMake, by Daniel Pfeifer, C++Now 2017*
Modern CMake Patterns, by Mathieu Ropert, CppCon 2017
Better CMake
CMake Tutorial
*Awarded the most useful talk at the C++Now 2017 Conference
Watch these in the order listed. You will learn what modern CMake looks like (and old-style CMake too) and gain an understanding of how
CMake helps you specify build order and dependencies, and
Modern CMake helps prevent creating cyclic dependencies and common bugs while scaling to larger projects.
Additionally, the last video introduces package managers for C++ (useful when using external libraries, like Boost, where you would use the CMake find_package() command), of which the two most common are:
vcpkg, and
Conan
In general,
Think of targets as objects
a. There are two kinds, executables and libraries, which are "constructed" with
add_executable(myexe ...) # Creates an executable target "myexe"
add_library(mylib ...) # Creates a library target "mylib"
Each target has properties, which are variables for the target. However, they are specified with underscores, not dots, and (often) use capital letters
myexe_FOO_PROPERTY # Foo property for myexe target
Functions in CMake can also set some properties on target "objects" (under the hood) when run
target_compile_definitions()/features()/options()
target_sources()
target_include_directories()
target_link_libraries()
CMake is a command language, similar to shell scripting, but there is no nesting or piping of commands. Instead:
a. Each command (function) is on its own line and does one thing
b. The argument(s) to all commands (functions) are strings
c. Each target-focused command takes the name of the target it applies to as an explicit argument, so a target must be created before commands can refer to it
add_executable(myexe ...) # Create exe target
target_compile_definitions(myexe ...) # Applies to "myexe"
target_include_directories(myexe ...) # Applies to "myexe"
# ...etc.
add_library(mylib ...) # Create lib target
target_sources(mylib ...) # Applies to "mylib"
# ...etc.
d. Commands are executed in order, top to bottom (note: if a target needs another target, the target it needs must be created first)
The scope of execution is the currently active CMakeLists.txt file. Additional files can be run (added to the scope) using the add_subdirectory() command
a. This operates much like the shell exec command; the current CMake environment (targets and properties, except PRIVATE properties) is "copied" over into a new scope (a "shell"), where additional work is done.
b. However, the "environment" is not the shell environment (CMake target properties are not passed to the shell as environment variables like $PATH). Instead, the CMake language maintains all targets and properties in the top-level global scope CACHE
PRIVATE properties apply only when building the target itself. INTERFACE properties apply only to consumers, i.e. they are passed on to targets that link against this one. PUBLIC combines both: the property applies to the current target and to everything that links against it.
target_link_libraries is for direct dependencies, but it also resolves all transitive dependencies. This means that when you link to a library, you also get all of its PUBLIC properties, and those of the libraries it publicly links in turn.
a. If you want to link to a library by its direct path, you use target_link_libraries, and
b. if you want to link to a target within your project and take its interface, you also use target_link_libraries
You run CMake on CMakeLists.txt files to generate the build files you want for your system (Ninja, a Visual Studio solution, Linux make, etc.), and then run those to compile and link the code.

How to install and use open source library on Windows?

I'd like to use an open source library on Windows (for example Aquila, following http://aquila-dsp.org/articles/iteration-over-wave-file-data-revisited/), but I don't understand anything about "build systems". Everyone just says things like "unzip the tar, run configure, make, make install" for Linux, but I want to use these libraries on Windows. I have several questions.
i) Why do I have to "install" what is just source code? Why can't I use the header files by copying them to the working directory and writing #include ".\aquila\global.h"?
ii) What are configure and make/make install? I can't understand them. I just know that configuring open source code on Windows needs "CMake", and that it is a configuration tool... but what does it actually do?
iii) Even though I have run cmake, mingw32-make, and mingw32-make install, my compiler says "undefined reference to ...". What does this mean, and what should I do about it?
You don't need to install the sources. You do need to install the libraries that get built from that source code, which your code is going to use.
configure is the standard name for the script that does build configuration for the software about to be built. The usual way to run it (and the way you will see it mentioned) is ./configure.
make is a build management tool (as the tag here on SO will tell you). One of the most common mechanisms for building code on Linux (etc.) is the autotools suite, which uses the aforementioned configure script to generate build configuration information for the generated makefiles that make then uses to build the software. Running make is also how you run the default build target defined in a makefile; this is often the all target, which usually builds the appropriate library/binary/etc.
make install is a specific, secondary invocation of the make tool on the install target, which (generally) installs the previously built code into an appropriate location (in the autotools/configure universe, the default location is generally under /usr/local).
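To make the conventional targets concrete, a minimal hand-written makefile (file names hypothetical, loosely modeled on a static library like Aquila) might look like this:
PREFIX ?= /usr/local

all: libaquila.a                # default target: build the library

libaquila.a: global.o dsp.o
	ar rcs $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

install: all                    # 'make install': copy the build results into place
	install -d $(DESTDIR)$(PREFIX)/lib $(DESTDIR)$(PREFIX)/include/aquila
	install -m 0644 libaquila.a $(DESTDIR)$(PREFIX)/lib/
	install -m 0644 *.h $(DESTDIR)$(PREFIX)/include/aquila/

.PHONY: all install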
cmake is, again as the SO tag says, a build system that generates configuration files for other build tools (make, VS, etc.). This allows the developers to create the build configuration once and build on multiple platforms (at least in theory).
If running cmake worked correctly, then it should have generated the correct information for whatever target system you told it to use (make, VS, or whatever). Assuming that was make, it should have allowed mingw32-make to build the software correctly (assuming additionally that the makefiles cmake generates work the same under mingw32-make as under make). If that is not working correctly, then something is still missing from your system (and cmake probably should have caught that).
But to give any more detail, you will need to say more about what errors you are actually getting and from what command.
(Oh, and on Windows, especially if you plan on building your software with VS (or some other non-mingw32-make tool), the chances of you needing to run mingw32-make install are incredibly small.)
For Windows, use CMake, or the latest Ninja.
The process is not simple or straightforward, but it is achievable: you need to write a CMake configuration.
The building process being neither simple nor straightforward is part of why languages like Java exist (but that is another topic).
Rely on CMake to build the library, and you will get the open source library built for Windows.
Whether you distribute it as a library for Windows systems, or distribute and integrate it with your own software that includes the open source library, in either case you have to build it for Windows.
Writing the CMake configuration helps, and it will also be helpful for building on other platforms.
Now the question comes: is there any way other than CMake for a Windows build?
Would you love the flavor of writing assembly directly?
If the answer is obviously no, you will have to write CMake configuration and generate a .sln for MSVC and other compilers.
Read the FAQ and the documentation before building an open source library, and fix the errors as they crop up.
It is like handling burning iron, but it pays off if you are working on something meaningful. Most server libraries are open source (e.g. the age-old Apache httpd). So think about what you are doing.
There are also not many useful open source libraries that you can use in your project without building them, but this is the way to use open source libraries.

From configure scripts to Makefiles?

I'd like to build my own GNU/Linux system from scratch using cross-compilation (just like the CLFS project). Most of the packages I would use are distributed with a configure script, which you just run with the right arguments. For various reasons, I'd like to skip this step and run make instead. Of course, I need a custom Makefile for this to work. The question is: is it feasible to create custom Makefiles without having to read and comprehend all of the source code? Is it possible to just read the configure.ac files or something like them? Thanks.
Probably not. What happens is that configure tests which of a number of options are best suited to your environment and then substitutes them into Makefile.in to build the Makefile, into config.h.in to build config.h, and so on. You could skip running configure and determine what these values should be yourself for simple cases of configure.ac (or just keep one huge cache if your environment won't change), but packages can define extra inline checks in configure.ac that you would have to parse and implement correctly. It is going to be a lot easier to just run configure, even if you do have to figure out the correct parameter values for your cross-compiled environment without runtime checks.
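To illustrate the substitution mechanism with a contrived fragment (not from any real package): the package author writes Makefile.in with @...@ placeholders, and configure fills them in with the values its tests (or your supplied cache/parameters) produced.
# Makefile.in, as shipped by the package author:
#   CC     = @CC@
#   CFLAGS = @CFLAGS@
#   LIBS   = @LIBS@
#   prefix = @prefix@
# Makefile, as generated by ./configure for a hypothetical cross build:
CC     = arm-linux-gnueabihf-gcc
CFLAGS = -g -O2
LIBS   = -lm
prefix = /usr

prog: prog.o
	$(CC) $(CFLAGS) -o $@ $^ $(LIBS)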
However, hopefully you only need to cross-build a small number of packages (kernel, glibc, gcc, make, bash, etc.); then you can switch into your new environment and build the remaining packages there using configure. If you want inspiration as to which switch values you should be using, you can always look at the parameters in Fedora SRPMs or Debian source debs.

Perfect makefile

I'd like to use make to get a modular build in combination with continuous integration, automatic unit testing and multi-platform builds. Similar setups are common in Java and .NET, but I'm having a hard time putting this together for make and C/C++. How can it be achieved?
My requirements:
fast build; non-recursive make (Stack Overflow question What is your experience with non-recursive make?)
modular system (that is, minimal dependencies, makefile in subdirectory with components)
multiplatform (typically PC for unit testing, embedded target for system integration/release)
complete dependency checking
ability to perform (automatic) unit tests (Agile engineering)
hook into continuous integration system
easy to use
I've started with non-recursive make, and I still find it a great place to start.
Limitations so far:
no integration of unit tests
incompatibility of windows based ARM compilers with Cygwin paths
incompatibility of makefile with Windows \ paths
forward dependencies
My structure looks like:
project_root
    /algorithm
        /src
            /algo1.c
            /algo2.c
        /unit_test
            /algo1_test.c
            /algo2_test.c
        /out
            algo1_test.exe
            algo1_test.xml
            algo2_test.exe
            algo2_test.xml
        headers.h
    /embunit
    /harnass
    makefile
    Rules.top
I'd like to keep things simple; here the unit tests (algo1_test.exe) depend both on the 'algorithm' component (fine) and on the unit test framework (which may or may not be known at the time this component is built). However, moving the build rules into the top-level makefile does not appeal to me, as that would distribute local knowledge of components throughout the system.
As for the Cygwin paths: I'm working on making the build use relative paths. This resolves the /cygdrive/c issue (since compilers can generally handle / paths) without bringing in C: (which make dislikes). Any other ideas?
CMake, together with the related tools CTest and CDash, seems to answer your requirements. It is worth a look.
Bill Hoffman (a lead CMake developer) refers to the Recursive Make Considered Harmful paper in a post on the CMake mailing list:
"... since cmake is creating the makefiles for you, many of the disadvantages of recursive make are avoided, for example you should not have to debug the makefiles or even think about how they work. There are other examples of things in that paper that cmake fixes for you as well."
See also the answers to "Recursive Make - friend or foe?" here on Stack Overflow.

Recursive Make - friend or foe?

OK, here is what I do:
I use one Makefile at the root, with wildcard patterns to collect all files in a directory. Note that I assume foo/*.c will make up foo.so, for example. This keeps Makefile maintenance minimal, since just adding a file to the directory automatically adds it to the build.
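A sketch of that convention, under the stated foo/*.c-becomes-foo.so assumption:
# Adding foo/bar.c to the directory adds it to the build automatically.
foo_SRCS := $(wildcard foo/*.c)
foo_OBJS := $(foo_SRCS:.c=.o)

foo.so: $(foo_OBJS)
	$(CC) -shared -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -fPIC -c -o $@ $<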
Since it is make you are using, I assume (as I do for my projects) a compiler with gcc (cc) compatible command-line syntax. So MSVC is out; but don't get frustrated: I do most of my development (unfortunately) on Windows, use MinGW with MSYS, and it works like a charm. It produces native binaries, but is built within a POSIX-compliant build environment.
Dependency checking is done with the fairly standard -MD switch. I then include all of the *.d files into the Makefile, building the patterns from the automatically collected source files.
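The -MD idiom, continuing the same hypothetical layout as above:
CFLAGS += -MD                    # compiler writes foo.d next to each foo.o
DEPS   := $(foo_OBJS:.o=.d)
-include $(DEPS)                 # '-' ignores missing .d files on a clean build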
Finally, unit tests are implemented with the "standard" check target. The check target is like the all target, except that it also depends on the unit tests and executes them once everything is built. I do it this way so that you can either just build the project, or build and run the unit tests (and the rest of the project) separately. When I am not developing the project, I want to just build it and be done with it.
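The check target described here might look like this (test runner name hypothetical):
check: all unit_tests            # build everything, then run the tests
	./unit_tests                 # a non-zero exit code fails the build

unit_tests: $(TEST_OBJS) $(foo_OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

.PHONY: check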
Here is an example of how I do it: https://github.com/rioki/c9y/blob/master/Makefile
It also has the install, uninstall and dist targets.
As you can see, everything is plain make with no recursive make calls, and all of it is relatively simple. I used automake and autoconf once, and I will never do that again; other build tools are also out of the question. If I need to install foojam or barmake to build something, I normally ditch that project immediately.
