A large variety of open-source projects are distributed as source code and are meant to be compiled with the ./configure && make approach. But if I want to cross-compile, at which of those two steps am I supposed to say what target platform I want the binary for?
Is this a property of configure/make in general, or is it specific to each project? What would be an example of compiling some project (a library or a console application) while specifying the target?
I know many projects have a page on their website dedicated to "cross compiling this program", so it seems to be a project-specific setting. But those projects still use configure/make, so how does all of that relate?
If your system is using standard GNU autoconf, then you always select cross-compilation at configure time, not at make time. If the configure script does not know that you're cross-compiling, it may obtain incorrect answers when it probes the system to see what is and is not supported.
Cross-compilation is what the --build, --host, and --target flags to configure are for. You should never need to set --build: it always refers to the system you're running configure on, and configure can figure that out for itself. For a normal cross-compilation you set --host to the triplet of the system the compiled programs will run on. The --target flag matters only when the package being built is itself part of a toolchain (a compiler, assembler, or linker); it names the system that toolchain should generate code for. You may also need to set CC (for C programs) and/or CXX (for C++ programs), LD, AR, STRIP, and a few others, if configure doesn't find the cross tools on its own. Personally, I prefer to build in a separate directory as well, although unfortunately some packages don't support it:
tar xzf foo-1.1.tar.gz
mkdir obj
cd obj
../foo-1.1/configure --host=... CC=...-gcc CXX=...-g++ ...
make
Note that this is all provided by basic autoconf/automake, so all autoconf-based projects handle it the same way (although in my experience, projects that aren't cross-compiled with some regularity often get something wrong, so it doesn't always work smoothly).
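As a concrete, hedged example, here is what cross-compiling the same hypothetical foo-1.1 package for ARM Linux might look like, assuming an arm-linux-gnueabihf cross-toolchain is installed (substitute your own target triplet):

tar xzf foo-1.1.tar.gz
mkdir obj && cd obj
../foo-1.1/configure --host=arm-linux-gnueabihf \
    CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++
make
# sanity check: the produced binary should be ARM, not x86
file src/foo    # the path within the build tree is illustrative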
Related
I'm currently working on a rather generic communication stack. It gets bytes in on one end, parses the packet and calls a callback.
I want to have this stack in a static library (i.e. libcommstack.a).
The library is aimed towards embedded ARM Cortex-M devices. At the moment we have specified that at least a Cortex-M3 should be used (but it should also work for an M4 or M33).
Right now I'm integrating it into another application to verify that linking it is possible. In the future the idea is that we will ship this .a file to customers so they can build their application around it, without having direct access to our sources (to encapsulate our IP).
We are using GCC ARM v7.2.1 to compile both the library and the application that is linked to it.
The application I'm trying to integrate it with is compiled for a Cortex-M33 with -mfloat-abi=hard -mfpu=fpv5-sp-d16.
The code for the library does not use any floating point and is compiled with -march=armv7-m (both builds use the -mthumb flag).
Linking seemed to go well, until I actually called a function from the lib. At that point the linker started to complain:
application.elf uses VFP register arguments, libcommstack.a(somefile.c.obj) does not
failed to merge target specific data of file libcommstack.a(somefile.c.obj)
Since I'm not using floating point in the library, and I don't know up front whether the target application has an FPU (or even uses floats), I'm not sure how to approach this.
I figured there would be two approaches:
Compile a single version of the lib, using an instruction set that all of the microcontrollers understand. I was hoping that this would be the case with ARMv7 (although I'm not yet 100% confident that the M23/M33 also support this).
Compile a lot of different libs for the different flavors based on the different architectures, FPU, etc.
As you can imagine, I would prefer to keep it simple and go for option 1, but I'm not sure how to "convince" the linker to link the two (or perhaps how to convince the compiler NOT to care about floating point for the lib).
Does anyone know if option 1 is feasible and how it can be achieved?
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
Does anyone know if option 1 is feasible
Well, feasible, probably.
how it can be achieved?
Get all the processors you want to support and determine the instruction sets available on all of those processors. Then compile for that common subset of instructions.
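For the parts named in the question (M3/M4/M33), that common subset would look something like the sketch below; the flags are illustrative and should be verified against your exact devices. Note that this alone does not fix the float-ABI mismatch from the error message: a -mfloat-abi=soft build still cannot be linked into a -mfloat-abi=hard application.

# build the library for the largest instruction set all three cores share
arm-none-eabi-gcc -mthumb -march=armv7-m -mfloat-abi=soft -c somefile.c -o somefile.o
arm-none-eabi-ar rcs libcommstack.a somefile.o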
But, please don't, that is a workaround.
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
GCC has "multilib" profiles. See the output of arm-none-eabi-gcc --print-multi-lib. If you have newlib installed, you can look inside /usr/arm-none-eabi/lib/thumb/ and see the directories there: newlib is compiled once per profile, a separate library is installed for each one, and a different library is picked up depending on the compilation flags. Compile your library for each of those profiles, and package it by placing each build in the matching directory under /usr/arm-none-eabi/lib/; the compiler will then pick up the right one by itself (see the library search paths in gcc -v output). For a real-world example of this, look at how newlib's own build and installation are laid out. With CMake as the build system, for example, you could compile and install every flavor as follows:
arm-none-eabi-gcc --print-multi-lib |
while IFS=';' read -r dir opts; do
    flags=$(printf '%s' "$opts" | sed 's/@/ -/g')  # "@mthumb@march=..." -> " -mthumb -march=..."
    rm -rf builddir                                # reconfigure from scratch for each profile
    cmake -B builddir -DCMAKE_C_FLAGS="$flags" -DCMAKE_INSTALL_LIBDIR="lib/$dir"
    cmake --build builddir
    cmake --install builddir --prefix /usr/arm-none-eabi
done
I happen to think (though maybe it's a myth) that CMake is better than Autotools at making it easy to support Microsoft platforms.
At the same time, I'm fairly sure that Autotools is even more straightforward than CMake when it comes to the important UNIX derivatives, such as macOS and the most popular Linux distros.
What if I can't choose?
Can a project support both Autotools and CMake at the same time?
Bonus: can a project support Autotools, CMake, and even plain bare Make, all at the same time?
By "at the same time" I mean that ideally one should not necessarily run a clean script when changing from trying one of the build systems to another. But I guess it would be a reasonable configuration, if necessary.
Finally, do you know an example project that uses both Autotools and CMake? One that uses Autotools, CMake, and plain bare Make?
Yes, you can very easily support both CMake and Autotools at the same time, since they don't overlap (that is, the files each one uses are different, so you can have both sets of files in your project simultaneously). One example of this is the GNU uCommon C++ framework.
No, you can't (easily) support bare make alongside either of the above systems. Neither Autotools nor CMake is actually a build tool itself. They're "build tool generators": you don't run autotools or cmake and end up with your built project; instead, you run autotools or cmake to generate control files for a build tool, then you run that build tool, and the result is your built project.
Autotools generates makefiles, while cmake can generate many different types of control files, of which makefiles are among the most common.
So you can't have your OWN makefile in your project, because it will conflict with the makefile generated by autotools or cmake.
Of course, you can do things like put your own makefiles in a subdirectory and then invoke make with an argument like make -f rawmake/makefile, or something like that. But there's no convenient way to support them all.
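If you do try to support more than one system, one way to keep their generated files from colliding is to give each its own out-of-tree build directory (the directory names here are my own choice):

mkdir build-autotools && cd build-autotools
../configure && make        # autotools writes its makefiles here
cd ..
cmake -B build-cmake -S .   # cmake writes its files into build-cmake instead
cmake --build build-cmake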
Realistically, I would never choose to support more than one of the above options. You will spend a lot of time getting it right, and every time you need to change your build environment it's two or three times as much work. People will find issues with whichever one you use less often. It's a huge hassle for not much benefit.
Which one you choose depends a lot on your project. If your project runs only (or almost exclusively) on POSIX-type systems, if you want it to be maximally portable even to much older systems despite using a lot of special OS features, or if you want its installation and build options to be extremely flexible (straightforward support for cross-compilation, etc.), then autotools is a good choice. If your project runs on lots of different OS types (Windows in particular) and you want people to be able to develop with their choice of IDE (Visual Studio, Xcode, etc.), then cmake is a good choice.
If your program is straightforward to build and needs hardly any configuration or customization, or you are already familiar with makefiles and don't feel like learning a whole new language just for builds, then raw makefiles may be a good choice.
I don't understand why we need CMake to build libraries. I'm sorry if my question is stupid, but I need to use some libraries on Windows, and whatever library I choose, I need to build it and/or compile it with CMake. What is it for? Why can't I just #include "path" the things that I need into my project, so that they get compiled/built at the same time as my project?
Also, sometimes I needed to install Ruby, Perl, and Python, each at some specific version, just so CMake could build the libraries. Why do I need those programs, and will I need them only to build the library, or later in my project too? (Concretely: can I uninstall those programs after building the libraries?)
Building things in C++ on different platforms is currently a mess.
There are several different build systems out there, and there is no standard way to do this. Just providing a Visual Studio solution won't help compilation on Linux or Mac.
If you add a makefile for Linux or Mac, you need to repeat the configuration between the solution and the makefiles, which can create a lot of maintenance overhead. Makefiles are also not a great build tool compared to the newer ones out there.
That the libraries you need all happen to use CMake is mostly a coincidence, although CMake is a popular choice at the moment.
There are several solutions out there to unify builds. CMake is a build tool in a special sense: it can create makefiles and build from them, but you can also tell cmake to create a Visual Studio solution if you like.
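For instance, the same CMake project can produce either kind of output just by choosing a generator (the generator names must match your installed tools; these are examples):

mkdir build && cd build
cmake -G "Unix Makefiles" ..          # emit makefiles, then build with "make"
# ...or instead, on Windows:
cmake -G "Visual Studio 14 2015" ..   # emit a .sln to open and build in the IDE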
The same goes for the external programs: they are the choice of the maintainer of the library you use, and there are no standards for things like code generation.
While CMake may not be "the" solution (although the upcoming Visual Studio 2015 is integrating CMake support), the trend among cross-platform build systems is moving more and more in this direction.
As to your question of why you cannot just include the header:
Few libraries are header-only; the rest need to be compiled. One option is to get precompiled libs/DLLs and just include the header and add the linker path. This is easier on Linux, because -dev packages install a prebuilt library and its headers via the package manager; Windows has no such thing natively.
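On Debian/Ubuntu, for example, that prebuilt route is a one-liner (using zlib purely as a well-known example):

sudo apt-get install zlib1g-dev   # installs the prebuilt libz plus its headers
gcc main.c -lz -o myprog          # headers found in /usr/include, library via -lz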
The other option is to build it yourself with whatever build tool the library uses.
The short answer is that you don't, but it would probably be difficult to build the project without it.
CMake does not build code; it is instead a build file generator. It was developed by Kitware (during the ITK project, around 2000) to make building code across multiple platforms "simpler". It's not an easy language to use (which Kitware openly admits), but it unifies several things that Windows, Mac, and Linux all do differently when building code.
On Linux, autoconf is typically used to generate the build files, and gcc/g++ (and/or clang) then compiles the code
On Windows, you would typically use the Visual Studio IDE and create what they call a "Solution" that is then compiled by msvc (the Microsoft Visual C++ compiler)
On Mac, I admit I am not familiar with the toolchain, but I believe it centers on Xcode
CMake lets you write a single script you can use to build on multiple machines and specify different options for each.
Like C++, CMake is divided between traditional/old-style CMake (versions before 3.0) and modern CMake (version 3.0 and later). Use modern CMake. The following are excellent tutorials:
Effective CMake, by Daniel Pfeifer, C++Now 2017*
Modern CMake Patterns, by Mathieu Ropert, CppCon 2017
Better CMake
CMake Tutorial
*Awarded the most useful talk at the C++Now 2017 Conference
Watch these in the order listed. You will learn what modern CMake looks like (and what old-style CMake looks like) and gain an understanding of how
CMake helps you specify build order and dependencies, and
Modern CMake helps prevent creating cyclic dependencies and common bugs while scaling to larger projects.
Additionally, the last video introduces package managers for C++ (useful when using external libraries, like Boost, where you would use the CMake find_package() command), of which the two most common are:
vcpkg, and
Conan
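As a quick sketch of the package-manager route with vcpkg (the toolchain-file path depends on where vcpkg is checked out):

vcpkg install boost
cmake -S . -B build -DCMAKE_TOOLCHAIN_FILE=/path/to/vcpkg/scripts/buildsystems/vcpkg.cmake

After that, a find_package(Boost) call in CMakeLists.txt resolves against the vcpkg-installed copy.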
In general,
Think of targets as objects
a. There are two kinds, executables and libraries, which are "constructed" with
add_executable(myexe ...) # Creates an executable target "myexe"
add_library(mylib ...) # Creates a library target "mylib"
Each target has properties, which act like member variables for the target. Property names are written with underscores and (usually) all capital letters, for example
INTERFACE_INCLUDE_DIRECTORIES # include directories that consumers of the target should use
Several CMake functions set properties on target "objects" (under the hood) when run
target_compile_definitions()/features()/options()
target_sources()
target_include_directories()
target_link_libraries()
CMake is a command language, similar to shell scripting, but there is no nesting or piping of commands. Instead
a. Each command (function) is on its own line and does one thing
b. The arguments to all commands (functions) are strings
c. Commands that configure a target take that target's name as their first argument
add_executable(myexe ...) # Create exe target
target_compile_definitions(myexe PRIVATE ...) # Applies to "myexe"
target_include_directories(myexe PRIVATE ...) # Applies to "myexe"
# ...etc.
add_library(mylib ...) # Create lib target
target_sources(mylib PRIVATE ...) # Applies to "mylib"
# ...etc.
d. Commands are executed in order, top to bottom (NOTE: if a target needs another target, you must create that other target first)
The scope of execution is the currently active CMakeLists.txt file. Additional files can be run (added to the scope) using the add_subdirectory() command
a. This works much like running a script in a subshell: the current CMake state (targets and properties, except PRIVATE properties) is "copied" into a new scope ("shell"), where additional work is done.
b. However, the "environment" is not the shell environment (CMake target properties are not passed to the shell as environment variables like $PATH). Instead, the CMake language maintains all targets and properties in the top-level global scope CACHE
PRIVATE properties are used only by the current module. INTERFACE properties are passed only to the modules that link against it. PUBLIC is both: the property applies to the current module and also propagates to the modules that link against it.
target_link_libraries is for direct module dependencies, but it also resolves all transitive dependencies. This means that when you link to a library, you get all of its PUBLIC properties, and those of its dependencies, as well.
a. If you want to link to a prebuilt library by its direct path, you can use target_link_libraries, and
b. if you want to link to another module within the project and take its interface, you also use target_link_libraries
You run CMake on the CMakeLists.txt files to generate the build files you want for your system (Ninja files, a Visual Studio solution, Linux makefiles, etc.), and then you run the underlying build tool to compile and link the code.
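A minimal sketch that ties these pieces together; the project layout and file names below are invented for illustration:

# write a tiny two-target CMakeLists.txt, then generate and build
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.10)
project(demo CXX)
add_library(mylib src/mylib.cpp)                  # library target
target_include_directories(mylib PUBLIC include)  # PUBLIC: propagated to consumers
add_executable(myexe src/main.cpp)                # executable target
target_link_libraries(myexe PRIVATE mylib)        # inherits mylib's PUBLIC properties
EOF
cmake -S . -B build   # generate build files with the default generator
cmake --build build   # run the underlying build tool to compile and link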
I'd like to use an open-source library on Windows (e.g. Aquila, following http://aquila-dsp.org/articles/iteration-over-wave-file-data-revisited/). But I don't understand anything about "build systems"... Everyone just says things like "unzip the tar, do configure, make, make install" on Linux, but I want to use these libraries on Windows. I have several questions.
i) Why do I have to "install" what is just source code? Why can't I use the header files by copying them into the working directory and writing #include ".\aquila\global.h"?
ii) What are configure and make / make install? I can't understand them. I just know that configuring open source on Windows needs "CMake", and that it is a configuration tool... but what does it actually do?
iii) Even though I've run cmake, mingw32-make, and mingw32-make install, my build fails with "undefined references to ...". What does this mean, and what should I do about it?
You don't need to install the sources themselves. You do need to install the libraries that get built from that source code and that your code is going to use.
configure is the standard name for the script that does build configuration for the software about to be built. The usual way it is run (and how you will see it mentioned) is ./configure.
make is a build management tool (as the tag here on SO will tell you). One of the most common mechanisms for building code on Linux (etc.) is the autotools suite, which uses the aforementioned configure script to generate build configuration information for the makefiles it generates, which make then uses to build the software. Running make is also how you run the default build target defined in a makefile (often the all target, which usually builds the appropriate library/binary/etc.).
make install is a specific, secondary, invocation of the make tool on the install target which (generally) installs the (in this case previously) built code into an appropriate location (in the autotools/configure universe the default location is generally under /usr/local).
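Put together, the canonical autotools sequence looks like this (the package name and prefix are illustrative):

tar xzf somelib-1.0.tar.gz && cd somelib-1.0
./configure --prefix=/usr/local   # probe the system, generate the makefiles
make                              # build the default "all" target
sudo make install                 # copy the built libraries/headers under /usr/local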
cmake is, again as the SO tag says, a build system that generates configuration files for other build tools (make, VS, etc.). This allows the developers to create the build configuration once and build on multiple platforms/etc. (at least in theory).
If running cmake worked correctly, then it should have generated the correct information for whatever build tool you told it to target (make, VS, or whatever). Assuming you told it to generate MinGW makefiles, that should have allowed mingw32-make to build the software correctly. If that is not working, then something is still missing from your system (and cmake probably should have caught it).
But to give any more detail you will need to give more detail about what errors you are actually getting and from what command.
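That said, "undefined reference" errors at link time usually mean the built library was never handed to the linker. With g++ that looks something like the line below (the paths and library name are illustrative, not Aquila-specific advice):

g++ main.cpp -I/usr/local/include -L/usr/local/lib -laquila -o myprog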
(Oh, and on Windows, especially if you plan on building your software with VS (or some other non-mingw32-make tool), the chances of you needing to run mingw32-make install at all are incredibly small.)
For Windows, use CMake (optionally with a recent Ninja as the build tool).
The process is not simple or straightforward, but it is achievable. You need to write a CMake configuration.
The build process in general is not simple and straightforward; that is one reason languages like Java exist (but that's another topic).
Rely on CMake to build the library, and you will get the open-source library built for Windows.
Whether you distribute it as a library for Windows systems, or distribute and integrate it with your own software that includes the open-source library, in either case you have to build it for Windows.
Writing the CMake configuration helps, and it makes it easier to build for other platforms as well.
Now the question comes: is there any other way, besides CMake, to get a Windows build?
Would you love the flavor of writing assembly directly?
If the answer is obviously no, then you will have to write CMake and generate a .sln for MSVC, and project files for other compilers.
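Concretely, generating and building a Visual Studio solution from a CMake project looks something like this (the generator string must match your installed Visual Studio version):

cmake -S . -B build -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release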
Read the FAQ and the documentation before building an open-source library, and fix the errors as they crop up.
It is like handling burning iron, but it pays off if you're working on something meaningful. Most server libraries are open source (e.g. the age-old Apache httpd). So think about what you're doing.
Not every open-source library will be useful to your project, but building them like this is the way to use the ones that are.
Of course, we all know that building GCC versions >= 4.1.x requires the supplementary packages MPFR, GMP, and MPC to be present.
There are a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately, then tell GCC's configure where they are installed when building GCC.
2) Download each supporting package, untar it, and move the source into your GCC source directory; make will then automatically build each of the packages when needed.
(Executing the gcc-src/contrib/download_prerequisites script accomplishes the same thing as option 2.)
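For concreteness, the in-tree route (option 2, via the script) looks roughly like this; the version number and configure flags are illustrative:

tar xf gcc-13.2.0.tar.xz && cd gcc-13.2.0
./contrib/download_prerequisites          # fetch GMP/MPFR/MPC into the source tree
mkdir ../gcc-build && cd ../gcc-build
../gcc-13.2.0/configure --target=arm-none-eabi --enable-languages=c,c++
make && make install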
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dropping each package's source into my GCC source directory and letting make figure it out?
I've seen the former done more frequently in various build scripts: each package is pre-compiled to a binary, and the GCC build is then told where they are located. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases, I believe option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually:
A package maintainer who wants to build them separately, because they want separate packages for MPFR et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.