I have a few questions about the process of building GCC that I was hoping someone could explain to me.
Why is it necessary to unset C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, LD_LIBRARY_PATH and LIBRARY_PATH?
Why does GCC require MPFR, MPC, and GMP to build? And if old versions (as downloaded by the download_prerequisites script) are used for the build and newer versions are installed later, which will be used by a compiled program?
Why does GCC require MPFR, MPC, and GMP to build?
I can answer this part. MPFR and MPC are necessary to perform floating-point operations at compile time. In theory MPFR could also be used to parse decimal constants in the source code (GCC developers have said several times that, since they depend on MPFR anyway, they might as well use it for that, but to my knowledge GCC's decimal-to-floating-point conversion still relies on its own code in real.c). Using MPFR also makes it possible to build cross-compilers hosted on a machine that doesn't have floating-point (or has floating-point with different characteristics than the target architecture).
GMP is just a dependency of the other two.
It wasn't always like this; the dependency on MPFR is a relatively recent change (within the last couple of years).
And if old versions (as downloaded by the download_prerequisites script) are used for the build and newer versions are installed later, which will be used by a compiled program?
The GMP, MPFR and MPC libraries are used at compile time only. Any program that has already been compiled was built with whatever versions of these libraries the compiler was using at the time. Updating the libraries afterwards changes nothing from the point of view of an already-compiled program.
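If you want to convince yourself of this, compare the dynamic dependencies of the compiler proper with those of a program it produces (a rough check; on many distributions cc1 links these libraries dynamically, though some link them statically):

# the compiler proper pulls in GMP/MPFR/MPC...
ldd "$(gcc -print-prog-name=cc1)" | grep -Ei 'gmp|mpfr|mpc'
# ...but a program it compiles does not
echo 'int main(void){return 0;}' > t.c && gcc t.c -o t
ldd ./t | grep -Ei 'gmp|mpfr|mpc'    # no output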
While I'm here, I think I can explain the other thing as well:
Why is it necessary to unset C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, LD_LIBRARY_PATH and LIBRARY_PATH?
Because the build process uses these variables for its own purposes, and it will get confused if you set them yourself.
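In practice that just means clearing them in the shell you build from, e.g.:

# clear the variables for the current shell before configuring and building
unset C_INCLUDE_PATH CPLUS_INCLUDE_PATH LD_LIBRARY_PATH LIBRARY_PATH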
Ubuntu 16.04 comes with GCC 5.4 as its default compiler, which does support C++11; however, C++11 is not enabled by default in that particular version of GCC.
My intent is to use some of the binary libraries (not header-only ones) from the Ubuntu repository (e.g. Boost). In my projects I will enable C++11.
How were the C++ libraries from the repository compiled? Is it possible to use them with C++11 enabled? I know that C++ libraries can be called from different languages (Java, Python, C#, etc.) by hiding all the C++ machinery behind a plain C interface, but with Boost that is not the case. If a certain function returns a string, a vector or anything else from the STL, then there is a problem: AFAIK the binary representation of STL objects depends on compiler flags (e.g. -std=c++11).
Thank you.
Which exact libraries are you talking about?
If you are talking about the standard library: libstdc++ is part of gcc. It is always okay to link against it, no matter which standard you compile at. gcc also made the decision to include ABI tags, so that it can remain ABI-compatible between code compiled as C++11 and code compiled as pre-C++11. See for instance TC's really nice answer to a question I asked here:
Is this simple C++ program using <locale> correct?
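If you want to see those tags in practice, you can inspect a library's demangled dynamic symbols; new-ABI names carry a cxx11 marker (the library path here is just an example):

# demangled dynamic symbols; names built against the new ABI mention cxx11
nm -DC /usr/lib/x86_64-linux-gnu/libsomelib.so | grep cxx11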
If by
How were c++ libraries from the repository compiled?
you mean how all of the C++ libraries in the Ubuntu repositories were compiled, the answer is: it may be different for each one.
For instance, if you want to use libfreetype6-dev or libsdl2-dev, these are C libraries, and they will be fine to link against no matter what standard you target.
If you want to use libsilly-dev from CEGUI, that is a C++ library, and it is usually best to use the exact same compiler for your project and for the C++ lib that you are linking to. If it appears in the Ubuntu repository, you can assume it was built with the default g++ version that Ubuntu is shipping. If you need to use a different compiler, it's probably best to build the C++ lib yourself -- in general, C++ is not ABI-stable across different compilers, or even across different versions of the same compiler.
If you want to use compiled Boost libraries, it's probably best to use the libs the distribution gives you together with the compiler it gives you. If you only use header-only Boost, then the compiler doesn't matter, since you don't actually have to link with anything they built -- you then have more flexibility with respect to compilers.
Often, if you need to use C++ libraries, it's best to integrate their build system into yours so that everything can easily be rebuilt from source and you only have to configure the compiler once. (At least in my experience.) This can save a lot of time when you decide to upgrade compilers later. If you use CMake this is often feasible, but sometimes it can be hard, especially if you have a lot of C++ dependencies. If you don't use CMake -- well, many libraries use CMake, and it won't be that easy to integrate them this way. CMake is still kind of a pain anyway, so this might not be such a loss.
How can I find out the minimal version of glibc required by gcc or binutils?
Regards.
binutils generally doesn't have a minimal glibc requirement because there aren't many glibc-specific details in it. It's merely a collection of low-level tools like an assembler, a linker and an objdumper, all of which are built on code included in binutils itself.
gcc is a different beast -- it needs to know intimate details about the C library's capabilities. For the specific version of gcc you have, consult the INSTALL/index.html file (and particularly the Prerequisites page) for the requirements.
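For example (the version number here is just illustrative), after unpacking a release tarball you can read the requirements offline:

# the Prerequisites page ships inside the source tarball
tar xf gcc-4.9.2.tar.bz2
less gcc-4.9.2/INSTALL/prerequisites.html    # or open it in a browser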
I have downloaded a library that was compiled with GCC 4.8, before the ABI change in GCC.
On my laptop (latest Kubuntu) I have GCC 5.2, and when I installed Boost it seems to have used the new ABI; I then get link errors like the following:
undefined symbol.....__cxx11....
How can I build Boost with the old ABI using GCC 5?
To my knowledge, there are no prebuilt Boost packages for the old ABI in the official Kubuntu repositories, so you will have to build Boost yourself. The build process is documented here.
Make sure you're building the same Boost version that was used when your library was built. If any Boost configuration macros were defined for that build, you will have to define them the same way; otherwise you may encounter ABI incompatibilities between the library and the Boost you've built.
In order to switch libstdc++ to the old ABI, you will also have to define _GLIBCXX_USE_CXX11_ABI as 0, as described here. For example:
b2 -j8 variant=release define=_GLIBCXX_USE_CXX11_ABI=0 stage
You will also need to define the macro when you build your own code that uses Boost and the library.
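For instance, a compile line for one of your own translation units might look like this (paths and file names are hypothetical):

# the macro must match the one used for the Boost build above
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -I/path/to/boost -c myfile.cpp -o myfile.o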
The define property, along with many others, is documented here.
Of course we all know that building GCC requires the supplementary packages MPFR, GMP and MPC to be present (GMP and MPFR have been hard requirements since GCC 4.3, MPC since GCC 4.5).
There are a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately, then tell configure where the installed libraries are located when you build GCC.
2) Download each supporting package, untar it and move the source into your GCC source directory; make will then automatically build each package when needed.
(Running the gcc-src/contrib/download_prerequisites script does the same as option 2.)
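Concretely, the two options look something like this (paths, versions and extra flags are illustrative):

# option 1: point GCC's configure at separately installed libraries
../gcc-src/configure --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-mpc=/opt/mpc ...

# option 2: drop the sources into the GCC source tree and let make build them
cd gcc-src && ./contrib/download_prerequisites && cd ..
mkdir gcc-build && cd gcc-build
../gcc-src/configure --target=arm-none-eabi ...
make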
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping each package's source into my GCC source directory and letting make figure it out?
I've seen the pre-compiled approach used more frequently in various build scripts: each package is built separately, and configure is then told where they are located when gcc is compiled. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually.
A package maintainer who wants to build the libraries separately, since they ship separate packages for mpfr et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.
In the PSPSDK (homebrew) we are using binutils 2.16.1 to assemble and link code for the PlayStation Portable; however, that release is getting quite outdated (three versions have superseded it). The community and I have been updating GCC and newlib to the latest stable versions, and everything seems to work with the old binutils.
Will GCC produce better code with binutils 2.19? Why?
Will binutils 2.19 produce better ELF files and libs than 2.16.1? Why?
binutils 2.19 has a new ELF linker called gold, which is multi-threaded, written in modern C++, and quite a bit faster than the usual ld linker. I'm not sure, however, how much work would be involved in adopting it.
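If you want to experiment with it, newer GCC drivers can be told to link with gold (assuming your binutils installed ld.gold for your target):

# ask the GCC driver to use gold instead of the classic ld
g++ -fuse-ld=gold -o app main.o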
Other than that, new versions are generally a good idea: performance improvements and bug fixes are likely to have been included. I would certainly try it, and if something goes wrong you can always step back.
In general, you don't need to upgrade binutils unless you run into some bug fixed in a later binutils version, or need new features (such as linker build-ids).
In particular, GCC code generation is largely independent of binutils (except for constructs like __thread, which require a certain level of support from binutils).