Building GCC with MPFR, GMP and MPC

As we all know, building GCC requires the supplementary packages GMP, MPFR and MPC to be present (GMP and MPFR are required since GCC 4.3, MPC since GCC 4.5).
There are a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately, then tell GCC's configure where the installed copies are located at build time.
2) Download each supporting package, untar it and move the source into the top level of your GCC source directory; the build machinery will then build each package automatically when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2; both options are sketched below.)
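For concreteness, the two options look roughly like this (the install paths are illustrative; the --with-* flags are GCC's standard configure options):

# Option 1: build GMP/MPFR/MPC yourself, then point GCC's configure at them
../gcc-src/configure --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-mpc=/opt/mpc ...
# Option 2: drop the sources into the GCC source tree and let the build handle them
cd gcc-src && ./contrib/download_prerequisites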
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping the packages' source into my GCC source directory and letting the build figure it out?
In the build scripts I've seen, the more common approach is to pre-compile each package and then tell configure where it is located when compiling GCC. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.

For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually:
A package maintainer who wants to build the packages separately, because they want separate packages for MPFR et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small, since they make no changes to MPFR/GMP/etc.
I haven't done much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built between the two approaches.
I'm not the biggest authority on this though, so YMMV; I may be wrong.

Related

git2go with libssl and libssh2 in single binary

Could anyone offer suggestions (or resources) on how I could package a Go program that uses git2go, libssl and libssh2 so that it doesn't require the end user to install those libraries separately?
I am only targeting Linux distros (if it matters).
One way would be to build those dependencies statically as well and use PKG_CONFIG_PATH to point to your own copies so everything gets linked statically. That should make CMake choose the static versions.
But if the goal is to avoid depending on user-installed libraries rather than to make everything a single executable, I would recommend shipping the libraries and adjusting the load path to make sure they get loaded. With gcc you'd pass -Wl,-R to embed the search path in the binary itself, so you can control where it searches for the shared libraries you ship with your app. With Go it looks like you can pass -r to the linker (via -ldflags or manually) to do the same thing.
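For example, a minimal sketch of the shipped-libraries approach (the libs/ directory name is an assumption):

# ship the .so files in a libs/ directory next to the binary; $ORIGIN is
# expanded at run time by the dynamic loader relative to the executable
go build -ldflags '-r $ORIGIN/libs' .
# the equivalent idea when linking C code with gcc:
# gcc main.c -lgit2 -Wl,-R'$ORIGIN/libs'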
libgit2 is rather extensible, so there is a third option, which is to implement the TLS stream and SSH transport in Go and plug those into a version of libgit2 built without support for them. This is, however, a significant amount of work.

How to install and use open source library on Windows?

I'd like to use an open source library on Windows (e.g. Aquila, following http://aquila-dsp.org/articles/iteration-over-wave-file-data-revisited/). But I don't understand anything about the "build system". Everyone just says things like "unzip the tar, run configure, make, make install" on Linux, but I want to use the library on Windows. I have several questions.
i) Why do I have to "install" plain source code? Why can't I just copy the header files into my working directory and write #include ".\aquila\global.h"?
ii) What are configure and make / make install? I don't understand them. I just know that configuring open source software on Windows needs "CMake", and that it is a configuration tool... but what does it actually do?
iii) Even though I've run cmake, mingw32-make and mingw32-make install, my compiler says "undefined reference to ...". What does that mean, and what should I do about it?
You don't need to install the sources. You do need to install the libraries that get built from that source code, which your code is going to use.
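That is also what is behind the "undefined reference" errors in question iii: headers only declare functions, and the linker still needs the built library. A hypothetical compile line (the paths and library name are illustrative, not necessarily Aquila's actual ones):

g++ main.cpp -I/path/to/aquila/include -L/path/to/aquila/lib -lAquila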
configure is the standard name for the script that does build configuration for the software about to be built. The usual way it is run (and how you will see it mentioned) is ./configure.
make is a build management tool (as the tag here on SO will tell you). One of the most common mechanisms for building code on Linux (etc.) is the autotools suite, which uses the aforementioned configure script to generate build configuration for the generated makefiles, which make then uses to build the software. make is also the way to run the default build target defined in a makefile (often the all target, which usually builds the appropriate library/binary/etc.).
make install is a specific, secondary invocation of the make tool on the install target, which (generally) installs the previously built code into an appropriate location (in the autotools/configure universe the default location is generally under /usr/local).
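Put together, the canonical flow on Linux looks like this (the prefix shown is just the common default):

./configure --prefix=/usr/local
make
make install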
cmake is, again as the SO tag says, a build system that generates configuration files for other build tools (make, VS, etc.). This allows the developers to create the build configuration once and build on multiple platforms/etc. (at least in theory).
If running cmake worked correctly, then it should have generated the correct information for whatever build system you told it to target (make or VS or whatever). Assuming that was make, mingw32-make should then have been able to build the software correctly (assuming additionally that mingw32-make is not a distinct CMake generator from make). If that is not working, then something is still missing from your system (and cmake probably should have caught it).
But to say any more we'd need more detail about what errors you are actually getting and from what command.
(Oh, and on Windows, especially if you plan on building your software with VS or some other non-mingw32-make tool, the chances of you needing to run mingw32-make install are incredibly small.)
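For reference, a typical out-of-source CMake + MinGW flow looks something like this ("MinGW Makefiles" is the standard CMake generator name; the install destination depends on CMAKE_INSTALL_PREFIX):

mkdir build
cd build
cmake -G "MinGW Makefiles" ..
mingw32-make
mingw32-make install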
For Windows, use CMake (or the latest Ninja).
The process is not simple or straightforward, but it is achievable: you need to write a CMake configuration.
The build process in general is not simple and straightforward; that's one reason languages like Java exist (but that's another topic).
Rely on CMake to build the library, and you will get the open source library for Windows.
Whether you distribute it as a library for Windows systems, or distribute and integrate it with your own software that includes the open source library, in either case you will have to build it for Windows.
Writing the CMake configuration helps; it makes it easier to build for other platforms as well.
Now the question comes: is there any other way to do a Windows build except CMake?
Would you love the flavor of writing assembly directly? If the answer is obviously no, you will have to write CMake and generate an sln for MSVC and files for other compilers (an example follows below).
Just fix the errors as they come: read the FAQ and documentation before building an open source library, and fix the errors as they lurk through.
It is like handling burning iron, but it pays off if you're working on something meaningful. Most server libraries are open source (e.g. the age-old Apache httpd). So think about what you're doing.
There are not many open source libraries that you can use in a project without building them, so this is simply the way to use open source libraries.
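As an illustration of that last point, generating and building a Visual Studio solution from a CMake project looks something like this (the generator name varies with the installed VS version):

cmake -G "Visual Studio 14 2015" path\to\source
cmake --build . --config Release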

Link go program vs GNU readline statically

I'm writing a Go program that uses the GNU readline library for a fancy command line interface. To simplify the installation process and not worry about library versions and other issues, I want to link it statically.
The problem is I don't really know how to do it. If I precompile the library, I would have to provide several versions of my code for the different versions of the .a or .lib readline library. To avoid this problem I was thinking of just including the current readline code in my Go project and letting the go tool compile it when it builds the project. However, building the readline library requires make. Is there a way to tell the go tool how to build the C code?
Yes, you can certainly do that. I've recently done something similar with a different project, mainly because the code was not available as a library (Ubuntu compiles just the command line tool for it). To achieve it, I ran the autoconf script with options that I figured would be sensible on most systems, and copied the C code, together with the automatically built config.h header file, into the Go package directory. Then I built the original C code with make once, observed which options gcc was given while compiling and linking it, and copied the appropriate ones into cgo's LDFLAGS and CFLAGS options (you can also inspect the Makefile, but that was easier).
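A rough sketch of that approach from the shell, before baking the flags into #cgo CFLAGS/LDFLAGS directives in the package (the paths and the -lncurses dependency are assumptions; use whatever options you actually observed from make):

# assumes the readline C sources plus the generated config.h live in the package directory
CGO_CFLAGS="-I$PWD" CGO_LDFLAGS="-lncurses" go build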
A couple of side notes:
Have you considered doing the readline work in Go itself? The ssh terminal package works at least as a pretty good seed, if it doesn't solve your problem completely.
Remember that readline, although it is a library, is GPL. You'll necessarily have to license your own software under the GPL as well if you link against or embed it. There are other similar libraries available with less strict licenses, if you care.
I recommend avoiding readline; better alternatives exist, like https://github.com/edsrzf/fineline

When/how to specify configure/make target

A large variety of open source projects are distributed as source code and are meant to be compiled with the ./configure && make approach. But if I want to cross-compile, at which of those two steps am I supposed to say what target platform I want the binary for?
Is this handled by configure/make in general, or is it specific to every project? What would be an example of compiling some project, library or console application while specifying the target?
I know many projects have a page on their websites dedicated to "cross compiling this program", so it seems to be a project-specific setting. But those projects still use configure/make, so how does it all relate?
If your system is using standard GNU autoconf, then you would always specify the cross-compilation at configure time, not at make time. If the configure script does not know you're cross-compiling, it may obtain incorrect answers when it probes the system to find out what is and is not supported.
Cross-compilation is what the --build, --host, and --target flags to configure are for. You should never need to set --build: it always refers to the system you're running configure on, and configure can figure that out for itself. For a normal cross-compilation you set --host to the system the resulting binaries will run on; --target matters only when building tools that themselves generate code, such as compilers and binutils. You may also need to set CC (for C programs) and/or CXX (for C++ programs), LD, AR, STRIP, and a few others, if needed. Personally I prefer to build in a separate directory as well, although some packages unfortunately don't support it:
tar xzf foo-1.1.tar.gz
mkdir obj
cd obj
../foo-1.1/configure --host=... CC=...-gcc CXX=...-g++ ...
make
Note that this is all provided by basic autoconf / automake, so all projects should do it the same way (although in my experience, many projects that don't exercise cross-compilation somewhat regularly get something wrong, so it doesn't work so well).

Tutorial on building whole toolchain on CentOS

I am working on CentOS 6 machines, which have very old GCC/glibc versions. I want to build the whole glibc/binutils/gcc toolchain with the latest, or at least very recent, versions in order to get C++11 support in recent GCC, ld.gold in recent binutils, and possibly improvements in recent glibc.
I want to put the whole toolchain in a separate directory and not influence any existing system files. I also want to build gcc with --with-sysroot so that when using the new gcc I don't need to specify -I/some/directory/include and -L/some/directory/lib or other parameters. The generated executables should also automatically use the new ld-linux-xxxxx loader, which will automatically find the new libc.so.
Does anyone know of a tutorial for this task?
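For the gcc step, the kind of invocation I have in mind is roughly this (paths and version are illustrative; a real toolchain build also involves building binutils first and bootstrapping glibc, which is what a tutorial would need to cover):

mkdir build
cd build
../gcc-x.y.z/configure --prefix=/opt/toolchain --with-sysroot=/opt/toolchain/sysroot --enable-languages=c,c++
make
make install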
The compiler is very dependent on glibc. Even if you manage to build the compiler, in a chrooted system or equivalent, you will also need to build all the libraries needed by the programs you will build with this new compiler.
The best you can do is use a fresh new system (a VM or whatever) or upgrade your existing one.
You can download the latest toolchain from OpenEmbedded or Yocto.
That way you don't have to install any packages on your current system.
Just download the toolchain, source the environment, and that's it: you are ready to check the C++11 support.
The toolchain can be downloaded from:
http://downloads.yoctoproject.org/releases/yocto/yocto-1.7/toolchain/ (just select the architecture, either 32-bit or 64-bit, based on what your machine supports)
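For instance (the installer and environment file names below are illustrative; the actual names depend on the architecture and release you pick):

# run the downloaded installer script, then source the environment it sets up
sh poky-*-toolchain-1.7.sh
. /opt/poky/1.7/environment-setup-*
$CC --version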
If you need the latest toolchain, you'd better migrate to Fedora.
If you can't or won't, the best bet is to get the pieces as source RPMs for CentOS and Fedora, unpack them, and fix up the CentOS ones by pilfering the sources and patches from Fedora. Take care that the result doesn't overrule the system packages: correct the versions and fix them to install elsewhere (don't mess up your system too much! /usr/local comes to mind). The pieces are, at the least, binutils and gcc.
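Unpacking a source RPM without going through the system RPM database can be done like this (the file name is an example):

rpm2cpio gcc-*.src.rpm | cpio -idmv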
I do not know why you need this. If it is needed to compile for another computer, I would suggest using a virtual machine running the same OS as the target. Much easier!
