I am working on CentOS 6 machines, which ship very old GCC/glibc versions. I want to build the whole glibc, binutils, and GCC toolchain at the latest, or at least very recent, versions in order to get C++11 support from a recent GCC, ld.gold from recent binutils, and possibly the improvements in a recent glibc.
I want to put the whole toolchain in a separate directory so that it does not touch any existing system files. I also want to build GCC with --with-sysroot so that when using it I don't need to specify -I/some/directory/include, -L/some/directory/lib, or other such parameters, and so that the generated executables automatically use the new ld-linux-xxxxx program loader, which in turn finds the new libc.so.
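To make this concrete, the kind of setup I have in mind is roughly the following (the paths are just placeholders, and I'm not sure of the exact flag spellings):

    # everything lives under /opt/mytoolchain, nothing touches the system
    mkdir -p /opt/mytoolchain/sysroot
    cd gcc-build
    ../gcc-src/configure --prefix=/opt/mytoolchain \
                         --with-sysroot=/opt/mytoolchain/sysroot \
                         --enable-languages=c,c++
    make && make install
    # afterwards, ideally just:
    /opt/mytoolchain/bin/g++ -std=c++11 hello.cpp   # no -I or -L needed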
Does anyone know of a tutorial for this task?
The compiler is very dependent on glibc; even if you manage to build the compiler in a chrooted system or equivalent, you will also need to build all the libraries needed by the programs you build with this new compiler.
The best you can do is use a fresh new system (VM or whatever) or upgrade your existing one.
You can download the latest toolchain from OpenEmbedded or Yocto.
That way you don't have to install any packages on your current system.
Just download the toolchain, source the environment, and that's it: you are ready to check the C++11 support.
The toolchain can be downloaded from:
http://downloads.yoctoproject.org/releases/yocto/yocto-1.7/toolchain/ (just select either the 32-bit or 64-bit architecture, depending on what your machine supports)
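As a rough sketch of the workflow (the installer and environment-setup script names vary with the release and architecture you pick, so treat these paths as examples only):

    # run the downloaded self-extracting installer; it unpacks under /opt/poky by default
    sh poky-glibc-x86_64-<image>-<arch>-toolchain-1.7.sh
    # source the environment script it installs
    . /opt/poky/1.7/environment-setup-<arch>-poky-linux
    # $CC/$CXX now point at the cross toolchain; test the C++11 support
    $CXX -std=c++11 -o test test.cpp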
If you need the latest toolchain, you'd better migrate to Fedora.
If you can't or won't, the best bet is to get the pieces as source RPMs for CentOS and Fedora, unpack them, and fix up the CentOS ones by pilfering the sources and patches from Fedora. Take care that the result doesn't override the system packages: get the versions right and arrange for it to install elsewhere (don't mess up your system too much; /usr/local comes to mind). The pieces are at least binutils and GCC.
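A rough sketch of that workflow, with made-up version numbers (and assuming rpmbuild's default ~/rpmbuild layout):

    # unpack the Fedora source RPM into ~/rpmbuild/SOURCES and SPECS
    rpm -ivh binutils-x.y.z.src.rpm
    # apply the distribution patches to get a patched source tree
    rpmbuild -bp ~/rpmbuild/SPECS/binutils.spec
    # build that tree by hand, installed away from the system copies
    cd ~/rpmbuild/BUILD/binutils-x.y.z
    ./configure --prefix=/usr/local && make && make install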
I do not know why you need this. If it is needed in order to compile for another computer, I would suggest using a virtual machine running the same OS as the target: much easier!
Related
There is a current "vogue" in some categories of software (I have observed it with 3D software) of forcing the use of the latest library or tool, making them de facto incompatible with (not so) old distributions. For example:
some (non-free) recent slicers for resin printing are incompatible with my 2019 (!) Linux Mint because they were built against a more recent version of glibc, or even
it looks like I won't be able to build the latest release of OpenSCAD, because it requires a more recent version of cmake...
So, in this last case, I thought I should be able to do a local build of a more recent cmake, but I didn't find an altinstall make target like Python has, for example (in Python, make altinstall installs into /usr/local/bin instead of /usr/bin, which would otherwise break your system).
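For what it's worth, cmake itself can be built and installed under a private prefix without touching the system copy; a sketch (the version number and prefix are just examples):

    tar xf cmake-3.x.y.tar.gz && cd cmake-3.x.y
    ./bootstrap --prefix=$HOME/.local   # keep it out of /usr
    make && make install
    # then call it explicitly, or put $HOME/.local/bin first in PATH
    $HOME/.local/bin/cmake --version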
I develop a utility in Go that requires a recent version of SQLite. I'm only interested in targeting a specific architecture, namely x64 Linux, and I'm developing the utility on Mac OS X. I'm using the go-sqlite3 driver, and I use GNU Make + Glide to build the utility. In order to cross-compile on my Mac I pass specific arch flags to make.
Repos on Linux platforms that I'm targeting usually have quite old versions of sqlite that don't have features that I need in my utility.
I can manually compile and install the required version of SQLite on all the platforms that I need, but it is quite cumbersome. I wonder if there is a good way to either statically link a specific version of SQLite or somehow bundle the utility with a specific version of the SQLite dynamic library.
Even though I mention SQLite a lot, this question can be generalized to other libraries: how do you bundle a Go app with a specific version of a C library when only an outdated version of it may be installed on the target platform?
Also: how can development of that utility be organized so that other devs won't need to manually compile and install a specific version of the library? The preference is to use a Makefile that builds all the binaries for the required target platform. I see that I could just copy the source of a specific library version (e.g. SQLite) into my utility's repo, though I wonder if there is a better option - maybe I can somehow use Glide dependencies for that purpose and build the library I need as part of my other dependencies.
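To make the question concrete, the kind of build invocation I'm talking about is roughly the following (the cross C compiler name is a placeholder for whatever Linux-targeting compiler happens to be installed on the Mac):

    # cgo needs a C compiler that targets linux/amd64 when cross-compiling from OS X,
    # and -extldflags "-static" asks the external linker to link the C parts statically
    CGO_ENABLED=1 GOOS=linux GOARCH=amd64 \
        CC=x86_64-linux-musl-gcc \
        go build -ldflags '-extldflags "-static"' -o myutility .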
Of course we all know building GCC version >= 4.1.x requires the supplementary packages MPFR, GMP and MPC to be present.
There are a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately, then tell configure where the resulting installations are located when building GCC.
2) Download each supporting package, untar it, and move the source into your GCC source directory; the build machinery will then build each of the packages automatically when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2)
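For reference, the two options look roughly like this (paths and the trailing options are placeholders):

    # Option 1: build the prerequisites separately, then point GCC's configure at them
    ../gcc-src/configure --with-gmp=/opt/prereqs --with-mpfr=/opt/prereqs --with-mpc=/opt/prereqs ...

    # Option 2: drop the sources into the GCC source tree and let the build handle them
    cd gcc-src
    ./contrib/download_prerequisites    # fetches and symlinks gmp/, mpfr/, mpc/
    cd ../gcc-build && ../gcc-src/configure ...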
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping the package's source into my GCC build directory and letting make figure it out?
I've seen it done more frequently in various build scripts by pre-compiling each package to a binary, and then telling make where they are located during gcc compilation. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually.
A package maintainer wants to build separately as they want separate packages for mpfr et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.
I recently installed Debian Squeeze on my machine, with C++ programming practice as one of the main goals. I use the Boost libraries regularly in my projects. On OS X and Windows I had to install the Boost header libraries manually before using them. Regarding Linux, however, the front page of the Boost website mentions:
Popular Linux and Unix distributions such as Fedora, Debian, and NetBSD include pre-built Boost packages.
For my current projects I mainly use the header-only libraries, not the pre-built packages. So my question is: are the header libraries installed by default anywhere on Debian, or do I have to install them? I have already looked in /usr/include and it doesn't seem to have any Boost directory. I have googled and looked through related discussions on SO, but didn't get a clear answer to my question. If I do have to install the header libraries, is there an apt-get way of doing it, or do I simply untar them and place them in a convenient location (/usr/local/include)?
Second, if I need to place the Boost headers manually (say in /usr/local/include/), should the version of the headers match the pre-installed packages, for compatibility with any potential future projects that use both the binaries (libboost-*) and the header files?
I am fairly new to programming on a Linux platform. Although I can make things work by patch-and-match (and googling), I am looking for guidance on long-term best practices.
Thanks.
Saying that a GNU/Linux distribution "includes" a package such as Boost doesn't mean it is installed automatically; it means the package is available for installation using your system's package management tool. The package might be tailored for your distribution, so that it integrates well with the rest of the OS, or it might be identical to the upstream version, in which case the benefit is simply that it's already built for you and convenient to install from within the OS.
There is plenty of documentation on Debian's package management tools:
http://wiki.debian.org/PackageManagement
http://www.debian.org/doc/manuals/debian-faq/ch-pkgtools.en.html
http://www.debian.org/doc/manuals/debian-reference/ch02.en.html
So yes, you want to use apt-get (or the equivalent with another of Debian's tools) to install Boost into /usr/include; that will be much easier than installing it manually. If you later decide to install Boost manually, keep that installation entirely separate from the system packages, so that the libraries and headers from the newer version don't conflict with the system packages. If it's a single-user machine and you don't need the packages to be available to other users, you can just install them in your home directory rather than in /usr/local/ (which requires superuser access, and you should do as little as possible as the root user).
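For example (the unversioned package names are the usual way in; on a given release the underlying versioned packages may differ):

    # headers only - enough for the header-only libraries
    sudo apt-get install libboost-dev
    # or everything, including the compiled libraries (boost_system, boost_thread, ...)
    sudo apt-get install libboost-all-dev
    # quick check of which version landed in /usr/include
    grep BOOST_LIB_VERSION /usr/include/boost/version.hpp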
On the SourceForge page for MinGW you can download GCC 4.5.2, which is the latest version there. On the GNU mirrors you can download the GCC 4.6 sources and compile them for one of the possible Windows targets:
i[3456789]86-w64-mingw*
i[3456789]86-*-mingw*
x86_64-*-mingw*
Is there a difference between using one of these targets and the traditional GCC for MinGW? Would it make sense to use the regular GCC because it has more up-to-date versions or would it make more sense to wait until an up-to-date GCC for MinGW is released?
As you can see in the README file accompanying the MinGW release of GCC on SourceForge, no local patches were used, and I think this has been the case for quite a while now. So, assuming there have been no changes in the GCC codebase that require new local patches, you can very well download the GCC sources from one of the mirrors and build them yourself.
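If you go that route, the configure step is the usual cross-compiler dance; a minimal sketch, assuming a matching binutils and the mingw-w64 headers/runtime are already installed under the prefix:

    mkdir gcc-build && cd gcc-build
    ../gcc-4.6.0/configure --target=x86_64-w64-mingw32 \
                           --prefix=/opt/mingw \
                           --enable-languages=c,c++,fortran
    make && make install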
I have done so myself in the past, especially because I use gfortran, which is under quite heavy development, so from time to time I take the most recent snapshot and build that myself, so I can use certain new features that were only recently introduced.
(I have to admit that it took some trying to get the build to run without errors, and after a period without problems, I recently ran into some new ones that I couldn't completely smooth out. I will have to try again soon.)