Should I compile GCC with the latest version?

I have to compile 3 versions of gcc, say 9, 10 and 11.
My system gcc is version 8 (let's say).
Question: is there any advantage to compiling gcc-v9 with gcc-v8, gcc-v10 with gcc-v9, and gcc-v11 with gcc-v10?
Or is there no advantage, and I can compile them all with gcc-v8?
Thank you for pointing out some directions for further research.

GCC has a "bootstrap" build process. So when you try to build gcc-11 with only gcc-8 installed, it will build a temporary "stage 1" version of gcc-11 using gcc-8, then compile gcc-11 again using gcc-11-stage1. Thus no matter what you start with, the version of gcc-11 that comes out of the build process was effectively compiled with itself.
So all that matters is that gcc-8, or whatever "system compiler" was previously installed, is able to build a stage 1 version of gcc-11 that runs well enough to compile the stage 2 version. It doesn't matter whether your system compiler is good at optimizing, and gcc's source code is deliberately written to use a fairly minimal set of language features (at least for stage 1), so you are not likely to run into trouble with your system compiler having missing or buggy support for obscure corners of the language. Historically, the "system compiler" was often not gcc at all, but some compiler provided by the computer vendor or an unrelated third party, and so one couldn't rely much on its quality; gcc was designed with that in mind.
Theoretically your system compiler could have a bug which miscompiles gcc-11-stage1 in such a way that it appears to work, but itself miscompiles stage2. This is unlikely, and it's even less likely that it would happen in a way that wasn't obvious (e.g. the stage2 compiler simply segfaulting). If worried, there's an option to have stage2 build a stage3 compiler, and then check that both versions are identical. So as long as the build completes, you can be pretty confident that the final installed compiler is fine and unaffected by bugs in the original system compiler. (All that said, a reference to Ken Thompson's "Reflections on Trusting Trust" is obligatory here.)
So in practice, you don't need to worry about the version of gcc used to build a new version. Whatever you happen to have installed already, within reason, will be fine.
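As a concrete sketch (the version number, install prefix and job count below are placeholders, not taken from the question), the default top-level make target already performs the full bootstrap, including the stage 2 / stage 3 comparison mentioned above:
mkdir build && cd build
../gcc-11.2.0/configure --prefix=/opt/gcc-11 --enable-languages=c,c++ --disable-multilib
make -j8     # stage 1 is built with the system gcc-8, stages 2 and 3 with gcc-11 itself, then compared
make install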

Related

Building boost with clang 3.8 on Windows

From some Googling around it seems that clang's support for Windows has been improving recently, and boost's support for clang may also have improved. But I'm fairly new to all this heavy-lifting compiler configuration stuff and new to boost, so I'm not sure what the current status really is.
I'm trying to run the command:
b2 --build-dir=build toolset=clang --build-type=complete stage
as suggested in section 5.2.4 in www.boost.org/.../getting_started/windows.
This does work to some extent, but watching the logs being printed to screen I see a few worrying things:
statements starting with clang-linux.compile.c++, even though I am on Windows.
12 warnings generated. (or similar). Perhaps these are always -Wunused-local-typedef, but I'm not sure.
2 warnings and 8 errors generated (or similar). Surely if there are errors the build has failed? How am I supposed to know which component of boost has not built properly, and what can I do to fix it?
I'm not clear whether I need MSVC (the compiler), Visual Studio (the IDE), and/or MinGW, or whether I need to manually set flags to pass to the compiler. Perhaps clang+boost is not ready for Windows yet?
Ultimately I want to use boost.python, and at a later date maybe boost.coroutine.
Presumably if I want to use clang for my own projects I need to compile boost with clang too?
bootstrap --with-toolset=clang-win
b2 toolset=clang-win
Make sure that clang.exe is on your PATH.
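For example (the LLVM install path below is just a guess for a default installation; adjust it as needed), building only the libraries mentioned in the question from a Visual Studio command prompt in the Boost root directory might look like:
set PATH=C:\Program Files\LLVM\bin;%PATH%
bootstrap --with-toolset=clang-win
b2 toolset=clang-win --with-python --with-coroutine stage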

Building GCC with MPFR, GMP and MPC

Of course we all know building GCC version >= 4.1.x requires the supplementary packages MPFR, GMP and MPC to be present.
There's a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately, then tell configure where they are installed when building GCC.
2) Download each supporting package, untar it, and move the source into your GCC source directory; the GCC build will then build each of the packages automatically when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping the packages' sources into my GCC source directory and letting the build figure it out?
I've seen it done more frequently in various build scripts by pre-compiling each package to a binary and then telling configure where they are located when gcc is built. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually.
A package maintainer wants to build separately as they want separate packages for mpfr et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.
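To make the two options concrete (version numbers and install prefixes below are placeholders), the only difference is in how the GCC configure step finds the libraries:
# Option 2: in-tree - drop the sources into the GCC source directory and let the build handle them
cd gcc-4.9.0 && ./contrib/download_prerequisites && cd ..
mkdir build && cd build
../gcc-4.9.0/configure --target=arm-none-eabi --prefix=/opt/cross --enable-languages=c,c++
# Option 1: pre-built - install GMP, MPFR and MPC yourself, then point configure at them
../gcc-4.9.0/configure --target=arm-none-eabi --prefix=/opt/cross --enable-languages=c,c++ \
    --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-mpc=/opt/mpc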

Difference between MinGW and the regular GCC?

On the SourceForge page for MinGW, you can download GCC 4.5.2, which is the latest version there. On the GNU mirrors, you can download the GCC 4.6 source and compile it with one of the possible Windows targets:
i[3456789]86-w64-mingw*
i[3456789]86-*-mingw*
x86_64-*-mingw*
Is there a difference between using one of these targets and the traditional GCC for MinGW? Would it make sense to use the regular GCC because it has more up-to-date versions or would it make more sense to wait until an up-to-date GCC for MinGW is released?
As you can see in the README file accompanying the MinGW release of GCC on SourceForge, no local patches were used, and I think this has been the case for quite a while now. So, assuming there have been no changes in the GCC codebase that require new local patches, you can very well download the GCC sources from one of the mirrors and build them yourself.
I have done so myself in the past, especially because I use gfortran, which is under quite heavy development; from time to time I take the most recent snapshot and build it myself so I can use certain new features that were only recently introduced.
(I have to admit that it took some trying to get the build to run without errors, and after a period without problems, I recently ran into some new ones that I couldn't completely smooth out. I will have to try again soon.)
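For reference, a rough sketch of what such a build from the unpatched FSF sources can look like (the paths and the exact version are placeholders, and this assumes binutils and the mingw-w64 runtime for the target are already installed under the same prefix):
tar xf gcc-4.6.0.tar.bz2
mkdir build-gcc && cd build-gcc
../gcc-4.6.0/configure --target=x86_64-w64-mingw32 --prefix=/opt/mingw64 --enable-languages=c,c++,fortran
make && make install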

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from Winarm GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the Winarm version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for Winarm is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gcc's and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference; enough to make the difference between success and failure when trying to use the same source and linker script. Now if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat ld files equally. Both also use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an OCD, then there may be a difference between the implementations of the OpenOCD debugger. The commands sent to the debugger to configure it could also be different.
If you are producing a hex file, then this could differ as well, since the two toolchains are not using the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
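As a concrete (hypothetical) starting point, you can dump each toolchain's built-in defaults and then pin the target options explicitly in your makefile; the AT91SAM7S is an ARM7TDMI core, and the two distributions typically use different tool prefixes (WinARM arm-elf-*, YAGARTO arm-none-eabi-*), so the lines below are only indicative:
arm-elf-gcc -dumpspecs | less
arm-none-eabi-gcc -dumpspecs | less
arm-none-eabi-gcc -mcpu=arm7tdmi -mthumb-interwork -O2 -c main.c -o main.o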

Is there an advantage to upgrading Binutils from 2.16.1 to 2.19? Why?

In the PSPSDK (Homebrew) we are using Binutils 2.16.1 to assemble and link the code for the PlayStation Portable; however, that release is getting quite outdated (3 versions have superseded it). The community and I have been updating GCC and newlib to the latest stable versions, and everything seems to work with the old binutils.
Will GCC produce better code with binutils 2.19? Why?
Will binutils 2.19 produce better elf files and libs than 2.16.1? Why?
binutils 2.19 has a new ELF linker called gold, which is multi-threaded, written in modern C++, and quite a bit faster than the usual ld linker. I'm not sure, however, about the work involved in adopting it.
Other than that, new versions are always a good idea: performance improvements and bug fixes are likely to have been included. I would certainly try it, and if something goes wrong you can still step back.
In general, you don't need to upgrade binutils unless you run into some bug fixed in a later binutils version, or need new features (such as linker build-ids).
In particular, GCC code generation is largely independent of binutils (except for constructs like __thread, which require a certain level of support from binutils).
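If you do upgrade, one quick sanity check (the tool names here assume the usual psp- prefix used by the PSPSDK) is to confirm which assembler and linker the gcc driver actually picks up, and what version they report:
psp-gcc -print-prog-name=as
psp-gcc -print-prog-name=ld
$(psp-gcc -print-prog-name=ld) --version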
