This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to YAGARTO GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the YAGARTO version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to YAGARTO.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
I agree that the GCC binaries and the other tools (ld) should be the same, or close enough that you wouldn't notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or other files being used from WinARM or YAGARTO, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but I don't remember having problems from 4.1.x to 4.3.x.
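One quick way to see whether the two toolchains pull in different startup files or C libraries is to compare what each one actually links. A rough sketch (the exact paths and tool prefix vary per installation):

arm-none-eabi-gcc -print-file-name=libc.a   # which C library the driver would link
arm-none-eabi-gcc -print-search-dirs        # the library search paths it uses
# add -Wl,-Map=output.map to the link step in each toolchain and diff the two map files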
It could also be a subtle difference in compiler behavior: code generation changes from one GCC release to the next, and if your code contains pieces whose semantics are implementation-dependent, it might well bite you this way. Memory layouts of data might change, for example, and code that accidentally relied on them would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
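Assuming your makefile honors a CFLAGS override (an assumption about your build setup), a quick sketch:

make clean && make CFLAGS="-O0 -g"   # if the image works here...
make clean && make CFLAGS="-O2 -g"   # ...but fails here, suspect implementation-dependent code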
Both WinARM and YAGARTO are based on GCC and should treat .ld files the same way. Both also use the GNU make utility, so makefiles will be processed identically.
If you are running your project with an on-chip debugger (OCD), there can be a difference between the toolchains' implementations of the OpenOCD debugger. The commands sent to configure the debugger could also differ.
If you are producing a hex file, the output could differ because the two toolchains do not use the same version of the newlib library.
To be on the safe side, make sure that in both cases the correct binutils come first in the PATH.
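For example (the tool prefix depends on the toolchain, e.g. arm-elf- for WinARM or arm-none-eabi- for YAGARTO):

which arm-none-eabi-gcc arm-none-eabi-ld   # confirm which installation the shell finds first
arm-none-eabi-gcc -print-prog-name=ld      # the ld that the gcc driver will actually invoke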
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
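One way to compare those defaults is to dump each toolchain's effective target settings and diff them; a sketch, assuming arm-none-eabi- prefixed tools and hypothetical output file names:

arm-none-eabi-gcc -Q --help=target > yagarto-defaults.txt   # repeat with the WinARM compiler
diff winarm-defaults.txt yagarto-defaults.txt               # check float-abi and cpu/arch defaults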
I have to compile 3 versions of gcc, say 9, 10 and 11.
My system gcc is version 8 (let's say).
Question: do I gain any advantage by compiling gcc-v9 with gcc-v8, gcc-v10 with gcc-v9, and gcc-v11 with gcc-v10?
Or is there no advantage, and I can compile them all with gcc-v8?
Thank you for pointing out some directions for further research.
GCC has a "bootstrap" build process. So when you try to build gcc-11 with only gcc-8 installed, it will build a temporary "stage 1" version of gcc-11 using gcc-8, then compile gcc-11 again using gcc-11-stage1. Thus no matter what you start with, the version of gcc-11 that comes out of the build process was effectively compiled with itself.
So all that matters is that gcc-8, or whatever "system compiler" was previously installed, is able to build a stage1 version of gcc-11 that runs well enough to compile the stage2 version. It doesn't matter whether your system compiler is good at optimizing, and gcc's source code is deliberately written to use a fairly minimal set of language features (at least for stage 1), so you are not likely to run into trouble with your system compiler having missing or buggy support for obscure corners of the language. Historically, the "system compiler" was often not gcc at all, but some compiler provided by the computer vendor or an unrelated third party, and so one couldn't rely much on its quality; gcc was designed with that in mind.
Theoretically your system compiler could have a bug which miscompiles gcc-11-stage1 in such a way that it appears to work, but itself miscompiles stage2. This is unlikely, and it's even less likely that it would happen in a way that wasn't obvious (e.g. the stage2 compiler simply segfaulting). If worried, there's an option to have stage2 build a stage3 compiler, and then check that both versions are identical. So as long as the build completes, you can be pretty confident that the final installed compiler is fine and unaffected by bugs in the original system compiler. (All that said, a reference to Ken Thompson's "Reflections on Trusting Trust" is obligatory here.)
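A minimal native build looks roughly like this (the version number, paths, and configure options are illustrative only; the stage2/stage3 comparison runs as part of the default bootstrap):

mkdir build && cd build
../gcc-11.3.0/configure --prefix=/opt/gcc-11 --disable-multilib
make bootstrap -j"$(nproc)"   # stage1 built by the system compiler, stage2 by stage1,
                              # stage3 by stage2; stage2 and stage3 objects are then compared
make install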
So in practice, you don't need to worry about the version of gcc used to build a new version. Whatever you happen to have installed already, within reason, will be fine.
I'm currently working on a rather generic communication stack. It gets bytes in on one end, parses the packet and calls a callback.
I want to have this stack in a static library (i.e. libcommstack.a).
The library is aimed towards embedded ARM Cortex-M devices. At the moment we have specified that at least a Cortex-M3 should be used (but it should also work for an M4 or M33).
Right now I'm integrating it into another application to verify that linking it is possible. In the future the idea is that we will ship this .a file to customers so they can build their application around it, without having direct access to our sources (to encapsulate our IP).
We are using GCC ARM v7.2.1 to compile both the library and the application that is linked to it.
The application I'm trying to integrate it with is compiled for a Cortex-M33 with -mfloat-abi=hard -mfpu=fpv5-sp-d16.
The code for the library does not use any floating point and is compiled using -march=armv7-m (both builds use the -mthumb flag).
Linking seemed to all go well, until I actually called a function from the lib. At that point the linker starts to complain:
application.elf uses VFP register arguments, libcommstack.a(somefile.c.obj) does not
failed to merge target specific data of file libcommstack.a(somefile.c.obj)
Since I'm not using floating point in the library, and I don't know up front whether the target application has an FPU (or even uses floats), I'm not sure how to approach this.
I figured there would be two approaches:
Compile a single version of the lib, using an instruction set that all of the microcontrollers understand. I was hoping that this would be the case with ARMv7 (although I'm not yet 100% confident that the M23/M33 also support this).
Compile a lot of different libs for the different flavors based on the different architectures, FPU, etc.
As you can imagine, I would prefer to keep it simple and go for option 1, but I'm not sure how to "convince" the linker to link these two (or perhaps how to convince the compiler NOT to care about floating points for the lib).
Does anyone know if option 1 is feasible and how it can be achieved?
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
Does anyone know if option 1 is feasible
Well, feasible, probably.
how it can be achieved?
Get all the processors you want to support and determine the instructions sets available on all these processors. Then compile for that instruction set.
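For the Cortex-M3/M4/M33 set mentioned in the question, that common denominator would look roughly like the line below. Note, though, that the float ABI still has to match the application at link time, which is exactly what the error above is complaining about:

arm-none-eabi-gcc -mthumb -march=armv7-m -mfloat-abi=soft -c somefile.c -o somefile.o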
But, please don't, that is a workaround.
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
GCC has something like "multilib profiles". See the arm-none-eabi-gcc --print-multi-lib output. If you have newlib installed, you can look in /usr/arm-none-eabi/lib/thumb/ and see the directories there: newlib is compiled once per profile, a separate library is installed for each, and a different one is picked up depending on the compilation options. Compile your library for each of those profiles and package it by putting the resulting libraries in the matching /usr/arm-none-eabi/lib/ subdirectories; the compiler will then pick them up by itself (see the library search paths in gcc -v output). With CMake as the build system, you could, for example, compile and install as follows:
arm-none-eabi-gcc --print-multi-lib |
while IFS=';' read -r dir opts; do
    flags="${opts//@/ -}"   # options are '@'-separated in the output; turn them into '-' flags
    rm -rf builddir         # avoid stale CMake cache between profiles
    cmake -B builddir -DCMAKE_C_FLAGS="$flags" -DCMAKE_INSTALL_LIBDIR="$dir"
    cmake --build builddir
    cmake --install builddir --prefix "/usr/arm-none-eabi/"
done
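For reference, each line of --print-multi-lib output has the form dir;@opt@opt..., roughly like this (the exact set depends on how the toolchain was built):

.;
thumb/v6-m/nofp;@mthumb@march=armv6s-m@mfloat-abi=soft
thumb/v7-m/nofp;@mthumb@march=armv7-m@mfloat-abi=soft
thumb/v7e-m/hard;@mthumb@march=armv7e-m@mfpu=fpv4-sp-d16@mfloat-abi=hard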
I am working on a project that uses a GCC library (SFML), which is not available for clang, as far as I know. I am using COC with vim for code completions, but for C++ it needs clangd. Is there a way to use GCC as my compiler, but still use the clangd language server?
I have also heard that there may be a way to make clang recognize GCC libraries/headers, but I've never been able to make it work right. If somebody could point me in the right direction there, that would be helpful too. But I'm used to GCC (I've been using it since I started programming in C++), so being able to use clangd together with GCC would be preferable.
Yes it is. I do it with ccls (which is clang-based as well).
Given that my installation of clang is not the standard one (I compile it myself, tune it to use libc++ by default, and install it somewhere in my personal space), I have to inject the paths to header files known by clang but unknown by other clang-based tools.
I obtain them with
clang++ -E -xc++ - -Wp,-v < /dev/null
Regarding the other options related to the current project, I make sure to have a compile_commands.json compilation database (generated by CMake, or by bear if I have no other choice), and ccls can work from there. I expect clangd to be quite similar in these respects.
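For a CMake project the database is a single flag away, and bear wraps most other build systems:

cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON   # writes build/compile_commands.json
ln -sf build/compile_commands.json .                # clangd and ccls look in the project root
bear -- make                                        # alternative for plain Makefile projects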
Oops, I answered the wrong question.
But for those who use ccls:
create a .ccls file in your project directory and append --gcc-toolchain=/usr to it (a minimal sketch follows below)
use a tool such as bear, or CMake's CMAKE_EXPORT_COMPILE_COMMANDS option, to generate a compile_commands.json file
see https://github.com/MaskRay/ccls/wiki/FAQ#compiling-with-gcc
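As a minimal sketch of the first step (assuming your gcc is installed under the /usr prefix):

echo '--gcc-toolchain=/usr' >> .ccls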
I am trying to decompile a piece of code and apparently gcc prefers less complicated instructions over the more complex ones. After reading this answer I suspect it's because gcc is trying to reduce instruction dependencies. Is there any way to instruct gcc to ignore these dependencies?
If it's about decompiling, then it is not GCC's business but binutils'. Check your host's binutils options to see whether you can disable printing instructions in their simplified representation.
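For instance, GNU objdump takes disassembler options through -M, and on some targets (RISC-V and MIPS, for example) a no-aliases option turns off the pseudo-instruction forms; whether your target has an equivalent is worth checking in the objdump documentation:

objdump -d -M no-aliases program.elf   # print raw instructions instead of aliases, where supported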
We have Oracle 11 running on HP-UX 11.31 and gcc 4.4.3. It seems that there is no way to link to occi, because it was built with aCC. Is there any workaround for this?
I had the silly idea that I could somehow build a library that basically proxied the connection - build the library with aCC in some way that could be linked to by gcc. Is this possible?
No, there isn't a way around that.
Different C compilers produce interchangeable code because they follow a standard ABI, so you can mix and match their object code more or less with impunity.
However, different C++ compilers have a variety of different conventions that mean their object code is not compatible. These relate to class layout (especially in multiple inheritance hierarchies and the dreaded 'diamond of death'), but also to name mangling conventions and exception handling. The name mangling schemes are deliberately made different so that you cannot accidentally link objects from one compiler with another.
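You can see one half of the mismatch directly: g++ mangles names according to the Itanium C++ ABI, and a compiler with a different scheme emits a different symbol for the same declaration. A quick demonstration:

echo 'void f(int) {}' | g++ -x c++ -c -o f.o -
nm f.o   # shows _Z1fi, the Itanium-ABI mangled name for f(int)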
Generally, if libraries are built using a C++ compiler, you have to link your code using the same - or at least a compatible - C++ compiler. And that almost invariably means a compiler from the same family. For example, you might be able to use G++ 4.5.0 even if the code was built with G++ 4.4.2. However, you won't be able to mix aCC with G++.