I recently compiled GotoBLAS2 (Mac OS X 10.6) and linked it to my code, which led to all kinds of wrong results. I ran everything through valgrind and noticed some illegal reads coming from GotoBLAS. Looking more closely, I found that GotoBLAS is compiled with the -m128bit-long-double alignment option. As soon as I compiled my own code with this flag as well (although I don't use long doubles at all), everything worked, giving correct results without any valgrind complaints.
Now my question is:
Do I have to compile all other library dependencies using the same alignment flag?
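For reference, a small probe like the one below (the struct is purely illustrative) can be compiled once with and once without -m128bit-long-double to check whether the flag actually changes the size or alignment of long double in a given build:

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative only: a struct with a long double member, used to
     * observe how the compiler sizes and aligns long doubles. */
    struct probe {
        char c;
        long double x;
    };

    int main(void)
    {
        printf("sizeof(long double)        = %zu\n", sizeof(long double));
        printf("offsetof(struct probe, x)  = %zu\n", offsetof(struct probe, x));
        printf("sizeof(struct probe)       = %zu\n", sizeof(struct probe));
        return 0;
    }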
I have been trying to go through this tutorial and I always get stuck in the second build of GCC when making the cross-toolchain. It errors out saying that I am attempting to call a poisoned calloc. I have gone through several patches, and what they all seem to do is just #include the offending system header (in this case pthread.h) earlier in the source code. Since there are no patches for my particular problem, I have gone ahead and emulated their solution in my case. While this works (compilation now fails only because I am missing some ISL files), it feels like a hack, and I suspect that the root problem lies further back in the build.
Thus, I wanted to ask:
Why are symbols poisoned? Why would the GCC maintainers want some symbols not to be used?
What are the general causes for this problem? Is it really just a bug or is this a problem that arises in more general situations?
I am more interested in the generalities of this issue, but if it helps, I am using the latest release of Alpine Linux (with GCC 12.2.1), trying to compile GCC 11.2.0 for the same target architecture as the host (x86-64).
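For background on the mechanism (leaving the details of gcc's own headers aside): GCC has a #pragma GCC poison directive that forbids any later use of the listed identifiers, and the compiler's internal system.h applies it to raw allocation functions such as calloc so that the code base sticks to its wrapped allocators. That is also why the usual fix is to include the offending system header before the poisoning header: a declaration seen before the pragma is fine, while any textual occurrence of the identifier afterwards is an error. A minimal, self-contained illustration (nothing here is taken from the real gcc sources):

    #include <stdlib.h>        /* calloc is declared here, before the poison */

    #pragma GCC poison calloc  /* from this point on, the token "calloc" is forbidden */

    int main(void)
    {
        int *p = malloc(4 * sizeof *p);   /* fine: malloc is not poisoned here */

        /* Uncommenting the next line reproduces the kind of failure seen in
         * the GCC build:
         *     error: attempt to use poisoned "calloc"
         * The same error appears if a header that mentions calloc (for example
         * pthread.h on some C libraries) is included after the pragma, which is
         * why the existing patches simply move such includes earlier. */
        /* int *q = calloc(4, sizeof *q); */

        free(p);
        return 0;
    }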
I am trying to compile a program with Clang 5.1, as included in Xcode 5.1. This program is an early-stage boot loader, and as such its execution environment is very limited. I must pass the -mfpmath=387 compiler flag to produce correct assembly. When I upgraded to Xcode 5.1, I received the following error:
error: the '387' unit is not supported with this instruction set
Does anyone know what this error means? Did the syntax for this flag change, and, if so, what is the new syntax? (I am also interested in knowing what -mfpmath=387 does. I copied it verbatim from a Makefile in boot-132, but never really understood its effect on the compilation procedure.)
As it turns out, for Clang to accept -mfpmath=387, I had to also pass -mno-sse. I found this out by grepping Clang’s source. I still want to know what -mfpmath=387 does, though.
Educated guess:
The 387 refers to the Intel x87 floating point instruction set (http://en.wikipedia.org/wiki/X87), hence the 'fpmath' part of the compiler flag.
It's likely that the flag is telling the compiler to generate floating point code for the x87 (387) unit rather than for the newer SSE units.
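As a concrete, purely illustrative example: a trivial function like the one below compiles to x87 instructions (fld/fmul, with the intermediate result held in the 80-bit x87 registers) when built with -mfpmath=387 -mno-sse, and to SSE scalar instructions (mulsd) when SSE math is used. The file name and exact commands in the comment are only an assumption about how one might inspect the difference:

    /* fpmath_demo.c - hypothetical file name. Compare the generated assembly,
     * for example:
     *     clang -S -O2 -m32 -mfpmath=387 -mno-sse fpmath_demo.c -o x87.s
     *     clang -S -O2 -m32                       fpmath_demo.c -o sse.s
     */
    double scale(double x, double y)
    {
        /* With -mfpmath=387 this multiply is performed on the x87 register
         * stack; with SSE math it becomes a single mulsd on 64-bit doubles. */
        return x * y;
    }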
I am using the 2011 Q3 ARM GCC compiler with an ARM Cortex-M0 platform. In my current application, if I do not use optimizations (compiling with -O0), my code is too large and doesn't fit. If I use any optimization level (-O1, -O2, -O3, -Os), the switch-case statements do not work. I have verified that the code inside this block is not getting executed, as simple GPIO toggling operations are not coming through.
I read somewhere that optimization levels of -O1 and above can have issues with goto code. However, I can't find a solution to this problem anywhere.
I also tried using the latest GCC ARM compiler, but my tools are not compatible with this release.
Any help on this matter is appreciated!
Try splitting your source code in two: the code that you don't want to optimize (e.g. accesses to memory-mapped regions such as GPIO) and the rest of it.
Compile each source file with a different optimization level; that way you get a working build while keeping the "fragile" code unoptimized.
Then, when you debug the code, you can keep working with the already-built object (.o) file and recompile only the rest.
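A rough sketch of what the "fragile" file could look like is below; the register address, names, and compiler invocations are made up for illustration. The idea is simply that this one file is built with -O0 while everything else keeps -Os:

    /* gpio_fragile.c - hypothetical file that is kept at -O0 while the rest
     * of the project is built size-optimized, for example:
     *     arm-none-eabi-gcc -mcpu=cortex-m0 -mthumb -O0 -c gpio_fragile.c
     *     arm-none-eabi-gcc -mcpu=cortex-m0 -mthumb -Os -c main.c
     */
    #include <stdint.h>

    /* Made-up register address; use the definition from your device header. */
    #define GPIO_OUT (*(volatile uint32_t *)0x50000000u)

    void gpio_toggle(uint32_t pin_mask)
    {
        /* The volatile access ensures the read-modify-write is not removed
         * or merged by the optimizer, whichever -O level is ultimately used. */
        GPIO_OUT ^= pin_mask;
    }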
I have a structure defined in header files, and some elements of that structure are guarded by a compile-time flag. The file is compiled using arm-gcc with the -Otime flag. When I run it, it gives a segmentation fault, but when I remove the -Otime flag from the makefile, the code runs perfectly.
I am befuddled by this observation. Can someone share some insight into this issue?
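To illustrate the kind of layout being described (FEATURE_X is a hypothetical stand-in for the real compile-time flag):

    /* shared.h - sketch of a structure whose layout depends on a
     * compile-time flag; FEATURE_X and the members are made up. */
    #include <stdint.h>

    struct config {
        uint32_t id;
    #ifdef FEATURE_X
        uint32_t extra;      /* only present when FEATURE_X is defined */
    #endif
        uint32_t checksum;   /* offset and total size change with the flag */
    };

If different translation units (or a library and the application) are built with different settings of such a flag, they can disagree about the structure's size and member offsets, which is the kind of mismatch that only shows up at run time.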
Unfortunately there is never one easy answer for these scenarios. Certainly in the past, whenever I have encountered such problems, it's been down to the code, the compiler, or a combination of both. It would help if you could mention the version of the compiler you are using. It could simply be an -Otime bug in the compiler, which is not unheard of.
This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to YAGARTO GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the YAGARTO version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to YAGARTO.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the GCC binaries and the other tools (ld) should be the same, or close enough that you would not notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference; enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or YAGARTO, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
I've seen that happen a number of times.
Try compiling with different optimization options and see if that makes a difference.
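One classic example of code that only works by accident is a read of an uninitialized variable that happens to contain a usable value under one compiler's code generation; a made-up sketch:

    #include <stdio.h>

    int main(void)
    {
        int flag;    /* never initialized: its value is whatever the generated
                      * code happened to leave in that register or stack slot */

        /* Undefined behavior: this branch may or may not be taken, and the
         * outcome can change with the compiler version or optimization level. */
        if (flag)
            printf("doing the optional setup\n");

        printf("done\n");
        return 0;
    }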
Both WinARM and YAGARTO are based on GCC and should treat .ld files the same way. Both also use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an on-chip debugger, then there can be differences between the OpenOCD implementations shipped with each toolchain, and the commands sent to the debugger to configure it could also differ.
If you are producing a hex file, it could differ as well, because the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you, I'd check the compilation and linker flags, specifically the defaults. It is very common for different toolchains to have different default ABIs or floating point conventions. It might even be compiling with an instruction set extension that isn't supported by your CPU.
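One rough way to compare those defaults is to preprocess a small file with each toolchain and diff the predefined macros, e.g. arm-elf-gcc -E -dM probe.c | sort (using whatever prefix your toolchains install), or to build and run a probe like the one below. The macro names are the usual GCC/ARM ones, but their availability varies by GCC version, so treat this as a sketch rather than a definitive check:

    /* abi_probe.c - illustrative: compile with each toolchain (no extra flags)
     * to see which instruction set and floating point conventions it defaults to. */
    #include <stdio.h>

    int main(void)
    {
    #ifdef __thumb__
        puts("default code generation: Thumb");
    #else
        puts("default code generation: ARM");
    #endif

    #ifdef __VFP_FP__
        puts("doubles use the VFP format");
    #else
        puts("doubles use the older FPA format");
    #endif

    #ifdef __ARM_EABI__
        puts("target uses the ARM EABI");
    #else
        puts("target uses the old APCS ABI");
    #endif
        return 0;
    }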