I am trying to compile a program with Clang 5.1, as included in Xcode 5.1. This program is an early-stage boot loader, and as such its execution environment is very limited. I must pass the -mfpmath=387 compiler flag to produce correct assembly. When I upgraded to Xcode 5.1, I received the following error:
error: the '387' unit is not supported with this instruction set
Does anyone know what this error means? Did the syntax for this flag change, and, if so, what is the new syntax? (I am also interested in knowing what -mfpmath=387 does. I copied it verbatim from a Makefile in boot-132, but never really understood its effect on the compilation procedure.)
As it turns out, for Clang to accept -mfpmath=387, I had to also pass -mno-sse. I found this out by grepping Clang’s source. I still want to know what -mfpmath=387 does, though.
Educated guess:
The 387 refers to the Intel x87 floating-point instruction set (http://en.wikipedia.org/wiki/X87), hence the 'fpmath' bit of the compiler flag.
It's likely that the flag tells the compiler to use the x87 unit for floating-point math instead of the newer SSE units, which would also explain why -mno-sse has to accompany it.
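That guess fits the boot-loader context, too: SSE instructions fault until the OS sets the relevant CR0/CR4 enable bits, so an early-stage loader has to keep floating-point work on the x87 unit, which is exactly what -mfpmath=387 -mno-sse does. A minimal sketch to see the difference (the file and function names are mine, not from boot-132):

    /* fpu.c: compile the same function both ways and compare the assembly.
     *
     *   clang -S -m32 -mno-sse -mfpmath=387 fpu.c   # x87: flds/fmuls/fadds, stack-based
     *   clang -S -m32 fpu.c                         # SSE: mulss/addss on xmm registers
     */
    float axpy(float a, float x, float y)
    {
        return a * x + y;   /* one multiply, one add */
    }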
I have been trying to work through this tutorial and I always get stuck in the second build of GCC when making the cross-toolchain: it errors out saying that I am attempting to call a poisoned calloc. I have gone through several patches, and what they all seem to do is #include the offending system header (in this case pthread.h) earlier in the source code. Since there are no patches for my particular problem, I have gone ahead and emulated their solution in my case. While this works (compilation now gets past that point and only fails later because I am missing some ISL files), it feels like a hack, and I suspect that the root problem lies further back in the build.
Thus, I wanted to ask:
Why are symbols poisoned? Why would the GCC maintainers want some symbols not to be used?
What are the general causes for this problem? Is it really just a bug or is this a problem that arises in more general situations?
I am more interested in the generalities of this issue, but if it helps, I am using the latest release of Alpine Linux (with gcc 12.2.1) to compile gcc 11.2.0 for the same target architecture as the host (x86-64).
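As I understand it, GCC's own system.h poisons the raw allocators (calloc, strdup, and friends) precisely so that the compiler's sources are forced through checked wrappers such as xcalloc, which abort cleanly on out-of-memory instead of returning NULL. The poison pragma applies to every mention of the identifier after it is seen, including mentions inside system headers, which is why the patches that #include pthread.h earlier make the error disappear. A minimal sketch of the mechanism (my file, not GCC's actual source):

    #include <stdlib.h>   /* headers mentioning calloc must come before the pragma */

    #pragma GCC poison calloc   /* from here on, any use of the identifier is a hard error */

    void *make_buffer(size_t n)
    {
        /* return calloc(n, 1);   <- would fail with
         *   error: attempt to use poisoned "calloc"
         * and so would a later #include of a header that merely mentions calloc. */
        return malloc(n);         /* only the poisoned identifier is banned */
    }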
I'm attempting to port a large set of modules from AIX to Linux. Unfortunately, the AIX xlc compiler allowed you to define a static function and use it prior to the definition with no prototype. Not good, but at least you get the proper static scope. In any case, the code is there, and I can't get it to compile on Linux without explicitly adding a static prototype.
So, is there any way to suppress the "static declaration follows non-static declaration" error in gcc (or demote it to a warning), or do I have to edit each of these modules to add prototypes wherever they're missing? As I understand it, this is a case where the standard leaves the behavior undefined, so it seems harsh if gcc offers no way to relax its own rule and accept code that compiles elsewhere, no?
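For concreteness, the failing pattern looks like this (a sketch with made-up names, not the original AIX code):

    /* With no prior declaration, the call below creates an implicit
     * declaration 'extern int helper()'; the later static definition
     * then clashes with it:
     *   error: static declaration of 'helper' follows non-static declaration
     * The one-line fix is a prototype up front:
     *   static int helper(int);
     */
    int api_entry(void)
    {
        return helper(42);   /* implicit, non-static declaration happens here */
    }

    static int helper(int x)
    {
        return x * 2;
    }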
This has been a hard error in GCC since 2004. The only option to get this to compile is to downgrade to a really old version of GCC. I verified that GCC 3.4.6 still compiles this, but GCC 4.0.3 does not.
Of course, depending on your target, getting GCC 3.4 to work might be close to impossible.
I recently compiled GotoBLAS2 (on Mac OS X 10.6) and linked it to my code, leading to all kinds of wrong results. I ran everything through valgrind and noticed some illegal reads inside GotoBLAS. Looking at it more carefully, I found that GotoBLAS is compiled with the -m128bit-long-double alignment option. As soon as I compiled my code with this flag as well (although I don't use any long doubles at all), everything worked, giving correct results without any valgrind complaints.
Now my question is:
Do I have to compile all other library dependencies using the same alignment flag?
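To make the hazard concrete, here is a minimal sketch (the struct is hypothetical, not from GotoBLAS): on 32-bit x86, -m96bit-long-double and -m128bit-long-double give long double a size of 12 and 16 bytes respectively, so any aggregate containing one changes layout with the flag.

    #include <stdio.h>

    /* If this struct crossed a library boundary, code built with
     * -m96bit-long-double and code built with -m128bit-long-double
     * would disagree about its size and about where 'value' lives:
     * exactly the kind of mismatch valgrind reports as illegal reads. */
    struct sample {
        char        tag;
        long double value;
    };

    int main(void)
    {
        printf("sizeof(long double)   = %zu\n", sizeof(long double));
        printf("sizeof(struct sample) = %zu\n", sizeof(struct sample));
        return 0;
    }

So in general, yes: a flag that changes type size or alignment is an ABI decision, and every component that exchanges affected types across a boundary needs to agree on it. Why it bit you without any long double in your own code is hard to say from a distance, but presumably some type crossing the GotoBLAS interface changes layout under the flag.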
I'm trying to port some very old fortran code to windows. I'd like to use mingw and f2c, which has no problem converting the code to usable C on OS X and Ubuntu. I used f2c.exe as distributed by netlib on a fresh install of mingw, and it translated the code fine. I have a "ported" version of libf2c that seems to still contain some unresolved references -- mostly file i/o routines (do_fio, f_open, s_wsfe, e_wsfe) and, peculiarly, one arithmetic routine (pow_dd). To resolve these issues, I tried to build libf2c from source, but ran into an issue during the make process. The make proceeds to dtime_.c, but then fails due to a dependency on sys/times.h, which is no longer a part of the mingw distro. There appears to be a struct defined in times.h that defines the size of a variable in dtime_.c, specifically t and t0 on lines 53 and 54 (error is "storage size of 't' isn't known"; same for t0).
The makefile was modified to use gcc, and make invoked with no other options passed.
Might anyone know of a workaround for this issue? I feel confident that once I have a properly compiled libf2c, I'll be able to link it with gcc and the code will work as it does on Linux and OS X.
FOLLOW-UP: I was able to build libf2c.a by commenting out the time-related files in the makefile (my code does not contain any time-related functions, so I don't think it will matter). I copied it to a non-POSIX search directory as shown by -print-search-dirs, specifically C:\MinGW\lib\gcc\mingw32\3.4.5. That seems to have fixed the unresolved references, although the need to eliminate the time files does concern me. While my code is now working, the original question stands: how should makefiles that call for sys/times.h be handled under mingw?
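For anyone who would rather keep the time files than comment them out, one possible workaround is a minimal stand-in header. This is a sketch that assumes dtime_.c only needs struct tms and times(), approximated crudely via clock(); drop it somewhere like include/sys/times.h and add that directory to the include path (e.g. -Iinclude):

    /* sys/times.h: minimal stand-in for MinGW builds of libf2c.
     * Assumes callers only need struct tms and times(). */
    #ifndef SYS_TIMES_H_STUB
    #define SYS_TIMES_H_STUB

    #include <time.h>

    struct tms {
        clock_t tms_utime;   /* user CPU time */
        clock_t tms_stime;   /* system CPU time */
        clock_t tms_cutime;  /* user CPU time of terminated children */
        clock_t tms_cstime;  /* system CPU time of terminated children */
    };

    #ifndef CLK_TCK              /* dtime_.c may scale by CLK_TCK */
    #define CLK_TCK CLOCKS_PER_SEC
    #endif

    /* Crude approximation: report all elapsed processor time as user time. */
    static clock_t times(struct tms *buf)
    {
        clock_t c = clock();
        buf->tms_utime  = c;
        buf->tms_stime  = 0;
        buf->tms_cutime = 0;
        buf->tms_cstime = 0;
        return c;
    }

    #endif /* SYS_TIMES_H_STUB */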
Are you sure the MinGW installation went correctly? As far as I can tell, the sys/times.h header is still there, in the package mingwrt-3.18-mingw32-dev.tar.gz. I'm not familiar with the GUI installer, but perhaps you have to tick a box for the mingwrt dev component.
This question must apply to so few people...
I am busy migrating my ARM C project from Winarm GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the Winarm version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for Winarm is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough that you wouldn't notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces whose semantics are implementation-dependent, that might well bite you in this way. Memory layouts of data can change, for example, and code that accidentally relied on them would break.
I've seen that happen plenty of times.
Try compiling with different optimization options and see whether that makes a difference.
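A classic instance of the kind of implementation-dependent code meant above (the snippet is mine, not from the project):

    #include <stdio.h>

    static int counter = 0;
    static int next(void) { return ++counter; }

    int main(void)
    {
        /* The evaluation order of the two arguments is unspecified in C:
         * one gcc release may print "1 2" and another "2 1". Code that
         * depends on either order can break on a toolchain upgrade even
         * though the makefile and linker script never changed. */
        printf("%d %d\n", next(), next());
        return 0;
    }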
Both WinARM and YAGARTO are based on gcc and should treat .ld files the same way. Both also use the GNU make utility, so makefiles will be processed identically. You can compare the two toolchains here and here.
If you are running your project through an on-chip debugger, the two toolchains may ship different builds of the OpenOCD debugger, and the commands sent to configure it could also differ.
If you are producing a hex file, the output could differ because the two toolchains do not use the same version of the newlib library.
To be on the safe side, make sure that in both cases the correct binutils come first in the PATH.
If I were you, I'd check the compilation and linker flags, specifically the defaults. It is very common for different toolchains to have different default ABIs or floating-point conventions. It might even be compiling with an instruction set extension that your CPU doesn't support.
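One cheap way to compare those defaults is to let each compiler's predefined macros tell you. Here is a sketch of such a probe (the macro names are standard GCC/ARM ones, but verify them against your versions; diffing the full macro sets via arm-elf-gcc -dM -E - </dev/null also works):

    /* probe.c: compile (no need to link) with each toolchain, using
     * whatever prefix it installs (arm-elf-gcc, arm-none-eabi-gcc, ...):
     *   arm-elf-gcc -c probe.c
     * Any #error that fires exposes a default worth pinning explicitly
     * in the makefile. */
    #ifndef __SOFTFP__
    #error "defaulting to a hard-float ABI; the AT91SAM7S has no FPU"
    #endif
    #ifdef __thumb__
    #error "defaulting to Thumb; check -mthumb/-marm against the startup code"
    #endif
    #ifndef __ARM_ARCH_4T__
    #error "not defaulting to ARMv4T; the AT91SAM7S core is an ARM7TDMI"
    #endif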