I was looking at a question about atomic compare-and-swap and gcc intrinsics, and I noticed that an answer quoted from the gcc manual. (The answer quoted an earlier version of the manual, but I've linked to the latest version's manual because I checked and the text hasn't changed.) However, when I looked at the text in the manual, I saw that it appears to reference Itanium rather than x86:
The following builtins are intended to be compatible with those
described in the Intel Itanium Processor-specific Application Binary
Interface, section 7.4. As such, they depart from the normal GCC
practice of using the “__builtin_” prefix, and further that they are
overloaded such that they work on multiple types.
My question is: why does gcc reference Itanium documentation, and does that affect how the intrinsics work on x86? Are there any differences, or is it safe to assume that even though the gcc manual references the Itanium ABI, everything the gcc manual describes will work correctly on an x86 system?
My understanding is that a lot of gcc's ABI decisions (dating back to the egcs fork) were based on the ABI specs for the good ship Itanic. This included the name-mangling conventions for C++ symbols. There was a large effort (Project Trillian) to have IA-64 Linux (and GCC) ready to go when the actual processor became available. The semantics are intended to be platform-independent, though these builtins are being superseded by the __atomic builtins.
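For what it's worth, here is a minimal sketch of both families on x86 (the variable names are mine; __sync_bool_compare_and_swap and __atomic_compare_exchange_n are the documented GCC builtins):

    #include <stdio.h>

    int main(void)
    {
        int value = 0;

        /* Legacy Itanium-style builtin: implies a full barrier and
           works the same way on x86. Sets value to 1 if it is still 0. */
        if (__sync_bool_compare_and_swap(&value, 0, 1))
            printf("first CAS succeeded, value = %d\n", value);

        /* The newer __atomic replacement, with explicit memory ordering. */
        int expected = 1;
        if (__atomic_compare_exchange_n(&value, &expected, 2, 0,
                                        __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
            printf("second CAS succeeded, value = %d\n", value);

        return 0;
    }

On x86 both compile down to a lock cmpxchg; the Itanium-flavoured documentation doesn't change the generated code.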
Related
Is it safe to link objects generated from sources compiled with different GCC versions into a shared library?
I assume not, but what if the GCC versions used have no differences with regard to code generation and optimization? Is there any way to know which GCC versions are not backward compatible?
My question also concerns the binaries. I looked at
https://gcc.gnu.org/onlinedocs/gcc/Compatibility.html
From my understanding, different GCC versions should be compatible as long as they conform to the same ABI.
After doing some research on the web and reading several GCC release notes, it seems that GCC is backward compatible as long as there is no ABI change.
In general, this would be stated in the release notes.
I also did some experiments using different GCC compilers and linkers (by different I mean from different versions of GCC), and I got linker errors when they were incompatible (different ABI versions).
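As a quick sanity check when mixing toolchains, you can print the compiler version and the ABI revision a translation unit was built with; a minimal sketch (the __GXX_ABI_VERSION guard is there because that macro may not be defined on every configuration):

    #include <stdio.h>

    int main(void)
    {
        /* Version of the compiler that built this translation unit. */
        printf("Built with GCC %d.%d.%d\n",
               __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);

    #ifdef __GXX_ABI_VERSION
        /* Revision of the (Itanium-derived) C++ ABI this compiler targets. */
        printf("ABI version: %d\n", __GXX_ABI_VERSION);
    #endif
        return 0;
    }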
I'm learning assembly.
I know that gcc supports AT&T syntax, but I want my program to run on Intel processors.
So would it work on Intel processors regardless of the syntax, or must it be Intel syntax to work on an Intel platform? I'm confused.
Thanks.
AT&T vs Intel syntax has been covered many times, here and elsewhere.
Assembly language is a language defined by the assembler: the particular program that converts the ASCII assembly language into machine code for the target you are interested in. Unlike, say, C or C++, where a standard defines the language, you can have 7 assemblers for the same target processor, and there is no reason to assume the assembly languages have to be compatible in any way, shape, or form. It is the machine code they produce that matters, and if that machine code matches your target, then use the tool you like best, for whatever reason.
In this case there was the Intel format, as defined by the Intel documentation and supported by the Intel assembler, and then supported (sort of) by other assemblers. The instructions were close or the same; the other assemblers might have had a compatibility mode, and maybe their own directives. For example as86 (or was it asm86 or a86?), tasm, masm, and currently nasm. And then you had this AT&T syntax: someone somewhere (AT&T?) decided to make an assembler with a goofy assembly language that specifically didn't match the Intel documentation at all, and that became the Intel vs AT&T syntax thing. The GNU assembler is well known for messing up existing assembly languages as well, and it uses AT&T with its own nuances thrown in. It does have an Intel syntax mode (the .intel_syntax noprefix directive) you should check out.
The question you should be asking is about the target: assemblers like the GNU assembler for x86 are often capable of generating code for various flavors of x86, so you need to make sure the output matches your computer (it most likely does if you don't add any target-specific options).
There is no reason to assume an AT&T-syntax assembler (the GNU assembler, gas or as) would not work.
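To make it concrete that the syntax is just notation, here is a minimal sketch using GCC extended inline assembly in AT&T syntax, assuming an x86 target; the machine code it assembles to is exactly what an Intel CPU executes, and gcc -masm=intel (or the .intel_syntax noprefix directive in GNU as) selects the other notation for the very same instructions:

    #include <stdio.h>

    static int add_two(int a, int b)
    {
        int result = a;
        /* AT&T notation: source operand first, destination last;
           %0 and %1 refer to the operands of this extended-asm template. */
        __asm__("addl %1, %0" : "+r"(result) : "r"(b));
        return result;
    }

    int main(void)
    {
        printf("%d\n", add_two(2, 3)); /* prints 5 */
        return 0;
    }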
The build system on my cross-platform project has a command line for Intel's Windows C++ that may or may not have /Qstd=c++0x as a result of detecting the compiler feature set. For most of the code base this works well; however, for a small number of CUDA files, I need to disable the more recent dialects of C++ to suit the constraints of the nvcc wrapper compiler.
How should I phrase something like /Qstd=c++98 or /Qnostd=c++0x at the end of the command line so that it overrides any earlier specification of the C++ dialect?
Edit: Having been educated that these flags are actually for the Intel compiler, I have found that appending /Qstd=c++98 is probably the right approach.
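For example, something along these lines (icl and the file name are placeholders; the assumption is that the later /Qstd on the command line wins):

    icl /Qstd=c++0x ...existing flags... /Qstd=c++98 /c cuda_shim.cpp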
You can't for MSVC. Each MSVC version implements its own interpretation of something between two or three standards, and you're stuck with it.
The options you quote are for the Intel Compiler (see here). If possible, I'd suggest using the Intel Compiler then.
I fail to see how disabling the recent dialects in the C++ compiler will please the nvcc wrapper compiler... just don't write C++11 code and you'll be fine, right?
This question emerged from this question.
The problem is that there is an NVidia driver for Linux, compiled with GCC 4.5, while the kernel is compiled with GCC 4.6. The combination doesn't work because of the version-number difference between the GCCs (the installer says the driver won't work - for details please visit the link above).
Could one disguise a binary compiled with GCC 4.5 as a binary compiled with GCC 4.6? If it is possible, under what circumstances would it work well?
Your problem is called ABI: Application Binary Interface. This is a set of rules covering (among other things) how functions get their arguments (ordering, padding of types on the stack), how functions are named so the linker can resolve symbols, and the padding/alignment of fields in structures.
GCC tries to keep the ABI stable between compiler versions but that's not always possible.
For example, GCC 4.4 fixed a bug in packed bit-fields, which means that old and new code can no longer read structures using this feature consistently. If you mixed code built before and after 4.4, data corruption would occur without any crashes.
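To make that concrete, here is a minimal sketch based on the example in the GCC 4.4 release notes (GCC 4.4 also added -Wpacked-bitfield-compat to warn about this layout change):

    #include <stdio.h>

    /* Layout of this struct changed in GCC 4.4: earlier compilers ignored
       the packed attribute on char bit-fields, so the offset of b (and
       possibly the total size) differs between old and new objects. */
    struct frame {
        char a : 4;
        char b : 8;
    } __attribute__((packed));

    int main(void)
    {
        printf("sizeof(struct frame) = %zu\n", sizeof(struct frame));
        return 0;
    }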
There is no indication in the 4.6 release notes that the ABI was changed, but that's something the Linux kernel can't know - it just reads the compiler version used to compile the code, and if the first two numbers differ, it assumes that running the code isn't safe.
There are two solutions:
You can compile the Nvidia driver with the same compiler as the kernel. This is strongly recommended.
You can patch the version string in the binary. This will trick the kernel into loading the module but at the risk of causing data corruption to internal data structures.
I have tried to understand the naming conventions behind the gcc cross-compilers, but there seem to be conflicting answers. I have the following three cross-compilers on my system:
arm-none-linux-gnueabi (CodeSourcery ARM compiler for linux)
arm-none-eabi (CodeSourcery ARM compiler for bare-metal systems)
arm-eabi (Android ARM compiler)
When reading through the GNU libtool manual, it specifies the cross-compiler naming convention as:
cpu-vendor-os (os = system / kernel-system)
This does not seem completely accurate with the compilers in my system. Is the information in the GNU manual old, or have the compiler distributors simply stopped following it?
The naming comes down to this:
arch-vendor-(os-)abi
So for example:
x86_64-w64-mingw32 = x86_64 architecture (=AMD64), w64 (=mingw-w64 as "vendor"), mingw32 (=win32 API as seen by GCC)
i686-pc-msys = 32-bit (pc=generic name) msys binary
i686-unknown-linux-gnu = 32-bit GNU/linux
And your example specifically:
arm-none-linux-gnueabi = ARM architecture, no vendor, linux OS, and the gnueabi ABI.
The arm-eabi one is, like you say, used for Android native apps.
One caveat: Debian uses a different naming scheme, just to be difficult, so be careful if you're on a Debian-based system, as they have different names for e.g. i686-pc-mingw32.
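As an aside, you can ask any GCC-based toolchain which triplet it targets with the -dumpmachine flag; for example (the output below is from a hypothetical setup):

    $ arm-none-linux-gnueabi-gcc -dumpmachine
    arm-none-linux-gnueabi
    $ gcc -dumpmachine
    x86_64-pc-linux-gnu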
The fact is that there is a rule, and it is the one described above by rubenvb. But in several cases the naming you will find does not respect it, for example:
gcc-pippotron-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
gcc-pippotron-arm-none-eabi-4.8-2013.11_linux
The two names above are examples that do not respect the rule.