I have tried to understand the naming conventions behind the gcc cross-compilers, but there seem to be conflicting answers. I have the following three cross-compilers on my system:
arm-none-linux-gnueabi (CodeSourcery ARM compiler for Linux)
arm-none-eabi (CodeSourcery ARM compiler for bare-metal systems)
arm-eabi (Android ARM compiler)
When reading through the GNU libtool manual, it specifies the cross-compiler naming convention as:
cpu-vendor-os (os = system / kernel-system)
This does not seem entirely consistent with the compilers on my system. Is the information in the GNU manual old, or have the compiler distributors simply stopped following it?
The naming comes down to this:
arch-vendor-(os-)abi
So for example:
x86_64-w64-mingw32 = x86_64 architecture (=AMD64), w64 (=mingw-w64 as "vendor"), mingw32 (=win32 API as seen by GCC)
i686-pc-msys = 32-bit (pc=generic name) msys binary
i686-unknown-linux-gnu = 32-bit GNU/Linux
And your example specifically:
arm-none-linux-gnueabi = ARM architecture, no vendor, linux OS, and the gnueabi ABI.
The arm-eabi one is, like you say, used for Android native apps.
One caveat: Debian uses a different naming scheme, just to be difficult, so be careful if you're on a Debian-based system, as they have different names for e.g. i686-pc-mingw32.
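If you want to check what triplet a particular compiler targets, you can ask the compiler itself. A quick sketch (the outputs shown here are illustrative, and the binary names are whatever your cross compilers are actually called):

    $ gcc -dumpmachine
    x86_64-pc-linux-gnu
    $ arm-none-linux-gnueabi-gcc -dumpmachine
    arm-none-linux-gnueabi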
The fact is that there is a rule, and it is the one described above by rubenvb. But in several cases the naming you will find does not respect it, as in:
gcc-pippotron-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
gcc-pippotron-arm-none-eabi-4.8-2013.11_linux
The two names above are examples that do not respect the rule.
I've done all my development work for an embedded Linux device (Gumstix) in a Linux VM, and I would like to move the code base to my host Linux computer. The cross-compiler was set up before I inherited the codebase, so I'm not sure how it was configured. I have some questions about setting up the cross-compiler.
The compiler on the VM is arm-linux-gnueabihf-gcc.
Is the cross-compiler kernel specific? (We're using Linux kernel 3.17.)
Is the cross-compiler target-device specific, i.e. do I need to use a Gumstix-specific compiler, or is arm-linux-gnueabihf-gcc satisfactory? Does this compiler need to be configured manually?
Is there a way to see/import the configuration settings of the working VM compiler?
Does arm-linux-gnueabihf-gcc use the same standard library source code as the gcc compiler?
I've seen varying approaches to setting up cross-compilers on the web. Where can I find comprehensive information on setting up a cross-compiler (more than a how-to; something that also explains why)?
Thank you
The cross compiler is neither kernel specific nor target-device specific. It is specific to the architecture of the SoC or processor you are targeting. So if your current compiler is arm-linux-gnueabihf-gcc, that implies it can compile code for ARM32 processors that have hardware floating-point support. Depending on your host Linux system, you can install a similar compiler using the package manager, or you may also download it from here.
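For example, on a Debian/Ubuntu host, installing and smoke-testing such a compiler might look roughly like this (the package name and file output are illustrative; check your distro):

    $ sudo apt-get install gcc-arm-linux-gnueabihf
    $ cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello from ARM"); return 0; }
    EOF
    $ arm-linux-gnueabihf-gcc -o hello hello.c
    $ file hello
    hello: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), ...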
Different people will probably recommend different approaches, and disagree on whether a particular approach is easy or difficult. Regardless, I tend to recommend building the complete target image and generating an SDK for development using something like Yocto/OpenEmbedded or Buildroot.
Not sure exactly what you mean by Q4.
I am a beginner in programming and wanted to download a good C compiler to practice coding. So I thought of GCC and did some small research on it. I read a Wikipedia article on it. The article mentioned something about target architecture, which I do not know. Can anyone tell me what it means, and point me to any source I can refer to for more information? Thanks in advance.
The target architecture is the architecture which the compiler creates binary files for.
Common architectures are: i386 (Intel 32-bit), x86_64 (Intel 64-bit), armv7, arm64, etc...
GCC compiles C code (after the preprocessing stage) to assembly code,
and the assembly code varies depending on the CPU architecture.
The assembly code is then "assembled" to a binary file.
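You can observe this pipeline yourself by stopping GCC at each stage. A minimal sketch (file names are arbitrary):

    $ gcc -S hello.c -o hello.s   # compile to architecture-specific assembly
    $ as hello.s -o hello.o       # "assemble" into a binary object file
    $ gcc hello.o -o hello        # link into an executable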
Something to keep in mind:
Two binary files are not guaranteed to be compatible across different operating systems despite sharing the same architecture.
A program compiled on Ubuntu Linux (let's say with arch x86_64) won't work on Windows (with the same arch x86_64).
GCC identifies architectures by "triplets", like:
x86_64-apple-darwin14.0.0
i386-pc-mingw32
i686-pc-linux-gnu
Format is:
machine-vendor-operatingsystem (not always followed though)
They contain information on both the architecture and the operating system.
Is there a guide somewhere that describes how to get LLVM to emit a binary for Cortex-M3 that I can massage into running bare metal? I've spent considerable time playing with LLVM on Windows and Ubuntu to no avail. I can get ARM-like assembly out. I can get bit code out, but what I really need is ELF, DWARF, Hobbit, Gandalf or any other Lord of the Rings critter that has a file format specification. Any and all help appreciated! I'm compiling LLVM 3.4 with Clang on Ubuntu, Windows and/or OS X.
I created a firmware framework, PolyMCU (https://github.com/labapart/polymcu), that is based on CMake and supports GCC and LLVM. Because it is based on CMake, you can build your firmware on Linux/Windows/MacOS.
It also uses Newlib and supports Baremetal/CMSIS RTOS (RTX)/FreeRTOS.
The benefit of using PolyMCU is that this framework does not add any software layer on top of the libc and the MCU vendor's SDKs.
Another benefit is you can easily switch toolchains. I used this feature to get more feedback on my code by testing it with many compilers.
I also wrote a blog post where I compared GCC and LLVM build sizes on ARM Cortex-M: http://labapart.com/blogs/3-the-importance-of-the-toolchain-version-in-embedded-space Interesting results: Clang-generated code is not much bigger than GCC's on Cortex-M...
The best guide that I know of is here: http://wiki.osdev.org/LLVM_Cross-Compiler. It's mostly about building an LLVM cross-compiler, but it does have a "Usage" section. That section specifically shows an example for a Cortex-A processor, but you should be able to get the general idea.
I have created a simple clang bare-metal Cortex-M3 "hello world" program, but I don't have it in front of me. IIRC, the only options I needed were -march=thumb -mcpu=cortex-m3, as long as the LLVM compiler backend was built with ARM Thumb backend support (again, see http://wiki.osdev.org/LLVM_Cross-Compiler). I did, however, need to link with arm-none-eabi-ld from the GCC toolchain here (http://launchpad.net/gcc-arm-embedded), and I believe that is how you can get your ELF binary.
I've since moved on to the D programming language, and I have a simple example using LDC (the LLVM D compiler) here: http://wiki.dlang.org/Extremely_minimal_semihosted_%22Hello_World%22
So, I believe compiling bare metal ARM Cortex-M3 software with LLVM can be done, but it seems not many people have tried.
It is possible to use clang++ pulled from http://llvm.org/builds with https://launchpad.net/gcc-arm-embedded as a base, at least for the compile step.
Required extra arguments are the include paths hardcoded into gcc and certain arm-none-eabi defaults:
--target=arm-none-eabi -fshort-enums -isystem "../arm-none-eabi/include/c++/5.2.1" [-isystem ...]
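Putting that together, a full compile-and-link sketch might look like the following (paths, versions, and the linker script name are illustrative and depend on your gcc-arm-embedded installation):

    $ clang++ --target=arm-none-eabi -mcpu=cortex-m3 -mthumb -fshort-enums \
          -isystem "../arm-none-eabi/include/c++/5.2.1" \
          -c main.cpp -o main.o
    $ arm-none-eabi-ld -T linker.ld main.o -o firmware.elf
    $ arm-none-eabi-objcopy -O binary firmware.elf firmware.bin   # raw image for flashing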
I intend to cross compile for Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux, arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom gcc build, and they seem to spend significant time on cooking their own gcc. The problem is that they never say WHY it is important to use their gcc over another.
1. Can stock distro-provided cross compilers be used for building kernels and apps for the Raspberry Pi, or ARM in general?
2. Is it necessary to have multiple compilers for the ARM architecture? If so, why, since a single gcc can support all x86 variants?
3. If 2), then how can I deduce what target subset is supported by a particular version of gcc?
4. More generally, what use cases call for a custom gcc build?
Please be as technical as you can, I'd like to know WHY as well as how.
When developers talk about building software (cross-compiling) for a different machine (the target) than their own (the host), they use the term toolchain to describe the set of tools necessary to build binary files. That's because when you need to build an executable binary, you need more than a compiler.
You need routines (crt0.o) to initialize the runtime according to the requirements of the operating system and the standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system call API and several OS-level configurations (e.g. page size) and data structures (e.g. time structures).
On the hardware side, there are different ARM architecture versions. Architectures can be backward compatible, but a toolchain is by nature binary and targeted at a specific architecture. You could target the most widespread architecture by default, but that won't be too fruitful in an already constrained environment (an embedded device). If you target the latest architecture, it won't be useful for targets based on older architectures.
When you build a binary on your host for your host, the compiler can look up all the necessary bits from its own environment or use what's on the host, so most of the above details are invisible to the developer. However, when you build for a target different from your host type, the toolchain must know about the hardware, OS, and standard library details. The way you tell these to the toolchain is... by building it according to those details, which might require some level of bootstrapping (or you can do this via an extensive set of parameters, if the toolchain supports it / was built for it).
So when there is a generic (stock) cross-compile toolchain, it already has some target specifics baked in, and those might not meet your requirements. Please see this recent question about the situation on Ubuntu for an example.
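One practical way to see which target specifics a prebuilt toolchain has baked in is to inspect its configure line and multilib list (output abridged; the exact flags shown are illustrative):

    $ arm-linux-gnueabihf-gcc -v 2>&1 | grep 'Configured with'
    Configured with: ... --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard ...
    $ arm-linux-gnueabihf-gcc -print-multi-lib
    .;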
I was looking at a question about atomic compare-and-swap and gcc intrinsics. I noticed that an answer quoted from the gcc manual (note: the answer I looked at quoted an earlier version of the manual, but I've linked to the latest version's manual because I checked whether anything had changed). However, when I looked at the text in the manual, I saw that it appears to reference Itanium rather than x86:
The following builtins are intended to be compatible with those
described in the Intel Itanium Processor-specific Application Binary
Interface, section 7.4. As such, they depart from the normal GCC
practice of using the “__builtin_” prefix, and further that they are
overloaded such that they work on multiple types.
My question is: why does gcc reference the Itanium documentation, and does that affect how the intrinsics work on x86? Are there any differences, or is it safe to assume that even though the gcc manual references the Itanium manual, everything the gcc manual describes will work correctly on an x86 system?
My understanding is that a lot of gcc's ABI decisions (the egcs fork) were based on the ABI specs for the good ship Itanic. This included the name mangling conventions for C++ symbols. There was a large effort (Project Trillian) to have IA-64 Linux (and GCC) ready to go when the actual processor became available. The semantics are intended to be platform-independent, though they will be replaced by the __atomic builtins.
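For what it's worth, the builtins do work as documented on x86. Here is a minimal sketch contrasting one of the Itanium-derived __sync builtins with its newer __atomic counterpart (plain gcc on x86_64; the file name is arbitrary):

    $ cat > cas.c <<'EOF'
    #include <stdio.h>

    int main(void) {
        int x = 5;
        /* Legacy Itanium-style builtin: returns the old value. */
        int old = __sync_val_compare_and_swap(&x, 5, 7);
        /* Newer __atomic builtin: returns whether the swap succeeded. */
        int expected = 7;
        int ok = __atomic_compare_exchange_n(&x, &expected, 9, 0,
                                             __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        printf("old=%d ok=%d x=%d\n", old, ok, x);
        return 0;
    }
    EOF
    $ gcc cas.c -o cas && ./cas
    old=5 ok=1 x=9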