ARM and GCC Compiling - gcc

Hopefully this hasn't been asked and answered already, but I just had a quick question on ARM.
Specifically, when compiling Android (which has a lot of C and C++), if you use GCC to compile, doesn't that create x86-based code? How is it that an ARM processor, which uses a reduced instruction set, can interpret this code and run it the way it does?
Thanks!

GCC doesn't just compile for x86. It can compile for any of its supported instruction sets, and if you wanted to you could even add a new one by adding a few files (a new back end).
And ARM isn't a reduced version of the x86 instruction set. It's a completely different instruction set. There are things ARM has that x86 doesn't, and vice versa.

Building gcc goes through a configuration step; part of this is to specify a back end. The back end is responsible for op-code generation. A typical compiler has many phases. Briefly,
Parser - converts the source text to an internal data representation.
Front end - optimizes by transforming code constructs; possibly language specific.
Middle end - performs optimizations that are common to any compiler.
Back end - performs optimizations specific to the target CPU.
See the Stack Overflow compiler wiki for more.
So parts one to three are common to the x86 and the ARM versions of gcc (or any gcc). The Android compiler is a version of gcc that has been configured to generate ARM code. It is a different compiler from the one that normally generates x86 code. You may be running an ARM emulator on a PC and believe that this code is being run by the x86; however, it is a virtual ARM machine running this code. An x86 processor cannot run ARM code natively.
The Android gcc is an ARM-configured gcc. A normal Linux distribution's gcc is configured for an x86 or x86_64.
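These internal phases are not something you normally see, but gcc's driver does expose its coarse stages on the command line. A minimal sketch (the file name and commands are only illustrative); an ARM-configured gcc runs the same front and middle ends, but its back end emits ARM assembly in the -S step:

/* phases.c - stop the gcc driver after each coarse stage:
 *
 *   gcc -E phases.c -o phases.i   # preprocess only
 *   gcc -S phases.i -o phases.s   # compile: parser, front, middle and back end run here
 *   gcc -c phases.s -o phases.o   # assemble (as)
 *   gcc phases.o -o phases        # link (ld)
 */
int main(void)
{
    return 0;
}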
Something is missing above: who compiles the compiler? In both cases, an x86 compiler compiles the new compiler. The difference is the selected back end: one is x86, the other ARM. Both compilers run on an x86, but they generate code for different targets. A given build of gcc can only generate code for ARM or for x86, never both via some command-line switch. A compiler build usually refers to three different CPU types:
Build - the machine where the compiler is built; this is the compiler's compiler.
Host - the machine the compiler runs on; not its output, but the compiler itself.
Target - the machine the back end targets; the one code is generated for.
I think people assume that because two compilers run on the same host, they must generate code for the same target. But that is not true; it is a little mind-bending at first. Depending on the setup, you may need compilers for each of these machines to make a final compiler.
The first compiler for any machine is usually a cross compiler, except for the earliest primitive compilers, which were written long ago directly in assembler.
See also: Cross compiler.

To put it simply, when you're building for ARM on your x86 computer you're using a cross compiler - a compiler that runs on one platform but generates code for another. This is extremely common when developing for embedded or mobile platforms.
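As a concrete sketch, assuming an arm-none-linux-gnueabi cross toolchain is installed on the x86 host (the exact tuple varies by distribution), the same source can be built for either target:

/* hello.c - one source file, two targets.
 *
 *   gcc -o hello hello.c                              # host = target = x86
 *   arm-none-linux-gnueabi-gcc -o hello-arm hello.c   # host = x86, target = ARM
 *   file hello-arm                                    # reports an ARM ELF executable
 *
 * hello-arm will not run directly on the x86 host; it needs an ARM
 * machine or an emulator such as QEMU.
 */
#include <stdio.h>

int main(void)
{
#ifdef __arm__                       /* predefined by ARM back ends */
    printf("built for ARM\n");
#else
    printf("built for a non-ARM target\n");
#endif
    return 0;
}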

Related

The relation between the JVM's JIT and GCC (or other existing native compilers)

Does a JIT use GCC or any other native compiler as its back end to generate native code?
Also, is there any relation between the C1 and C2 tiers of the JIT and the O1, O2, O3 optimization flags of GCC?
Thanks for the answer.
The JVM is a standalone operating-system process that interprets byte codes (in Java, .class files) and runs them. That may mean it runs code that stays entirely inside the JVM (string concatenation, for example) or calls the underlying OS to take care of the code it is running (for example, reading a file). Each JVM implements this its own way, but it does not create a file in another programming language to then be handled by that environment.
GCC converts C and C++ code to the assembly / machine code appropriate for the processor it is built for (x86_64, Arm64, etc.). Generally the output from GCC is meant for a single type of processor and operating system. There are exceptions - some compilers can build "fat" binaries that contain code targeted at multiple processors.
So, because of that, the JVM doesn't understand or care about the GCC optimization flags. While the JVM itself is likely compiled with a compiler like GCC, once it is built it no longer has any interaction with GCC (or any other native compiler).
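A small sketch of the difference (file name and commands are only an example): GCC fixes its optimization decisions at build time, whereas the JVM's JIT (the C1 and C2 tiers in HotSpot) recompiles hot bytecode at run time, with no involvement from GCC's flags:

/* square.c - ahead-of-time compilation with GCC:
 *
 *   gcc -O0 -S square.c -o square_O0.s   # straightforward assembly
 *   gcc -O3 -S square.c -o square_O3.s   # heavily optimized assembly
 *
 * Whatever lands in the .s file is what runs from then on; the -O
 * level cannot be changed afterwards without rebuilding.
 */
int square(int x)
{
    return x * x;
}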

Why must GNU binutils be configured for a specific target? What's going on underneath?

I am messing around with creating my own custom gcc toolchain for an ARM Cortex-A5 CPU, and I am trying to dive as deeply as possible into each step. I have deliberately avoided using crosstool-ng or other tools, in order to get a better understanding of what is going on in the process of creating a toolchain.
One thing stumps me though. During configuration and building of binutils, I need to specify a target (--target). This target can be either the classic host tuple (e.g. arm-none-linux-gnueabi) or a specific type, something like i686-elf for example.
Why is this target needed, and what does it specifically do to the generated "as" and "ld" programs built by binutils?
For example, if I build it with arm-none-linux-gnueabi, it looks like the resulting "as" program supports every ARM instruction set under the sun (armv4t, armv5, etc.).
Is it simply for saving space in the resulting executable? Or is something more going on?
I would get it if I were configuring binutils for a specific instruction set, for example: build me an assembler that understands armv4t instructions.
Looking through the source of binutils, and gas specifically, it looks like the host tuple selects some header files located in gas/config/tc*, gas/config/te*. Again, this seems arbitrary, as these are broad categories of systems.
Sorry for the rambling :) I guess my question can be stated as: why is binutils not an all-in-one package?
Why is this target needed?
Because there are (many) different target architectures. ARM assembler / code is different from PowerPC, which is different from x86, and so on. In principle it would have been possible to design one tool for all targets, but that was not the approach taken at the time.
The focus was mainly on speed / performance. The executables are small by today's standards, but combining all 40+ architectures and all the tools like as, ld, nm, etc. would have been quite clunky.
Moreover, not only are modern host machines much more powerful; the same applies to the programs being compiled and assembled, sometimes millions of lines of (preprocessed) C++ to compile. This means overall build time has shifted much more in the direction of compilation than in the old days.
Only different core families within one architecture are usually switchable / selectable via options.
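A short sketch of what that means in practice, assuming two cross toolchains are installed (the tuples and file name are only examples):

/* target_demo.c - one source, two differently configured toolchains.
 *
 *   arm-none-linux-gnueabi-gcc -c target_demo.c -o demo-arm.o
 *   i686-elf-gcc               -c target_demo.c -o demo-x86.o
 *   file demo-arm.o demo-x86.o     # ARM ELF object vs. Intel 80386 ELF object
 *
 * Neither toolchain's as or ld can be switched to the other architecture
 * with a flag.  Within one family, however, sub-architectures are plain
 * options rather than separate builds:
 *
 *   arm-none-linux-gnueabi-gcc -march=armv4t  -c target_demo.c
 *   arm-none-linux-gnueabi-gcc -march=armv7-a -c target_demo.c
 */
int add(int a, int b)
{
    return a + b;
}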

Is it possible to build gcc 1.0 without a C compiler?

Is it possible to build gcc 1.0 with only an assembler, without any C compiler? If it is possible, how can I build it? If it is not possible, how did the first C compiler come about?
Let's say we have a new CPU architecture with a new set of instructions, and the only software that has been made for it is the assembler; how can I build a gcc compiler for it?
Early versions of GCC were written in C. At the time, the operating systems GCC targeted came with at least a rudimentary C compiler (maybe for K&R C only, without support for prototypes). There was no bootstrap from assembler code involved, even in the first release. For those who did not or could not build GCC by themselves, the FSF provided pre-built binaries on tape, for a fee.
Support for new architectures (if they support self-hosting at all) was and still is implemented using cross-compilers.

What compilers support CUDA

I found a problem with Visual Studio. My project that uses OpenMP multithreading was twice as slow on Visual Studio 2010 as on Dev-C++. Now I have written another project that uses CUDA technology, and I think it runs slowly because of Visual Studio, so I need some other compiler that supports CUDA. My questions are:
does Dev-C++ support CUDA?
what compilers support CUDA besides Visual Studio?
if there are a lot of compilers supporting CUDA, which will give the best speed for the application?
The CUDA Toolkit Release Notes list the supported platforms and compilers.
Well, I think it's the other way around. The thing is, there is a driver called nvcc. It generates device code and host code and sends the host code to a compiler. That should be a C compiler and it should be in the executable path. (EDIT: and it should be gcc on Linux and cl on Windows; I'll ignore Mac, as the release notes do.)
The nvcc compiler info reads:
A general purpose C compiler is needed by nvcc in the following situations:
During non-CUDA phases (except the run phase), because these phases will be forwarded by nvcc to this compiler.
During CUDA phases, for several preprocessing stages. On Linux platforms, the compiler is assumed to be ‘gcc’, or ‘g++’ for linking. On Windows platforms, the compiler is assumed to be ‘cl’. The compiler executables are expected to be in the current executable search path, unless option -compiler-bin-dir is specified, in which case the value of this option must be the name of the directory in which these compiler executables reside.
And please don't talk about compilers like that. Your code just happens to be written in a way that works better with Dev-C++. What is generated in both cases is assembly code. I'm not saying compilers make no difference, but maybe 4 to 5%, not 100%.
And absolutely don't blame the compiler for your slow CUDA program. It is almost certainly because of inefficient memory access patterns and incorrect use of the different types of memory.

Distro provided cross compiler vs custom built gcc

I intend to cross compile for the Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux, arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom gcc build, and they seem to spend significant time on cooking their own gcc. The problem is that they never say WHY it is important to use their gcc over another.
1. Can stock distro-provided cross compilers be used for building kernels and apps for the Raspberry Pi, or ARM in general?
2. Is it necessary to have multiple compilers for the ARM architecture? If so, why, since a single gcc can support all x86 variants?
3. If (2), then how can I deduce what target subset is supported by a particular version of gcc?
4. More generally, what use cases call for a custom gcc build?
Please be as technical as you can, I'd like to know WHY as well as how.
When developers talk about building software (cross compiling) for a different machine (target) compared to their own (host) they use the term toolchain to describe the set of tools necessary to build binary files. That's because when you need to build an executable binary, you need more than a compiler.
You need startup routines (crt0.o) to initialize the runtime according to the requirements of the operating system and the standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system call API and several OS-level configurations (e.g. page size) and data structures (e.g. time structures).
On the hardware side, there are different ARM architecture versions. Architectures can be backward compatible, but a toolchain by nature is binary and targeted at a specific architecture. You could target the most widespread architecture by default, but that won't be very fruitful for an already constrained environment (an embedded device). If you target the latest architecture, the result won't be usable on targets based on older architectures.
When you build a binary on your host for your host, the compiler can look up all the necessary bits from its own environment or use what's on the host, so most of the above details are invisible to the developer. However, when you build for a target different from your host, the toolchain must know about the hardware, OS and standard library details. The way you tell the toolchain about these is by building it according to those details, which might require some level of bootstrapping (or you can do this via an extensive set of parameters, if the toolchain supports that / was built for it).
So when there is a generic (stock) cross compile toolchain, it has already some target specifics set and that might not meet your requirements. Please see this recent question about the situation on Ubuntu for an example.
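As a small illustration of why the target specifics matter, assuming both a Linux-targeted toolchain (arm-linux-gnueabihf, an example tuple, not a Raspberry Pi specific recommendation) and a bare-metal one (arm-none-eabi) are installed:

/* hello_pi.c - a hosted program needs crt0, a C library and the right
 * kernel/system-call conventions, all of which come from the toolchain.
 *
 *   arm-linux-gnueabihf-gcc -o hello_pi hello_pi.c   # hosted: links against glibc
 *   arm-none-eabi-gcc       -o hello_pi hello_pi.c   # bare metal: typically fails to link,
 *                                                    # or yields a binary Linux cannot run
 */
#include <stdio.h>

int main(void)
{
    puts("Hello from the Raspberry Pi");
    return 0;
}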

Resources