Is it possible to build gcc 1.0 with only an assembler, without any C compiler? If it is possible, how can I build it? If it is not possible, how did the first C compiler come into existence?
Say we have a new CPU architecture with a new instruction set, and the only software that has been written for it is an assembler. How can I build a gcc compiler for it?
Early versions of GCC were written in C. At the time, the operating systems GCC targeted came with at least a rudimentary C compiler (maybe for K&R C only, without support for prototypes). There was no bootstrap from assembler code involved, even in the first release. For those who did not or could not build GCC by themselves, the FSF provided pre-built binaries on tape, for a fee.
Support for new architectures (if they support self-hosting at all) was and still is implemented using cross-compilers.
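Concretely, a cross-compiler is produced by configuring GCC for a different target than the machine it is built on. A minimal sketch, assuming target binutils are already installed under the same prefix; the triplet, version, and paths are placeholders, not prescriptions:
tar xf gcc-x.y.z.tar.gz
mkdir build && cd build
../gcc-x.y.z/configure --target=arm-none-eabi --prefix=/opt/cross --enable-languages=c --without-headers --with-newlib
make all-gcc && make install-gcc
# /opt/cross/bin/arm-none-eabi-gcc now runs on the build machine but emits code for the new target.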
Is it safe to link objects generated from sources compiled with different GCC versions into a shared library?
I assume not, but what if the GCCs used have no differences with regard to code generation and optimization? Is there a way to know which GCC versions are not backward compatible?
My question also concerns the binaries. I looked at
https://gcc.gnu.org/onlinedocs/gcc/Compatibility.html
From my understanding, different GCC versions are compatible as long as they conform to the same ABI.
After doing research on the web and reading several GCC release notes, it seems that GCC is backward compatible as long as there is no ABI change.
In general, this would be stated in the release notes.
I also did some experiments using compilers and linkers from different GCC versions, and I got linker errors when they were incompatible (different ABI versions).
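For example, the C++11 std::string ABI change introduced in GCC 5 is an easy way to reproduce such an error. A minimal sketch, assuming g++-4.9 and g++-5 are both installed (compiler names and the exact error text vary by system):
# foo.cpp defines:  std::string greet() { return "hi"; }
# main.cpp calls greet() and prints the result.
g++-4.9 -std=c++11 -c foo.cpp -o foo.o    # old std::string layout
g++-5 -std=c++11 -c main.cpp -o main.o    # new std::__cxx11::basic_string
g++-5 foo.o main.o -o app
# Linking fails with an undefined reference to greet[abi:cxx11]() or similar,
# because the two objects were compiled against different library ABIs.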
I am trying to understand how exactly I can use OpenACC to offload computation to my NVIDIA GPU with GCC 5.3. The more I google, the more confused I become. All the guides I find involve recompiling the entire GCC along with two libraries called nvptx-tools and nvptx-newlib. Other sources say that OpenACC is part of the GOMP library. Others say that development of OpenACC support will continue only on GCC 6.x. I have also read that OpenACC support is in GCC's main branch. However, if I compile a program with -fopenacc and -foffload=nvptx-none, it just won't work. Can someone explain what exactly it takes to compile and run OpenACC code with GCC 5.3+?
Why do some guides require (re)compilation of nvptx-tools, nvptx-newlib, and GCC if, as some internet sources say, OpenACC support is part of GCC's main branch?
What is the role of the GOMP library in all this?
Is it true that development for OpenACC support will only be happening for GCC 6+ from now on?
When OpenACC support matures, is it the goal to enable it in a similar way we enable OpenMP (i.e., by just adding a couple of compiler flags)?
Can someone also provide answers to all the above after replacing "OpenACC" with "OpenMP 4.0 GPU/MIC offload capability"?
Thanks in advance
The link below contains a script that will compile gcc for OpenACC support.
https://github.com/olcf/OLCFHack15/blob/master/GCC5OffloadTest/auto-gcc5-offload-openacc-build-install.sh
OpenACC is part of GCC's main branch now, but there are some points to note. Even for libraries that are part of GCC, you have to specify at build time which ones to compile; not all of them are built by default. For OpenACC there is an additional problem: since NVIDIA's drivers are not open source, GCC cannot compile OpenACC directly to GPU binaries. It has to compile the OpenACC code to intermediate NVPTX instructions, which the NVIDIA runtime then handles. Therefore you also need to install the nvptx libraries.
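The script linked in the other answer automates roughly the following steps; this condensed sketch only illustrates the shape of the process (paths and configure details are assumptions, see the script for the authoritative version):
# 1. Fetch the nvptx assembler/linker wrappers and the newlib port:
git clone https://github.com/MentorEmbedded/nvptx-tools
git clone https://github.com/MentorEmbedded/nvptx-newlib
# 2. Build GCC twice: first configured with --target=nvptx-none as the
#    accelerator compiler (using nvptx-newlib as its C library), then as
#    the host compiler configured with --enable-offload-targets=nvptx-none.
# 3. Compile and run with offloading enabled:
gcc -fopenacc -foffload=nvptx-none -O2 prog.c -o prog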
The GOMP library (libgomp) is the runtime library that handles both OpenMP and OpenACC.
Yes, I think OpenACC development will only happen in GCC 6, though fixes may still be backported to GCC 5. Your best bet would be to use GCC 6.
While I cannot comment on what the GCC developers decide to do, I have already stated in the first point what the problems are. Unless NVIDIA makes its drivers open source, I think an extra step will always be necessary.
I believe right now OpenMP offloading is planned only for CPUs and MIC, and support for both will probably become default behavior. I am not sure whether OpenMP targeting NVIDIA GPUs is an immediate goal, but since GCC uses GOMP for both OpenMP and OpenACC, I believe they will eventually be able to do it. GCC is also targeting HSA via OpenMP, i.e., AMD APUs. I am not sure whether AMD GPUs will work the same way, but it may be possible. Since AMD is making its drivers open source, I believe they may be easier to integrate into the default behavior.
Is there a guide somewhere that describes how to get LLVM to emit a binary for Cortex-M3 that I can massage into running bare metal? I've spent considerable time playing with LLVM on Windows and Ubuntu to no avail. I can get ARM-like assembly out. I can get bitcode out, but what I really need is ELF, DWARF, Hobbit, Gandalf or any other Lord of the Rings critter that has a file format specification. Any and all help appreciated! I'm compiling LLVM 3.4 with Clang on Ubuntu, Windows and/or OS X.
I created a firmware framework, PolyMCU (https://github.com/labapart/polymcu), that is based on CMake and supports GCC and LLVM. Because it is based on CMake, you can build your firmware on Linux/Windows/macOS.
It also uses Newlib and supports bare metal, CMSIS RTOS (RTX), and FreeRTOS.
One benefit of PolyMCU is that the framework does not add any software layer on top of libc and the MCU vendor's SDK.
Another benefit is that you can easily switch toolchains. I used this feature to get more feedback on my code by testing it with many compilers.
I also wrote a blog post where I compared GCC and LLVM build sizes on ARM Cortex-M: http://labapart.com/blogs/3-the-importance-of-the-toolchain-version-in-embedded-space. Interesting results: Clang-generated code is not much bigger than GCC's on Cortex-M.
The best guide that I know of is here: http://wiki.osdev.org/LLVM_Cross-Compiler. It's mostly about building an LLVM cross-compiler, but it does include a "Usage" section. That section shows an example for a Cortex-A processor, but you should be able to get the general idea.
I have created a simple clang bare-metal Cortex-M3 "hello world" program, but I don't have it in front of me. IIRC, the only options I needed were -march=thumb -mcpu=cortex-m3, as long as the LLVM backend was built with ARM Thumb support (again, see http://wiki.osdev.org/LLVM_Cross-Compiler). I did, however, need to link with arm-none-eabi-ld from the GCC toolchain here (http://launchpad.net/gcc-arm-embedded), and I believe that is how you can get your ELF binary.
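From what I remember, the whole flow looked roughly like the following; the linker script (stm32.ld) and startup object are placeholders you must supply for your chip:
clang --target=arm-none-eabi -mcpu=cortex-m3 -mthumb -ffreestanding -c main.c -o main.o
arm-none-eabi-ld -T stm32.ld startup.o main.o -o firmware.elf   # GNU linker from gcc-arm-embedded
arm-none-eabi-objcopy -O binary firmware.elf firmware.bin       # raw image for flashing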
I've since moved on to the D programming language, and I have a simple example using LDC (the LLVM D compiler) here: http://wiki.dlang.org/Extremely_minimal_semihosted_%22Hello_World%22
So, I believe compiling bare metal ARM Cortex-M3 software with LLVM can be done, but it seems not many people have tried.
It is possible to use clang++ pulled from http://llvm.org/builds with https://launchpad.net/gcc-arm-embedded as a base, at least for the compile step.
Required extra arguments are the include paths hardcoded into gcc and certain arm-none-eabi defaults:
--target=arm-none-eabi -fshort-enums -isystem "../arm-none-eabi/include/c++/5.2.1" [-isystem ...]
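A full compile step might then look like the following; the relative paths and the 5.2.1 version are assumptions that depend on where the gcc-arm-embedded toolchain happens to be unpacked:
clang++ --target=arm-none-eabi -mcpu=cortex-m3 -mthumb -fshort-enums -c main.cpp -o main.o \
  -isystem "../arm-none-eabi/include/c++/5.2.1" \
  -isystem "../arm-none-eabi/include/c++/5.2.1/arm-none-eabi" \
  -isystem "../arm-none-eabi/include"
# Linking is still done with ld from the gcc-arm-embedded toolchain.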
Hopefully this hasn't been asked and answered already, but I just had a quick question on ARM.
Specifically, when compiling Android (which has a lot of C and C++), if you use GCC to compile, doesn't that create x86-based code? How is it that an ARM processor, which uses a reduced instruction set, can interpret this code and run the way it does?
Thanks!
GCC doesn't just compile for x86; it compiles for any instruction set it has a back end for. If you wanted to, you could add a new one just by adding a few files.
And ARM isn't a "reduced" version of x86's instruction set; it's a completely different instruction set. There are some things ARM has that x86 doesn't, and vice versa.
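You can see this for yourself by compiling the same C function with a native and a cross GCC. A quick sketch, assuming a cross package such as Debian/Ubuntu's gcc-arm-linux-gnueabihf is installed:
echo 'int add(int a, int b) { return a + b; }' > add.c
gcc -O2 -S -o add_x86.s add.c                       # native x86 assembly
arm-linux-gnueabihf-gcc -O2 -S -o add_arm.s add.c   # ARM assembly
diff add_x86.s add_arm.s   # two entirely different instruction sets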
Building gcc goes through a configuration step, and part of this is to specify a back end. The back end is responsible for opcode generation. A typical compiler has many phases. Briefly:
Parser - converts text to a data representation.
Front end - optimizes by changing code constructs; possibly language specific.
Middle end - performs optimizations that are common to any compiler.
Back end - performs optimizations specific to the target CPU.
See the Stack Overflow compiler tag wiki for more.
So parts one to three are common to the x86 and ARM versions of gcc (or any gcc). The Android compiler is a version of gcc that has been configured to generate ARM code, so it is a different compiler from the one that normally runs on an x86. You may be running an ARM emulator on a PC and believe that the code is being run by the x86, but it is a virtual ARM machine running that code. An x86 processor cannot run ARM code natively.
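If you want to see the distinction in action, QEMU's user-mode emulation makes it visible. A small sketch, assuming qemu-user and an ARM cross compiler are installed (package names vary by distribution):
arm-linux-gnueabihf-gcc -static -o hello_arm hello.c
./hello_arm            # on x86 this typically fails: "cannot execute binary file"
qemu-arm ./hello_arm   # works, because QEMU emulates an ARM CPU in software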
The Android gcc is an ARM-configured gcc. A normal Linux distribution's gcc is configured for x86 or x86_64.
Something is missing above: who compiles the compiler? In both cases, an x86 compiler compiles the new compiler; the difference is the selected back end. One is x86, the other ARM. Both compilers run on an x86, but they generate code for different targets. A given gcc build can only generate code for ARM or x86, never both via some command-line switch. A compiler build usually refers to three different CPU types:
Build - the machine where the compiler is built; the compiler's compiler.
Host - the machine the compiler runs on. Not its output, but the compiler itself.
Target - the machine the back end generates code for.
I think people assume that because two compilers run on the same host, they must generate code for the same target. But that is not true; it is a little mind-bending at first. Depending on the setup, you may need compilers for each of these machines to make the final compiler.
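In configure terms, the three machines map directly onto configure options. A sketch with example triplets (the triplets and version are placeholders):
# An ordinary cross-compiler: built and running on x86_64, generating ARM code.
../gcc-x.y.z/configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --target=arm-linux-gnueabihf
# A "Canadian cross": built on x86_64, runs on ARM, generates MIPS code.
../gcc-x.y.z/configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabihf --target=mips-linux-gnu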
The first compiler for any machine is usually a cross-compiler, except for the earliest primitive compilers, which were written directly in assembler.
See also: Cross compiler.
To put it simply: when you're building for ARM on your x86 computer, you're using a cross-compiler, i.e. a compiler that runs on one platform but generates code for another. This is extremely common when developing for embedded or mobile platforms.
I've found a company that provides GCC-based toolchains for ARM and MIPS, but I don't know how they differ from vanilla GCC. Of course, they bundle other software such as Eclipse, but looking only at GCC and Binutils, are they different or just the same?
One big difference between a pre-compiled toolchain (like those provided by Code Sourcery, MontaVista, Wind River, etc) and one built from source is convenience. Building a toolchain from scratch, especially for cross-compiling purposes, is tedious and can be a complete pain. Also, the newest versions of glibc (or uClibc), gcc, and binutils aren’t always compatible as they're developed independently. There are open source tools to make this process easier (like crosstool-NG), but having a proven toolchain that’s been optimized for a certain platform can save a lot of time and headaches. This is especially true at the beginning of a new project. It also helps to have technical support when things go screwy. Of course…you have to pay for it most of the time.
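If you do decide to build your own, crosstool-NG reduces most of that tedium to a few commands. A minimal session might look like this (the sample name is just an example; run ct-ng list-samples to see what your installation offers):
ct-ng arm-unknown-linux-gnueabi   # start from a known-good sample configuration
ct-ng menuconfig                  # optionally adjust gcc/binutils/libc versions
ct-ng build                       # fetch, patch, and build the whole toolchain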
That being said, compiling your own toolchain will most likely save you money and can allow more flexibility down the road. MontaVista, as far as I know, doesn’t include support for older platforms in their newest toolchain releases. For example, if you bought MontaVista Pro 4.X and it included a toolchain with gcc 3.3.X, that’s the toolchain you’re most likely going to be stuck with for the life of your project. Upgrading to a toolchain with gcc 4.X most likely wouldn’t be an option.
Hope that helps.