Can I run a C program compiled on a different ARM processor? - compilation

Let's say I compiled a C program on a Raspberry Pi; can I run this binary on, say, a Cubietruck?
How can I know for sure that two ARM processors are compatible? Are they all compatible with each other?
There should be a simple answer involving the instruction sets supported by the processors, but I can't find any good material on that.

There are several conditions for that:
Your executable should use the "least common denominator" of all the ARM microarchitectures you wish to support. See gcc's -march=... option for that (a build sketch follows this list). Assuming you're running Linux, grep '^model' /proc/cpuinfo should tell you what CPU each platform has.
(related) Some features may not be supported by all your target ARM cores (FPU, NEON, etc.), so be very careful with that.
You should, of course, run the same OS on all supported platforms.
You need to make sure that all supported platforms use the same ABI; ARM has a history of ABI changes, so you must take this into consideration.
If you're lucky enough to target only reasonably modern ARM platforms, you should be able to find some common ground (EABI or Hard Float ABI). Otherwise you probably have no choice but to maintain several versions of your executable.
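As a minimal sketch of the "least common denominator" idea (the flags and board details are illustrative assumptions, not from the original answer): the original Raspberry Pi is ARMv6 while the Cubietruck is ARMv7, so building for the older baseline with a shared float ABI yields one binary that both can run.

    # Build for the ARMv6 baseline with VFP and the hard-float EABI as common ground
    # (assumed example; adjust -march/-mfpu/-mfloat-abi to your actual boards and distros).
    gcc -march=armv6 -mfpu=vfp -mfloat-abi=hard -o hello hello.c

    # Inspect what the binary actually targets:
    readelf -A hello | grep -iE 'Tag_CPU|Tag_ABI'
    file hello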

Related

Why must GNU binutils be configured for a specific target? What's going on underneath?

I am messing around with creating my own custom gcc toolchain for an ARM Cortex-A5 CPU, and I am trying to dive as deeply as possible into each step. I have deliberately avoided using crosstool-ng or other tools to assist, in order to get a better understanding of what is going on in the process of creating a toolchain.
One thing stumps me, though. During configuration and building of binutils, I need to specify a target (--target). This target can be either the classic host tuple (e.g. arm-none-linux-gnueabi) or a specific type, something like i686-elf for example.
Why is this target needed? And what does it specifically do to the generated "as" and "ld" programs built by binutils?
For example, if I build it with arm-none-linux-gnueabi, it looks like the resulting "as" program supports every ARM instruction set under the sun (armv4t, armv5, etc.).
Is it simply for saving space in the resulting executable? Or is something more going on?
I would get it if I had configured binutils for a specific instruction set, for example: build me an assembler that understands armv4t instructions.
Looking through the source of binutils, and gas specifically, it looks like the host tuple selects some header files located in gas/config/tc*, gas/config/te*. Again, this seems arbitrary, as these are broad categories of systems.
Sorry for the rambling :) I guess my question can be stated as: why is binutils not an all-in-one package?
Why is this target needed?
Because there are (many) different target architectures: ARM assembly/code is different from PowerPC, which is different from x86, and so on. In principle it would have been possible to design one tool for all targets, but that was not the approach taken at the time.
The focus was mainly on speed and performance. The executables are small by today's standards, but combining all 40+ architectures and all tools like as, ld, nm, etc. into single binaries would have been quite clunky.
Moreover, not only are modern host machines much more powerful, the same applies to the programs being compiled and assembled, sometimes zillions of lines of (preprocessed) C++. This means overall build time has shifted much more toward compilation than in the old days.
Usually, only different core families within the configured architecture are switchable / selectable via options.
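To make this concrete, here is a hypothetical build sketch (the version number, install prefix and triplet are placeholders, not taken from the question). Configuring binutils with a --target produces tools prefixed with that triplet which only understand that architecture family; the specific ISA within the family is then chosen per invocation.

    # Hypothetical example; adjust version, prefix and target triplet.
    tar xf binutils-2.40.tar.xz && cd binutils-2.40
    ./configure --target=arm-none-linux-gnueabi --prefix="$HOME/x-tools"
    make && make install

    # The installed tools carry the target prefix and only know ARM:
    "$HOME/x-tools/bin/arm-none-linux-gnueabi-as" --version
    # Within the ARM family, a specific ISA is selected per invocation:
    "$HOME/x-tools/bin/arm-none-linux-gnueabi-as" -march=armv4t -o test.o test.s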

How can a compiler cross-compile to a different OS and architecture?

I'm very intrigued by the fact that Go (since v1.5) has built-in cross-compilation support.
But how is it possible to compile for a different OS and architecture?
I mean that would require knowing (and probably behaving like) the target machine language and platform.
Yes, the Go compiler has to know how the target operating system works, but it doesn't need to behave like the target OS, as the Go compiler will not run the compiled executable binary, it just needs to produce it.
All the Go tools need to know is the binary formats of the different operating systems, plus the OS and architectural details (such as the instruction set, word size, endianness, alignment, available registers, etc.). This knowledge is built into the Go tools.
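As a concrete illustration (the output file names are made up), the target OS and architecture are selected with the GOOS and GOARCH environment variables, and the same source builds for any supported pair regardless of the machine doing the building:

    # Cross-build Linux/ARM64 and Windows/amd64 binaries from one source tree.
    GOOS=linux   GOARCH=arm64 go build -o myprog-linux-arm64 .
    GOOS=windows GOARCH=amd64 go build -o myprog-windows-amd64.exe .

    # List every GOOS/GOARCH pair the installed Go toolchain supports.
    go tool dist list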

Are games/programs compiled for multiple architectures?

This might be a bit broad and somewhat stupid, but it is something I've never understood.
I've never dealt with code that needs to be compiled except Java, which I guess falls between two chairs, so here goes.
I don't understand how games and programs are compiled. Are they compiled for multiple architectures? Or are they compiled during installation (it does not look that way)?
As far as I've understood, code needs to be compiled based on the local architecture in order to make it work. Meaning that you can't compile something for AMD and "copy" the binaries and execute them on a computer running Intel (or similar).
Is there something I've misunderstood here, or do they use an approach which differs from the example I am presenting?
AMD and Intel are manufacturers. You might be thinking of amd64 (also known as x86_64) versus x86. x86_64 is, as the name suggests, based on x86.
Computers running a 64-bit x86_64 OS can normally run x86 apps, but the reverse is not true. So one possibility is to ship 32 bit x86 games, but that limits the amount of RAM that can be accessed per process. That might be OK for a game though.
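As a rough illustration of that option (the compiler flags and file names are assumptions, not from the original answer), a single x86_64 Linux box can produce both variants, and the 32-bit one will run on either kind of system:

    # 64-bit build: runs only on x86_64 systems.
    gcc -o game64 game.c
    # 32-bit build: runs on x86, and normally also on x86_64 systems that have the
    # 32-bit runtime libraries installed (building it needs the multilib packages).
    gcc -m32 -o game32 game.c

    file game32 game64    # reports "ELF 32-bit" vs "ELF 64-bit"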
A bigger issue is shipping for different platforms, such as PlayStation and (Windows) PC. The PlayStation not only has a completely different CPU architecture (the Cell, in the PS3's case), but also a different operating system.
In this case you can't simply cross-compile - and that is because of the operating system difference. You have to have two separate versions of the game - sharing a bunch of common code and media files (also known as assets) - one version for PC and one for PlayStation.
You could use Java to overcome that problem, in theory... but that only works when a JVM is available for all target platforms. Also, there is now fragmentation in the Java market, with e.g. Android supporting a different API from Java ME. And iPhones and iPads don't support Java at all.
Many games publishers do not in fact use Java. An exception is Mojang.

Distro provided cross compiler vs custom built gcc

I intend to cross compile for Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux, arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom gcc build. They seem to spend significant time cooking their own gcc. The problem is that they never say WHY it is important to use their gcc over another. My questions:
1. Can stock, distro-provided cross compilers be used for building kernels and apps for the Raspberry Pi, or ARM in general?
2. Is it necessary to have multiple compilers for the ARM architecture? If so, why, since a single gcc can support all x86 variants?
3. If 2), then how can I deduce which target subset is supported by a particular build of gcc?
4. More generally, what use cases call for a custom gcc build?
Please be as technical as you can; I'd like to know WHY as well as how.
When developers talk about building software (cross compiling) for a machine (the target) different from their own (the host), they use the term toolchain to describe the set of tools necessary to build binary files. That's because when you need to build an executable binary, you need more than a compiler.
You need startup routines (crt0.o) to initialize the runtime according to the requirements of the operating system and the standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system call API and several OS-level configurations (e.g. page size) and data structures (e.g. time structures).
On the hardware side, there are different ARM architecture versions. Architectures can be backward compatible, but a toolchain, by its nature, is built to target a specific architecture. You could default to the most widespread architecture, but that won't be very fruitful for an already constrained environment (an embedded device); if you default to the latest architecture, it won't be usable on targets based on older architectures.
When you build a binary on your host for your host, the compiler can look up all the necessary bits in its own environment or use what's on the host, so most of the above details are invisible to the developer. However, when you build for a target different from your host, the toolchain must know about the hardware, OS and standard library details. The way you tell the toolchain about these is... by building it according to those details, which might require some level of bootstrapping (or you can do this via an extensive set of parameters, if the toolchain supports them / was built for them).
So a generic (stock) cross-compile toolchain already has some target specifics baked in, and those might not meet your requirements. Please see this recent question about the situation on Ubuntu for an example.
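As a small illustration (the gnueabihf triplet below is the common Debian/Ubuntu-style naming, not necessarily what Arch ships), you can interrogate a stock cross toolchain to see which target specifics it was built with before deciding whether it fits your boards:

    # Which target triplet was this cross gcc built for?
    arm-linux-gnueabihf-gcc -dumpmachine          # e.g. arm-linux-gnueabihf

    # Which defaults (architecture, FPU, float ABI, ...) were configured in?
    arm-linux-gnueabihf-gcc -v 2>&1 | grep 'Configured with'

    # Cross-build a test program and confirm what it targets.
    arm-linux-gnueabihf-gcc -o hello hello.c
    file hello      # ELF 32-bit LSB executable, ARM, EABI5 ...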

Basic questions about Assembly and Macs

Okay. I want to learn how to assemble programs on my Mac (Early 2009 MBP, Intel Core 2 Duo). So far, I understand only that Assembly languages consist of direct one-to-one mnemonics for CPU instructions. After some Googling, I've seen a lot of terms, mostly "x86" and "x86_64". I've also seen MASM, NASM, and GAS, among others.
Correct me if I'm wrong:
x86 and x86_64 are instruction sets. If I write something using these instruction sets (as raw machine code), I'm fine so long as my program stays on the processor it was designed for.
NASM, MASM, and GAS are all different assemblers.
There are different Assembly languages. There's the AT&T syntax and the Intel syntax, for example. Support for these syntaxes differs across assemblers.
Now, questions:
As a Mac user, which instruction sets should I be concerned about?
Xcode uses GCC. Does this mean it also uses GAS?
If it does use GAS, then should I be learning the AT&T syntax?
Is there a book I can get on this? Not a tutorial, not a reference manual on the web. Those things assume too much about me; for example, as far as I know, a register is just a little bit of memory on the CPU. That's how little I really know.
Thanks for your help.
If you want to learn assembly language, start with the x86 instruction set. That's the basic set.
A good book on the subject is Randall Hyde's The Art of Assembly Language, which is also available on his website. He uses a high-level assembler to make things easy to grasp and to get going, but deep down it uses GAS.
I don't believe that Xcode comes with any assembler, but you can, for example, find GAS in the MacPorts binutils package.
If you just want to make programs on your Mac and you're not that interested in the life of all the bits in the CPU, you're much better off with a more high-level language like Python or Ruby.
"I'm fine so long as my program stays on the processor it was designed for." Not really. In many cases, assembler programs will also make assumptions about the operating system they run on (e.g. when they call library functions or make system calls). Otherwise, your assumpptions are correct.
Onto questions:
Current Macs support both x86 and x86-64 (aka AMD64 aka EM64T aka Intel64). Both 32-bit and 64-bit binaries can be run on recent systems; Apple itself ships its libraries in "fat" (aka "universal") mode, i.e. machine code for multiple architectures.
Use "as -v" to find out what precise assembler you have; mine reports as "Apple Inc version cctools-698.1~1, GNU assembler version 1.38". So yes, it's GAS.
Yes.
https://stackoverflow.com/questions/4845/good-x86-assembly-book
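You can see the "fat" (universal) packaging mentioned above for yourself with the file or lipo tools (the library path below is just a typical example and varies between macOS versions):

    # A fat Mach-O file lists several architectures in one file.
    file /usr/lib/libSystem.B.dylib
    lipo -info /usr/lib/libSystem.B.dylib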
I'll answer the first question:
Macs use Intel chips now, and modern processors are 64-bit.
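To get a feel for the AT&T syntax that GAS expects (a rough sketch; the exact instructions vary with compiler version and optimization level), you can let gcc emit the assembly for a trivial C function and read it:

    # add.c contains:  int add(int a, int b) { return a + b; }
    gcc -S -O1 add.c      # writes add.s in AT&T syntax
    cat add.s
    # A typical AT&T-style body (note the source-then-destination operand order):
    #     movl  %edi, %eax
    #     addl  %esi, %eax
    #     ret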

Resources