Setting up cross-compiler for existing codebase on new machine - linux-kernel

I've done all my development work for an embedded Linux device (Gumstix) in a Linux VM, and I would like to move the codebase to my host Linux computer. The cross-compiler was set up before I inherited the codebase, so I'm not sure how the compiler was configured. I have some questions about how to set up the cross-compiler.
The compiler on the VM is arm-linux-gnueabihf-gcc.
1. Is the cross-compiler kernel specific? (I'm using Linux kernel 3.17.)
2. Is the cross-compiler target-device specific? That is, do I need to use a Gumstix-specific compiler, or is arm-linux-gnueabihf-gcc satisfactory? Does this compiler need to be configured manually?
3. Is there a way to see/import the configuration settings of the working VM compiler?
4. Does arm-linux-gnueabihf-gcc use the same standard library source code as the native gcc compiler?
5. I've seen varying approaches to setting up cross-compilers on the web. Where can I find comprehensive information on setting up a cross-compiler (more than a how-to; something that also explains why)?
Thank you

The cross compiler is not kernel specific, nor is it target-device specific. It is specific to the architecture of the SoC or processor you are targeting. So if your current compiler is arm-linux-gnueabihf-gcc, that implies it can compile code for 32-bit ARM processors that have hardware floating-point support. Depending on your host Linux system, you can install a similar compiler using the package manager, or you can download a prebuilt toolchain.
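A quick way to recover the configuration of the VM's compiler (which also answers the question about seeing/importing its settings) is to ask the compiler itself; both options below are standard gcc:

    # print the target triplet the compiler generates code for
    arm-linux-gnueabihf-gcc -dumpmachine
    # print the full configure line it was built with
    # (look for --with-arch, --with-fpu, --with-float)
    arm-linux-gnueabihf-gcc -v

Matching those configure flags on the new machine's toolchain is usually enough to get compatible output.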
Different people will recommend different approaches, and will disagree about whether a particular approach is easy or difficult. Regardless, I tend to recommend building the complete target image and generating an SDK for development using something like Yocto/OpenEmbedded or Buildroot.
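As a rough sketch of the Yocto route (the image name is just a common example; any image recipe works):

    # from a Yocto/OpenEmbedded checkout
    source oe-init-build-env
    bitbake core-image-minimal                    # build the complete target image
    bitbake core-image-minimal -c populate_sdk    # generate a relocatable cross SDK installer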
Not sure exactly what you mean by Q4.

Related

Cross-compile on a Linux host for various targets

I have a set of more or less portable C/C++ sources sitting on a Linux development host that I would like to be able to:
compile for 32- and 64-bit Linux targets
cross-compile for 32- and 64-bit Windows targets
cross-compile for 32- and 64-bit Mac targets
and, ideally, without any runtime dependencies on emulation DLLs like cygwin1.dll, the MinGW runtime, etc., though I could use them if there's no other choice. If I have to use them, I'd prefer to statically link their functionality into my code.
The desired target binaries are:
a shared library (.so) for Linux and Mac targets, and
a DLL for Windows.
I have no idea how to build a cross-compiler (and the associated toolchain) from scratch. I hear that pre-built cross-compiler toolchains are available for various host-and-target combinations, but I don't know where to find them, or even how to use them without running into runtime crashes/coredumps later due to pointer-model subtleties (LP64, LLP64, etc.), wrong or inadequate compiler switches, or other misconfiguration.
So far I've been unable to find relevant, complete information on the above, and what little I have found is scattered across so many bits and pieces that I'm not even sure whether everything I've read is complete or even correct (i.e., applies fully, no more and no less, to my case).
I'm not a compilers expert, just a regular user of them. I would appreciate information on achieving the above compilation goals.
I would like to cross-compile a library for Mac OS X on Linux, and I am considering imcross. The instructions on the site are simple, but every time you set up a cross-compiling environment you have to fix a lot of things, so I wouldn't expect it to be straightforward. You can see on the website that the project has some limitations, but it is the best I have come across.
Since this is not a priority for me right now (I have other things to do before tackling this task), I haven't set up the cross environment yet. I am going to do that in a few days' time.
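For the Linux and Windows targets in the original question, prebuilt distro packages usually suffice. A minimal sketch, assuming a multilib gcc and the mingw-w64 cross packages are installed (file names are illustrative):

    # Linux shared libraries, 64- and 32-bit
    gcc -m64 -shared -fPIC -o libfoo64.so foo.c
    gcc -m32 -shared -fPIC -o libfoo32.so foo.c
    # Windows DLLs via mingw-w64; -static-libgcc avoids a runtime DLL dependency
    x86_64-w64-mingw32-gcc -shared -static-libgcc -o foo64.dll foo.c
    i686-w64-mingw32-gcc -shared -static-libgcc -o foo32.dll foo.c

The Mac targets remain the hard part, which is where projects like imcross come in.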

ARM and GCC Compiling

Hopefully this hasn't been asked and answered already, but I just had a quick question on ARM.
Specifically, when compiling Android (which has a lot of C and C++), if you use GCC to compile, doesn't that create x86-based code? How is it that an ARM processor, which uses a reduced instruction set, can interpret this code and run the way it does?
Thanks!
GCC doesn't just compile for x86. It can be built to target practically any instruction set; if you wanted to, you could add support for a new one just by adding a few files.
And ARM isn't a reduced version of x86; it's a completely different instruction set. There are things ARM has that x86 doesn't, and vice versa.
Building gcc goes through a configuration step, part of which is specifying a back end. The back end is responsible for opcode generation. A typical compiler has many phases. Briefly:
Parser - converts text into a data representation.
Front end - optimizes by changing code constructs; possibly language specific.
Middle end - performs optimizations that are common to any compiler.
Back end - performs optimizations specific to the target CPU.
See the Stack Overflow compiler wiki for more.
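You can actually watch these phases in gcc by asking it to dump its intermediate representations; both flags are standard gcc options (expect a large number of dump files):

    # GIMPLE dumps: the front- and middle-end passes
    gcc -O2 -c example.c -fdump-tree-all
    # RTL dumps: the target-specific back-end passes
    gcc -O2 -c example.c -fdump-rtl-all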
So parts one to three are common to the x86 and ARM versions of gcc (or any gcc). The Android compiler is a version of gcc that has been configured to generate ARM code. It is a different compiler from the one that normally runs on an x86. You may be running an ARM emulator on a PC and believing that the x86 runs this code; in fact, a virtual ARM machine is running it. An x86 processor cannot run ARM code natively.
The Android gcc is an ARM-configured gcc. A normal Linux distribution's gcc is configured for x86 or x86_64.
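You can see this directly by compiling the same file with both compilers and inspecting the output (a sketch, assuming both toolchains are installed):

    gcc -c hello.c -o hello_x86.o                      # host compiler
    arm-linux-gnueabihf-gcc -c hello.c -o hello_arm.o  # cross compiler
    file hello_x86.o hello_arm.o
    # hello_x86.o: ELF 64-bit LSB relocatable, x86-64 ...
    # hello_arm.o: ELF 32-bit LSB relocatable, ARM ...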
Something is missing above: who compiles the compiler? In both cases, an x86 compiler compiles the new compiler. The difference is the selected back end: one is x86, the other ARM. Both compilers run on an x86, but they generate code for different targets. A given gcc build can generate code for ARM or for x86; never both via some command-line switch. A compiler build usually refers to three different CPU types:
Build - the machine where the compiler is built. This is the compiler's compiler.
Host - the machine the compiler runs on; not its output, but the compiler itself.
Target - the machine the back end targets; the one code is generated for.
I think people assume that because both compilers run on the same host, they must generate code for the same target. But that is not true; it is a little mind-bending at first. Depending on the setup, you may need compilers for each of these machines to produce the final compiler.
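These three names appear verbatim when gcc itself is configured; a hedged sketch (the triplets are illustrative):

    # built on x86, runs on x86, emits ARM code: a cross compiler
    ./configure --build=x86_64-pc-linux-gnu \
                --host=x86_64-pc-linux-gnu \
                --target=arm-linux-gnueabihf

When build, host and target are all equal you get a native compiler; host differing from target gives a cross compiler; build differing from host is the so-called Canadian cross.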
The first compiler for any machine is usually a cross compiler; the exceptions were bootstrapped long ago by people writing primitive compilers directly in assembler.
See also: Cross compiler.
To put it simply: when you're building for ARM on your x86 computer, you're using a cross-compiler, that is, a compiler that runs on one platform but generates code for another. This is extremely common when developing for embedded or mobile platforms.

Distro provided cross compiler vs custom built gcc

I intend to cross compile for Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux, arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom gcc build, and people seem to spend significant time cooking their own gcc. The problem is that they never say WHY it is important to use their gcc over another.
1. Can stock, distro-provided cross compilers be used to build kernels and apps for the Raspberry Pi, or ARM in general?
2. Is it necessary to have multiple compilers for the ARM architecture? If so, why, when a single gcc can support all x86 variants?
3. If 2), then how can I deduce what target subset is supported by a particular version of gcc?
4. More generally, what use cases call for a custom gcc build?
Please be as technical as you can; I'd like to know WHY as well as how.
When developers talk about building software (cross compiling) for a machine (the target) different from their own (the host), they use the term toolchain to describe the set of tools necessary to build binary files. That's because building an executable binary takes more than a compiler.
You need startup routines (crt0.o) that initialize the runtime according to the requirements of the operating system and the standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system-call API and several OS-level configuration details (e.g., page size) and data structures (e.g., time structures).
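All of those target-side pieces live in the toolchain's sysroot, which you can locate directly (-print-sysroot is a standard gcc option; the path shown is illustrative, and the exact layout varies by distro):

    arm-linux-gnueabihf-gcc -print-sysroot
    # e.g. /usr/arm-linux-gnueabihf: holds the target's crt*.o, libc and kernel headers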
On the hardware side, there are several ARM architecture versions. Newer architectures can be backward compatible, but a toolchain by nature produces binaries targeted at one specific architecture. You could default to the most widespread architecture, but that won't be very fruitful in an already constrained environment (an embedded device); and if you target the latest architecture, the output won't run on targets based on older ones.
When you build a binary on your host, for your host, the compiler can pick up all the necessary bits from its own environment or from the host, so most of the above details are invisible to the developer. However, when you build for a target different from your host, the toolchain must know about the hardware, OS and standard-library details. The way you tell the toolchain these things is by building it according to those details, which may require some bootstrapping (or via an extensive set of parameters, if the toolchain supports that / was built for it).
So when there is a generic (stock) cross-compile toolchain, it already has some target specifics baked in, and those might not meet your requirements. See this recent question about the situation on Ubuntu for an example.
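Before building your own gcc, it's worth checking what a stock toolchain defaults to and how far command-line flags can take you; the flags below are standard gcc, and the armv6 values are the classic Raspberry Pi 1 example:

    # what the toolchain was configured to emit by default
    arm-linux-gnueabihf-gcc -Q --help=target | grep -E 'march|mfpu|mfloat-abi'
    # overriding per-invocation works for the code you compile...
    arm-linux-gnueabihf-gcc -march=armv6 -mfpu=vfp -mfloat-abi=hard -c app.c
    # ...but the prebuilt libc/crt objects keep whatever architecture they were
    # built for, which is exactly why people cook their own gcc for such boards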

Porting Linux kernel 2.6 to new MIPS board

I want to port Linux kernel 2.6.x to a new MIPS board. Unfortunately, I can't find good, current documentation with a step-by-step explanation. I hope you'll help me. Paper books are OK too.
Thank you in advance!
First, get your hands on a MIPS toolchain. You're going to need it to compile the kernel. I've used buildroot a few times, including for building a MIPS toolchain.
But buildroot offers a lot more than just that:
Buildroot can generate any or all of a cross-compilation toolchain, a root filesystem, a kernel image and a bootloader image. Buildroot is useful mainly for people working with small or embedded systems, using various CPU architectures (x86, ARM, MIPS, PowerPC, etc.): it automates the building process of your embedded system and eases the cross-compilation process.
If you would like to do this process manually, I suggest you take a look at this. It's not for MIPS, but it shows the generic formula (you'll probably have to find and apply MIPS patches to the kernel before compiling it). Try buildroot, though: it does all of this automagically!
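A minimal sketch of the buildroot workflow (the defconfig name is only an example; pick whatever board is closest to yours):

    git clone https://git.buildroot.net/buildroot && cd buildroot
    make list-defconfigs                 # ready-made board configurations
    make qemu_mips32r2_malta_defconfig   # example MIPS starting point
    make menuconfig                      # adjust toolchain, kernel, rootfs options
    make                                 # results land in output/images/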
I must also recommend reading Jun Sun's Linux MIPS Porting Guide.

How can I compile object code for the wrong system? (cross-compiling question)

Reference this question about compiling. I don't understand how my program for the Mac can use the right -arch flags, compile with those flags for the system I am on (a ppc64 G5), and still produce the wrong object code.
Also, if I were on Linux and used a cross compiler to produce 10.5 code for the Mac, how would this be any different from what I described above?
Background: I have tried to compile various Apache modules. They compile with -arch ppc, ppc64, etc. I get no errors and I get my mod_whatever.so. But Apache always complains that some symbol isn't found. Apparently, it has to do with what the compiler produces, even though the file type says it is for ppc, ppc64, i386, x86_64 (a universal binary) and seems to match all the other .so mods I have.
I guess I don't understand how it could compile for my system with no problem and then say my system can't use it. Maybe I do not understand what a compiler is actually giving me.
EDIT: All error messages and the complete process can be seen here.
Thank you.
Looking at the other thread and elsewhere, and without a G5 or an OS X Server installation, I can only make a few comments and suggestions, but perhaps they will help.
It's generally not a good idea to modify the OS vendor's installed software. Installing a new Apache module is less problematic than, say, overwriting an existing library, but you're still at the mercy of the vendor: a Software Update could delete your modifications, and, beyond that, you have to figure out how the vendor's version was built in the first place. A common practice in the OS X world is to avoid this by making a completely separate installation of an open source product, like Apache, using, for instance, MacPorts. That has its cons, too: to achieve a high level of independence, MacPorts will often download and build a lot of dependent packages for things that are already in OS X, but there's no harm in that other than some extra build cycles and disk space.
That said, it should be possible to build and install Apache modules to supplement those supplied by Apple. Apple publishes the changes it makes to open source products here; you can drill down through the various versions to find the apache directory, which contains the source, Makefile and applied patches. That might be of help.
Make sure that the mod_*.so files you build are truly 64-bit and don't depend on any non-64-bit libraries. Use otool -L mod_*.so to see the dynamic libraries each one references, then use file on those libraries to ensure they all have ppc64 variants.
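Concretely, the checks look like this (mod_whatever.so is the module name from the question; the library path is illustrative):

    # which architecture slices did the build actually produce?
    lipo -info mod_whatever.so
    # which dynamic libraries does it pull in?
    otool -L mod_whatever.so
    # confirm each of those has a ppc64 slice
    file /usr/lib/libSystem.B.dylib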
Make sure you are using up-to-date developer tools (Xcode 3.1.3 is current).
While the developer toolchain uses many open source components, Apple has enhanced many of them, and there are big differences in OS X's ABIs, universal binary support, dynamic libraries, etc. The bottom line is that cross-compiling OS X-targeted object code on Linux (or any other non-OS X platform) is neither supported nor practical.
