I have some functional Qt libraries built on Linux Mint, and I have already cross-compiled some test applications for ARM that work. Since it proved impossible to cross-compile the same libs on Windows, despite all my efforts and the excellent Linaro cross compiler, I wonder whether a second approach is possible.
Is it possible to take the required .so files plus headers and use them when cross-compiling on Windows for the same ARM target (assuming the same cross compiler)?
In other words, can I link against Linux .so libraries just as I would against regular .a archives?
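Something like this is what I have in mind (just a sketch; the paths, triplet, and library names are placeholders):

    # On the Windows host (e.g. from an MSYS-style shell), point the ARM cross
    # compiler at headers and .so files copied over from the Linux build.
    arm-linux-gnueabihf-g++ app.cpp -o app \
        -I /c/arm-sysroot/include \
        -L /c/arm-sysroot/lib -lQt5Core -lQt5Gui

My understanding is that the .so files are target-format ELF, so the cross linker shouldn't care which OS they sit on, but I'd like confirmation.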
Thank you very much,
You could always build GCC as a cross compiler yourself for the desired target architecture and platform (a useful skill to have, anyway). I did it myself several times on Windows. I highly recommend this tutorial for further reading; it has been vastly improved since the time I used it to build my first cross compiler, several years ago. Good luck.
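For reference, the bare-bones recipe from that kind of tutorial usually looks something like this (a minimal sketch; the version numbers, paths, and target triplet are placeholders, and the target C library build is deliberately left out):

    # (run each configure from its own empty build directory)
    export TARGET=arm-none-eabi
    export PREFIX=$HOME/cross
    export PATH="$PREFIX/bin:$PATH"
    # 1) binutils first: the target assembler and linker
    ../binutils-2.40/configure --target=$TARGET --prefix=$PREFIX --disable-nls
    make && make install
    # 2) then a C-only gcc with no target libc yet (a "stage 1" compiler)
    ../gcc-13.2.0/configure --target=$TARGET --prefix=$PREFIX \
        --enable-languages=c --without-headers --disable-nls
    make all-gcc && make install-gcc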
Related
I have a set of more or less portable C/C++ sources sitting on a Linux development host that I would like to be able to:
compile for 32- and 64-bit Linux targets
cross-compile for 32- and 64-bit Windows targets
cross-compile for 32- and 64-bit Mac targets
and, ideally, without any runtime dependencies on emulation DLLs such as cygwin1.dll or the MinGW runtime, though I could use them if there's no other choice. If I have to use them, I'd prefer statically linking their functionality into my code.
The desired target binary is:
a shared library (.so) for Linux and Mac targets, and
a DLL for Windows (see the sketch just below).
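A hedged sketch of what those builds might look like with distro-packaged cross toolchains (the mingw-w64 triplets are the usual Debian/Ubuntu package names; mylib.c is a placeholder):

    # Linux targets (the 32-bit build needs multilib support installed)
    gcc -m64 -fPIC -shared mylib.c -o libmylib64.so
    gcc -m32 -fPIC -shared mylib.c -o libmylib32.so
    # Windows targets via mingw-w64; -static-libgcc avoids a runtime DLL dependency
    x86_64-w64-mingw32-gcc -shared mylib.c -o mylib64.dll -static-libgcc
    i686-w64-mingw32-gcc -shared mylib.c -o mylib32.dll -static-libgcc

The Mac targets are the hard part: Apple's toolchain and SDK generally have to come from OS X itself, so there is no equally simple distro package for that leg.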
I have no idea how to build a cross compiler (and the associated toolchain) from scratch. I hear that pre-built cross-compiler toolchains are available for various host-and-target combinations, but I don't know where to find them, or even how to use them without running into runtime crashes/coredumps later due to pointer-model subtleties (LP64, LLP64, etc.), wrong or inadequate compiler switches, or other misconfiguration.
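For concreteness on the pointer-model point: GCC exposes the target's type sizes as predefined macros, so a pre-built cross compiler can at least be sanity-checked before use (a sketch; the triplet is an example):

    # LP64 (64-bit Linux/Mac): long is 8 bytes; LLP64 (Win64): long stays 4
    x86_64-w64-mingw32-gcc -dM -E - </dev/null | grep -E '__SIZEOF_(LONG|POINTER)__'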
So far I've been unable to find relevant and complete information on the above, and what little I have managed to find is scattered over so many bits and pieces that I'm not even sure whether everything I've read is complete or even correct (i.e. applies fully, no more and no less, to my case).
I'm not a compiler expert, just a regular user of compilers. I would appreciate any information on achieving the compilation goals above.
I would like to cross-compile a library for Mac OS X on Linux, and I am considering imcross. The instructions on the site are simple, but every time you set up a cross-compiling environment you have to fix a lot of things, so I don't expect it to be straightforward. You can check on the website that the project has some limitations, but it is the best option I have come across.
Since this is not a priority for me right now (I have other things to do before tackling this task), I haven't set up the cross environment yet. I am going to do that in a few days' time.
I intend to cross compile for Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux: arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom-built version of gcc, and the authors seem to spend significant time cooking their own gcc. The problem is that they never say WHY their gcc matters over another.
1. Can stock, distro-provided cross compilers be used for building Raspberry Pi (or ARM in general) kernels and apps?
2. Is it necessary to have multiple compilers for the ARM architecture? If so, why, given that a single gcc can support all x86 variants?
3. If 2), then how can I deduce which target subset is supported by a particular gcc build?
4. More generally, what use cases call for a custom gcc build?
Please be as technical as you can; I'd like to know the WHY as well as the HOW.
When developers talk about building software for a machine (the target) different from their own (the host), i.e. cross compiling, they use the term toolchain to describe the set of tools necessary to build binary files, because building an executable binary takes more than a compiler.
You need startup routines (crt0.o) that initialize the runtime according to the requirements of the operating system and the standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system-call API and several OS-level configurations (e.g. page size) and data structures (e.g. time structures).
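As a quick illustration of those hidden pieces (the triplet here is an example, not from the question), passing -v to the gcc driver shows the startup files and libraries it silently adds to every link:

    # the collect2/ld line in the output lists crt1.o, crti.o, crtbegin.o, ...
    # as well as -lc and -lgcc, all taken from the toolchain's target sysroot
    arm-linux-gnueabihf-gcc -v -o hello hello.c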
On the hardware side, there is a whole family of ARM architecture versions. Architectures can be backward compatible, but a toolchain is by nature binary and targeted at one specific architecture. You can target the most widespread architecture by default, but that won't be very fruitful in an already constrained environment (an embedded device); and if you target the latest architecture, the result won't be usable on targets based on older architectures.
When you build a binary on your host for your host, the compiler can look up all the necessary bits from its own environment or use what's on the host, so most of the above details are invisible to the developer. However, when you build for a target different from your host, the toolchain must know about the hardware, OS, and standard-library details. The way you tell the toolchain about them is... by building it according to those details, which may require some level of bootstrapping (or via an extensive set of parameters, if the toolchain supports that / was built for it).
So when there is a generic (stock) cross-compile toolchain, it already has some target specifics baked in, and those might not meet your requirements. Please see this recent question about the situation on Ubuntu for an example.
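A hedged sketch of how to see what a stock toolchain has baked in (the triplet is an example):

    arm-linux-gnueabihf-gcc -dumpmachine    # the target triplet
    arm-linux-gnueabihf-gcc -print-sysroot  # where target headers/libs are sought
    # the configure line reveals defaults such as --with-arch, --with-fpu, --with-float
    arm-linux-gnueabihf-gcc -v 2>&1 | grep 'Configured with'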
I am a complete newbie to the ARM world. I need to be able to write C code, compile it, and then download it into an ARM emulator and execute it. I need to use the GCC 4.1.2 compiler for compiling the C code.
Can anybody point me in the right direction on the following issues?
What tool chain to use?
What emulator to use?
Are there tutorials or guides on setting up the tool chain?
Building a GCC cross compiler yourself is pretty easy. The GCC support library and the C library and other such things, not so much; an embedded library and the like, a little harder. It depends on how embedded you want to get. I have little use for libgcc or a C library, so rolling my own works great for me.
After many years of doing this (perhaps it is an age thing), I now just go and get the CodeSourcery tools; the Lite version works great. YAGARTO, devkitARM, WinARM or something like that (the site with a zillion examples) all work fine. Emdebian also has a good pre-built toolchain. A number of these places, if not all, have info on how they built their toolchains from GNU sources.
You asked about GCC, but bear in mind that LLVM is a strong competitor, and as far as cross compiling goes, since it always cross compiles, it is a far easier cross compiler to download, build, and get working than GCC. The recent versions now produce code (for ARM) that competes with GCC on performance. GCC is in no way a performance leader; other compilers I have used run circles around it, but it has been improving with each release (well, the 3.x versions sometimes produce better code than the 4.x versions, but you need 4.x for the newer cores and Thumb-2). Even if you go with GCC, try the stable release of LLVM from time to time.
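A hedged illustration of that point (the triplet and file names are examples): a single clang binary selects its target on the command line, with no separate cross build required.

    clang --target=arm-linux-gnueabihf -c hello.c -o hello.o
    llvm-objdump -d hello.o   # confirms ARM code was emitted
    # note: -c keeps this to compilation; linking additionally needs a target sysroot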
qemu is a good emulator; depending on what you are doing, the Game Boy Advance emulator VisualBoyAdvance is good, and there are a couple of NDS emulators too. GDB and other places include what appears to be ARM's own ARMulator. I found it hard to extract and use, so I wrote my own, but being lazy I only implemented the Thumb instruction set; I call mine the Thumbulator. It is easy to use, and far easier than qemu or the ARMulator to add peripherals to and to watch and debug your code. YMMV.
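A hedged sketch of the two common qemu workflows (the file names and board model are examples):

    # user-mode emulation: run a statically linked ARM Linux binary directly
    arm-linux-gnueabihf-gcc -static -o hello hello.c
    qemu-arm ./hello
    # system emulation: boot a bare-metal image on an emulated board
    qemu-system-arm -M versatilepb -kernel image.bin -nographic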
Hmmm, I posted a similar answer for someone recently. Google "arm verilog" and at umich you will find a file isc.tgz containing an ARM10 behavioural model (behavioural meaning you cannot make a chip from it, which is why the Verilog can be found on the net). For someone wanting to learn an instruction set, watching your code execute at the gate level is about as good as it gets. Be careful: like a drug, you can get addicted, and then have a hard time going back to silicon, where you have relatively zero visibility into your code while it is executing. Somewhere on Stack Overflow I posted the steps involved in getting that ARM10 model and another file or two to turn it into an ARM emulator using Icarus Verilog. GTKWave is a good, free tool for examining the wave (VCD) files.
Above all else you will need the ARM ARM (the ARM Architecture Reference Manual). Just google it and find it on ARM's web site. There is pseudo-code for each instruction teaching you what it does. Use the Thumbulator or ARMulator or others if you need to understand more (MAME has an ARM core in it too). I make no guarantees that the Thumbulator is 100% debugged or accurate; I took some common programs and compared their output to silicon, both ARM and non-ARM, to debug the core.
For a toolchain, you can use YAGARTO: http://www.yagarto.de/
For an emulator, you can use Proteus ISIS: http://www.labcenter.com/index.cfm (there is a demo version).
As for tutorials, well, google them =)
Good luck!
Reference this question about compiling. I don't understand how my program for the Mac can use the right -arch flags, compile with those -arch flags (flags that match the system I am on, a PPC64 G5), and still produce the wrong object code.
Also, if I were on Linux using a cross compiler and produced 10.5 code for the Mac, how would that be any different from what I described above?
Background: I have tried to compile various Apache modules. They compile with -arch ppc, -arch ppc64, etc. I get no errors, and I get my mod_whatever.so. But Apache always complains that some symbol isn't found. Apparently it has to do with what the compiler produces, even though file reports the result is for ppc, ppc64, i386, and x86_64 (a universal binary) and it seems to match all the other .so mods I have.
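For what it's worth, the builds look something like this (a sketch assuming a direct gcc invocation rather than apxs; mod_whatever is the placeholder name I used above):

    gcc -arch ppc -arch ppc64 -arch i386 -arch x86_64 \
        -bundle -o mod_whatever.so mod_whatever.c
    file mod_whatever.so   # reports a universal binary with all four slices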
I guess I don't understand how it can compile for my system with no problem and my system then say it can't use it. Maybe I don't understand what the compiler is actually giving me.
EDIT: All error messages and the complete process can be seen here.
Thank you.
Looking at the other thread and elsewhere, and without a G5 or an OS X Server installation, I can only make a few comments and suggestions, but perhaps they will help.
It's generally not a good idea to modify the OS vendor's installed software. Installing a new Apache module is less problematic than, say, overwriting an existing library, but you're still at the mercy of the vendor in that a Software Update could delete your modifications; beyond that, you have to figure out how the vendor's version was built in the first place. A common practice in the OS X world is to avoid this by making a completely separate installation of an open source product like Apache, using, for instance, MacPorts. That has its cons, too: to achieve a high level of independence, MacPorts will often download and build a lot of dependent packages for things that are already in OS X, but there's no harm in that other than some extra build cycles and disk space.
That said, it should be possible to build and install Apache modules to supplement those supplied by Apple. Apple publishes the changes it makes to open source products here; you can drill down through the various versions there to find the apache directory, which contains the source, Makefile, and applied patches. That might be of help.
Make sure that the mod_*.so files you build are truly 64-bit and don't depend on any non-64-bit libraries. Use otool -L mod_*.so to see the dynamic libraries each module references, and then use file on those libraries to ensure they all have ppc64 variants.
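A hedged sketch of those checks (mod_example.so is a placeholder name):

    otool -L mod_example.so    # dynamic libraries the module references
    file mod_example.so        # should list a ppc64 slice among the architectures
    lipo -info mod_example.so  # another way to list the architectures present
    # then run file on each library otool reported, checking for ppc64 variants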
Make sure you are using up-to-date developer tools (Xcode 3.1.3 is current).
While the developer toolchain uses many open source components, Apple has enhanced many of them, and there are big differences in OS X's ABIs, universal binary support, dynamic libraries, etc. The bottom line is that cross-compilation of OS X-targeted object code on Linux (or any other non-OS X platform) is neither supported nor practical.
I've found a company that provides GCC-based toolchains for ARM and MIPS, but I don't know how they differ from vanilla GCC. Of course, they bundle other pieces of software such as Eclipse, but looking only at GCC and binutils, are they different or just the same?
One big difference between a pre-compiled toolchain (like those provided by CodeSourcery, MontaVista, Wind River, etc.) and one built from source is convenience. Building a toolchain from scratch, especially for cross-compiling purposes, is tedious and can be a complete pain. Also, the newest versions of glibc (or uClibc), gcc, and binutils aren't always compatible with one another, as they're developed independently. There are open source tools that make this process easier (like crosstool-NG; see the sketch below), but having a proven toolchain that's been optimized for a certain platform can save a lot of time and headaches, especially at the beginning of a new project. It also helps to have technical support when things go screwy. Of course, you usually have to pay for it.
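For instance, with crosstool-NG the from-scratch route usually reduces to something like this (a sketch; the sample name is an example):

    ct-ng list-samples               # show the preconfigured target samples
    ct-ng arm-unknown-linux-gnueabi  # select one as the starting configuration
    ct-ng menuconfig                 # tweak gcc/binutils/libc versions, etc.
    ct-ng build                      # fetch, patch, and build the whole chain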
That being said, compiling your own toolchain will most likely save you money and allows more flexibility down the road. MontaVista, as far as I know, doesn't include support for older platforms in its newest toolchain releases. For example, if you bought MontaVista Pro 4.X and it included a toolchain with gcc 3.3.x, that's the toolchain you're most likely stuck with for the life of your project; upgrading to a toolchain with gcc 4.x most likely wouldn't be an option.
Hope that helps.