Toolchain and libraries - compilation

When we compile a toolchain, we need to specify which C library it will be built against. For example, I recently compiled a toolchain for the OpenRISC architecture, and the build gave me the option of choosing between uClibc and newlib.
Is it necessary to compile a toolchain with a library? When working on, say, embedded Linux, can't I just compile a library on the target platform, then use the toolchain (built without the library) and link the library with the user program? Thank you!

Well, yes, you have to specify the C library in order to build a toolchain. uClibc was aimed mainly at microcontrollers (and later at larger processors), musl is intended for small-memory systems (under roughly 32 MB), and glibc is meant for systems with plenty of memory: it is the most widely used and POSIX-compatible, but not very configurable. You also have to take care of the other supporting libraries (and their POSIX compatibility, etc.) while building the toolchain.
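In practice the C library choice ends up baked into the cross toolchain itself, and it usually shows in the toolchain's target tuple, so picking a compiler also picks a libc. A small illustration in CMake terms (the tuple names below are examples and vary between toolchain vendors):
# The libc is part of the toolchain's identity, visible in the target tuple:
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)                    # glibc-based toolchain
# set(CMAKE_C_COMPILER arm-linux-musleabihf-gcc)                 # musl-based toolchain
# set(CMAKE_C_COMPILER arm-buildroot-linux-uclibcgnueabihf-gcc)  # uClibc-based (Buildroot naming)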

Of course it is necessary to compile a toolchain with a library: with the C library built in, the compiler already knows where to find its headers, startup files and libraries, which makes the whole process more convenient. Libraries and the toolchain are closely tied together.

Related

Dual ABI support in a single file library? What's the easiest/best practice way?

How can we compile a shared library to support both ABIs in a single .so file using Modern CMake (>3.4)?
If this is not possible, what's the best practice here? I'd like to avoid hard-coded, custom code to keep maintenance straightforward.
I found this link, but it requires changing the code and is gcc-only: http://kayari.org/cxx/dualabi.html
Reason: my users are on CentOS 7 (old ABI exclusively) and CentOS 8 (both ABIs). The size of the library isn't important as long as the performance doesn't change.
Targets: gcc and cmake
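For context: libstdc++ chooses between the two ABIs per translation unit via the _GLIBCXX_USE_CXX11_ABI macro, so the common CMake-level fallback is to pin the whole library (and its consumers) to one ABI rather than shipping both in a single .so; the abi_tag trick on the linked page is what a genuine dual-ABI build requires. A minimal sketch, with mylib as a placeholder target and source name:
cmake_minimum_required(VERSION 3.5)
project(mylib CXX)
add_library(mylib SHARED mylib.cpp)                              # placeholder target/source
# Pin the old (pre-C++11) libstdc++ ABI; PUBLIC so consumers compile the same way.
target_compile_definitions(mylib PUBLIC _GLIBCXX_USE_CXX11_ABI=0)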

Cross compile with Linux libraries on Windows

I have some working libraries (Qt) built on Linux Mint, and I have already cross-compiled some test applications for ARM that work. Since it proved impossible to cross compile the same libs on Windows, despite all efforts and the excellent cross compiler (Linaro), I wonder whether a second approach is possible.
Is it possible to take the required .so files + headers and use them for cross compilation on Windows for the same ARM target (assuming the cross compiler stays the same)?
In other words, can I link against the Linux .so libraries just as I would against regular .a files?
Thank you very much,
You could always build GCC as a cross-compiler yourself for the desired target architecture and platform (a useful skill to have, anyway). I did it myself several times on Windows. I highly recommend this tutorial for further reading. It has been vastly improved since I used it to build my first cross-compiler several years ago. Good luck.

Portable method to package C++11 program sources

So, C++11 has been around for a while and, given that there are already compilers supporting it on most platforms, it would be nice to use it in some real software -- e.g. software that can be shipped in an as-portable-as-possible package, preferably providing ./configure and the like.
Because both Clang and GCC currently need the -std=c++11 flag to compile C++11 source, and both sometimes require specific flags to work correctly (see for example How to compile C++11 with clang 3.2 on OSX lion? or C++11 Thread not working), I'm quite afraid that the package won't work on some platforms that already support C++11, simply because of a wrong compiler invocation.
Q: Is there some standard way to compile C++11 correctly and portably? E.g. an autotools/autoconf check, or some list of compiler/platform directives covering all the options that might be needed? Or does the situation stem from the fact that C++11 implementations are currently marked as "experimental", and the standard will eventually stabilize and become the default choice, needing no extra compiler flags?
Thanks
-exa
Well, if you're trying to write portable code, I would recommend using CMake, a very powerful cross-platform, open-source build system.
Using CMake you should be able to identify the compilers available on your machine and then generate your makefiles with the flags you want in each case.
I have been using CMake for almost a year now, and it has significantly reduced the time spent getting a project to compile on different platforms.
I'm using CMake to generate Makefiles for C++11 projects. The only change I need to make in CMakeLists.txt is to add the following:
ADD_DEFINITIONS("-std=gnu++11")
ADD_DEFINITIONS("-D_GLIBCXX_USE_C99_STDINT_TR1")
ADD_DEFINITIONS("-D_GLIBCXX_HAS_GTHREADS")
However, as I use Qt, I recompiled the QtSDK with the newer gcc 4.8 and got a complete MinGW system that uses gcc 4.8.
With these changes, the project compiles and runs on Windows XP, Windows 7 and Linux, both 32- and 64-bit. I haven't tested it on OS X yet.
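For what it's worth, CMake 3.1 and newer can express the C++11 requirement portably, letting CMake choose the right flag (or none) for each compiler rather than hard-coding -std=... definitions. A minimal sketch, with the project and source names as placeholders:
cmake_minimum_required(VERSION 3.1)
project(example CXX)                 # "example" is a placeholder project/target name
set(CMAKE_CXX_STANDARD 11)           # ask for C++11
set(CMAKE_CXX_STANDARD_REQUIRED ON)  # fail at configure time if the compiler cannot provide it
set(CMAKE_CXX_EXTENSIONS OFF)        # prefer -std=c++11 over -std=gnu++11
add_executable(example main.cpp)     # main.cpp is a placeholder source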

Distro-provided cross compiler vs custom-built gcc

I intend to cross compile for Raspberry Pi, basically a small ARM computer. The host will be an i686 box running Arch Linux.
My first instinct is to use the cross compiler provided by Arch Linux, arm-elf-gcc-base and arm-elf-binutils. However, every wiki and post I read seems to use some custom-built version of gcc. They seem to spend significant time cooking their own gcc, but they never say WHY it is important to use their gcc over another.
Can stock, distro-provided cross compilers be used for building kernels and apps for the Raspberry Pi, or for ARM in general?
Is it necessary to have multiple compilers for the ARM architecture? If so, why, given that a single gcc can support all x86 variants?
If 2), then how can I deduce what target subset is supported by a particular version of gcc?
As a more general question, what use cases call for a custom gcc build?
Please be as technical as you can; I'd like to know WHY as well as how.
When developers talk about building software for a different machine (the target) than their own (the host) -- cross compiling -- they use the term toolchain to describe the set of tools necessary to build binary files. That's because when you need to build an executable binary, you need more than a compiler.
You need routines (crt0.o) to initialize the runtime according to the requirements of the operating system and standard libraries. You need a standard set of libraries, and those libraries need to be aware of the kernel on the target because of the system call API and several OS-level configurations (e.g. page size) and data structures (e.g. time structures).
On the hardware side, there are different ARM architecture versions. Architectures can be backward compatible, but a toolchain is by nature binary and targeted at a specific architecture. You can default to the most widespread architecture, but that won't be very fruitful for an already constrained environment (an embedded device); if you target the latest architecture, the result won't be usable on targets based on older architectures.
When you build a binary on your host for your host, the compiler can look up all the necessary bits from its own environment or use what's on the host - so most of the above details are invisible to the developer. However, when you build for a target different from your host, the toolchain must know about the hardware, OS and standard library details. The way you tell the toolchain these things is... by building it according to those details, which might require some level of bootstrapping (or you can pass an extensive set of parameters, if the toolchain supports that / was built for it).
So when there is a generic (stock) cross-compile toolchain, it already has some target specifics baked in, and those might not meet your requirements. Please see this recent question about the situation on Ubuntu for an example.
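To make the "extensive set of parameters" route concrete: with a generic ARM cross toolchain you typically spell out the board's specifics yourself, for instance in a CMake toolchain file. A rough sketch for an original Raspberry Pi class target (the compiler names, sysroot path and the exact -march/-mfpu/-mfloat-abi values are assumptions to adapt to your own toolchain):
# rpi-toolchain.cmake (illustrative only; adjust names and flags to your toolchain)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_SYSROOT /path/to/rpi/sysroot)                # headers + libs matching the target
set(CMAKE_C_FLAGS_INIT   "-march=armv6 -mfpu=vfp -mfloat-abi=hard")
set(CMAKE_CXX_FLAGS_INIT "-march=armv6 -mfpu=vfp -mfloat-abi=hard")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)           # never run target binaries on the host
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)            # only search libraries in the sysroot
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)            # only search headers in the sysroot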

Performance comparison between Windows gcc-compiled & Visual Studio-compiled code

I'm currently compiling an open source optimization library (native C++) supplied with makefiles for use with gcc. As I am a Windows user, I'm curious about the two options I see for compiling this: using gcc with MinGW/Cygwin, or manually building a Visual Studio project and compiling the source.
1) If I compile using MinGW/Cygwin + gcc, will the resulting .lib (static library) require any libraries from MinGW/Cygwin? I.e. can I distribute my compiled .lib to a Windows PC that doesn't have MinGW/Cygwin and will it still run?
2) Other than performance differences between the compilers themselves, is there an overhead associated with compiling using MinGW/Cygwin and gcc - i.e. does the emulation layer get compiled into the library, or does gcc build a native Windows library?
3) If speed is my primary objective for the library, which is the best method to use? I realise this is quite open-ended, and I may be best off running my own benchmarks, but if someone has experience here that would be great!
The whole point of Cygwin is its Linux emulation layer, and by default (i.e. if you don't cross-compile), binaries need cygwin1.dll to run.
This is not the case for MinGW, which creates binaries as 'native' as the ones from MSVC. However, MinGW comes with its own set of runtime libraries, in particular libstdc++-6.dll. This library can also be linked statically by using -static-libstdc++, in which case you also probably want to compile with -static-libgcc.
This does not mean that you can freely mix C++ libraries from different compilers (see this page on mingw.org). If you do not want to restrict yourself to an extern "C" interface to your library, you most likely will have to choose a single compiler and stick with it.
As to your performance concerns: Using Cygwin only causes a (minor?) penalty when actually interacting with the OS - where raw computations are concerned, only the quality of the optimizer matters.
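If you go the MinGW route and want the result to run on machines without MinGW installed, the -static-libstdc++ / -static-libgcc flags mentioned above are applied when linking the final executable or DLL. A CMake sketch (myprog is a placeholder target, and target_link_options needs CMake 3.13+):
add_executable(myprog main.cpp)       # placeholder target that links the compiled library
if(MINGW)
    # link the GNU runtime statically so libstdc++-6.dll / libgcc DLLs
    # are not needed on the target machine
    target_link_options(myprog PRIVATE -static-libstdc++ -static-libgcc)
endif()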
