I downloaded crosstool-ng-1.24.0 (not yet installed). I want to build a cross toolchain for an ARM target, but I need specific component versions to match the target ARM board for which the toolchain generates code: Linux kernel 2.6.26.5, GNU GCC 4.1.2, glibc 2.5.
Since crosstool-ng does not ship these component versions, how can I add and install them? I checked the toolchain sources and documentation but didn't find any instructions on that. The 'packages' folder contains the separate components, but only in the form of patches. I would prefer to download the required components myself and keep them locally, in a folder. How do I do this correctly?
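crosstool-ng can pick up tarballs from a local directory before trying to download anything. A rough sketch of that workflow, assuming your ct-ng release still offers these (very old) component versions in its menus, which is not guaranteed for 1.24.0:

    # Build and install crosstool-ng itself (paths are examples)
    ./configure --prefix=$HOME/ctng && make && make install
    export PATH=$HOME/ctng/bin:$PATH

    # Pre-place the component tarballs in a local directory
    mkdir -p $HOME/src
    cp linux-2.6.26.5.tar.bz2 gcc-4.1.2.tar.bz2 glibc-2.5.tar.bz2 $HOME/src/

    ct-ng menuconfig
    # Under "Paths and misc options", point "Local tarballs directory"
    # (CT_LOCAL_TARBALLS_DIR) at $HOME/src; ct-ng then looks there first.
    ct-ng build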
I am sorry for the naive question, but I could not understand the difference between these Yocto variables. The manual says:
TOOLCHAIN_HOST_TASK: Lists packages that make up the host part of the SDK (i.e. the part that runs on the SDKMACHINE). When you use bitbake -c populate_sdk to create the SDK, a set of default packages apply. This variable allows you to add more packages.
And
TOOLCHAIN_TARGET_TASK: Lists packages that make up the target part of the SDK (i.e. the part built for the target hardware).
I could not understand the difference between the host part of the SDK and the target part. As far as I understand, the host part is what we unpack on our host PC and use for cross-development. What is the target part of the SDK?
The recipes added to TOOLCHAIN_TARGET_TASK will be cross-compiled for the target architecture, and included in the target sysroot in the SDK.
The recipes added to TOOLCHAIN_HOST_TASK will be built to run on the developer machine.
So if you want a certain library available in the SDK, so that you can develop applications linking to it, add it to TOOLCHAIN_TARGET_TASK. Then the cross-compiled library and its header files will be available in the SDK.
If, on the other hand, you have a tool that's needed during the build, like a code generator or cmake, you add it to TOOLCHAIN_HOST_TASK so that it's available on the developer machine during the build of the target software.
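For example (the package names are illustrative and depend on your layers; the override syntax shown is the older underscore style), the additions would go in local.conf or your SDK image recipe:

    # Target part: cross-compiled library + headers end up in the SDK's target sysroot
    TOOLCHAIN_TARGET_TASK_append = " libsqlite3 sqlite3-dev"

    # Host part: tools that must run on the developer machine
    TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"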
I develop a utility in Go that requires a recent version of sqlite. I'm only interested in targeting a specific architecture, to be specific: x64 Linux. I'm developing the utility on Mac OS X, I'm using the go-sqlite3 driver, and I build with GNU Make + Glide. In order to cross-compile on my Mac I pass specific arch flags to make.
The package repos on the Linux platforms that I'm targeting usually ship quite old versions of sqlite that lack features I need in my utility.
I can manually compile and install the required version of sqlite on every platform I need, but that is quite cumbersome. I wonder if there is a good way to either statically link a specific version of sqlite or somehow bundle the utility with a specific version of the sqlite dynamic library.
Even though I mention sqlite a lot, this question generalizes to other libraries: how do you bundle a Go app with a specific version of a C library when only an outdated version of it may be installed on the target platform?
Also: how can development of the utility be organized so that other devs won't need to manually compile and install a specific version of the library? The preference is a Makefile that builds all the binaries for the required target platforms. I see that I could just copy the source of a specific version of the library (e.g. sqlite) into my utility's repo, though I wonder if there is a better option; maybe I can somehow use Glide dependencies for that purpose and build the library I need as part of my other dependencies.
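Worth noting: mattn/go-sqlite3 bundles the SQLite amalgamation source and compiles it in via cgo, so the binary does not depend on the system's sqlite at all; the remaining problem is cross-compiling cgo code from macOS. A minimal sketch, assuming a Linux cross C compiler is installed (x86_64-linux-musl-gcc here is an example, e.g. from Homebrew's musl-cross):

    # Cross-compile a cgo binary from macOS to linux/amd64.
    # go-sqlite3 carries its own SQLite sources, so no system sqlite is needed.
    CGO_ENABLED=1 GOOS=linux GOARCH=amd64 CC=x86_64-linux-musl-gcc \
    go build -ldflags '-linkmode external -extldflags "-static"' -o myutil .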
I am using Buildroot's toolchain to cross-compile applications for ARM. However, some applications require libraries that have not been compiled for that toolchain. I have those libraries in my host toolchain, like -ljack, -lfftw, etc.
I need to know: if I get tarballs of the required packages, how can I configure them so that the libraries are compiled by arm-gcc and the headers/libraries are copied to the /usr and /include directories of Buildroot?
That way I should be able to access these libraries via Buildroot's toolchain.
Thanks,
Well, you need to integrate them into Buildroot.
Take fftw for example: in that particular case, fftw is already available in Buildroot, and you just have to enable it in your build. Go to Target packages->Libraries->Other and enable fftw.
If you don't know where to find a package, run make menuconfig and type Ctrl-/ to get a search box. There you could type e.g. fftw and learn where in the menu system it is located and what dependencies it has.
If fftw (or some other library you need) isn't available in Buildroot, you need to add it yourself. See e.g. adding packages to Buildroot.
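For orientation, a new package boils down to two small files (the name libfoo and the version are placeholders; the Buildroot manual's chapter on adding packages is the authoritative reference):

    # package/libfoo/Config.in
    config BR2_PACKAGE_LIBFOO
        bool "libfoo"
        help
          Example third-party library.

    # package/libfoo/libfoo.mk
    LIBFOO_VERSION = 1.0
    LIBFOO_SITE = http://example.com/downloads
    LIBFOO_INSTALL_STAGING = YES  # put headers/libs into staging so other packages can link
    $(eval $(autotools-package))

You also have to source the new Config.in from package/Config.in so the package shows up in menuconfig.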
Of course we all know building recent GCC versions requires the supplementary packages GMP and MPFR (hard requirements since GCC 4.3) and MPC (since GCC 4.5) to be present.
There's a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately and then tell make where the binaries are located during GCC build time.
2) Download each supporting package, untar it and move the source into your GCC source directory; make will then automatically build each of the packages when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2)
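Concretely, the two options look roughly like this (version numbers and paths are illustrative):

    # Option 1: build each prerequisite separately, then point configure at it
    ../gmp-6.1.2/configure --prefix=/opt/gccdeps && make && make install   # same for mpfr, mpc
    ../gcc-src/configure --with-gmp=/opt/gccdeps --with-mpfr=/opt/gccdeps \
        --with-mpc=/opt/gccdeps ...

    # Option 2: drop the sources into the GCC source tree and let make handle them
    cd gcc-src
    ./contrib/download_prerequisites   # links gmp/, mpfr/, mpc/ into the tree
    mkdir ../gcc-build && cd ../gcc-build
    ../gcc-src/configure ... && make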
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping each package's source into my GCC source directory and letting make figure it out?
I've seen it done more frequently in various build scripts by pre-compiling each package to a binary, and then telling make where they are located during gcc compilation. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually.
1) A package maintainer who wants to build separately, as they want separate packages for mpfr et al.
2) Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
3) A GCC developer who wants to keep their source and build trees small, as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.
I am working on CentOS 6 machines, which have very old GCC/glibc versions. I want to build the whole glibc/binutils/gcc toolchain with the latest, or at least very recent, versions in order to get the C++11 support in recent GCC, ld.gold in recent binutils, and possibly improvements in recent glibc.
I want to put the whole toolchain in a separate directory and not influence any existing system files. I also want to build gcc with --with-sysroot so that, when using the new gcc, I don't need to specify -I/some/directory/include and -L/some/directory/lib or whatever other parameters. Also, the generated executables should automatically use the new ld-linux-xxxxx program loader, which will automatically find the new libc.so.
Does anyone know of a tutorial for this task?
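For reference, the configure step usually looks something like this (versions and prefix are examples; a genuinely self-contained toolchain also needs matching binutils first and a glibc staged under the sysroot, which is the hard part):

    PREFIX=/opt/toolchain
    ../binutils-2.34/configure --prefix=$PREFIX && make && make install
    ../gcc-9.3.0/configure --prefix=$PREFIX --with-sysroot=$PREFIX/sysroot \
        --enable-languages=c,c++ --disable-multilib
    make && make install
    # Binaries built with $PREFIX/bin/gcc then resolve headers and libraries
    # under the sysroot without explicit -I/-L flags.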
The compiler is very dependent on glibc; even if you manage to build the compiler in a chrooted system or equivalent, you will also need to build all the libraries needed by the programs you will build with this new compiler.
The best you can do is use a fresh new system (a VM or whatever) or upgrade your existing one.
You can download the latest toolchain from Openembedded or Yocto.
Here you don't have to do any package installation to your current system. Just download the toolchain, source the environment, and that's it: you are ready to check the C++11 support.
The location to download the toolchain:
http://downloads.yoctoproject.org/releases/yocto/yocto-1.7/toolchain/ (Just select the architecture either 32bit or 64 bit based on your machine support)
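The whole procedure is roughly (the exact installer filename differs per release and target; check the URL above for the one matching your machine):

    # Example for an x86_64 host
    wget http://downloads.yoctoproject.org/releases/yocto/yocto-1.7/toolchain/x86_64/poky-glibc-x86_64-core-image-sato-core2-64-toolchain-1.7.sh
    sh poky-glibc-x86_64-core-image-sato-core2-64-toolchain-1.7.sh   # installs under /opt/poky/1.7 by default
    source /opt/poky/1.7/environment-setup-core2-64-poky-linux
    $CXX -std=c++11 hello.cpp -o hello   # $CXX now points at the SDK compiler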
If you need the latest toolchain, you'd better migrate to Fedora.
If you can't or won't, the best bet is to get the pieces as source RPMs for CentOS and Fedora, unpack them, and fix up the CentOS packages by pilfering the sources and patches from Fedora. Take care that they don't overrule the system packages: correct the versions and fix them to install elsewhere (don't mess up your system too much! /usr/local comes to mind). The pieces are at least binutils and gcc.
I do not know why you need this. If it is needed to compile for another computer, I would suggest using a virtual machine running the same OS as the target. Much easier!