PC-A is used to cross-compile an application, APP, for DEV-B (a non-x86 embedded device).
APP needs some C system libraries to run properly. PC-A has all the header files for the libraries available on DEV-B.
Once APP is cross-compiled, the resulting executable is copied to DEV-B and run there.
When it runs on DEV-B, APP uses the system libraries (the actual binaries) from DEV-B.
APP can't be run on PC-A, and the system libraries already exist on DEV-B, so why are the cross-compiled system libraries required on PC-A?
You need all the cross-compiled libraries on PC-A so that the linker can resolve your application's references against them at link time. System libraries are part of the toolchain, and third-party libraries are usually placed in a so-called staging directory. Take a look at embedded Linux distributions such as Buildroot or OpenWrt.
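As a hedged sketch of what that link step looks like on PC-A (the toolchain prefix, staging path, and -ljack dependency are assumptions, not from the question):

```shell
# Hypothetical cross-toolchain prefix and Buildroot-style staging dir.
CROSS=arm-linux-gnueabihf-
STAGING=$HOME/buildroot/output/staging

# Even though APP will never run on PC-A, the linker running on PC-A
# must resolve every symbol APP uses, so it needs target (ARM) copies
# of the libraries, not the host's x86 ones.
${CROSS}gcc main.c -o app \
    --sysroot="$STAGING" \
    -L"$STAGING/usr/lib" -ljack
```

The binaries in the staging directory are only consulted for symbols and ABI at link time; at run time on DEV-B, the dynamic linker loads the copies installed on the device.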
Sorry for the naive question. I could not understand the difference between these Yocto variables. The manual says:
TOOLCHAIN_HOST_TASK: Lists packages that make up the host part of the SDK (i.e. the part that runs on the SDKMACHINE). When you use bitbake -c populate_sdk to create the SDK, a set of default packages apply. This variable allows you to add more packages.
And
TOOLCHAIN_TARGET_TASK: Lists packages that make up the target part of the SDK (i.e. the part built for the target hardware).
I could not understand the difference between the host part and the target part of the SDK.
As far as I understand, the host part is what we unpack on our host PC and use for cross-development. What is the target part of the SDK?
The recipes added to TOOLCHAIN_TARGET_TASK will be cross-compiled for the target architecture, and included in the target sysroot in the SDK.
The recipes added to TOOLCHAIN_HOST_TASK will be built to run on the developer machine.
So if you want a certain library available in the SDK, so that you can develop applications linking against it, add it to TOOLCHAIN_TARGET_TASK. The cross-compiled library and its header files will then be available in the SDK.
If, on the other hand, you have a tool that's needed during the build, like a code generator or cmake, add it to TOOLCHAIN_HOST_TASK so that it's available on the developer machine while the target software is built.
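A minimal sketch of both cases in local.conf (the package names are illustrative assumptions, and the exact _append syntax depends on your Yocto release):

```
# Host part: ship the nativesdk cmake binary in the SDK installer,
# so it runs on the SDKMACHINE during builds.
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"

# Target part: cross-compile fftw for the target hardware and put the
# library plus its headers into the SDK's target sysroot for linking.
TOOLCHAIN_TARGET_TASK_append = " fftw fftw-dev"
```

After changing these, regenerate the SDK with bitbake -c populate_sdk on your image recipe.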
I downloaded crosstool-ng-1.24.0 but have not installed it yet. I want to build a cross-toolchain for an ARM target, but I need specific component versions on the target ARM board that the toolchain generates code for: Linux kernel 2.6.26.5, GNU GCC 4.1.2, and glibc 2.5.
Since the crosstool-ng distribution does not include these components, how can I add and install them? I checked the toolchain sources and documentation but didn't find any instructions on that. The 'packages' folder contains separate components, but in the form of patches. I would prefer to download the required components and put them in a local folder. How do I do this correctly?
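I haven't verified this against 1.24.0 specifically, but crosstool-NG's usual workflow is to select component versions in its menuconfig and, optionally, fetch tarballs from a local directory instead of the network; a sketch, with paths as assumptions:

```shell
# Select component versions interactively: kernel under "Operating System",
# gcc under "C compiler", glibc under "C library". Note that a new
# crosstool-NG may no longer offer versions as old as gcc 4.1.2/glibc 2.5.
./ct-ng menuconfig

# Pre-download the tarballs and point crosstool-NG at them via the
# "Local tarballs directory" option (CT_LOCAL_TARBALLS_DIR in .config).
mkdir -p "$HOME/src"
cp linux-2.6.26.5.tar.bz2 gcc-4.1.2.tar.bz2 glibc-2.5.tar.bz2 "$HOME/src/"

./ct-ng build
```

If a version you need is not offered at all, you generally have to add a version entry and any required patches under crosstool-NG's packages directory yourself, which is what those per-component patch folders are for.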
I am trying to cross-compile some dependency libraries for a Raspberry Pi target system; the host system is Linux with the GCC compiler. For example, suppose one of those libraries depends, at the linking stage, on one of the system's static or dynamic libraries.
How does the linker resolve this case? (Those .a or .so files can be different on the target system, so the program will probably crash on the Raspberry Pi.) How do I make it work the right way?
The build environment that the cross-compiler provides is more accurately described as a cross-toolchain. It needs to provide everything you need: Not just the compiler, but also the assembler, linker, and all run-time support libraries. That includes a C library (maybe glibc, maybe something else), the GCC run-time library (libgcc and libgcc_s), and the C++ run-time library (libstdc++). But the build environment also needs copies of all the libraries your software needs to build, typically both header files and static libraries or dynamic shared objects for the target. In particular, you cannot use the installed header files on the host because they might have the wrong definitions and declarations for the target.
Some programmers simply copy their dependencies (which are not system libraries) into their source tree, so that the cross-build environment can stay minimal. But then these libraries have to be tracked and updated as part of the project, which can be cumbersome.
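The usual mechanism for keeping the host's headers and libraries out of the picture is a sysroot: a directory tree holding target copies of the device's /lib, /usr/lib, and /usr/include. A hedged sketch, where the toolchain prefix, sysroot path, and -lfoo dependency are all assumptions:

```shell
# Assumed locations -- adjust for your setup. The sysroot is typically
# copied from the Raspberry Pi itself or taken from its distro image.
CROSS=arm-linux-gnueabihf-
SYSROOT=/opt/rpi/sysroot

# --sysroot makes the compiler and linker search $SYSROOT instead of the
# host's /usr, so the program is linked against the *target's* .so/.a
# files and their matching headers.
${CROSS}gcc --sysroot="$SYSROOT" main.c -o app -lfoo

# Sanity check: this should report an ARM executable, not x86-64.
file app
```

As long as the sysroot matches what is installed on the device (same library versions, same ABI), the run-time linker on the Pi will find compatible libraries and the program won't crash for that reason.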
I am trying to create cross-platform / platform-independent executables for my Java 9 (Project Jigsaw) application.
I think the jlink command will create only a platform-specific executable/runtime.
jlink (covered by JEP 282) creates modular runtime images (covered by JEP 220, particularly the section "New run-time image structure"). These images are a generalization of JREs, JDKs, and compact profiles, and like them they are OS-specific. jlink can hence not be used to create cross-platform executables.
That said, it is possible to run jlink on one operating system and create a runtime image for a different OS. All you have to do is download and unpack a JDK 9 for the target OS (the same version as the one jlink comes from) and put its jmods folder on the module path for the jlink call.
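Concretely, running on Linux and producing a Windows image might look like the sketch below; the JDK path, module name, and main class are illustrative assumptions:

```shell
# Host: Linux with JDK 9. Target: Windows. A Windows JDK 9 of the same
# version has been unpacked to /opt/jdk-9-windows; application modules
# live in ./mods.
jlink \
    --module-path /opt/jdk-9-windows/jmods:mods \
    --add-modules com.example.app \
    --launcher app=com.example.app/com.example.Main \
    --output app-image-windows
```

Because the platform modules are taken from the Windows JDK's jmods, the resulting image contains Windows binaries and launchers, even though jlink itself ran on Linux.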
I know this question is old, but since it was one of the top Google results for me before posting my own question I decided to document my findings here as well.
I faced the same problem while attempting to create runtime images with jlink for Java 11. The problem boiled down to incorrectly referencing the target JDK's jmods folder, which in turn meant that the JDK's modules weren't found on the module path. jlink then defaulted to including the host JDK's module files (and corresponding binaries, libraries, etc.) in the generated runtime image. Once the target JDK's jmods directory was referenced correctly, the generated runtime image contained the platform-specific executables and accompanying files.
This was tested on a Windows machine by creating runtime images for Windows, Linux, and macOS.
I am using Buildroot's toolchain to cross-compile applications for ARM. However, some applications require libraries that are not compiled for that toolchain. I have those libraries in my host toolchain, like -ljack, -lfftw, etc.
I need to know: if I get tarballs of the required packages, how can I configure them so that the libraries are compiled by arm-gcc and the headers/libraries are copied to the /usr and /include directories of the Buildroot sysroot?
That way I would be able to access these libraries via Buildroot's toolchain.
Thanks,
Well, you need to integrate them into Buildroot.
Take fftw for example: in that particular case, fftw is already available in Buildroot, and you just have to enable it in your build. Go to Target packages -> Libraries -> Other and enable fftw.
If you don't know where to find a package, run make menuconfig and type Ctrl-/ to get a search box. There you could type e.g. fftw and learn where in the menu system it is located and what dependencies it has.
If fftw (or some other library you need) isn't available in Buildroot, you need to add it yourself. See, e.g., the section on adding packages in the Buildroot manual.
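For a library not yet packaged, a minimal Buildroot package is essentially two files. This is a hedged sketch for a hypothetical autotools-based library named libfoo (the name, version, and URL are placeholders):

```
# package/libfoo/Config.in
config BR2_PACKAGE_LIBFOO
	bool "libfoo"
	help
	  Hypothetical example library.

# package/libfoo/libfoo.mk
LIBFOO_VERSION = 1.0
LIBFOO_SITE = http://example.com/downloads
# Copy headers and libraries into the staging directory so other
# packages (and your applications) can link against them.
LIBFOO_INSTALL_STAGING = YES

$(eval $(autotools-package))
```

You also need to source the new Config.in from package/Config.in so the option appears in make menuconfig; Buildroot then downloads the tarball, builds it with the ARM cross-compiler, and installs the results into staging and the target filesystem.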