This question isn't specific to cross-compiling, but it has arisen from a problem with architecture-specific headers while trying to cross-compile a library.
I am trying to cross-compile OpenCV, the target is an ARM processor and I am compiling on an x86_64 processor. The build fails because a header file cannot be located:
/usr/include/zlib.h:34:19: fatal error: zconf.h: No such file or directory
 #include "zconf.h"
Sure enough in zlib.h there is a reference to zconf.h:
#include "zconf.h"
However, when I look under <path_to_arm_filesys>/usr/include, I actually find zconf.h in the <path_to_arm_filesys>/usr/include/arm-linux-gnueabihf directory. So, as I understand it, the C preprocessor won't find zconf.h, since the #include does not mention the architecture-specific sub-directory.
To try to understand how zconf.h is actually found, I checked where it is located on the host machine. Similarly, it is under /usr/include, but in the architecture-specific x86_64-linux-gnu directory.
So if there is no architecture-specific reference in any #include in the source code (as is to be expected), how does the (GNU) C preprocessor know where to look? Is it the case that the preprocessor already knows its target architecture and automatically appends an architecture-specific sub-directory to every include directory it knows about? Or must I explicitly tell it about these directories with the -I flag?
There are 3 ways to tell the compiler where to find headers (example invocations follow the list):
Set the --sysroot option (preferred for cross-compilation):
--sysroot=dir: Use dir as the logical root directory for headers and libraries. For example, if the compiler normally searches for headers in /usr/include and libraries in /usr/lib, it instead searches dir/usr/include and dir/usr/lib.
In this case you must have all libraries and headers under the sysroot folder.
Tell the compiler directly where to find the headers with -I options.
Set the C_INCLUDE_PATH / CPLUS_INCLUDE_PATH environment variables.
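For illustration, this is roughly how each of the three approaches looks on the command line; the toolchain prefix arm-linux-gnueabihf- and the /path/to/arm_filesys directory are placeholders, not values taken from your build:

# 1. Sysroot: point the cross compiler at the target's root filesystem
arm-linux-gnueabihf-gcc --sysroot=/path/to/arm_filesys -c foo.c -o foo.o

# 2. Explicit -I options for individual header directories
arm-linux-gnueabihf-gcc -I/path/to/arm_filesys/usr/include -I/path/to/arm_filesys/usr/include/arm-linux-gnueabihf -c foo.c -o foo.o

# 3. Environment variables read by the compiler driver
export C_INCLUDE_PATH=/path/to/arm_filesys/usr/include
arm-linux-gnueabihf-gcc -c foo.c -o foo.o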
For #include <filename> the preprocessor will search in the system directories and in the directories that were provided to the compiler.
For #include "filename" the preprocessor will first search in the directory of the file that contains the directive, and then fall back to the same directories as #include <filename>.
https://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html
https://gcc.gnu.org/onlinedocs/gcc/Environment-Variables.html
I have installed a GCC cross-compiler for Raspberry Pi on my Ubuntu 20.04 machine, in the /opt folder. Now, when I create a new cross-compile project, I see this list of include directories in my Eclipse project explorer:
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/arm-none-linux-gnueabihf/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/arm-none-linux-gnueabihf/libc/usr/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/lib/gcc/arm-none-linux-gnueabihf/10.2.1/usr/include
/opt/gcc-arm-10.2-2020.11-x86_64-arm-none-linux-gnueabihf/lib/gcc/arm-none-linux-gnueabihf/10.2.1/usr/include-fixed
How does Eclipse know about these include folders?
What is the purpose of all of these folders? What kinds of includes are they defined for?
Suppose I need to use the SDL2 library. Where should I place its headers and binaries?
As explained in this article (which is a little dated) https://www.eclipse.org/community/eclipse_newsletter/2013/october/article4.php, CDT tries to detect the compiler's built-in symbols and include paths by running the compiler with special options and parsing the output of that special run. The command will probably be something like arm-linux-gnueabihf-cpp -v /dev/null -o /dev/null, assuming the compiler you are using is arm-linux-gnueabihf-gcc.
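Run by hand, such a probe looks roughly like this; the output is abridged and purely illustrative, since the exact directories depend on where your toolchain is installed:

arm-linux-gnueabihf-cpp -v /dev/null -o /dev/null
...
#include <...> search starts here:
 /opt/<toolchain>/arm-none-linux-gnueabihf/include
 /opt/<toolchain>/arm-none-linux-gnueabihf/libc/usr/include
End of search list.

Eclipse simply records the directories listed in that section, which is why they show up in the project explorer.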
All these folders contain include files such as stdio.h, stdlib.h, ... belonging to libc, libm, ..., as well as some ARM-specific header files.
If you are not 100% sure, install the cross-compiled library in a directory all by itself and add its include directory to your Eclipse project.
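Regarding the SDL2 part of the question, a hedged sketch of one common approach: either cross-compile SDL2 and install its headers and libraries into the toolchain's sysroot (so they are found automatically, next to the standard headers), or install them into a private prefix and point the compiler at it explicitly. The $HOME/arm-libs prefix below is a placeholder:

arm-linux-gnueabihf-gcc main.c -o main -I$HOME/arm-libs/include -L$HOME/arm-libs/lib -lSDL2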
I'm using CMake 3.3.2 on OS X Yosemite. If I put a framework path into include_directories the generated Makefile doesn't include that directory. A short example:
cmake_minimum_required(VERSION 3.0)
project(testproj)
include_directories(/System/Library/Frameworks/OpenGL.framework)
add_library(testlib test.c)
The make output:
Scanning dependencies of target testlib
/Library/Developer/CommandLineTools/usr/bin/make -f CMakeFiles/testlib.dir/build.make CMakeFiles/testlib.dir/build
[ 50%] Building C object CMakeFiles/testlib.dir/test.c.o
/Library/Developer/CommandLineTools/usr/bin/cc -o CMakeFiles/testlib.dir/test.c.o -c /Users/wrar/test/test.c
I expected the include_directories command to have an effect on the compiler command line, and since the official OPENGL_INCLUDE_DIR variable holds exactly the value I passed in the example, I expect that value to be correct. What am I missing?
Turning my comments into an answer
Standard framework include paths are filtered by CMake
CMake has a lot of special handling for OS X Frameworks. One of them is to remove explicitly named framework include and library search paths (see this commit).
I agree that it's confusing that
find_package(OpenGL REQUIRED)
message(STATUS "OPENGL_INCLUDE_DIR: ${OPENGL_INCLUDE_DIR}")
include_directories(${OPENGL_INCLUDE_DIR})
add_library(...)
does return OPENGL_INCLUDE_DIR: /System/Library/Frameworks/OpenGL.framework but you don't find this path in the cc command line.
But this search path is implicit, meaning the compiler knows best where to find the framework's header files. So this filtering of the OS X framework include/library directories in CMake does reflect the special handling of the compiler/linker when working with frameworks.
How does the compiler find a framework's include file?
By the framework's name given as the include's directory name. See Mac Developer Library/Framework Programming Guide/Including Frameworks:
You include framework header files in your code using the #include directive. [...]
#include <Framework_name/Header_filename.h>
See cmake not finding gl.h on OS X for more details.
I've just tested your example and added e.g. #include <OpenGL/glext.h> to test.c and the header in the framework is found.
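As a quick sanity check outside of CMake, the same lookup can be exercised directly with the compiler; test.c here is assumed to contain the #include <OpenGL/glext.h> line from above:

# The header is resolved through the framework name, no -I needed
cc -c test.c -o test.o
# Linking against the framework uses -framework rather than -l
cc test.o -o test -framework OpenGL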
If you don't want the OpenGL standard framework to be found first (the default), see find_path documentation and look for CMAKE_FIND_FRAMEWORK.
Or if you have your own OpenGL distribution in /some/other/path - and you omit the use of find_package(OpenGL) - you can give include_directories(SYSTEM /some/other/path).
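A minimal CMake sketch of those two alternatives, assuming /some/other/path is a placeholder for your own OpenGL headers:

# Alternative 1: de-prioritise Apple frameworks during find_path/find_package
set(CMAKE_FIND_FRAMEWORK LAST)
# Alternative 2: skip find_package(OpenGL) and name your own copy directly
include_directories(SYSTEM /some/other/path)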
I have an autotools Makefile.am that I'm trying to use to build a test program for a shared library. I want to continue building the shared library as a target, but I want the test program to be linked statically. I've spent the last few hours trying to craft my Makefile.am to do this.
I've tried explicitly changing the LDADD line to use the .a version of the library and get a file not found error even though I can see this library is getting built.
I try to add the .libs directory to my link path via LDFLAGS and still it can't find it.
I tried moving my library sources to my test SOURCES list and this won't work because executable object files are built differently than those for static libraries.
I even tried replicating a lib_LIBRARIES entry for the .a version (so there's both a lib_LTLIBRARIES and a lib_LIBRARIES) and replicating all the LDFLAGS, SOURCES, dir and HEADERS entries of the shared version for the static version (replacing la with a, in the form _a_SOURCES = _la_SOURCES). Still that doesn't work, because now it can't figure out what to build.
My configure.ac file uses the default LT_INIT, which should give me both static and dynamic libraries, and as I said it is apparently building both, even if libtool can't see the .a file.
Does anyone know how to do this?
As @Brett Hale mentions in his comment, you should tell Makefile.am that you want the program to be statically linked.
To achieve this you must append -static to your LDFLAGS.
Changing the LDFLAGS for a specific binary is achieved by changing binary_LDFLAGS (where binary is the name of the binary you want to build).
so something like this should do the trick:
binary_LDFLAGS = $(AM_LDFLAGS) -static
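Putting the pieces together, a hedged Makefile.am sketch; libmylib, mytest and the source file names are placeholders for your actual targets:

lib_LTLIBRARIES = libmylib.la
libmylib_la_SOURCES = mylib.c

check_PROGRAMS = mytest
mytest_SOURCES = mytest.c
mytest_LDADD = libmylib.la
# -static makes libtool prefer the static (.a) flavour of libtool libraries
mytest_LDFLAGS = $(AM_LDFLAGS) -static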
There is a laptop on which I have no root privileges.
On this machine I have installed a library using configure --prefix=$HOME/.usr.
After that, I have these files in ~/.usr/lib:
libXX.so.16.0.0
libXX.so.16
libXX.so
libXX.la
libXX.a
When I compile a program that invokes one of the functions provided by the library with this command:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX
xxx.out is generated without warnings, but when I run it, an error like this is thrown:
./xxx.out: error while loading shared libraries: libXX.so.16: cannot open shared object file: No such file or directory
even though libXX.so.16 resides there.
My clueless assumption is that ~/.usr/lib isn't searched when xxx.out is invoked.
But what can I do to specify the path of the .so, so that xxx.out looks there for the .so file?
In addition, when I feed -static to gcc, another error occurs:
undefined reference to `function_proviced_by_the_very_librar'
It seems the .so does not matter, even though -L and -l are given to gcc.
What should I do to build a usable executable with that library?
For other people who have the same question as I did:
I found a useful article about this at TLDP.
It introduces static/shared/dynamic loaded library, as well as some example code to use them.
There are two ways to achieve that:
Use -rpath linker option:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX -Wl,-rpath=/home/user/.usr/lib
Use LD_LIBRARY_PATH environment variable - put this line in your ~/.bashrc file:
export LD_LIBRARY_PATH=/home/user/.usr/lib
This will work even for a pre-generated binaries, so you can for example download some packages from the debian.org, unpack the binaries and shared libraries into your home directory, and launch them without recompiling.
For a quick test, you can also do (in bash at least):
LD_LIBRARY_PATH=/home/user/.usr/lib ./xxx.out
which has the advantage of not changing your library path for everything else.
Shouldn't it be LIBRARY_PATH instead of LD_LIBRARY_PATH?
gcc checks LIBRARY_PATH when linking (which can be seen with the -v option), whereas LD_LIBRARY_PATH is consulted by the dynamic loader at run time.
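To make the difference concrete, using the paths from the question:

# Link time: LIBRARY_PATH (like -L) tells gcc/ld where to find libXX.so
LIBRARY_PATH=$HOME/.usr/lib gcc XXX.c -o xxx.out -lXX

# Run time: LD_LIBRARY_PATH (like -rpath) tells the dynamic loader where to find libXX.so.16
LD_LIBRARY_PATH=$HOME/.usr/lib ./xxx.out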
I'm having a problem with my compiler telling me there is an 'undefined reference to' a function I want to use in a library. Let me share some info on the problem:
I'm cross compiling with gcc for C.
I am calling a library function which is accessed through an included header which includes another header, which contains the prototype.
I have included the headers directory using -I and I'm sure it's being found.
I'm first creating the .o files then linking them in a separate command.
So my thought is that it might be the order in which I include the library files, but I'm not sure what the correct way to order them is. I tried including the headers folder both before and after the .o file.
Some suggestions would be great, and maybe an explanation of how the linker does its thing.
Thanks!
Response to answers
there is no .a library file, just .h and .c in the library, so -l isn't appropriate
my understanding of a library file is that it is just a collection of header and source files, but maybe it's a collection of .o files created from the source?!
there is no library object file being created, maybe there should be?? Yes, it seems I don't understand the difference between includes and libraries... I'll work on that :-)
Thanks for all the responses! I learned a lot about libraries. I'd like to put all the responses as the accepted answer :-)
Headers provide function declarations, not the compiled implementations. To allow the linker to find the function's implementation (and get rid of the undefined reference) you need to ask the compiler driver (gcc) to link the specific library where the function resides, using the -l flag. For instance, -lm will link the math library. A function's manual page typically specifies what library, if any, must be specified to find the function.
If the linker can't find a specified library you can add a library search path using the -L switch (for example, -L/usr/local/lib). You can also permanently affect the library path through the LIBRARY_PATH environment variable.
Here are some additional details to help you debug your problem. By convention the names of library files are prefixed with lib and (in their static form) have a .a extension. Thus, the statically linked version of the system's default math library (the one you link with -lm) typically resides in /usr/lib/libm.a. To see what symbols a given library defines you can run nm --defined-only on the library file. On my system, running the command on libm.a gives me output like the following.
e_atan2.o:
00000000 T atan2
e_asinf.o:
00000000 T asinf
e_asin.o:
00000000 T asin
To see the library path that your compiler uses and which libraries it loads by default you can invoke gcc with the -v option. Again on my system this gives the following output.
GNU assembler version 2.15 [FreeBSD] 2004-05-23 (i386-obrien-freebsd)
using BFD version 2.15 [FreeBSD] 2004-05-23
/usr/bin/ld -V -dynamic-linker /libexec/ld-elf.so.1 /usr/lib/crt1.o
/usr/lib/crti.o /usr/lib/crtbegin.o -L/usr/lib /var/tmp//ccIxJczl.o -lgcc -lc
-lgcc /usr/lib/crtend.o /usr/lib/crtn.o
It sounds like you are not compiling the .c file in the library to produce a .o file. The linker looks for the implementation of the prototype in the .o file produced by compiling the library.
Does your build process compile the library .c file?
Why do you call it a "library" if it's actually just source code?
I fear you mixed the library and header concepts.
Let's say you have a library libmylib.a that contains the function myfunc() and a corresponding header mylib.h that defines its prototype. In your source file myapp.c you include the header, either directly or including another header that includes it. For example:
/* myapp.h
** Here I will include and define my stuff
*/
...
#include "mylib.h"
...
your source file looks like:
/* myapp.c
** Here is my real code
*/
...
#include "myapp.h"
...
/* Here I can use the function */
myfunc(3,"XYZ");
Now you can compile it to obtain myapp.o:
gcc -c -I../mylib/includes myapp.c
Note that the -I option just tells gcc where the header files are; it has nothing to do with the library itself!
Now you can link your application with the real library:
gcc -o myapp -L../mylib/libs myapp.o -lmylib
Note that the -L switch tells gcc where the library is, and the -l tells it to link your code to the library.
If you don't do this last step, you may encounter the problem you described.
There might be other more complex cases but from your question, I hope this would be enough to solve your problem.
Post your makefile, and the library function you are trying to call. Even simple gcc makefiles usually have a line like this:
LIBFLAGS =-lc -lpthread -lrt -lstdc++ -lShared -L../shared
In this case, it means: link the standard C library, among others.
I guess you have to add the path where the linker can find the library. In gcc/ld you can do this with -L, and you name the library itself with -l.
-Ldir, --library-path=dir
Search directory dir before standard
search directories (this option must
precede the -l option that searches
that directory).
-larch, --library=archive
Include the archive file arch in the
list of files to link.
Response to answers - there is no .a library file, just .h and .c in the library, so -l isn't appropriate
Then you may have to create the library first:
gcc -c mylib.c -o mylib.o
ar rcs libmylib.a mylib.o
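Then compile your program and link it against the newly created archive; myapp.c is a placeholder for your own source file:

gcc -c myapp.c -o myapp.o
# the -l option must come after the object files that use the library
gcc -o myapp myapp.o -L. -lmylib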
I have encountered this problem when building a program with a newer version of gcc. The problem was fixed by calling gcc with the -std=gnu89 option. Apparently this was due to inline function declarations: gcc 5 switched the default from gnu89 to gnu11, which uses the C99 semantics for inline. I found this solution at https://gcc.gnu.org/gcc-5/porting_to.html
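A minimal sketch of the kind of code that breaks, with a hypothetical function name; under -std=gnu89 the plain inline definition also produces an out-of-line copy, while under gcc 5's default (-std=gnu11, C99 inline semantics) no external definition is emitted and a non-inlined call fails to link:

/* inline_demo.c (hypothetical example) */
#include <stdio.h>

/* With C99/C11 inline semantics this is only an inline definition;
   no external symbol "twice" is emitted in this translation unit. */
inline int twice(int x) { return 2 * x; }

/* Fix 1: build with   gcc -std=gnu89 inline_demo.c
   Fix 2: force an external definition by adding the declaration
          extern inline int twice(int x);   in one translation unit. */

int main(void)
{
    /* At -O0 this is a real call, so it needs an external definition */
    printf("%d\n", twice(21));
    return 0;
}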