I'm installing caffe on an Ubuntu 17.04 system using boost 1.66. I am able to execute make all and make test without problems:
me@icvr1:~/PackageDownloads/caffe$ make all
make: Nothing to be done for 'all'.
me@icvr1:~/PackageDownloads/caffe$ make test
make: Nothing to be done for 'test'.
However, when I try make runtest, I get the following error:
me@icvr1:~/PackageDownloads/caffe$ make runtest
.build_release/tools/caffe
.build_release/tools/caffe: error while loading shared libraries: libboost_system.so.1.66.0: cannot open shared object file: No such file or directory
Makefile:532: recipe for target 'runtest' failed
make: *** [runtest] Error 127
Now, I know that libboost_system.so.1.66.0 exists in /usr/local/lib, which (I believe) is a fairly standard location:
me@icvr1:~/PackageDownloads/caffe$ ls /usr/local/lib/libboost_system*
/usr/local/lib/libboost_system.a /usr/local/lib/libboost_system.so /usr/local/lib/libboost_system.so.1.66.0
and, in caffe's Makefile.config, /usr/local/lib is on the library path:
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial
So, what am I missing here? How can I ensure caffe knows where to find libboost_system.so.1.66.0?
Your /usr/local/lib/libboost_system.so.1.66.0 is evidently one you built
yourself and want the loader to find at runtime without special measures.
But having built it, you didn't update the ldconfig cache
(because you didn't know you had to); so the runtime loader is not yet aware that this
library exists and can't find it.
When the loader is looking for the shared libraries needed to assemble a new process,
it does not search all of the linker's default search directories. That would be slow.
By default it searches a cached database, /etc/ld.so.cache, of libraries that were
found by ldconfig the last time it was run.
By default, ldconfig caches libraries that it finds in /lib, /usr/lib, and in the directories
listed in the file /etc/ld.so.conf, and/or any similar *.conf files that are recursively included by /etc/ld.so.conf.
For example:
$ cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
$ cat /etc/ld.so.conf.d/*.conf
/usr/lib/x86_64-linux-gnu/libfakeroot
# libc default configuration
/usr/local/lib
# Multiarch support
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu
/usr/lib/nvidia-384
/usr/lib32/nvidia-384
/usr/lib/nvidia-384
/usr/lib32/nvidia-384
# Legacy biarch compatibility support
/lib32
/usr/lib32
You see that /usr/local/lib is listed there. So, to make the loader aware
of your new /usr/local/lib/libboost_system.so.1.66.0, just run:
ldconfig
as root. You need to do likewise anytime you install a new locally built
shared library in /usr/local/lib.
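To confirm that the loader can now see the library, you can query the cache afterwards, for example (using sudo for the root step):
sudo ldconfig
# list the cached libraries and pick out the boost_system entries
ldconfig -p | grep libboost_system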
If you want the loader to find a shared library /a/b/libfoo.so that isn't in the ldconfig
cache, you can get it to do so by the special measure of appending the directory /a/b to the value of the
environment variable LD_LIBRARY_PATH (which by default is empty) in the environment in which you launch
the process that needs to load that library. The loader will search the LD_LIBRARY_PATH
directories, if any, before the ldconfig cache. However, not adding a shared library
to the ldconfig cache should be an informed choice, and setting LD_LIBRARY_PATH
is, of course, not well motivated merely by ignorance of the ldconfig apparatus or of the linker's -rpath option.
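For example, to launch one such process with /a/b searched first (the program name below is only a placeholder):
# prepend /a/b for this invocation only; my_program stands in for the real binary
LD_LIBRARY_PATH=/a/b${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} ./my_program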
I am trying to add a shared library package to a Yocto Krogoth image from a custom recipe that is dependent on libudev.so.0 but the openembedded-core layer's eudev recipe only provides libudev.so.1.6.3 and a libudev.so.1 symlink:
libudev.so.1 -> libudev.so.1.6.3
I have added an eudev_%.bbappend recipe under recipes-core that creates the symlink:
do_install_append() {
    ln -srf ${D}${base_libdir}/libudev.so.1 ${D}${base_libdir}/libudev.so.0
}
and I can confirm the libudev.so.0 file is added to the libudev package in
tmp/work/HOST/eudev/3.1.5-r0/image/lib/libudev.so.0
tmp/work/HOST/eudev/3.1.5-r0/package/lib/libudev.so.0
tmp/work/HOST/eudev/3.1.5-r0/packages-split/lib/libudev.so.0
tmp/work/HOST/eudev/3.1.5-r0/sysroot-destdir/lib/libudev.so.0
and that it is installed to the image's tmp/sysroots/MACHINE/lib/libudev.so.0 directory when building the image, and is present in the resultant tmp/deploy/images/MACHINE/IMAGENAME.tar.bz2 rootfs archive. The issue is that, with the above in place, I cannot add my shared library package to the image, as it results in the following error:
do_rootfs: ...
Computing transaction...error: Can't install MYRECIPE@HOST: no package provides libudev.so.0
The custom recipe does have RDEPENDS_${PN} = libudev set.
Why is the do_rootfs error generated as the installed libudev package clearly does provide the libudev.so.0 library? Bitbaking the custom recipe independently has no issue, but that obviously does not attempt to install the resultant package into an image.
Why is the do_rootfs error generated as the installed libudev package
clearly does provide the libudev.so.0 library?
I'm pretty sure the relevant code isn't looking at file names but at the actual symbols needed by your library -- there's packaging magic that looks at all the libraries and executables and figures out their runtime dependencies based on the symbols they need (this is how the correct dependency libraries end up on the rootfs). I believe the error message is trying to say that no available library exports the symbols that your library needs.
You can check the soname your udev exports with
readelf -a libudev.so.1 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libudev.so.1]
I've not confirmed that this is what the rootfs code checks for but it's quite likely.
Are you expecting a library linked with libudev.so.0 to just work with libudev.so.1 without even a recompile? I'm not familiar with udev but in general libraries are unlikely to work like that.
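To see the mismatch concretely, you can compare the sonames your library actually asks for (its NEEDED entries) with the soname eudev's library advertises; the first path below is a placeholder for your own .so:
# what your binary/library requests at run time
readelf -d /path/to/your/library.so | grep NEEDED
# what the installed eudev library provides
readelf -d libudev.so.1.6.3 | grep SONAME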
The custom recipe does have RDEPENDS_${PN} = libudev set.
This is not normally needed -- if your software dynamically links to a library, the runtime dependency is found automatically.
I'm stumped on an automake linking problem. Even after poring over the manuals for hours and searching online, this is probably a misunderstanding of autotools on my part.
I have one .la library made by libtool, one .dylib shared library and am creating a program. The .la is linked to the .dylib and the program uses the .la.
Makefile.am for the .la library
lib_LTLIBRARIES = libA.la
libA_la_LDFLAGS = ${AM_LDFLAGS} -no-undefined
libA_la_LIBADD = $(LIBM) -Ldir/to/ -lB
libA_la_CPPFLAGS = ${AM_CPPFLAGS}
Makefile.am for program with libtool wrapper
noinst_PROGRAMS = test
test_SOURCES = test_source.c
test_LDADD = libA.la -Ldir/to/ -lB
libA.la is created and links to B.dylib, but the test program "wrapper" created by automake exports DYLD_LIBRARY_PATH to find libA.la while doing nothing to locate B.dylib, giving the error:
dyld: Library not loaded: ./B.dylib
Referenced from: /dir/to/test/.libs/test
Reason: image not found
Trace/BPT trap: 5
Some things that I have tried: adding -Ldir/to/ -lB to test_LDFLAGS in addition to it already being in test_LDADD, and setting test_LDFLAGS = -rpath -Ldir/to in the hope that setting the runtime search path to the directory where B.dylib lives would help.
If I manually export DYLD_LIBRARY_PATH to include the directory containing B.dylib, then the test program is able to run, but I'm looking to have autotools take care of this rather than requiring someone to export a path before being able to run it.
libB.dylib includes an rpath (its install name) that gets copied into your binary and that is used to resolve -lB at runtime.
And it seems that this rpath is not /path/to, so libB.dylib cannot be resolved by the runtime linker.
The reason it works for libA.la is that libtool knows that the rpath in libA.dylib is wrong anyhow (as you haven't done a make install), and so it needs to be set manually.
The only way around this that I have found is to use install_name_tool to fix the stored rpath in the resulting binary.
(That is: I don't think that libtool will do this for you, as it contradicts the intended use of libB.dylib, as declared in its rpath.)
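A minimal sketch of that fix-up, using the names from the error output above (where B.dylib will actually live at run time is an assumption here):
# rewrite the dependent install name recorded in the test binary
install_name_tool -change ./B.dylib /dir/to/B.dylib .libs/test
# verify that the dependency now points at /dir/to/B.dylib
otool -L .libs/test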
The problem is that Libtool wasn't involved in building libB.dylib, so it does not know how to fix up your environment to find it. That means it is up to you. You can add the path to libB to your environment, or add a hardcoded search path to libA.la so that libA will find it.
libA_la_LIBADD = $(LIBM) -Ldir/to/ -rpath dir/to/ -lB
This will not only add the path to B in libA's binary, but will also add it to dependency_libs in Libtool's libA.la file, so that on platforms that don't automatically inherit the rpath specification it can be added by Libtool when linking.
I'm trying to compile an example program that links to the shared library produced by Sundown. I'm compiling the program like so.
$ gcc -o sd sundown.c -L. -lsundown
Yet, when I run it I get the following error.
./sd: error while loading shared libraries: libsundown.so: cannot open shared object
file: No such file or directory
The output of ls is.
$ ls
libsundown.so libsundown.so.1 sundown.c sd
Why is the shared library not found by ld?
Short solution:
add . (or whatever directory you passed to the -L flag) to your LD_LIBRARY_PATH. When you run sd, it'll look for libraries in the standard places and in the directories listed in LD_LIBRARY_PATH. Note that since you've added ., this will only work if you run sd from the same directory libsundown.so is in.
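For a quick one-off test from the directory that contains libsundown.so, you can also set the variable for a single run only:
LD_LIBRARY_PATH=. ./sd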
I plan on distributing the compiled binary. How can I distribute the library without forcing people to edit their LD_LIBRARY_PATH?
You should install libsundown.so in one of the standard places, like /usr/lib or /usr/local/lib. You can do that with an installer or a makefile, or something as simple as an INSTALL or README that tells the user to stick the libraries there and ensure the permissions are set to something sensible.
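A minimal sketch of such an install step, assuming /usr/local/lib is already on the loader's search path (as it is on many distributions):
sudo cp libsundown.so.1 /usr/local/lib/
sudo ln -sf libsundown.so.1 /usr/local/lib/libsundown.so
# refresh the loader's cache so the new library is found
sudo ldconfig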
On CentOS systems with both /usr/lib and /usr/lib64, if you install 64-bit libs manually into /usr/lib, the library may not be visible at runtime even though it is visible at build time (I used autotools and it was able to find my zopfli library from /usr/lib without any problem). When I executed my_binary, which links to /usr/lib/libzopfli.so.1, I got
libzopfli.so.1 => not found
After moving libzopfli.so.1 from /usr/lib to /usr/lib64, everything works fine.
I have a collection of projects that I'm compiling as dynamic libraries. Each of these .dylibs depend on other various .dylibs that I would like to place in various other directories (i.e. some at the executable path, some at the loader path, some at a fixed path).
When I run otool -L on the compiled libraries, I get a list of paths to those dependencies, but I have no idea how those paths are being set/determined. They almost appear pseudorandom. I've spent hours messing with the "Build Settings" in Xcode to try to change these paths (with @rpath, @executable_path, @loader_path, etc.) but I can't seem to change anything (as checked by running otool -L). I'm not even entirely sure where to add these flags and don't really understand the difference between the following or how to properly use them:
Linking - "Dynamic Library Install Name"
Linking - "Runpath Search Paths"
Linking - "Other Linking Flags"
Search Paths - "Library Search Paths"
When I run install_name_tool -change on the various libraries, I am able to successfully change the run path search paths (again as verified by running otool -L to confirm).
I'm running Xcode 4.2 and I'm very close to giving up and just using a post-build script that runs install_name_tool to make the changes. But that's a kludgy hack and I'd prefer not to do it.
Where can I see how the search/run paths for the dylib dependencies are being set?
Anyone have any ideas on what I might be doing wrong?
Typically, in my dylib's target, I set INSTALL_PATH aka "Installation Directory" to the prefix I want (e.g. @executable_path/../Frameworks).
I leave LD_DYLIB_INSTALL_NAME aka "Dynamic Library Install Name" set to its default value, which is $(DYLIB_INSTALL_NAME_BASE:standardizepath)/$(EXECUTABLE_PATH).
Xcode expands that based on your target's name, so it might end up being @executable_path/../Frameworks/MyFramework.framework/Versions/A/MyFramework, for instance.
The important thing to realize is that the install path is built into the dylib, as part of its build process. Later on, when you link B.dylib that refers to A.dylib, A.dylib's install path is copied into B.dylib. (That's what otool is showing you -- those copied install paths.) So it's best to get the correct install path built into the dylib in the first place.
Before trying to get all the dylibs working together, check each one individually. Build it, then otool -L on the built dylib. The first line for each architecture should be what LD_DYLIB_INSTALL_NAME was showing you.
Once you have that organized, try to get the dylibs linking against each other. It should be much more straightforward.
install_name_tool is very useful for setting the names and paths. It's especially useful if the program runs its self-tests in the build directory, and then things change during a make install. In this case, you can use install_name_tool without the need for a separate build.
install_name_tool is also useful because Apple's LD does not honor rpath linker options like Linux/GCC does. That is, you need to use a different set of commands to set them.
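As a sketch of what such post-build fix-ups can look like (every file name and path below is a placeholder, not something taken from the projects above):
# give a dylib an @rpath-relative install name
install_name_tool -id @rpath/libfoo.dylib libfoo.dylib
# point a dependent binary at the dylib via @loader_path instead of a stale absolute path
install_name_tool -change /old/abs/path/libfoo.dylib @loader_path/libfoo.dylib libbar.dylib
# tell the executable where @rpath should resolve to
install_name_tool -add_rpath @executable_path/../Frameworks MyApp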
Here's the man page for it. It's included in its entirety because it discusses other options, like -headerpad_max_install_names.
INSTALL_NAME_TOOL(1)                                      INSTALL_NAME_TOOL(1)

NAME
       install_name_tool - change dynamic shared library install names

SYNOPSIS
       install_name_tool [-change old new] ... [-rpath old new] ...
       [-add_rpath new] ... [-delete_rpath old] ... [-id name] file

DESCRIPTION
       Install_name_tool changes the dynamic shared library install names
       and/or adds, changes or deletes the rpaths recorded in a Mach-O binary.

       For this tool to work when the install names or rpaths are larger the
       binary should be built with the ld(1) -headerpad_max_install_names
       option.

       -change old new
              Changes the dependent shared library install name old to new in
              the specified Mach-O binary.  More than one of these options can
              be specified.  If the Mach-O binary does not contain the old
              install name in a specified -change option the option is
              ignored.

       -id name
              Changes the shared library identification name of a dynamic
              shared library to name.  If the Mach-O binary is not a dynamic
              shared library and the -id option is specified it is ignored.

       -rpath old new
              Changes the rpath path name old to new in the specified Mach-O
              binary.  More than one of these options can be specified.  If
              the Mach-O binary does not contain the old rpath path name in a
              specified -rpath it is an error.

       -add_rpath new
              Adds the rpath path name new in the specified Mach-O binary.
              More than one of these options can be specified.  If the Mach-O
              binary already contains the new rpath path name specified in
              -add_rpath it is an error.

       -delete_rpath old
              Deletes the rpath path name old in the specified Mach-O binary.
              More than one of these options can be specified.  If the Mach-O
              binary does not contain the old rpath path name specified in
              -delete_rpath it is an error.

SEE ALSO
       ld(1)
There is a laptop on which I have no root privileges.
Onto the machine I have installed a library using configure --prefix=$HOME/.usr.
After that, I got these files in ~/.usr/lib:
libXX.so.16.0.0
libXX.so.16
libXX.so
libXX.la
libXX.a
When I compile a program that invokes one of the functions provided by the library with this command:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX
xxx.out was generated without warnings, but when I run it, an error like this is thrown:
./xxx.out: error while loading shared libraries: libXX.so.16: cannot open shared object file: No such file or directory
even though libXX.so.16 resides there.
My clueless assumption is that ~/.usr/lib isn't searched when xxx.out is invoked.
But what can I do to specify the path of the .so, so that xxx.out can look there for the .so file?
In addition, when I feed -static to gcc, another error happens:
undefined reference to `function_provided_by_the_very_library'
It seems the .so does not matter even though -L and -l are given to gcc.
What should I do to build a usable executable with that library?
For other people who have the same question as I did:
I found a useful article at tldp about this.
It introduces static, shared, and dynamically loaded libraries, as well as some example code for using them.
There are two ways to achieve that:
Use -rpath linker option:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX -Wl,-rpath=/home/user/.usr/lib
Use the LD_LIBRARY_PATH environment variable - put this line in your ~/.bashrc file:
export LD_LIBRARY_PATH=/home/user/.usr/lib
This will work even for pre-generated binaries, so you can for example download some packages from debian.org, unpack the binaries and shared libraries into your home directory, and launch them without recompiling.
For a quick test, you can also do (in bash at least):
LD_LIBRARY_PATH=/home/user/.usr/lib ./xxx.out
which has the advantage of not changing your library path for everything else.
Should it be LIBRARY_PATH instead of LD_LIBRARY_PATH?
gcc checks for LIBRARY_PATH, which can be seen with the -v option.
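The two variables cover different stages, so a sketch using both might look like this (reusing the question's $HOME/.usr/lib path):
# LIBRARY_PATH is consulted by gcc when linking; it does not affect the program at run time
LIBRARY_PATH=$HOME/.usr/lib gcc XXX.c -o xxx.out -lXX
# LD_LIBRARY_PATH is consulted by the dynamic loader when the program runs
LD_LIBRARY_PATH=$HOME/.usr/lib ./xxx.out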