When I want to compile a program that uses a dynamic library, do I have to install this library (i.e. copy it to a specific place, say, /usr/share/lib)? Or is it OK to put the library anywhere and point the linker at it when linking, e.g. with '-L ./thelibfolder'?
do I have to install (i.e. copy to a specific place, say, /usr/share/lib) this library?
No.
For a UNIX shared library, you need to arrange for two things:
You have to make the library known to the static linker while linking the main executable. Usually this is achieved by adding -L/path/to/directory -lfoo to the link line.
You have to make the runtime loader search /path/to/directory as well. This is system-specific. On many systems, setting the LD_LIBRARY_PATH environment variable achieves the desired result, though this is usually not the preferred method. Another method is to encode the path into the application itself; e.g. on Linux one would add -Wl,-rpath=/path/to/directory to the application's link line.
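For example, a minimal sketch covering both steps on Linux might look like the following, where libfoo, ./thelibfolder and myprog are placeholder names:

# make libfoo.so visible to the static linker and embed an absolute rpath so the runtime loader finds it too
gcc main.o -L./thelibfolder -lfoo -Wl,-rpath="$PWD/thelibfolder" -o myprog

# alternative: leave the binary alone and point the runtime loader at the directory via the environment
LD_LIBRARY_PATH=./thelibfolder ./myprog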
Related
I have a project where I want to link one of the system libraries statically. The project uses the GNU build system.
In configure.ac I have:
AC_CHECK_LIB(foobar, foobar_init)
On the development machine this library is installed in /usr/lib/x86_64-linux-gnu. It is detected, but it is linked dynamically, which causes issues, as it is not present on some machines. Linking it statically (-Wl,-Bstatic etc.) works fine, but I don't know how to set this up in autotools. I tried forcing this into the project's Makefile.am link flags, but the dynamic library still takes precedence.
I also tried using --enable-static with ./configure, but it seems to have no effect on system libraries.
If you want to link the whole program statically, then you should pass the --disable-shared option to configure. You might or might not need also to pass --enable-static, depending on the default value for that option (which you can influence via your configure.ac file). You really should consider doing this.
You should also consider making this the installer's problem, not the build system's. Let it be the installer's responsibility to ensure that all the shared libraries needed by the program are provided by the systems where it is installed. This is very common; in fact, it is one of the inspirations for package-management systems such as yum / dnf and apt, and their underlying packaging formats.
If you insist on linking only one library statically, while linking everything else dynamically, then you'll need to jump through a few more hoops. The objective will be to emit link options that cause just that library to be linked statically, without changing other libraries' linking. With the GNU toolchain, and supposing that the program is otherwise being linked dynamically, that would be this combination of options:
-Wl,-Bstatic -lfoobar -Wl,-Bdynamic
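Outside of Autotools, in a plain hand-written link command, that combination would sit directly on the gcc command line; the file names and the trailing -lm here are only placeholders:

gcc -o myprog main.o -Wl,-Bstatic -lfoobar -Wl,-Bdynamic -lm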
Now consider the documentation of the AC_CHECK_LIB() macro:
Macro: AC_CHECK_LIB (library, function, [action-if-found],
[action-if-not-found], [other-libraries])
[...] action-if-found is a list of shell commands to run if the link with the library succeeds; action-if-not-found is a list of shell
commands to run if the link fails. If action-if-found is not
specified, the default action prepends -llibrary to LIBS and defines
'HAVE_LIBlibrary' (in all capitals). [...]
Note in particular the default behavior in the event that the optional arguments are not provided (your present case) -- that's not quite what you want, at least not by itself. I suggest providing at least an alternative behavior for action-if-found case, and you could consider also making configure fail in the action-if-not-found case. The latter is left as an exercise; implementing just the former might look like this:
AC_CHECK_LIB([foobar], [foobar_init], [
LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])
])
You should also pay attention to the order of your AC_CHECK_LIB() invocations. As its docs go on to say:
This macro is intended to support building LIBS in a right-to-left
(least-dependent to most-dependent) fashion such that library
dependencies are satisfied as a natural side effect of consecutive
tests. Linkers are sensitive to library ordering so the order in which
LIBS is generated is important to reliable detection of libraries.
If you find that you still aren't getting what you want, then have a look at the link commands that make actually executes. You need to understand what's wrong about them before you can determine how to fix the problem.
With all that said, I observe that the above treatment is basically a hack, and it makes your build system much less resilient. It introduces dependencies on GNU toolchain options (which some other toolchains may nevertheless accept), and it assumes dynamic linking is being performed overall. It may be possible to resolve those issues with additional Autoconf code, but I urge you to instead go with one of the first two alternatives I described.
I've used zlib for ages and never thought about the fact that it is named slightly unconventionally. While most libraries on Linux follow the naming convention of lib<name>.so for shared objects and lib<name>.a for archives, zlib is named zlib.so/zlib.a. My question is: how does gcc/ld know to look for zlib.so when I use -lz as a link flag?
I understand that for linking, gcc invokes ld, which searches for libraries in certain default paths and any path specified with -L, appending the lib prefix and the .so or .a suffix as necessary. Oddly, gcc's manual page for linking options only mentions that the linker can find archives; there is no mention of the .so extension. The man page for ld at least mentions both extensions, but still only mentions searching by prepending lib to the specified library name. How does ld know to add the lib after the z for zlib? I've never seen this happen with another library.
gcc has several different methods for linking libraries, shared or static. If you specify -lz, gcc is going to look for libz.so (possibly with some version bits between the libz and the .so, but the important part is that the file name will start with libz and end with .so), or for libz.a (again, possibly with version info) if you are compiling statically, or as a fallback if the shared library does not exist. If you specify -lzlib, it will look for libzlib.so, which is not the standard name: the package is often named zlib, but the library itself is libz. Another way of linking is to not use the -l<lib> option at all and just name the file by its path, e.g. /path/to/zlib.so (or zlib.a if you want). In that case the library doesn't have to have the lib prefix, but you do have to spell out any version info yourself, unless a symbolic link or something similar provides the literal name zlib.so.
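To make the contrast concrete, the two forms might look like this (the paths are only illustrative):

# -l prepends "lib": the linker searches its paths for libz.so, falling back to libz.a
gcc main.o -lz -o myprog

# naming the file directly: no lib prefix and no library search involved
gcc main.o /path/to/zlib.so -o myprog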
Applications can also load shared libraries at runtime via dlopen() and its associated functions, in which case the library can be named whatever you want it to be (this doesn't work for static libraries, of course).
So, if the library you are looking at is actually called zlib.so, then it is not being found by gcc ... -lz, unless it just happens to be a symbolic link to libz.so (or vice versa, in which case gcc is really just using libz.so, which happens to have the same content as your zlib.so). However gcc might be using it if the build process explicitly names the library in the link stage (not using -l<lib>) or if your application loads it via dlopen() (but in that case, it's not really linked to your program - it's just loaded at run time).
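To illustrate the dlopen() route mentioned above, here is a minimal sketch; the file name ./zlib.so is just a placeholder, and zlibVersion() is used because it is a real zlib entry point (add -ldl to the link line on older glibc):

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* the file name can be anything; no lib prefix is required */
    void *handle = dlopen("./zlib.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* look up a symbol by name and call it through a function pointer */
    const char *(*zlib_version)(void) =
        (const char *(*)(void))dlsym(handle, "zlibVersion");
    if (zlib_version)
        printf("zlib version: %s\n", zlib_version());

    dlclose(handle);
    return 0;
}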
I use gcc more than any other compiler, so I will shape my example around this compiler suite, but I have run into this problem with almost every suite I have tried: gcc, MinGW, clang and MSVC.
gcc offers these flags:
-l: you write the name foo, and gcc looks for a corresponding library named libfoo
-L: you add the path where the libs live, and gcc tries to match the required libraries against the ones it finds in that path
-rpath: basically a pool of different paths for the same lib, embedded in the executable so that it is "smart" enough to look for alternatives if it needs one.
The problem is a big one for me: none of these solves it, and every one of these flags suffers from the same issue: ambiguity.
If I just want to link a library that I already know, there is no way to do it without a dose of ambiguity even in the best-case scenario. What I want is:
to link 1 specific library only, and only the one that I specify with a precise name and path
to avoid name-completion mechanisms like the one applied to the name given to -l, because my libs are named foo.so, not libfoo.so
relative paths for the linked libs
to consider only the explicitly given set of libraries; no other automation of any kind should be involved: no pool of libs, no search paths, nothing else. I would rather get a list of errors than an executable linked against a random library.
I often deal with different libs in different releases; they often share the same name for historical and compatibility reasons, and it's a nightmare compiling and linking with gcc because I never get the one I want linked into my executable.
Thanks.
The easiest way to do this is to simply specify the library.
gcc -o test test.o /path/my_library.so /path/to_other_library.a
The obvious downside to this approach is, of course, that if you move that library your application won't work anymore; but since you state that you have the libraries at fixed locations, it should work in your case.
Why is it that some static libraries (lib*.a) can be linked in the same way that shared libraries (lib*.so) are linked (ld -l switch), but some can not?
I had always been taught that all libraries, static or not, can be linked with -l..., however I've run into one library so far (GLFW), which does nothing but spew "undefined reference" link errors if I attempt to link it this way.
According to the response on this question, the "proper" way to link static libraries is to include them directly, along with my own object files, rather than using -l. And, in the case of the GLFW library, this certainly solves the issue. But every other static library I'm using works just fine when linked with -l.
So:
What could cause this one library to not work when linked rather than included directly? If I knew the cause, maybe I could edit and recompile the library to fix the issue.
Is it true that you're not supposed to link static libraries the same way you link shared libraries? (And if not, why not?)
Is the linker still able to eliminate unused library functions from the output executable when the library is directly included in this way?
Thanks for the replies! Turns out the problem was due to link order. Apparently, if you use a library which in turn has other library dependencies, those other dependencies must be listed after the library, not before as I had been doing. Learned something new!
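For anyone hitting the same thing: with GLFW the dependent libraries have to come after it on the link line. The exact dependency list varies by system and GLFW version, but a typical command looks something like:

gcc -o app main.o -lglfw -lGL -lX11 -lpthread -lm -ldl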
Have you told GCC the path of your library (using -L)? Using -l alone, GCC can only link libraries found in the standard directories.
-L[path] -l[lib]
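For example, with /home/user/mylibs as a placeholder directory containing libfoo.a or libfoo.so:

gcc -o myprog main.o -L/home/user/mylibs -lfoo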
The correct way to link a static library is using -l, but that only works if the library can be found on the search path. If it's not then you can add the directory to the list using -L or name the file by name, as you say.
The same is true for shared libraries, actually, although they're more likely to be found, perhaps.
The reason is historical. The "ar" tool was originally the file-archive tool on PDP-11 Unix, though it was later replaced entirely by "tar" for that purpose. It stores files (object files, in this case) in a package, and there is a separate member inside the archive containing the symbol table for the linker to use. If you are manually managing files in the archive, it is possible for the symbol table to get out of date.
The short answer is that you can use the "ranlib" tool on any archive to recreate the symbol table. Try that. More broadly, try to figure out where the corrupt libraries are coming from and fix that.
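For reference, regenerating and inspecting the symbol table of an archive (libfoo.a is a placeholder name) looks like this:

# rebuild the archive's symbol index in place
ranlib libfoo.a

# equivalent with GNU ar
ar s libfoo.a

# print the archive index to confirm the symbols are listed
nm -s libfoo.a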
I have a program that does some graphics. When I run it interactively, I want it to use OpenGL from the system to provide hardware-accelerated graphics. When I run it in batch, I want to be able to redirect it to use the Mesa GL library so that I can use OSMesa functionality to render to an offscreen buffer. The OSMesa functionality is enabled by doing a LoadLibrary/GetProcAddress if the batch start-up option is selected.
On Linux, it's fairly easy to make this work. By using a wrapper script to invoke the program, I can do something like this:
if [ "$OPTION" = "batch" ]; then
export LD_LIBRARY_PATH=$PATHTO/mesalibs:$LD_LIBRARY_PATH
fi
Is it possible to do something like this on Windows?
When I try adding a directory to the PATH variable, the program continues to go to the system opengl32.dll. The only way I can get the program to use the Mesa GL/OSMesa shared libraries is to have them reside in the same directory as my program. However, when I do that, the program will never use the system opengl32.dll.
If I've understood what you're saying correctly, the wrong version of opengl32.dll is being loaded when your process starts up, i.e., load-time dynamic linking. There is probably no good way to solve your problem without changing this.
You say you can't conveniently use run-time dynamic linking (LoadLibrary/GetProcAddress) for opengl32.dll because the calls to it come from the Qt library. I presume that the Qt library is itself dynamically linked, however, so you should be able to solve your problem by using run-time linking for it. In this scenario, provided you load opengl32.dll before you load the Qt library, you should be able to explicitly choose which version of opengl32.dll you want to load.
You might want to consider using delayed loading in order to simplify the process of moving from load-time to run-time linking. In this scenario, the first call into the Qt library causes it to be loaded automatically, and you'll just need to explicitly load opengl32.dll first.
There are a few ways you could handle this, depending on the libraries and their names/locations:
If both have the same name (opengl32.dll), then you need to add the Mesa DLL location to the search path such that it is searched before the system directory. The order directories are checked in is detailed here. As you can see, $PATH comes last, after system, so you can't just add the directory to that. However, you can make use of the second step ("The current directory") by setting the working directory to a path containing the mesa files. Generally this means starting the application using an absolute path while in the directory containing the files.
That's still not particularly pleasant, though. If you can, you should use LoadLibrary and check for an environment variable (OPENGL_LIBRARY_PATH) when your app starts up. Assuming the exports from opengl32.dll and Mesa's DLL are the same, you can do something like:
void LoadExports()
{
    /* getenv() takes only the variable name and returns its value, or NULL if unset */
    const char *location = getenv("OPENGL_LIBRARY_PATH");
    if (location == NULL)
        return;  /* not set: fall back to whatever is already loaded */

    HMODULE oglLib = LoadLibrary(location);
    if (oglLib == NULL)
        return;  /* no library found at that path */

    /* GetProcAddress returns FARPROC; cast to the matching function-pointer type as needed */
    function1 = GetProcAddress(oglLib, "glVertex2f");
    ...
}
This will work perfectly fine, doing almost exactly what you want.
However, if you want to do that, you can't import opengl32.dll (which you're probably doing now); you have to link dynamically throughout. Make sure not to link against opengl32.lib and you should be fine. Depending on how many functions you use, it may be a pain to set up, but the code can easily be scripted and only needs to be done once; you can also use static variables to cache the results for the lifetime of the program. It's also possible to use different function names for different libraries, although that takes a bit more logic, so I'll leave the details to you.
Though this should be possible in the cmd window, it seems you're having no luck.
Try this: set a variable in your script (RUNNING_IN_SCRIPT=Y), then check for that variable in your executable and LoadLibrary from the absolute installation path; be sure to clear the variable when you exit.
Windows used to search different paths for dynamic libraries, but due to security considerations, the system directory is now searched first.
You could, however, use delay-loaded imports as a workaround:
If you're using MSVC, you can single out the DLLs you're interested in loading yourself with the /DELAYLOAD linker flag (and link against delayimp.lib, which provides the delay-load helper).
Then, override the delay load helper function and use LoadLibrary to find the proper DLL (and not trust it to the system).
After loading the correct DLL, have your helper function just call the original one that will do all the GetProcAddress business by itself.
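A sketch of that helper override might look like the following; the Mesa path is a placeholder, and you would also pass /DELAYLOAD:opengl32.dll to the linker and link against delayimp.lib:

#include <windows.h>
#include <delayimp.h>

/* Called by the delay-load helper at various points; on dliNotePreLoadLibrary
   we can return our own HMODULE and the helper will use it instead of
   calling LoadLibrary on the system DLL itself. */
static FARPROC WINAPI MyDelayHook(unsigned dliNotify, PDelayLoadInfo pdli)
{
    if (dliNotify == dliNotePreLoadLibrary &&
        _stricmp(pdli->szDll, "opengl32.dll") == 0)
    {
        /* placeholder path; in practice read it from an environment variable */
        return (FARPROC)LoadLibraryA("C:\\path\\to\\mesa\\opengl32.dll");
    }
    return NULL;  /* anything else: let the default helper logic run */
}

/* Recent CRTs declare this hook pointer as const, so define it here;
   on older toolchains you may instead assign __pfnDliNotifyHook2 at startup. */
const PfnDliHook __pfnDliNotifyHook2 = MyDelayHook;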