imposing library loading order - gcc

I have a gcc-compiled application linked against dynamic libraries. Is there a way to impose the order in which the libraries are loaded? (In my case, one library's constructor uses resources set up by another library's constructor.)
Thanks.

gcc isn't in charge of loading the libraries: either ld.so does it automatically when your program loads, or you do it manually, as @jldupont suggests.
And ld.so might deliberately randomise the order to hinder return-to-libc attacks.
So either:
Load the libraries yourself.
Or remove the dependencies between the library constructors.
Or make the libraries record their dependencies themselves (might work, might not):
That is, when you link each shared library, make sure its link command includes -l<dependentlib> for each library it depends on. You can test this by creating a trivial program that links only against that shared library: if it builds and runs, the library records all the dependent libraries it needs. This might help because ld.so loads libraries in dependency order - which I think it has to do.
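For instance, a minimal sketch of such a test, where libfoo, its dependency libbar, and the entry point foo_init are all hypothetical placeholders:

/* check_libfoo.c - links only against libfoo; if this builds and runs,
 * libfoo records its own dependency on libbar.
 *
 * Build the library with its dependency named in the link command:
 *   gcc -shared -fPIC -o libfoo.so foo.c -lbar
 * Build and run the test:
 *   gcc -o check_libfoo check_libfoo.c -L. -lfoo
 *   LD_LIBRARY_PATH=. ./check_libfoo
 */
void foo_init(void);  /* assumed to be exported by libfoo */

int main(void)
{
    foo_init();
    return 0;
}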

You can use dlopen and load the libraries yourself: this way, you can have a finer grain control over the loading/unloading process. See here.
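For illustration, a minimal sketch of manual loading, assuming two hypothetical libraries where libsecond.so's constructor depends on resources set up by libfirst.so's constructor (on older glibc you also need to link with -ldl):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Constructors run inside dlopen(), so loading libfirst.so first
       guarantees its constructor has finished before libsecond.so's
       constructor starts. RTLD_GLOBAL makes libfirst's symbols visible
       to subsequently loaded libraries. */
    void *first = dlopen("libfirst.so", RTLD_NOW | RTLD_GLOBAL);
    if (!first) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    void *second = dlopen("libsecond.so", RTLD_NOW);
    if (!second) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* ... look up entry points with dlsym() and use them ... */

    dlclose(second);
    dlclose(first);
    return EXIT_SUCCESS;
}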
Of course, this isn't a "gcc"-based solution and it requires reworking your application... Maybe you could explain the "problem" you are facing in a bit more detail?
You can disregard my solution if it doesn't fit your needs. Cheers!

Related

setting up autotools to link single system library statically

I have a project, where I want to link one of system libraries statically. The project uses GNU build system.
In configure.ac I have:
AC_CHECK_LIB(foobar, foobar_init)
On the development machine this library is installed in /usr/lib/x86_64-linux-gnu. It is detected, but it is linked dynamically, which causes issues, as it is not present on some machines. Linking it statically (-Wl,-Bstatic etc.) works fine, but I don't know how to set this up in autotools. I tried forcing this into the Makefile.am link flags for the project, but it still gives preference to the dynamic library.
I also tried using --enable-static with ./configure, but it seems to have no effect on system libraries.
If you want to link the whole program statically, then you should pass the --disable-shared option to configure. You might or might not also need to pass --enable-static, depending on the default value for that option (which you can influence via your configure.ac file). You really should consider doing this.
You should also consider making this the installer's problem, not the build system's. Let it be the installer's responsibility to ensure that all the shared libraries needed by the program are provided by the systems where it is installed. This is very common; in fact, it is one of the inspirations for package-management systems such as yum / dnf and apt, and their underlying packaging formats.
If you insist on linking only one library statically, while linking everything else dynamically, then you'll need to jump through a few more hoops. The objective will be to emit link options that cause just that library to be linked statically, without changing other libraries' linking. With the GNU toolchain, and supposing that the program is otherwise being linked dynamically, that would be this combination of options:
-Wl,-Bstatic -lfoobar -Wl,-Bdynamic
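In a full link command, with the program and other library names purely hypothetical, the placement would look like this (only libfoobar is picked up statically; everything else stays dynamic):

gcc -o myprog main.o -lbar -Wl,-Bstatic -lfoobar -Wl,-Bdynamic -lbaz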
Now consider the documentation of the AC_CHECK_LIB() macro:
Macro: AC_CHECK_LIB (library, function, [action-if-found],
[action-if-not-found], [other-libraries])
[...] action-if-found is a list of shell commands to run if the link with the library succeeds; action-if-not-found is a list of shell
commands to run if the link fails. If action-if-found is not
specified, the default action prepends -llibrary to LIBS and defines
'HAVE_LIBlibrary' (in all capitals). [...]
Note in particular the default behavior in the event that the optional arguments are not provided (your present case) -- that's not quite what you want, at least not by itself. I suggest providing at least an alternative behavior for the action-if-found case, and you could also consider making configure fail in the action-if-not-found case. The latter is left as an exercise; implementing just the former might look like this:
AC_CHECK_LIB([foobar], [foobar_init], [
    LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
    AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])
])
You should also pay attention to the order of your AC_CHECK_LIB() invocations. As its docs go on to say:
This macro is intended to support building LIBS in a right-to-left
(least-dependent to most-dependent) fashion such that library
dependencies are satisfied as a natural side effect of consecutive
tests. Linkers are sensitive to library ordering so the order in which
LIBS is generated is important to reliable detection of libraries.
If you find that you still aren't getting what you want, then have a look at the link commands that make actually executes. You need to understand what's wrong about them before you can determine how to fix the problem.
With all that said, I observe that the above treatment is basically a hack, and it makes your build system much less resilient. It introduces dependencies on GNU toolchain options (which some other toolchains may nevertheless accept), and it assumes dynamic linking is being performed overall. It may be possible to resolve those issues with additional Autoconf code, but I urge you to instead go with one of the first two alternatives I described.

Python/C API: Statically-Linked Extensions?

I've been writing a Python extension using the Python/C API to read data out of a .root file and store it in a list of custom objects. The extension itself works just fine, but when I tried to use it on a different machine I ran into some problems.
The code depends upon several libraries written for the ROOT data manipulation program. The compiler is linking these libraries dynamically, which means I cannot use my extension on a machine that does not have ROOT installed.
Is there a set of flags that I can add to my compilation commands to make these libraries statically linked? Obviously this would make the file size much larger but that isn't much of an issue providing that the code runs at the same speed.
I did think about collating all of the ROOT libraries that I need into an 'archive' file. I'm not too familiar with this so I don't know if that's a good idea or not.
Any advice would be great, I've never really dealt with the static/dynamic library issue before.
Thanks, Sean.

Delphi link to windows dll statically or dynamically

I am aware that implicitly linking to libraries at load time can lead to performance increases, and as such I was wondering whether it is good practice to link in this way at compile time, thus increasing executable size (admittedly only marginally), compared to linking explicitly at runtime. My question is: when linking against Microsoft Windows DLL files located in System32, is it 'better' to link at load time, as you can be mostly certain that the libraries will be present, or to follow the explicit approach?
Language used is Delphi (pascal) and the library in question is the WTsAPI32.dll - Terminal Services.
EDIT: As pointed out, my choice of language was incorrect and has been amended. Also, since I have only ever really linked extensively to libraries on Unix, my comments about executable size can be omitted. I believed at the time that I WAS in fact referring to static linking, which bundles the library code into the executable, and I now realise this is impossible when using DLL files (DUH!). Thanks all.
The two forms of DLL linking are perhaps better named implicit and explicit. Implicit linking is what you refer to as static linking, and explicit linking is what you refer to as runtime linking.
For implicit linking the linker writes entries into the import table of the executable file. This import table is metadata that is used by the loader to resolve DLL imports at module load time. A stub function is included for each implicit import that is only a few bytes in size. The executable size implications of implicit linking are negligible.
With explicit linking the imported function's address is resolved by a call to GetProcAddress. This call is made when the programmer chooses. If the DLL or the function cannot be resolved, the programmer can code fall back behaviour. There are size implications to explicit linking that I estimate to be similar to implicit linking. If the function address is evaluated once and remembered between calls then the performance characteristics are similar to implicit linking.
My advice is as follows:
Prefer implicit linking. It is more convenient to code.
If the DLL may not be present, use explicit linking.
If the DLL must be loaded using a full path, use explicit linking.
If you want to unload the DLL during program execution, use explicit linking.
You specifically mention Windows DLLs. You can safely assume that they will be present. Don't try to code to allow your program to run in case user32.dll is missing. Some functions may not be present in older versions of Windows. If you support those older versions you'll need to use explicit linking and provide a fallback. Decide which version you support and use MSDN to be sure that a function is available on your minimum supported platform.
If your only two options are static linking and run-time dynamic linking, then the latter is the best choice for linking with Windows DLLs because it's your only choice. You cannot link statically to a DLL because DLLs are exclusively for dynamic linking; that's what the D stands for. Microsoft does not provide static libraries for the OS modules, so you cannot link to them statically.
But those typically aren't your only two options. There's a third, namely load-time dynamic linking.
In Delphi, you use load-time dynamic linking by marking a function declaration external and specifying the name of the DLL where the function resides. If you use the function, then an entry is created in your module's import table, and when the OS loads your module, it reads the table, loads the referenced DLL, looks up the address of the function, and stores the address in your program's memory image so that your program can call it directly.
You use run-time dynamic linking by declaring a function pointer, and then using LoadLibrary and GetProcAddress to look up the function's address prior to calling it. In newer Delphi versions, you can also declare a function in the same style that load-time dynamic linking uses, but then mark it with the delayed directive. In that case, the Delphi run-time library will call LoadLibrary and GetProcAddress on your behalf the first time you call the function.
The size differences are negligible. Run-time dynamic linking requires your program to contain code to load and link to libraries, but load-time dynamic linking stores more function references in the import table.
Run-time dynamic linking offers more flexibility in the face of uncertain DLL availability. With load-time dynamic linking, if a DLL is missing, or if it doesn't have all the functions mentioned in your import table, then the OS will fail to load your program — none of your code will run. With run-time dynamic linking, however, you have the opportunity to recover from the problem. You can disable certain parts of your program that the missing DLL depends on, or you can search for DLLs in non-standard places, or you can provide alternative implementations of missing functions.
If the functions you're calling are integral to your program's ability to operate, and there's ample reason to expect the functions to be present wherever your program is installed, then you should choose to link at load time. It allows you to write simpler code. You can be confident that you'll have the required functions if they are available on a certain version of Windows that you check for in your installer, or if they're provided by DLLs that you distribute with your program.
On the other hand, if the functions you're calling are optional, then you should prefer to link at run time. Use that for loading plug-ins, or for taking advantage of advanced OS features while maintaining backward compatibility. (For example, you might want to take advantage of Windows Vista theme support when it's present, but still allow your program to run on Windows XP.)
Why do you think that compile-time linking to dynamic libraries would increase EXE size? I believe you are misled by the somewhat poor choice of terms used in Windows programming since long ago. Let us instead use the relative terms "early binding" and "late binding" for the choice of who should search for procedure names: the compiler/loader, or the programmer's custom code.
Using early binding (aka static linking against a dynamic library), your EXE contains these values (in a special table):

DLL1 name:
  procedure "aaaaa" into the variable $1234
  procedure "bbbbb" into the variable $5678
  ...
DLL2 name:
  procedure "ccccc" into the variable $4567
...et cetera.
Now, when you turn this into runtime loading (dynamic linking against dynamic libraries), it would look like:

VarH1 := SafeLoadLibrary(DLL1Name);
if Error-Loading-DLL then do-error-handling;
Var1234 := GetProcAddress(VarH1, 'aaaaa');
if Error-Searching-For-Function then do-error-handling;
Var5678 := GetProcAddress(VarH1, 'bbbbb');
if Error-Searching-For-Function then do-error-handling;
...et cetera.
Obviously, in the latter case your EXE contains all the same values as in the first case, but on top of that it contains a lot of code to deal with those values that was simply absent before.
So, while the EXE size difference is not really large by today's memory sizes, it is still in favor of early binding (static linking against a dynamic library).
Then what are the benefits of late binding? For example, you can load different DLLs from different paths, determined at runtime by configuration - flexibility, and a way of avoiding DLL Hell (funnily enough, the concept of avoiding DLL Hell works against the concept of saving space). You can make your application work with limited functionality if a DLL fails to load, whereas a statically bound EXE would simply not start - the graceful-degradation concept. And at the least you can give the user much better, semantically rich error messages than Windows ever could.
And a last word on where you got that concept of EXE size from: I believe you took it from discussions about - attention! - static linking against static libraries. That is when OBJ/LIB/DCU files are not part of the distribution, but are just temporary code containers that ultimately take their place inside the monolithic EXE. Then yes - your EXE has all those libraries inside itself and thus grows larger. However, that case has nothing to do with dynamic libraries - DLLs.
The wording chosen long ago overuses the static/dynamic terms for two closely related topics: how the library is loaded (at load time vs. at runtime), and how functions inside the library are located, or bound (by the developer's custom code, or by some OS-provided or compiler-provided toolset, well before the first line of your source starts executing).
Because of that ambiguity, those close but different concepts start overlapping, and sometimes this leads to total confusion.
Now, what more can early binding give you on modern Windows versions? The WinSxS folder. Nowadays Windows tends to keep multiple versions of each system DLL, and your program may ask for a specific version of it (while the System32 folder holds the most recent version, which your program may not be prepared for). You can then create a special MANIFEST resource and compile it into the EXE, asking Windows to load DLLs not by name alone, but by name+version instead. You can replicate that functionality with dynamic loading as well, but it is much easier using the Windows-provided toolset.
Now you can decide which of those considerations matter for your particular case and make a somewhat better-informed choice.
HTH.

Is it possible to link some — but not all — libraries statically with libtool?

I am working on a project which is built using autoconf, automake and libtool. The project is distributed in both binary and source form.
On Linux, by default the build script links to all libraries dynamically. This makes sense since Linux users can rely on their distribution’s package manager to handle dependencies.
On Windows, by default the build script links to all libraries statically using libtool’s -all-static option. This makes sense since none of the dependencies are provided with Windows, and it’s helpful to be able to distribute a single binary containing all dependencies rather than mucking about distributing tons of DLLs.
On OSX, some of the dependencies are provided by the OS, and some are not. Therefore it would be helpful to link to the OS-provided libraries dynamically and to the other libraries statically. Unfortunately libtool’s all-or-nothing -all-static option is not helpful here.
Is there a good way to get libtool to link to some libraries statically, but not all?
Note: I realise I could carefully compile the dependencies so that only static builds are available. However, I’d rather the build system for my project were robust in the common case of static and dynamic builds of dependencies being available.
Note: Of course, I am not concerned with really low level dependencies like the C/C++ runtime libraries, which are always linked dynamically on all three of the above platforms.
After some research I have answered my own question.
If you have both static and dynamic builds of a library installed, and you link to that library using the -l parameter, libtool links by preference to the dynamic build. It links to the static build only if there is no dynamic build available, or if you pass the -static or -all-static option.
libtool can be forced to link to the static library by giving the full path to that library in place of the -l option, e.g. /usr/local/lib/libfoo.a instead of -lfoo.

Why does my static build require shared libraries?

Every so often I get these warnings from my linker... (at the moment it is happening with openssh-5.2p1)
The warnings look similar to:
"Using 'function' in statically linked applications requires at runtime the shared libraries from the glibc version used for..."
When I google, I only see fixes, not reasons.
Thanks,
Chenz
It doesn't require shared libs per se, it just warns you that some things might not work properly if you link statically to glibc.
Some of those things involve NSS, the name-service switch; see e.g. /etc/nsswitch.conf. On a system, the ways of looking up users/groups/hostnames and other things can be configured and extended via plugins - e.g. Samba comes with a module for transparently handling users configured on a Windows domain/Active Directory.
Your app will not honor the /etc/nsswitch.conf configuration if you link statically to glibc; functions such as gethostbyname, getpwuid and others will just use the default ways of looking things up.
The same goes for other libraries your app might use that, for whatever reason, call dlopen() themselves to hook into glibc or similar.
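As a minimal sketch of why the warning appears, assuming a glibc system (the file name and build command in the comments are illustrative):

/* nss_demo.c - looking up a user name goes through NSS, which glibc
 * implements by dlopen()ing plugins such as libnss_files.so.
 *
 * Building with:
 *   gcc -static -o nss_demo nss_demo.c
 * typically produces the warning quoted above, because even the
 * statically linked binary still needs the shared NSS modules (from
 * the same glibc version) at runtime.
 */
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* This lookup consults /etc/nsswitch.conf and loads NSS plugins. */
    struct passwd *pw = getpwuid(getuid());
    if (pw)
        printf("user: %s\n", pw->pw_name);
    return 0;
}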
See also
Statically linking considered harmful
