Proper link order for gcc

ALL,
I'm trying to figure out a proper link order when I build my software on Linux with gcc.
I have a .a library which exports an interface class (a class with a lot of pure virtual functions).
This .a library is linked into libraryA.so and libraryB.so, because both of those libraries provide an actual implementation of the interface.
libraryB is also linked against unixODBC.
Both libraryA and libraryB are then linked into libraryC, which actually instantiates the classes from them.
If I understand correctly, the link order for libraryC should be as follows:

    librarya.a libraryA odbc odbcinst libraryB
But after a successful build and running make install, trying to load libraryC fails because libraryB is not visible.
Is my linking correct? Or better yet, is my understanding of linking correct, namely that the referenced library should come first, followed by the referencing libraries, and that I just need to update the LD_LIBRARY_PATH variable?
Thank you for clarifying.
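For reference, GNU ld resolves symbols left to right, so a library must come after the objects and libraries that reference it, not before. Under that rule, a link line for libraryC might look like this sketch (all file and library names are hypothetical):

    # libraryC's own objects first, then the libraries it references,
    # then what those reference (unixODBC for libraryB), and the
    # interface archive last, since everything depends on it:
    g++ -shared -o libC.so c1.o c2.o -L. -lA -lB -lodbc -lodbcinst librarya.a

Note that the failure to load libraryC at run time is a separate issue: the dynamic loader must be able to find libraryB.so, e.g. via LD_LIBRARY_PATH or an rpath.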

Related

Go code building linker error. Can I link manually?

I am building Go code that uses CGo heavily and this code must be compiled into a shared or static library (static is highly preferred). (code for reference)
It all works just fine on Linux and Mac, but on Windows it fails at the link stage, either saying that all four modes (c-shared, shared, c-archive, archive) are not available or, if I invoke go tool link -shared manually, complaining about missing Windows-specific instructions.
My understanding is that all I need to build a usable lib.a is to compile everything I will use into object files (*.o) and then run them through ar to produce a usable static library.
Now the question is whether I can skip Go's linker completely and create the .a manually from the prepared .o files.
How would I go about doing that, if it is even possible?
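For reference, the manual archiving step described above is just a matter of running ar over the prepared object files; a sketch, with hypothetical file names:

    # r = insert/replace members, c = create without warning,
    # s = write a symbol index (making a separate ranlib run unnecessary)
    ar rcs libmylib.a a.o b.o c.o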
It looks like gcc on Windows is unable to automatically discover the necessary shared libraries. The problem was caused by GCC, not by Go.
For compiling the Go side, though, I had to use a self-compiled master tip, as the current release (1.6.2) does not support shared/static libraries on windows/amd64.
Manually feeding gcc each shared library (ntdll, winmm, etc.) from its default location (C:\Windows\SysWOW64) fixed the problem.
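A sketch of what that final link might look like (output and object names are hypothetical; MinGW's linker accepts DLLs directly on the command line):

    gcc -o app.exe main.o libmylib.a C:/Windows/SysWOW64/ntdll.dll C:/Windows/SysWOW64/winmm.dll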

How can I tell why my program requires a specific shared library?

I'm working on an OS X application using a third-party framework. This framework is distributed both as shared objects and static objects. For my purposes, I want to use the static objects because I can't rely on the presence of the library on other systems.
However, when I build the application with Xcode, something decides it needs the shared objects, and when I run it, dyld tells me off before I even get to my program:
dyld: Library not loaded: /usr/local/lib/libshared.dylib
  Referenced from: /Users/me/Library/Developer/Xcode/snip/Application.app/Contents/MacOS/Application
  Reason: image not found
I ran otool -L on the executable, and sure enough, it links against the shared objects (which aren't even installed on my system). However, when I ran it on the thirty-some .a files that I link against, none of them indicated any dependency on the shared objects.
Apple's ld -v is just a tad verbose: it displays the library search paths but doesn't produce any other kind of useful output.
How can I find what tried to link against the shared objects?
otool -L does list the libraries against which any object links.
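For example (paths hypothetical; otool also accepts static archives and reports on each member):

    otool -L Application.app/Contents/MacOS/Application
    otool -L /some/path/to/my/libFooBar.a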
This specific instance was caused by an Xcode bug (known as rdar://2725744 to those blessed with Apple bug database access, and not fixed as of Xcode 6.1.1): if you try to link against a .a static library with Xcode but there is a .dylib (or .so) dynamic library with the same name in the same directory, the linker will pick the dynamic one.
When you instruct Xcode to link against a static library (say /some/path/to/my/libFooBar.a), it adds -L/some/path/to/my -lFooBar to the linker invocation. Given those flags, however, ld first searches for a dynamic library called libFooBar.dylib, and it falls back to the static library only if it can't find the dynamic one.
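A common workaround is to bypass the -L/-l search entirely and give the linker the full path to the archive, for instance via OTHER_LDFLAGS (path hypothetical):

    clang main.o /some/path/to/my/libFooBar.a -o Application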
If the framework is listed under "Link Binary with Libraries" in the "Build Phases" tab and is marked "Required", that could explain why the launch fails.
Try changing the setting to "Optional". Then if nothing uses the framework, it should launch fine. (If something tries to use the framework, but fails to check for it first, it might crash.)

How to properly build third-party library to be used in Debug and Release configurations in my project?

When I need to build some third-party library to be used in several of my projects under different versions of MSVC, I usually build it for every MSVC version and for both Debug and Release configurations. That's what boost does, and that's what we have always done in my team.
However, I still don't get why I couldn't just build this library with, like... whatever. All I need is the function prototypes and the object code, right? Since I'm linking the CRT statically, I have no external dependencies. But when I try to link a library built in Release under MSVC8 into my project in Debug under MSVC10, I get those annoying "already defined" linker errors which we all hate so much.
But why? Can I just "encapsulate" all these functions inside the lib and not export them, so that my project will take only what it needs from the lib? Why can I have precompiled versions of libpng and zlib that I can link into every project? Yes, they are probably not built using MSVC, but they still use the same CRT functions. So can anyone please explain this in depth, or share a link to some enlightened explanation of this issue?
Since I'm linking CRT statically, I have no external dependencies
Well, that's not true: you do have a dependency, on the static version of the CRT, Debug or Release depending on your build settings. And it is an external dependency; the linker glues in the CRT later, when the library gets linked into a program. The code that uses the library also has a dependency on the CRT, and if the compile settings don't match, the linker barfs.
You isolate that dependency by building a DLL instead of a static link library. You must further ensure that the exported functions don't create a CRT dependency: you can't return a C++ object from the standard C++ library, and you can't return a pointer to an object that needs to be released by the client code. Even passing structures is tricky, since their packing is an implementation detail, but you usually get away with it. A good practical example is COM automation; it forces you into using a subset of types that are universal. Windows is rife with such servers, and they work with any version of the compiler or CRT, even any language. This however comes at a cost: writing such a library isn't as simple or convenient as just throwing a bunch of code into a static lib.
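To illustrate the mismatch, the CRT flavor is baked into each object at compile time by the /M* flags, and each object records a matching default-library directive; file names here are hypothetical:

    REM Third-party library built against the static Release CRT:
    cl /MT /c thirdparty.c
    REM Application built against the static Debug CRT:
    cl /MTd /c myapp.c
    REM Linking both pulls in libcmt.lib and libcmtd.lib together,
    REM which is where the "already defined" errors come from:
    link myapp.obj thirdparty.obj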

'Undefined reference' despite class being linked properly

I have the following problem: I'm trying a sort of bastardized build of the poco library for C++ (i.e., using a premake-generated makefile instead of the poco makefile, because I'm building on Windows without MSVC).
I've actually managed to get all the libraries built into .a files. The problem arises when I try to actually use classes - and then gcc swears up and down that it can't find the reference. This despite the fact that I have checked the libraries with ar -t and seen that the classes in question do indeed exist there.
In general, what could be the problem? I have a library that at least claims to have the requisite .o files, yet the references are still undefined.
For example, I have an undefined reference to Poco::XML::InputSource::InputSource(std::istream&), yet "InputSource.o" is in the linked library, and the requisite ctor is in the header file.
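One way to dig further is to ask nm whether the archive really defines that symbol (archive name hypothetical):

    # -C demangles, so the C++ signature can be grepped directly;
    # "T" means the symbol is defined, "U" means it is only referenced
    nm -C libPocoXML.a | grep 'InputSource::InputSource'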

Is it possible to link some — but not all — libraries statically with libtool?

I am working on a project which is built using autoconf, automake and libtool. The project is distributed in both binary and source form.
On Linux, by default the build script links to all libraries dynamically. This makes sense since Linux users can rely on their distribution’s package manager to handle dependencies.
On Windows, by default the build script links to all libraries statically using libtool’s -all-static option. This makes sense since none of the dependencies are provided with Windows, and it’s helpful to be able to distribute a single binary containing all dependencies rather than mucking about distributing tons of DLLs.
On OSX, some of the dependencies are provided by the OS, and some are not. Therefore it would be helpful to link to the OS-provided libraries dynamically and to the other libraries statically. Unfortunately libtool’s all-or-nothing -all-static option is not helpful here.
Is there a good way to get libtool to link to some libraries statically, but not all?
Note: I realise I could carefully compile the dependencies so that only static builds are available. However, I’d rather the build system for my project were robust in the common case of static and dynamic builds of dependencies being available.
Note: Of course, I am not concerned with really low level dependencies like the C/C++ runtime libraries, which are always linked dynamically on all three of the above platforms.
After some research I have answered my own question.
If you have static and dynamic builds of a library installed, and you link to that library using the -l parameter, libtool links by preference to the dynamic build. It links to a static build if there is no dynamic build available, or if you pass the -static or -all-static options.
libtool can be forced to link to the static library by giving the full path to that library in place of the -l option.
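In an automake-based project, that amounts to writing the archive's path into the link line while leaving the other libraries to the normal -l search; a sketch, with hypothetical names, in Makefile.am:

    # libfoo is linked statically by naming the archive outright;
    # libbar goes through -l, so libtool picks its dynamic build if present
    myprog_LDADD = /usr/local/lib/libfoo.a -lbar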
