Using a shared library without source code - go

I am building a shared library that can be used from my Python program with the following command:
go build -o program.so -buildmode=c-shared myprogram/program.go
However, it seems that to use the shared library on another machine, I have to include all the source code; otherwise I get OSError: invalid ELF header.
Is using the shared library without source code possible?

A shared library is a binary artifact and will only work on the same architecture it was built for. OSError: invalid ELF header means the library was built for a different architecture (e.g. a library built on x86_64 Linux won't load on ARM Linux, x86_64 Mac OS X, and so on).
Using it without source code is perfectly possible if you build library binaries for every architecture (CPU and OS) on which your users intend to use it.
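As a rough sketch, you would build one .so per target and ship the matching one. -buildmode=c-shared needs cgo, so cross-builds also need a C cross-compiler; the aarch64-linux-gnu-gcc name below is only an example and depends on the toolchain you have installed:
go build -o program_linux_amd64.so -buildmode=c-shared myprogram/program.go
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 CC=aarch64-linux-gnu-gcc go build -o program_linux_arm64.so -buildmode=c-shared myprogram/program.go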

Related

Mono on Mac: DllNotFoundException despite SQLite.Interop.dll being in dllmap

I have a C# application that uses SQLite and works fine on Windows.
The same Visual Studio project compiles fine in Xamarin Studio, but when running I get:
DllNotFoundException: SQLite.Interop.dll
Despite:
libsqlite3.0.dylib is in /usr/lib and also in the same folder as the executable and other DLLs
. is part of the $DYLD_LIBRARY_PATH
The executable and all SQLite-using DLLs have a matching <the_exe_or_dll_including_filename_extension>.config file containing:
<configuration>
<dllmap dll="sqlite" target="libsqlite.0.dylib" os="osx"/>
<dllmap dll="sqlite3" target="libsqlite3.0.dylib" os="osx"/>
</configuration>
I also tried adding <dllmap dll="SQLite.Interop.dll" target="libsqlite3.0.dylib" os="osx"/>, but that did not help.
What is the problem?
You can easily find where Mono is looking for that native library by setting MONO_LOG_LEVEL to debug and using MONO_LOG_MASK to filter to only DLL-related messages.
export MONO_LOG_LEVEL=debug
export MONO_LOG_MASK=dll
mono yourprogram.exe
or as a one-liner so you do not have to unset the env vars:
MONO_LOG_LEVEL=debug MONO_LOG_MASK=dll mono yourprogram.exe
Mono and the OS X dynamic link editor ('man dyld' for details) do not require DYLD_LIBRARY_PATH to be set to the current directory ('.'). Note: Linux does require LD_LIBRARY_PATH to include the current directory, if that is your intention.
Move those dll map files out of the way to remove them from the equation.
Unset DYLD_LIBRARY_PATH
cd into the directory that contains your CIL-based exe, dlls and native dylib(s)
MONO_LOG_LEVEL=debug MONO_LOG_MASK=dll mono yourprogram.exe
Using the native dll/shared-library trace output you can track which library is not being found (or one of its dependencies), or whether it is the wrong ARCH for your Mono version.
If you are still having problems, we would need to know which SQLite library you are using and the options you are using to compile it (or the arch version if you are getting it via NuGet). Posting your dll trace output would also quickly solve things.
Notes:
I am assuming you are using the System.Data.SQLite library and are compiling with the options "/p:UseInteropDll=true /p:UseSqliteStandard=false".
Mono includes SQLite in its default install; it is 32-bit on OS X:
file /Library/Frameworks/Mono.framework/Versions/4.0.2/lib/libsqlite3.dylib
/Library/Frameworks/Mono.framework/Versions/4.0.2/lib/libsqlite3.dylib: Mach-O dynamically linked shared library i386
I am assuming you are using the OS X package installer from Mono, so you are getting the 32-bit version of Mono and thus need 32-bit versions of the native libraries.
>>file `which mono`
/usr/bin/mono: Mach-O executable i386
The /usr/lib/libsqlite3.0.dylib is a multi-arch fat binary, so that library is not a problem, but your debug output might show another one that is a problem:
>>file /usr/lib/libsqlite3.0.dylib
libsqlite3.0.dylib: Mach-O universal binary with 3 architectures
libsqlite3.0.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
libsqlite3.0.dylib (for architecture i386): Mach-O dynamically linked shared library i386
libsqlite3.0.dylib (for architecture x86_64h): Mach-O 64-bit dynamically linked shared library x86_64
You need to build and supply SQLite.Interop.dll (or more precisely libSQLite.Interop.dylib). The Mono distribution packages don't include it, probably because it's native code and really needs to be built on the target platform.
System.Data.SQLite on Windows uses a mixed-mode approach (managed data adapter + SQLite native code in one assembly). Mono, however, doesn't really support mixed-mode assemblies.
So on macOS there are two alternatives for the native part of System.Data.SQLite:
Use interop dll.
Use libsqlite.x.x.dylib.
Both of these are native code and need to be built on the Mac.
Interop is Windows COM speak, so it's a bit disconcerting to see it used in a macOS context. This native dll is simply the SQLite source code compiled up with some additional native code that can be P/Invoked by System.Data.SQLite. There are some benefits to using the interop dll as opposed to the SQLite dylib.
System.Data.SQLite ships with a copy of the relevant SQLite native source code in ./SQLite.Interop/src.core. You can build the interop library by running compile-interop-assembly-release.sh on the Mac. This will build libSQLite.Interop.dylib. Drop that in beside System.Data.SQLite and you should be good to go.
If you turn on Mono dll tracing you can watch the loader (see mono 4.8.0 loader.c) searching for the dll in various locations and with various name substitutions. Eventually it finds our dylib. It is also possible to use a dllmap entry in the System.Data.SQLite.dll.config file to direct the runtime to the dll. In my case Mono is in my app bundle, so I have:
<dllmap dll="SQLite.Interop.dll" target="#executable_path/../Mono/libSQLite.Interop.dylib" os="!windows"/>
The dllmap target argument is passed to dlopen() so #executable_path et al are all usable.
I prefer this approach as it goes into the repo and provides some insight into what is going on when there's a foul up.

including different headers for i386 and x86_64 when compiling universal libraries for mac via cmake

I am building a universal library on Mac. My library uses OpenSSL functions and links against the OpenSSL libraries. I can compile the OpenSSL code for i386 and x86_64 separately and then create a fat library to make it a universal library for i386 and x86_64.
My library is compiled via CMake by setting CMAKE_OSX_ARCHITECTURES=i386;x86_64 to make it universal between i386 and x86_64.
The openssl headers generated for i386 and x86_64 are different. How do I make cmake select different headers for i386 and x86_64?
AFAIK, the current OpenSSL build process does not support OS X universal builds directly. One way to do it is to compile each architecture separately and then combine the two variants of each library file into a universal file using lipo -create. See man 1 lipo. There is an example here: https://gist.github.com/tmiz/1441111
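For example, assuming the two per-architecture builds ended up in hypothetical build-i386 and build-x86_64 directories, you would merge each library and then verify the result:
lipo -create build-i386/libcrypto.a build-x86_64/libcrypto.a -output libcrypto.a
lipo -create build-i386/libssl.a build-x86_64/libssl.a -output libssl.a
lipo -info libcrypto.a
The last command should report both i386 and x86_64 in the fat file.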
I don't believe you can do it for a universal or fat library.
In this case, where you want different headers for architectures, you might need to leap to a framework because a framework allows multiple sets of headers. But I don't think I have seen it used for architecture independence.
The Introduction to Framework Programming Guide discusses the on disk layout of the bundle under Anatomy of Framework Bundles, Additional Directories:
Directory | Description
--------------+---------------------------------------------------------------------------------
Headers | Contains any public headers you want to make available to external developers.
--------------+---------------------------------------------------------------------------------
Documentation | Contains HTML or PDF files describing the framework interfaces. Typically,
| documentation directories do not reside at the top level of your framework.
| Instead, they reside inside your language-specific resource directories.
--------------+---------------------------------------------------------------------------------
Libraries | Contains any secondary dynamic libraries your framework requires.
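Another workaround, separate from the framework layout above, is a small wrapper header that dispatches on the compiler's architecture macros, so a single include path works for both slices of a universal build. A sketch, assuming the per-architecture OpenSSL headers have been copied into hypothetical i386 and x86_64 subdirectories:
/* opensslconf.h -- wrapper that picks the per-architecture configuration header */
#if defined(__x86_64__)
#include "x86_64/opensslconf.h"
#elif defined(__i386__)
#include "i386/opensslconf.h"
#else
#error "Unsupported architecture for this universal OpenSSL build"
#endif
Because the preprocessor sees each architecture's macros during the corresponding pass of the universal build, no per-architecture CMake logic is needed for the headers.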

Can gcc build an executable without access to a required shared library?

When building an executable, gcc requires the -l flag to list the shared libraries, even though they can be changed freely without recompiling the executable. Does gcc use that flag only to check if all the symbols are ok? Can I build the executable without performing this verification?
You can use dlopen to load the dynamic library at runtime, and dlsym to get a pointer to the function you would like to call.
Here is a sample: http://pubs.opengroup.org/onlinepubs/009695399/functions/dlsym.html
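A minimal sketch of that approach (libfoo.so and its exported function int foo(int) are hypothetical names used only for illustration):
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Load the library at runtime; nothing is passed to the linker except -ldl. */
    void *handle = dlopen("libfoo.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* Look up the symbol and cast it to the expected function type. */
    int (*foo)(int) = (int (*)(int))dlsym(handle, "foo");
    if (!foo) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("foo(42) = %d\n", foo(42));
    dlclose(handle);
    return EXIT_SUCCESS;
}
Build with gcc main.c -ldl -o main; no -lfoo is needed, so the executable links even if libfoo.so is absent at build time.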

Problems using a library in Xcode

I'm currently developing an application for iPhone and I need to use a library originally intended for a Linux environment. Since I'm using a Mac (with Snow Leopard and an Intel Core Duo), I guess it's possible to use this library in my app.
My library has 3 files: a .h file, a .a file and a .so file (both the .a and the .so are in /Developer/usr/lib). I have included the .h in my code and added the .a in Xcode as a framework (and it seems to work, because Xcode finds the .so when compiling).
For your information, when I run the "file" command on the .so file, I get:
ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
When I compile for the Simulator in Xcode, I get a warning and an error.
The warning is:
In /Developer/usr/lib/mylib.so, file was built for unsupported
file format which is not the architecture being linked (i386)
The error is:
"_mylib_fct", referenced from:
-[MyAppAppDelegate applicationDidBecomeActive:] in
MyAppAppDelegate.o Symbol(s) not found Collect2: ld returned 1
exit status
When I compile for Device 3.0 with the armv6 architecture, I get the same error, but the warning is quite different:
in /Users/Pablo/MyApp/mylib.a, file is not of required architecture
I have been trying for days to solve this and make the app work with this lib, and I don't understand why the compiler is complaining... is it a 32/64-bit issue? How can I deal with that?
Mac OS X is not binary compatible with Linux. It cannot load ELF images, nor does it share the same ABI.
It can only load Mach-O images, e.g.:
file /usr/lib/libcrypto.dylib
[..]
/usr/lib/libcrypto.dylib (for architecture i386): Mach-O dynamically linked shared library i386
Read the dlopen man page for details.
AFAIK, since Mac OS is not binary compatible with that Linux build, the library should not be usable in your project.
Also you need two versions, one for the simulator (i386) and one for the device (arm..).

How do I do source-level debugging of a library

I have the following setup. Although my working setup deals with the ARM compiler RealView Developer Suite (RVDS) 3.2 on a Windows host, the situation could be generic for any other C compiler on any host.
I build an ARM library (a static library, .a file) of C code using the RVDS 3.2 compiler toolchain on a Windows host. Then I link this library with an application using an ARM-Linux compiler toolchain on a Linux host, to get an ARM executable. Now when I try to debug this generated ARM executable on Linux using gdb, by trying to put a breakpoint in some function which is present in the linked library, gdb is not able to put a breakpoint there, citing source not found. So I manually copied all the source files (*.c) used to create the library into the Linux folder where the executable is present. gdb still fails to put a breakpoint.
So now I started thinking:
How can I do source-level debugging, in gdb, of this library that I build on Windows with a different compiler chain, by launching the executable generated by linking this library with an application? Is it possible? How can I do it? Is there any compiler option in the RVDS toolchain to enable source-level debugging of this library?
Do I need to copy the source files to Linux in exactly the same folder structure as they have on Windows?
You could try to see if mimicking the exact same directory structure works. If you're not sure what directory structure the compiler recorded in the debug info in the executable, you can always look at it with dwarfdump (on Linux).
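For example (the paths below are hypothetical), you can check which directory the Windows compiler recorded, and, if your GDB is recent enough, remap it instead of recreating the layout. The first command runs in a shell; the last two are entered inside gdb:
readelf --debug-dump=info app_executable | grep -m1 DW_AT_comp_dir
(gdb) set substitute-path C:/work/mylib/src /home/user/mylib/src
(gdb) directory /home/user/mylib/src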
First, GDB does not need any source to put breakpoints on functions; so your description of what is actually happening is probably inaccurate. I would start by verifying that the function you want to break on is actually there in the binary:
nm /path/to/app | grep function_desired
Second, to do source-level debugging, GDB needs debug info in a format GDB understands. On Linux this generally means DWARF or STABS. It is quite possible that your RVDS compiler does not emit such debug info; if so, source-level debugging will not be possible.
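One quick check (using standard binutils) is to look for DWARF sections in the executable:
readelf -S app_executable | grep debug
If nothing like .debug_info or .debug_line shows up, the executable contains no DWARF data and source-level debugging will not work.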
Did you build the library with debugging enabled (-g option)? Without that, there would be difficulties identifying lines etc.
I've found that -fPIC will cause this sort of issue, but the only workaround I've found is to not use -fPIC when I want to debug.

Resources