Can't load any egg when embedding Chicken Scheme library into C project

I'm working on a chicken library that I use in a C project. When I try to load eggs (e.g. (use intarweb)), the runtime complains about failing to load the egg.
(lldb) run
Error: (require) cannot load extension: intarweb
Call history:
bridge-connector.scm:6: ##sys#require <--
Process 56172 exited with status = 70 (0x00000046)
I wondered whether the runtime failed to locate where the eggs are installed, so I tried setting the CHICKEN_INCLUDE_PATH environment variable, with no success:
export CHICKEN_INCLUDE_PATH="/usr/local/Cellar/chicken/4.13.0/lib/chicken/8/"
I even tried using load directly with the full path:
(load "/usr/local/Cellar/chicken/4.13.0/lib/chicken/8/intarweb.so")
but got the following error:
(lldb) run
Error: unbound variable: |\xcf\xfa\xed\xfe\x07\x00\x00\x01\x03\x00\x00\x00\x08\x00\x00\x00|
Call history:
bridge-connector.scm:6: load
I'm using Chicken Scheme 4 and I'm initializing the Chicken Scheme runtime as follows:
#include <chicken.h>
void my_lib_initialize()
{
    C_word k = CHICKEN_run(C_toplevel);
    (void)k;
}
My Chicken library is built as follows:
csc -embedded -debug-info -d3 -J -c bridge-connector.scm
csc -embedded -debug-info -d3 -c my-lib.scm
csc -c my_lib_initialize.c
csc ./my_lib_initialize.o ./my-lib.o ./bridge-connector.o -shared -embedded -static -debug-info -d3 -o libmy-lib.dylib

Don't use -static if you want to dynamically load extensions (which is what use does).
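For example, the link step from the question would become the following once -static is dropped:
csc ./my_lib_initialize.o ./my-lib.o ./bridge-connector.o -shared -embedded -debug-info -d3 -o libmy-lib.dylib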
If you really want to link in intarweb statically, you'll have to compile it and all of its dependencies statically (which most CHICKEN 4 eggs don't currently do in their setup-file, so you must do it manually) and link them in, and use (declare (uses intarweb)) (import intarweb) instead of just (use intarweb). Here's a tutorial on how to do that. This is a bit involved in CHICKEN 4, unfortunately.
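As a very rough sketch of that static route (file names are illustrative, each of intarweb's dependencies needs the same treatment, and the linked tutorial has the full procedure):
csc -unit intarweb -c -J intarweb.scm -o intarweb.o           # compile the egg's source as a static unit
csc -embedded -debug-info -d3 -J -c bridge-connector.scm       # now uses (declare (uses intarweb)) (import intarweb)
csc ./my_lib_initialize.o ./my-lib.o ./bridge-connector.o ./intarweb.o -shared -embedded -static -o libmy-lib.dylib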
In CHICKEN 5, chicken-install has been rewritten to make it much easier to support static compilation of eggs. If you like, you can already try out the latest release candidate. Many eggs have already been ported (including intarweb) and it should be stable enough for usage; we expect this to be the last release candidate.

Related

/lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.34' [duplicate]

I'm very new to Yesod and I'm having trouble building Yesod statically
so I can deploy to Heroku.
I have changed the default .cabal file to reflect static compilation
if flag(production)
    cpp-options:   -DPRODUCTION
    ghc-options:   -Wall -threaded -O2 -static -optl-static
else
    ghc-options:   -Wall -threaded -O0
And it no longer builds. I get a whole bunch of warnings and then a
slew of undefined references like this:
Linking dist/build/personal-website/personal-website ...
/usr/lib/ghc-7.0.3/libHSrts_thr.a(Linker.thr_o): In function
`internal_dlopen':
Linker.c:(.text+0x407): warning: Using 'dlopen' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwent':
HsUnix.c:(.text+0xa1): warning: Using 'getpwent' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwnam_r':
HsUnix.c:(.text+0xb1): warning: Using 'getpwnam_r' in statically
linked applications requires at runtime the shared libraries from the
glibc version used for linking
/usr/lib/libpq.a(thread.o): In function `pqGetpwuid':
(.text+0x15): warning: Using 'getpwuid_r' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(ip.o): In function `pg_getaddrinfo_all':
(.text+0x31): warning: Using 'getaddrinfo' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__63.o): In function `sD3z_info':
(.text+0xe4): warning: Using 'gethostbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__164.o): In function `sFKc_info':
(.text+0x12d): warning: Using 'getprotobyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__155.o): In function `sFDs_info':
(.text+0x4c): warning: Using 'getservbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(fe-misc.o): In function `pqSocketCheck':
(.text+0xa2d): undefined reference to `SSL_pending'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x31): undefined reference to `ERR_get_error'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x41): undefined reference to `ERR_reason_error_string'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x2f8): undefined reference to `SSL_check_private_key'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x3c0): undefined reference to `SSL_CTX_load_verify_locations'
(... snip ...)
If I compile with just -static and without -optl-static, everything builds fine but the application crashes when it tries to start on Heroku.
2011-12-28T01:20:51+00:00 heroku[web.1]: Starting process with command
`./dist/build/personal-website/personal-website -p 41083`
2011-12-28T01:20:51+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: error while loading shared libraries: libgmp.so.10:
cannot open shared object file: No such file or directory
2011-12-28T01:20:52+00:00 heroku[web.1]: State changed from starting
to crashed
I tried adding libgmp.so.10 to the LD_LIBRARY_PATH as suggested here
and then got the following error:
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by ./dist/build/personal-website/personal-website)
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by /app/dist/build/personal-website/libgmp.so.10)
2011-12-28T01:31:25+00:00 heroku[web.1]: State changed from starting
to crashed
2011-12-28T01:31:25+00:00 heroku[web.1]: Process exited
It seems that the version of libc I'm compiling against is different. I also tried adding libc to the batch of libraries the same way I did for libgmp, but that results in a segmentation fault when the application starts on the Heroku side.
Everything works fine on my PC. I'm running 64bit archlinux with ghc
7.0.3. The blog post on the official Yesod blog looked pretty easy
but I'm stumped at this point. Anyone have any ideas? If there's a way to get this thing working without building statically I'm open to that too.
EDIT
Per Employed Russian's answer, I did the following to fix this.
First, I created a new directory lib under the project directory and copied the missing shared libraries into it. You can get this information by running ldd path/to/executable locally and heroku run ldd path/to/executable, then comparing the output.
I then did heroku config:add LD_LIBRARY_PATH=./lib so when the application is started the dynamic linker will look for libraries in the new lib directory.
Finally, I created an Ubuntu 11.10 virtual machine and built and deployed to Heroku from there; it has an old enough glibc that the binary works on the Heroku host.
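In shell terms, the fix boils down to something like this (library names and paths are illustrative; copy whichever libraries the ldd comparison shows as missing on Heroku):
mkdir lib
ldd dist/build/personal-website/personal-website              # libraries used locally
heroku run ldd dist/build/personal-website/personal-website   # libraries available on Heroku
cp /usr/lib/libgmp.so.10 lib/                                  # copy each missing library into ./lib
heroku config:add LD_LIBRARY_PATH=./lib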
Edit:
I've since written a tutorial on the Yesod wiki
I have no idea what Yesod is, but I know exactly what each of your other errors means.
First, you should not try to link statically. The warning you get is exactly right: if you link statically, and use one of the routines for which you are getting the warning, then you must arrange to run on a system with exactly the same version of libc.so.6 as the one you used at build time.
Contrary to popular belief, static linking produces less, not more, portable executables on Linux.
Your other (static) link errors are caused by missing libopenssl.a at link time.
But let's assume that you are going to go the "sane" route, and use dynamic linking.
For dynamic linking, Linux (and most other UNIXes) support backward compatibility: an old binary continues to work on newer systems. But they don't support forward compatibility (a binary built on a newer system will generally not run on an older one).
But that's what you are trying to do: you built on a system with glibc-2.14 (or newer), and you are running on a system with glibc-2.13 (or older).
The other thing you need to know is that glibc is composed of some 200+ binaries that must all match exactly. Two key binaries are /lib/ld-linux.so and /lib/libc.so.6 (but there are many more: libpthread.so.0, libnsl.so.1, etc. etc). If some of these binaries came from different versions of glibc, you usually get a crash. And that is exactly what you got, when you tried to place your glibc-2.14 libc.so.6 on the LD_LIBRARY_PATH -- it no longer matches the system /lib/ld-linux.
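For instance, you can check which glibc each machine has by running these on both the build host and the target (the libc path may be /lib or /usr/lib depending on the distro):
/lib/libc.so.6       # glibc's libc.so.6 prints its own version banner when executed directly
ldd --version        # ldd reports the glibc version it belongs to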
So what are the solutions? There are several possibilities (in increasing difficulty):
You could copy ld-2.14.so (the target of /lib/ld-linux symlink) to the target system, and invoke it explicitly:
/path/to/ld-2.14.so --library-path <whatever> /path/to/your/executable
This generally works, but can confuse an application that looks at argv[0], and breaks for applications that re-exec themselves.
You could build on an older system.
You could use appgcc (this option has disappeared, see this for description of what it used to be).
You could set up a chroot environment matching the target system, and build inside that chroot.
You could build yourself a Linux-to-older-Linux cross-compiler.
You have several issues.
You should not build production binaries on bleeding edge distributions. The libraries on the production system will not be forward compatible.
You should not link glibc statically - it will always try to load additional libraries at runtime (for example, CPU-specific assembly routines). That is what your first warnings are about.
The last linker errors look like they are related to a missing openssl library on the command line.
But all in all - downgrade your distribution.
I had similar problems launching to Heroku (which uses glibc-2.11) where I had an application that required glibc-2.14, but I did not have access to the source and could not re-build it. I tried many things and nothing worked.
My workaround was to launch the service on Amazon Elastic Beanstalk and just provide an API interface.
I found the information provided useful as well, but I think the various descriptions miss a critical issue I also ran into while forcing an updated version of Vagrant to start working again: the dependency references internal to a complicated install (like Yesod on Heroku). Those internal references need to be preserved.
This is the script I wrote to make problems go away (at least, hopefully, for a little while):
#!/bin/bash
cd $HOME/
GLIBC_VERSION="2.17"
GLIBC_PREFIX="/usr/glibc/"
VAGRANT_VERSION="2.2.19"
# Install the basic build system utilities.
yum groupinstall -y "Development tools"
yum install -y curl patchelf
# Grab the tarball with the GNU libc source code.
curl -Lfo glibc-${GLIBC_VERSION}.tar.gz "https://ftp.gnu.org/gnu/glibc/glibc-${GLIBC_VERSION}.tar.gz"
echo "a3b2086d5414e602b4b3d5a8792213feb3be664ffc1efe783a829818d3fca37a glibc-${GLIBC_VERSION}.tar.gz" | sha256sum -c || exit 1
# Extract the secrets and get ready to rumble.
tar xzvf glibc-${GLIBC_VERSION}.tar.gz
# The configure script requires an independent build directory.
mkdir -p glibc-build && cd glibc-build
# Configure glibc with a GLIBC_PREFIX so it doesn't conflict with distro libc files.
../glibc-${GLIBC_VERSION}/configure --prefix="${GLIBC_PREFIX}" --libdir="${GLIBC_PREFIX}/lib" \
--libexecdir="${GLIBC_PREFIX}/lib" --enable-multi-arch
# Compile and then install GNU libc.
make -j8 && make install
# Download and install Vagrant.
curl -Lfo vagrant_${VAGRANT_VERSION}_x86_64.rpm "https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}_x86_64.rpm"
echo "990e8d2159032915f21c0f1ccdcbca1a394f7937e06e43dc1dabe605d208dc20 vagrant_${VAGRANT_VERSION}_x86_64.rpm" | sha256sum -c || exit 1
yum install -y vagrant_${VAGRANT_VERSION}_x86_64.rpm
# Patch the binaries and shared libraries inside the Vagrant directory, so they use the new version of GNU libc.
(find /opt/vagrant/ -type f -exec file {} \; )| grep "dynamically linked" | awk -F':' '{print $1}' | while read FILE ; do
patchelf --set-rpath /opt/vagrant/embedded/lib:/opt/vagrant/embedded/lib64:/usr/glibc/lib:/usr/lib64:/lib64:/lib --set-interpreter /usr/glibc/lib/ld-linux-x86-64.so.2 "${FILE}"
done
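After patching, you can spot-check one of the binaries to confirm it now points at the new interpreter and rpath (the path below is only a hypothetical example of a file under /opt/vagrant/):
patchelf --print-interpreter /opt/vagrant/embedded/bin/ruby
patchelf --print-rpath /opt/vagrant/embedded/bin/ruby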
The script should be pretty easy to understand, and easy to adapt to whatever MacGuffin you want to make work, provided you understand it.
The only tricky part is the rpath you pass to patchelf. You need to make sure you preserve the search paths and precedence your software requires, or you end up fixing one problem only to create another, equally frustrating roadblock.
P.S. Don't forget to update the hashes for any file you download. In particular, if you compile/install a different version of GNU libc, you will need to update that hash to match the version you want to use.

GCC built from source in different location is incorrectly using same shared libs as native GCC

I'm a student doing research involving extending the TM capabilities of gcc. My goal is to make changes to gcc source, build gcc from the modified source, and, use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is / becomes a "symbol" which needs to link to libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.
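For example, using the install prefix from the question (adjust the path to whatever you passed to --prefix):
~/project/install-dir/bin/g++ -fgnu-tm -Wl,-rpath=/home/myusername/project/install-dir/lib64 -o myprogram myprogram.cpp
# or, per run, without rebuilding:
LD_LIBRARY_PATH=/home/myusername/project/install-dir/lib64 ./myprogram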

How to use Intel fortran compiler with MKL on command line

I've freshly installed the Intel® Parallel Studio XE Composer Edition for Fortran OS X* (student version). It comes with the Math Kernel Library, which is why I bought it. I'm having a hard time getting started with MKL. Here's what I've done step-by-step.
1) Installed Intel® Parallel Studio XE Composer Edition for Fortran OS X* (no problem). I can run a 'hello world' script using ifort and throw the -mkl link command on at the end with no problem (not calling any mkl commands just yet).
2) Following these instructions, I set my environment variables using a script provided by Intel (located in /opt/intel/bin, to be precise). I have the Intel 64-bit architecture (according to ifort -V), so I use bash mklvars.sh intel64 mod ilp64. It runs without error (or any output).
3) I write the following code to use MKL's gemm command for fortran95. Just multiplying 2 matrices.
program test
    implicit none
    real, dimension(2,2) :: testA, testB, testC
    testA = 1
    testB = 1
    testC = 0 ! I don't think I need this line, but it shouldn't matter
    call gemm(testA, testB, testC)
    write(*,*) testC
end program test
4) I compile with ifort test_mkl.f90 -o test -mkl. I get the following error:
Undefined symbols for architecture x86_64:
"_gemm_", referenced from:
_MAIN__ in ifortSTVOrB.o
ld: symbol(s) not found for architecture x86_64
5) I try ifort test_mkl.f90 -o test -L/opt/intel/mkl/lib -mkl and get the same result.
I notice a lot of people using MKL begin their code with USE mkl95_blas, ONLY: gemm, so I put that in above implicit none in both of the above examples and get:
test_mkl.f90(4): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [MKL95_BLAS]
USE mkl95_blas, ONLY: gemm
--------^
test_mkl.f90(12): error #6406: Conflicting attributes or multiple declaration of name. [GEMM]
call gemm(testA, testB, testC )
---------^
test_mkl.f90(4): error #6580: Name in only-list does not exist. [GEMM]
USE mkl95_blas, ONLY: gemm
--------------------------^
compilation aborted for test_mkl.f90 (code 1)
Any ideas as to what the problem is or how to fix this? I have successfully run a simple script in Xcode using MKL, so it's definitely something I'm doing and not my installation.
The documentation tells you to use the "source" command on the provided compilervars.sh script to make all the resources available. For example:
source <install-dir>/bin/compilervars.sh intel64
This will add MKL to the include and library paths so that the compiler and linker can find them. If you need more help, please ask in https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x You can get MKL-specific help in https://software.intel.com/en-us/forums/intel-math-kernel-library
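A minimal sketch, assuming a default installation under /opt/intel (adjust the path to your install directory):
source /opt/intel/bin/compilervars.sh intel64
ifort test_mkl.f90 -o test -mkl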
I think your program has some bugs. First, you have to use blas95 at the beginning of your code. Then, you can't call gemm directly; you have to choose the specific version of gemm. For example, to do real-number matrix multiplication you have to use dgemm. Here is my code (which I called test.f90); it compiles successfully using ifort -mkl test.f90 -o test.exe:
program test
    use blas95
    implicit none
    real*8, dimension(2,2) :: testA, testB, testC
    integer :: i, j
    testA = 1d0
    testB = 1d0
    testC = 1d0
    ! dgemm(transa, transb, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc)
    call dgemm("N", "N", 2, 2, 2, 1d0, testA, 2, testB, 2, 0d0, testC, 2)
    print *, testC
end program test

OSX, ghci, dylib, what is the correct way?

I need to build some C code and then reference that C code via the FFI. I would like to use my binding from inside ghci on OS X. One of my constraints is that I cannot just hand the C sources to ghc in the .cabal file. This is due to a limitation with ghc/cabal that may be fixed in the next release of ghc (but I want my code to work now and on older releases). See this bug for details.
The gist of that bug is that the C code needs to be compiled with some Objective-C modules and ghc misinterprets those as linker scripts. I've tried many things and building the files myself with a makefile is the only thing that has worked. Really, this shouldn't be an issue though because it should be the same as if I decided to use a external C library that I didn't build myself. For the sake of this issue, let's pretend it's a separate C library that I can easily rebuild with different options.
If I build the C library as a .a, then ghci complains that it cannot open the .dylib. My first question is: why does ghci need a .dylib, and does it really use it?
When I build a dylib I get a segfault when loading the code into ghci.
Keep in mind, this binding already works on other platforms, both Linux and Windows, and it works fine on OS X when I'm compiling instead of using ghci. This problem is specific to the OS X/ghci combo.
In that trace above, I'm using gdb but it crashes regardless of whether I use gdb. I tracked it down to the lines that cause the crash:
void _glfwClearWindowHints( void )
{
    memset( &_glfwLibrary.hints, 0, sizeof( _glfwLibrary.hints ) );
}
The troublemaker is that memset line; more precisely, when running inside ghci, writing to the hints structure of _glfwLibrary is a memory access violation. The hints struct is simply a bunch of ints. It's very flat and simple, and thus I think the problem is an issue either with how I'm linking things or with the way ghci is loading the code.
Here are the bits of my makefile that I use to build the dylib and the .a:
GCCFLAGS := $(shell ghc --info | ghc -e "fmap read getContents >>= \
putStrLn . unwords . read . Data.Maybe.fromJust . lookup \
\"Gcc Linker flags\"")
FRAMEWORK := -framework Cocoa -framework OpenGL
GLFW_FLAG := $(GCCFLAGS) -O2 -fno-common -Iglfw/include -Iglfw/lib \
-Iglfw/lib/cocoa $(CFLAGS)
all: $(BUILD_DIR)/static/libglfw.a $(BUILD_DIR)/dynamic/libglfw.dylib
$(BUILD_DIR)/dynamic/libglfw.dylib: $(OBJS)
	$(CC) -dynamiclib -Wl,-single_module -compatibility_version 1 \
		-current_version 1 \
		$(GLFW_FLAG) -o $@ $(OBJS) $(GLFW_SRC) $(FRAMEWORK)
$(BUILD_DIR)/static/libglfw.a: $(OBJS)
	ar -rcs $@ $(OBJS)
Most of the flags are taken straight from the GLFW Makefile so I think they should be correct for that library.
The first line looks a bit weird but it's the solution I used for this problem.
Platform details:
OSX 10.6.6
x86_64
4 cores
GHC version 7.0.3 installed via Haskell Platform installer
Source repo: https://github.com/dagit/GLFW-b
Edit: Here are my questions:
Should this work with ghci?
If so, what am I doing wrong or how can I fix the crash?
Can I just get by with the static .a version of the library with ghci?
Initial Questions
Should this work with ghci?
If so, what am I doing wrong or how can I fix the crash?
On OSX 10.6.7 (using the Haskell Platform /w GHC 7.0.2) I could load your built shared lib into ghci as follows:
➜ GLFW-b git:(master) ✗ ghci dist/build/Graphics/UI/GLFW.hs -Lbuild/dynamic -lglfw
GHCi, version 7.0.2: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Loading object (dynamic) glfw ... done
final link ... done
[1 of 1] Compiling Graphics.UI.GLFW ( dist/build/Graphics/UI/GLFW.hs, interpreted )
Ok, modules loaded: Graphics.UI.GLFW.
*Graphics.UI.GLFW> initialize
True
Note: I built the glfw libs using your provided Makefile, and additionally used your .cabal file to process src/Graphics/UI/GLFW.hsc and build dist/build/Graphics/UI/GLFW.hs (i.e. I'd previously run cabal configure/build).
Can I just get by with the static .a version of the library with ghci?
Yes, support for loading static libs was included in GHC 7.0.2 (GHC Manual). compiler/ghci/Linker.lhs is a great read, and will give you a high-level sense of how ghci decides what to make of the command line arguments passed to it. Additionally, when navigating various platform support issues, I've found this documentation exceedingly useful.
Linking static archives with ghci.
As of writing, line 1113 of compiler/ghci/Linker.hs demonstrates that ghci currently requires that static archives be built by the package system (i.e. named HSlibname.a)
locateOneObj :: [FilePath] -> String -> IO LibrarySpec
locateOneObj dirs lib
  | not ("HS" `isPrefixOf` lib)
    -- For non-Haskell libraries (e.g. gmp, iconv) we assume dynamic library
  = assumeDll
  | not isDynamicGhcLib
    -- When the GHC package was not compiled as dynamic library
    -- (=DYNAMIC not set), we search for .o libraries or, if they
    -- don't exist, .a libraries.
  = findObject `orElse` findArchive `orElse` assumeDll
Further investigation of cmd line argument parsing indicates that libraries specified are collected at line 402 in the reallyInitDynLinker function:
; classified_ld_inputs <- mapM classifyLdInput cmdline_ld_inputs
where classifyLdInput is defined as:
classifyLdInput :: FilePath -> IO (Maybe LibrarySpec)
classifyLdInput f
  | isObjectFilename f = return (Just (Object f))
  | isDynLibFilename f = return (Just (DLLPath f))
  | otherwise = do
        hPutStrLn stderr ("Warning: ignoring unrecognised input `" ++ f ++ "'")
        return Nothing
This means that outside of a package specification, there currently is no direct way to link an archive file in ghci (or said differently, there currently is no cmd-line argument to do so).
Fixing your cabal package
In your .cabal package specification, you are attempting to build two conflicting libraries:
A: link in a pre-built library (built according to your specification in Setup.hs and Makefile, and linked as per the extra-libraries and extra-lib-dirs directives)
B: build a new library inline (the c-sources and frameworks directives).
A simple fix for the error above is to simply remove all directives enabling B when building for Mac OSX, as follows:
include-dirs:
glfw/include
glfw/lib
- c-sources:
- glfw/lib/enable.c
- glfw/lib/fullscreen.c
- glfw/lib/glext.c
- glfw/lib/image.c
- glfw/lib/init.c
- glfw/lib/input.c
- glfw/lib/joystick.c
- glfw/lib/stream.c
- glfw/lib/tga.c
- glfw/lib/thread.c
- glfw/lib/time.c
- glfw/lib/window.c
+
+ if !os(darwin)
+ c-sources:
+ glfw/lib/enable.c
+ glfw/lib/fullscreen.c
+ glfw/lib/glext.c
+ glfw/lib/image.c
+ glfw/lib/init.c
+ glfw/lib/input.c
+ glfw/lib/joystick.c
+ glfw/lib/stream.c
+ glfw/lib/tga.c
+ glfw/lib/thread.c
+ glfw/lib/time.c
+ glfw/lib/window.c
and
if os(darwin)
- include-dirs:
- glfw/lib/cocoa
- frameworks:
- AGL
- Cocoa
- OpenGL
extra-libraries: glfw
- extra-lib-dirs: build/static build/dynamic
+ extra-lib-dirs: build/dynamic
I've not tested anything beyond verifying that the following now works properly:
➜ GLFW-b git:(master) ✗ ghci
GHCi, version 7.0.2: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :m + Graphics.UI.GLFW
Prelude Graphics.UI.GLFW> initialize
Loading package GLFW-b-0.0.2.6 ... linking ... done.
True
Prelude Graphics.UI.GLFW>

How do I create a dynamic library (dylib) with Xcode?

I'm building few command-line utilities in Xcode (plain C, no Cocoa). I want all of them to use my customized version of libpng, and I want to save space by sharing one copy of the library among all executables (I don't mind re-distributing .dylib with them).
Do I need to do some magic to get libpng export symbols?
Does "Link Binary With Libraries" build phase link statically?
Apple's docs mention loading of libraries at run time with dlopen, but how I can make Xcode create executable without complaining about missing symbols?
I think I've figured it out:
libpng wasn't linking properly, because I've built 32/64-bit executables and 32-bit library. Build settings of the library and executables must match.
libpng's config.h needs to have tons of defines like #define FEATURE_XXX_SUPPORTED
"Link Binary With Libraries" build phase handles dynamic libraries just fine, and DYLD_FALLBACK_LIBRARY_PATH environmental variable is neccessary for loading .dylibs from application bundle.
Dynamic linking on Mac OS X, a tiny example
Steps:
create a library libmylib.dylib containing mymod.o
compile and link a "callmymod" which calls it
call mymod from callmymod, using DYLD_LIBRARY_PATH and DYLD_PRINT_LIBRARIES
Problem: you "just" want to create a library for other modules to use.
However there's a daunting pile of programs -- gcc, ld, macosx libtool, dyld --
with zillions of options, some well-rotted compost, and differences between MacOSX and Linux.
There are tons of man pages (I count 7679 + 1358 + 228 + 226 lines in 10.4.11 ppc)
but not much in the way of examples, or programs with a "tell me what you're doing" mode.
(The most important thing in understanding is to make a simplified
OVERVIEW for yourself: draw some pictures, run some small examples,
explain it to someone else).
Background: apple OverviewOfDynamicLibraries,
Wikipedia Dynamic_library
Step 1, create libmylib.dylib --
mymod.c:
#include <stdio.h>
void mymod( int x )
{
printf( "mymod: %d\n", x );
}
gcc -c mymod.c # -> mymod.o
gcc -dynamiclib -current_version 1.0 mymod.o -o libmylib.dylib
# calls libtool with many options -- see man libtool
# -compatibility_version is used by dyld, see also cmpdylib
file libmylib.dylib # Mach-O dynamically linked shared library ppc
otool -L libmylib.dylib # versions, refs /usr/lib/libgcc_s.1.dylib
Step 2, compile and link callmymod --
callmymod.c:
extern void mymod( int x );
int main( int argc, char** argv )
{
mymod( 42 );
}
gcc -c callmymod.c
gcc -v callmymod.o ./libmylib.dylib -o callmymod
# == gcc callmymod.o -dynamic -L. -lmylib
otool -L callmymod # refs libmylib.dylib
nm -gpv callmymod # U undef _mymod: just a reference, not mymod itself
Step 3, run callmymod linking to libmylib.dylib --
export DYLD_PRINT_LIBRARIES=1 # see what dyld does, for ALL programs
./callmymod
dyld: loaded: libmylib.dylib ...
mymod: 42
mv libmylib.dylib /tmp
export DYLD_LIBRARY_PATH=/tmp # dir:dir:...
./callmymod
dyld: loaded: /tmp/libmylib.dylib ...
mymod: 42
unset DYLD_PRINT_LIBRARIES
unset DYLD_LIBRARY_PATH
That ends one tiny example; hope it helps understand the steps.
(If you do this a lot, see GNU Libtool
which is glibtool on macs,
and SCons.)
You probably need to ensure that the dynamic library you build has an exported symbols file that lists what should be exported from the library. It's just a flat list of the symbols, one per line, to export.
Also, when your dynamic library is built, it gets an install name embedded within it which is, by default, the path at which it is built. Subsequently anything that links against it will look for it at the specified path first and only afterwards search a (small) set of default paths described under DYLD_FALLBACK_LIBRARY_PATH in the dyld(1) man page.
If you're going to put this library next to your executables, you should adjust its install name to reference that. Just doing a Google search for "install name" should turn up a ton of information on doing that.
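A minimal sketch of those two steps, with hypothetical file names (png.exp is just a flat text file listing the symbols to export, one per line):
gcc -dynamiclib -Wl,-exported_symbols_list,png.exp *.o -o libpng.dylib
install_name_tool -id @executable_path/libpng.dylib libpng.dylib   # so executables look for it next to themselves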
Unfortunately, in my experience Apple's documentation is antiquated, redundant and missing a LOT of common information that you would normally need.
I wrote a bunch of stuff on this on my website, where I had to get FMOD (a sound API) to work with the cross-platform game we developed at uni. It's a weird process, and I'm surprised that Apple doesn't add more info to their developer docs.
Unfortunately, as "evil" as Microsoft are, they actually do a much better job of looking after their devs with documentation (this is coming from an Apple evangelist).
Basically, the step you're missing comes after you have compiled your .app bundle: you then need to run a command on the executable binary (MyApp.app/Contents/MacOS/MyApp) to change where the executable looks for its library file. You must create a new build phase that runs a script. I won't explain the process again; I have already done it in depth here:
http://brockwoolf.com/blog/how-to-use-dynamic-libraries-in-xcode-31-using-fmod
Hope this helps.
Are you aware of the Apple reference page Dynamic Library Programming Topics? It should cover most of what you need. Be aware that there a shared libraries that get loaded unconditionally at program startup and dynamically loaded libraries (bundles, IIRC) that are loaded on demand, and the two are somewhat different on MacOS X from the equivalents on Linux or Solaris.
