Creating Haskell shared libraries on OS X

I'm trying to create a shared library from Haskell source code.
I've tried following the instructions here: http://weblog.haskell.cz/pivnik/building-a-shared-library-in-haskell/ but I'm just not having any luck.
When I compile with 64-bit Haskell (GHC 7.0.4 from Haskell Platform 2011.4.0.0) I get the following error:
ld: pointer in read-only segment not allowed in slidable image, used in
___gmpn_modexact_1c_odd
As an alternative I also tried the 32-bit version, and depending on the exact flags I use to link, I get errors such as:
Library not loaded: /usr/local/lib/ghc-7.0.4/base-4.3.1.0/libHSbase-4.3.1.0-ghc7.0.4.dylib
I did manage to get a little further by adding -lHSrts to the linker line. This got me to the point of successfully linking and loading the library, but I was then unable to find the function name using dlsym (or manually using nm | grep).
Any hints would be greatly appreciated: an example makefile or build line that has successfully built (and used) a shared library on OS X would be ideal. I'm quite new to Haskell and don't know whether I should keep banging my head on the assumption that the problem is on my end, or whether for various reasons I shouldn't expect this to work on OS X.
A git repo with all the combinations I've tried is available here: https://github.com/bennoleslie/haskell-shared-example I did manage to get something working for 32-bit ghc, but not 64-bit yet.

It is possible to create working shared libraries on 64-bit OS X with the latest Haskell Platform release (2012.4, 64-bit).
The following invocation works for me:
ghc -O2 --make \
-no-hs-main -optl '-shared' -optc '-DMODULE=Test' \
-o libTest.so Test.hs module_init.c
module_init.c should be something like:
#define CAT(a,b) XCAT(a,b)
#define XCAT(a,b) a ## b
#define STR(a) XSTR(a)
#define XSTR(a) #a

#include <HsFFI.h>

extern void CAT(__stginit_, MODULE)(void);

static void library_init(void) __attribute__((constructor));
static void library_init(void)
{
    /* This seems to be a no-op, but it makes the GHCRTS envvar work. */
    static char *argv[] = { STR(MODULE) ".so", 0 }, **argv_ = argv;
    static int argc = 1;

    hs_init(&argc, &argv_);
    hs_add_root(CAT(__stginit_, MODULE));
}

static void library_exit(void) __attribute__((destructor));
static void library_exit(void)
{
    hs_exit();
}
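To check that an exported function is actually reachable via dlsym (the step the question got stuck on), a minimal host program like the one below can be used. This is a sketch: it assumes Test.hs exports a function with, for example, foreign export ccall test :: CInt -> CInt, so the plain C symbol name is "test"; that name is an assumption, not taken from the repo.

/* host.c -- hypothetical loader for libTest.so; build with: cc -o host host.c */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *lib = dlopen("./libTest.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* "foreign export ccall" exposes the function under its plain C name */
    int (*test)(int) = (int (*)(int))dlsym(lib, "test");
    if (!test) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return 1;
    }
    printf("test(41) = %d\n", test(41));
    dlclose(lib);
    return 0;
}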
This git repo: https://github.com/bennoleslie/haskell-shared-example contains a working example.
All credit goes to this original source: http://weblog.haskell.cz/pivnik/building-a-shared-library-in-haskell/

You might want to try the ghc port in Homebrew -- https://github.com/mxcl/homebrew/blob/master/Library/Formula/ghc.rb

Related

Compilation failure in cpu_features_get.cc

I'm trying to compile Cobalt and getting errors building cpu_features_get.cc. The specific version I'm building is here: https://github.com/Metrological/cobalt/blob/master/src/starboard/shared/linux/cpu_features_get.cc and it appears to be based on Cobalt 22. I'm building using stbgcc-6.3.18 on Ubuntu 20.04, with the sysroot set to the cross-compiler's. And I'm building the starboard port at https://github.com/Metrological/cobalt/tree/master/src/third_party/starboard/wpe/brcm/arm, for ARM64.
An example of the error:
cpu_features_get.cc:381:12: error: 'HWCAP_SET_FOR_ARMV8' was not declared in this scope
There follow others, all related to HWCAP #defines.
As far as I can tell, the file defines these values if:
53: #if SB_IS(32_BIT) || defined(ANDROID)
Neither of these is true - I am compiling for ARM64 on Linux.
The code using HWCAP_SET_FOR_ARMV8 and the other missing defines is conditionally compiled as well:
340: #if SB_IS(ARCH_ARM) || SB_IS(ARCH_ARM64)
...
// Construct hwcap bitmask by the feature flags in /proc/cpuinfo
uint32_t ConstructHwcapFromCPUInfo(ProcCpuInfo* cpu_info,
                                   int16_t architecture_generation,
                                   uint32_t hwcap_type) {
  if (hwcap_type == AT_HWCAP && architecture_generation >= 8) {
    // This is a 32-bit ARM binary running on a 64-bit ARM64 kernel.
    // The 'Features' line only lists the optional features that the
    // device's CPU supports, compared to its reference architecture
    // which are of no use for this process.
    SB_LOG(INFO) << "Faking 32-bit ARM HWCaps on ARMv"
                 << architecture_generation;
    return HWCAP_SET_FOR_ARMV8;
  }
  ...
So ARCH_ARM64 is true and this code is compiled, but the defines are missing because the build is not 32-bit. This seems contradictory and to my eyes could never have worked. How is it possible to compile Cobalt for ARM64?
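A distilled reproduction of the guard mismatch, with Starboard's SB_IS() machinery reduced to plain macros (the macro names and the placeholder value below are mine, not Cobalt's):

/* repro.c: the define is guarded by a 32-bit check, the use by an ARM64 check */
#include <stdio.h>

#define BUILD_IS_32_BIT 0   /* building for ARM64, so this is off */
#define BUILD_IS_ARM64  1

#if BUILD_IS_32_BIT || defined(ANDROID)
#  define HWCAP_SET_FOR_ARMV8 0xffffu   /* placeholder value */
#endif

#if BUILD_IS_ARM64
int main(void)
{
    /* Fails just like the Cobalt build: the #define above was skipped,
       but this branch still references the macro. */
    printf("%x\n", HWCAP_SET_FOR_ARMV8);
    return 0;
}
#endif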

How to use langinfo.h features in glibc without breaking compilation in FreeBSD based system?

I am trying to make the following code work on both Linux and FreeBSD-based systems. Is this a valid usage of the macros __GLIBC__ and __USE_XOPEN2K8?
#include <stdio.h>
#include <langinfo.h>
#include <locale.h>
#include <xlocale.h>

int main(void) {
    //#if defined(__GLIBC__) && defined(__USE_XOPEN2K8)
    locale_t loc;
    char *locale_messages = "en-US.utf-8";
    loc = newlocale(LC_ALL_MASK, locale_messages, (locale_t)0);
    if (loc != NULL)
    {
        char result[256];
        sprintf(result, "%s_%s.%s",
                nl_langinfo_l(_NL_IDENTIFICATION_LANGUAGE, loc),
                nl_langinfo_l(_NL_IDENTIFICATION_TERRITORY, loc),
                nl_langinfo_l(CODESET, loc));
    }
    //#endif
}
If I don't use those directives, I get the following errors on macOS; I want to disable that code to avoid them:
error: use of undeclared identifier '_NL_IDENTIFICATION_LANGUAGE'
nl_langinfo_l(_NL_IDENTIFICATION_LANGUAGE, loc),
^
error: use of undeclared identifier '_NL_IDENTIFICATION_TERRITORY'
nl_langinfo_l(_NL_IDENTIFICATION_TERRITORY, loc),
I found one thread recommending the use of _GNU_SOURCE and _XOPEN_SOURCE, but as a result the code above gets disabled on my Linux system too. It seems I need to define _GNU_SOURCE before using it, but before proceeding with this idea: can this be made to work with __GLIBC__ and __USE_XOPEN2K8?
You can use
#ifndef __FreeBSD__
/* ... code that shouldn't be built on FreeBSD ... */
#endif
preprocessor directives to skip the code that shouldn't be built on FreeBSD. However, man nl_langinfo_l on FreeBSD says that this function is present there, so you shouldn't have any problems with it.
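For instance, a minimal sketch of that pattern. Guarding on __GLIBC__ rather than __FreeBSD__ also covers macOS, which likewise lacks the _NL_* items; the fallback branch uses only POSIX nl_langinfo:

#define _GNU_SOURCE          /* glibc: exposes the _NL_IDENTIFICATION_* items */
#include <stdio.h>
#include <locale.h>
#include <langinfo.h>

int main(void)
{
#if defined(__GLIBC__)
    /* glibc extensions: not available on FreeBSD or macOS */
    locale_t loc = newlocale(LC_ALL_MASK, "en_US.UTF-8", (locale_t)0);
    if (loc != (locale_t)0) {
        printf("%s_%s.%s\n",
               nl_langinfo_l(_NL_IDENTIFICATION_LANGUAGE, loc),
               nl_langinfo_l(_NL_IDENTIFICATION_TERRITORY, loc),
               nl_langinfo_l(CODESET, loc));
        freelocale(loc);
    }
#else
    /* CODESET is POSIX and available everywhere */
    setlocale(LC_ALL, "");
    printf("%s\n", nl_langinfo(CODESET));
#endif
    return 0;
}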
The best way is to use a build system to detect whether that feature is available and then conditionally enable that part of the code depending on the detection result. This is how the autotools project came to be: to detect differences between operating systems.
In CMake you could:
include(CheckCSourceCompiles)
check_c_source_compiles("
#define _GNU_SOURCE
#include <langinfo.h>
#include <locale.h>
int main() { return _NL_IDENTIFICATION_TERRITORY; }
" WE_HAVE_NL_IDENTIFICATION_TERRITORY)
if(WE_HAVE_NL_IDENTIFICATION_TERRITORY)
  target_compile_definitions(your_target PUBLIC LIB_HAS_NL_IDENTIFICATION_TERRITORY)
endif()
and then use your own LIB_HAS_NL_IDENTIFICATION_TERRITORY macro to tell whether that feature is available. Such a solution is stable, easy to port, and reacts to environment changes.
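The consuming side might then look like this sketch, keyed off the macro defined by the CMake check above:

#define _GNU_SOURCE
#include <stdio.h>
#include <locale.h>
#include <langinfo.h>

int main(void)
{
#ifdef LIB_HAS_NL_IDENTIFICATION_TERRITORY
    /* only compiled where the build system proved the item exists */
    locale_t loc = newlocale(LC_ALL_MASK, "en_US.UTF-8", (locale_t)0);
    if (loc != (locale_t)0) {
        printf("territory: %s\n",
               nl_langinfo_l(_NL_IDENTIFICATION_TERRITORY, loc));
        freelocale(loc);
    }
#else
    puts("locale identification not available on this platform");
#endif
    return 0;
}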

'value' is not a member of boost::mpl::aux::wrapped_type...when creating a mex function

I am using an extensive piece of code which compiles on Windows and Linux with gcc >= 4.7. It is a utility, written by someone else, that seamlessly generates mex functions in Matlab from m-scripts. I am having trouble compiling a short C++ file (not provided here) on Mac OS X. I am using gcc-4.8 with C++11. It uses the Boost library, headers only. The piece of utility code where it gets stuck is:
/* gets mxClassID, given C type
   e.g. mx_class_id<float>() */
template<typename T>
struct mx_class_id
{
    operator mxClassID()
    {
        return static_cast<mxClassID>(boost::mpl::at<mxInverseTypeMap,T>::type::value);
    }
};
required by
template<typename T>
mxArray* mxCreateScalar(const T & val)
{
    //mxClassID cid=static_cast<mxClassID>(boost::mpl::at<mxInverseTypeMap,T>::type::value);
    mxArray * arr = mxCreateNumericMatrix(1, 1, mx_class_id<T>(), mxREAL);
    mxSetValue(arr, val);
    return arr;
}
What am I missing? Is it conflicting with built-in clang libraries? Or is a header not specified (boost/mpl/at.hpp is included)? As I mentioned, it does compile in Matlab for Windows and Linux. I have tried Boost 1.51.0 (what we use) and also 1.56.0 (what Matlab uses), but I get the same error message.
The code I use to compile is
mex -v /usr/local/bin/gcc-4.8 -I path-to-boost-library -I path-to-private-library -I /usr/local/lib -std=c++11 script.cc
Here is the error message I am getting:
error: 'value' is not a member of 'boost::mpl::aux::wrapped_type<
boost::mpl::aux::type_wrapper<mpl_::void_> >::type {aka mpl_::void_}'
Any pointers or help appreciated. Thanks
This was resolved: the problem was the conflicting use of 'size_t' and 'unsigned long int'.
I think on Linux and Windows, size_t was used in the code on the assumption that it is uint64_t, and it compiled on both.
On Mac OS X, however, it is size_type or unsigned long int. This especially created a problem because the code matches C types to Matlab mex types with a one-to-one mapping; with size_t in use, the inverse mapping was no longer one-to-one but many-to-one. Once that was addressed, the rest was easy to fix.
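The underlying pitfall is easy to demonstrate in plain C: size_t is not a distinct type but a typedef, so any mapping keyed on types collides with whichever integer type it aliases on a given platform. A small sketch (C11 _Generic used purely for illustration):

#include <stdio.h>
#include <stddef.h>

#define TYPE_NAME(x) _Generic((x),                 \
    unsigned int:       "unsigned int",            \
    unsigned long:      "unsigned long",           \
    unsigned long long: "unsigned long long",      \
    default:            "something else")

int main(void)
{
    size_t s = 0;
    /* LP64 Linux/OS X: "unsigned long"; 64-bit Windows: "unsigned long long".
       A type map containing both size_t and its alias is many-to-one. */
    printf("size_t is %s here\n", TYPE_NAME(s));
    return 0;
}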

undefined reference to '__gthrw___pthread_key_create(unsigned int*, void (*)(void*))'

I'm using a 64-bit gcc-4.8.2 to generate a 32-bit target; my machine is 64-bit. I'm using C++11 concurrency features such as thread, mutex, condition_variable, etc.
The linker gave the above error message when trying to link the executable. libMyLib is also part of the project.
libMyLib.so: undefined reference to '__gthrw___pthread_key_create(unsigned int*, void (*)(void*))'
nm libMyLib.so | grep pthread_key_create shows:
U _ZL28__gthrw___pthread_key_createPjPFvPvE
w __pthread_key_create@@GLIBC_2.0
Where is the symbol '__gthrw___pthread_key_create' from? I tried adding '-lpthread' (or '-pthread') as a compiler flag, but it does not help.
More information: nm libMyLib.so | grep pthread shows that other symbols, such as _ZL20__gthread_mutex_lockP15pthread_mutex_t, are defined.
Where is the symbol '__gthrw___pthread_key_create' from?
It is defined in GCC's "gthreads" abstraction layer for thread primitives, in the gthr-posix.h header.
#if SUPPORTS_WEAK && GTHREAD_USE_WEAK
# ifndef __gthrw_pragma
#  define __gthrw_pragma(pragma)
# endif
# define __gthrw2(name,name2,type) \
  static __typeof(type) name __attribute__ ((__weakref__(#name2))); \
  __gthrw_pragma(weak type)
# define __gthrw_(name) __gthrw_ ## name
#else
# define __gthrw2(name,name2,type)
# define __gthrw_(name) name
#endif

/* Typically, __gthrw_foo is a weak reference to symbol foo. */
#define __gthrw(name) __gthrw2(__gthrw_ ## name,name,name)

...

#ifdef __GLIBC__
__gthrw2(__gthrw_(__pthread_key_create),
         __pthread_key_create,
         pthread_key_create)
After preprocessing that expands to:
static __typeof(pthread_key_create) __gthrw___pthread_key_create __attribute__ ((__weakref__("__pthread_key_create")));
It is supposed to be a weak reference to __pthread_key_create, so it should never have a definition, because it is just a reference to glibc's internal __pthread_key_create symbol.
So it looks like something has gone wrong with how you build your library. You should not have an undefined weak symbol.
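For illustration, here is a self-contained sketch of the mechanism, using the same expansion as above. Compiled with and without -lpthread, it shows how the weak reference resolves (note that on glibc 2.34+, libpthread is merged into libc, so it may always report present):

#include <stdio.h>
#include <pthread.h>

/* the exact declaration gthr-posix.h produces after preprocessing */
static __typeof(pthread_key_create) __gthrw___pthread_key_create
    __attribute__ ((__weakref__("__pthread_key_create")));

int main(void)
{
    /* GCC's gthreads layer tests this address to decide whether the
       process is linked against the threads library. */
    if (__gthrw___pthread_key_create)
        puts("__pthread_key_create resolved: threading available");
    else
        puts("weak reference unresolved: single-threaded fallbacks in use");
    return 0;
}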
I recently stumbled into this error, and not because of a missing -pthread.
The situation happened in a somewhat unusual setup: I was compiling software written in C++14 under CentOS 7. Since C++14 requires a recent version of GCC, I relied on devtoolset-6 for it.
Given the specific setup, I opened a thread on the Centos mailing list, so I'm directly referencing the relevant thread. See https://lists.centos.org/pipermail/centos-devel/2018-May/016701.html and https://lists.centos.org/pipermail/centos-devel/2018-June/016727.html
In short, it might be caused by some bug in the preprocessor macros, either glibc's or libgcc's. It can be fixed by placing #include <thread> at the beginning of the source file that fails to compile. Yes, even if you don't use std::thread in it.
I don't claim this will work for everyone, but it might do the job in some particular situations.

DMD: misunderstandings with linking and building

I'm trying to build a project using the DMD compiler itself (without an IDE) on Windows, and I've realized I don't fully understand some aspects of linking. Usually the IDE does this for me.
The structure of my project
project
├──bin
| ├──exemple.obj
| └──exemple.exe
└──src
├──a
| └──b.d
└──exemple.d
exemple.d
import a.b;
void main() { B obg = new B(); }
b.d
module a.b;
class B {
private int i;
public this() {i=0;}
public void act() {i++;}
}
At first it seemed easy to build with this command:
cd C:\path\to\my\project
dmd bin\exemple.exe src\exemple.d -IC:\path\to\my\project\src
But it only showed me some error messages:
OPTLINK (R) for Win32 Release 8.00.13
Copyright (C) Digital Mars 1989-2010 All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
bin\exemple.obj(exemple)
Error 42: Symbol Undefined _D1a1b1B7__ClassZ
bin\exemple.obj(exemple)
Error 42: Symbol Undefined _D1a1b1B6__ctorMFZC1a1b1B
--- errorlevel 2
Finally I guessed that the obj file was missing. I built it manually with these commands:
cd bin
dmd ..\src\a\b.d -c
cd ..
And manually added it to my build command:
dmd bin\exemple.exe src\exemple.d -IC:\path\to\my\project\src bin\b.obj
And now it works.
Great. But what if we've got lots of additional .d files and a complicated folder structure?
How can this be automated?
I was quite surprised to find that DMD doesn't do all this automatically. Maybe I'm just doing it wrong.
You don't have to build a/b.d separately. But you do have to pass all source (or object) files to dmd. dmd does not figure out the dependencies.
Have a look at rdmd. It's a tool that does figure out the dependencies and then runs dmd on all of them (and then it runs the executable by default, --build-only prevents that). It comes with the dmd releases.
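For example, assuming the project layout from the question, either of these should work (untested sketches):

cd C:\path\to\my\project
dmd -ofbin\exemple.exe src\exemple.d src\a\b.d

or, letting rdmd discover the imports:

rdmd --build-only -Isrc -ofbin\exemple.exe src\exemple.d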
