cabal-version option ignored - haskell-stack

I'm trying to install a fairly old package (see here) with stack. It uses a custom Setup.hs script which depends on Cabal >= 1.20 due to (among other things) a reliance on the buildNumJobs field of the BuildFlags type.
For some reason, the cabal-version was set to >= 1.10, which is clearly too low. I changed it once I figured out what was going on, but the problem persisted. What do I have to do for stack and cabal to pick up on the new cabal-version constraint?
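For context, the Setup.hs uses the field along these lines (a hypothetical sketch of mine, not the package's actual code; buildNumJobs only exists in Cabal >= 1.20):
import Distribution.Simple
import Distribution.Simple.Setup (BuildFlags(..), fromFlagOrDefault)

-- Hypothetical sketch: roughly how a custom Setup.hs consumes buildNumJobs
main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { buildHook = \pkg lbi hooks flags -> do
      -- buildNumJobs (the -j value) was only added to BuildFlags in Cabal 1.20,
      -- which is why the cabal-version lower bound matters here
      let jobs = fromFlagOrDefault Nothing (buildNumJobs flags)
      putStrLn ("requested build jobs: " ++ show jobs)
      buildHook simpleUserHooks pkg lbi hooks flags
  }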
I tried simply replacing the buildNumJobs value with a default value, which resulted in the following error at build time:
Warning: skia.cabal: This package requires at least Cabal version 1.20
Configuring skia-0.1.0.0...
setup.EXE: This package description follows version 1.20 of the Cabal
specification. This tool only supports up to version 1.18.1.5.
Again, what do I need to do for stack to respect the cabal-version option?

Turns out, all I had to do was run stack setup --upgrade-cabal. I still wonder, though, why stack doesn't detect the inconsistency automatically...
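For anyone else who lands here, the fix plus a quick sanity check (the ghc-pkg step is just to confirm which Cabal library the stack-managed GHC now provides):
$ stack setup --upgrade-cabal
$ stack exec -- ghc-pkg list Cabal   # confirm the Cabal version the stack GHC now sees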

Related

unresolved reference to boost::program_options::options_description::options_description

First off, I know there are several posts similar to this, but I am going to ask anyway. Is there a known problem with boost program_options::options_description in the Debian "Buster" and Boost 1.67 packages?
I have code that was developed back on Debian 7; the system was upgraded to 8.3, then 8.11, using Boost 1.55.
The code builds and runs. After upgrading the system to Debian Buster with Boost 1.67, I get link errors: an unresolved reference to options_description(const std::string&, unsigned int, unsigned int), along with several other program_options functions. All of the unresolved symbols, except the options_description constructor, come from Boost calling another Boost function, so they are not even called directly from my code. boost_program_options IS on the link line.
I am not a novice; I understand link order, and this has nothing to do with link order.
I am going to try building Boost from source to see if that works; if not, I will build a system from scratch and test against that.
Since this is all on a closed network, simply saying "try a newer version of Boost or Debian" is not an option: I am contractually obligated to use Debian "Buster" and Boost 1.67 as the newest revisions. If a (newer) package is unavailable in Buster, it is out of the question without having a new contract drafted and approved, which could take months.
So, to the point of this question: is there an issue with the out-of-the-box version of Boost in Buster?
I don't think there's going to be an issue with the package in Buster.
My best guess is that either:
1. You're relinking old objects against the new libraries, and they don't match. (Did you do a full make clean, for example, to rule this out?) Build systems often don't track header dependencies completely, so yours might not notice that the Boost headers changed and that the objects need to be rebuilt.
2. If that doesn't explain it, there could be another version of Boost on the include path, leading to the same problem as under #1 even when rebuilding.
You could establish this by inspecting the compiler command lines (with make -Bsn or compile_commands.json, for example, depending on your tools). Another trick is to include boost/version.hpp and see what BOOST_VERSION evaluates to (see the snippet below).
Lastly, there could be a problem where the library was built with a different compiler version or different compiler flags, leading to incompatible symbols (this is a QoI issue that you might want to report to the Boost developers).
All of this assumes ABI/ODR issues, in case you want to validate that possibility.
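One concrete way to check for #2: compile the following with the exact flags your project uses and compare the reported version with the library you actually link against.
#include <boost/version.hpp>
#include <iostream>

int main() {
    // BOOST_VERSION encodes the version as major * 100000 + minor * 100 + patch
    std::cout << "BOOST_VERSION:     " << BOOST_VERSION << '\n';      // e.g. 106700 for 1.67.0
    std::cout << "BOOST_LIB_VERSION: " << BOOST_LIB_VERSION << '\n';  // e.g. "1_67"
}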

What is the reason not to use stack --nix when I am using Nix?

I am on NixOS, but I think this question should apply to any platform that uses Nix.
I found by trial and error that stack can be used with a couple of options to build a project, but I don't fully understand the difference among them:
stack
stack --system-ghc
stack --nix
Questions:
If I am using Nix (NixOS in my case), is there any reason I would want not to use the --nix argument?
What is the Nix way to deal with a Haskell project? Should cabal (cabal2nix) be used instead of stack?
I found that stack rebuilds lots of libraries that are already installed by Nix; what is the reason for that?
As I understand it, the --nix option to stack uses a nix-shell to manage non-Haskell dependencies (instead of requiring them to be in the "global" environment that stack is run from). This is probably a good idea if you want to use the stack tool on Nix. Stack also usually installs its own GHC; the --system-ghc option prevents this and asks it to use the GHC in the "global" environment from which it is run. I believe that with --nix, stack will ask Nix to handle GHC versioning as well, so on a Nix system I would recommend building with --nix and without --system-ghc.
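If you settle on --nix, you can also enable it permanently in the project's stack.yaml rather than passing the flag each time; a minimal sketch of stack's Nix integration settings (the package names are only examples):
nix:
  enable: true
  # extra non-Haskell dependencies made available inside the nix-shell
  packages: [zlib, pkgconfig]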
At least in my opinion, it is better to avoid the stack tooling when working with Nix. This is because stack, even when run with --nix, does not inform Nix about the Haskell packages that it wants to build: it builds them all itself inside ~/.stack, and doesn't share them with the Nix store.
If your project builds with the latest nixpkgs versions of Haskell packages, or with relatively few simple overrides on those, cabal2nix is a great solution that will make sure that Haskell libraries only get built once (by Nix). If you want to make sure that you are building your project with the same package versions as stack would, I would recommend stackage2nix or stack2nix.
I personally have been using the nixpkgs-stackage overlay, which is related to stackage2nix: this gives you both LTS package sets in nixpkgs and nixpkgs.haskell.packages.stackage.lib.callStackage2nix, which you can use in your default.nix/shell.nix like so: callStackage2nix (baseNameOf ./.) (cleanSource ./.).${(baseNameOf ./.)}, with whatever definition of cleanSource fits your project (this can use filterSource to remove files which shouldn't really be considered part of the project, like autosaves, build directories, etc.).
With this setup, instead of using the stack tooling, for interactive work on the project you use nix-shell to set up an environment in which Nix has built and "installed" all of your project's dependencies. Then you can use cabal-install to build just your project, without any dependency issues or (re)building of dependencies.
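Concretely, the day-to-day loop looks something like this (assuming a shell.nix set up as described above):
$ nix-shell               # Nix builds/fetches every dependency and drops you into a shell
[nix-shell]$ cabal configure
[nix-shell]$ cabal build  # only your own package is compiled here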
As mentioned above, stack does not coordinate with Nix and tries to build every Haskell dependency itself in ~/.stack. If you want to keep all of your Haskell packages in the Nix store (which I would recommend) while following the same build plans as stack, you will need to use one of the tools mentioned above, or something similar.

How do I get GHCi to load OpenGL packages?

I can successfully build executables that link against OpenGL using GHC; however, I cannot get the package to load into GHCi. This is definitely a regression for me, because it worked with 32-bit GHC (at least with the version I upgraded from). I do not think the GHC version matters, just the fact that I am using the 64-bit GHC system.
On the recommendation of the maintainer, I explicitly brought the correct 64-bit version of opengl32 into GHCi successfully. The issue seems to lie further upstream.
Here is the relevant output; the verbose output is unfortunately no more specific. The function wglGetProcAddress is used to find where the OpenGL API hooks are in the DLL.
$ ghcii.sh -package OpenGL
GHCi, version 7.6.1: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package OpenGLRaw-1.2.0.0 ... linking ... ghc.exe: unable to load
package `OpenGLRaw-1.2.0.0'
ghc.exe: C:\...\cabal\OpenGLRaw-1.2.0.0\ghc-7.6.1\HSOpenGLRaw-1.2.0.0.o:
unknown symbol `__imp_wglGetProcAddress'
It's been some time since I've dabbled at that level of Haskell development, but this looks similar enough to a standard linking problem.
I can give you an answer on why it happens, but at the moment I am at a loss how to resolve it, short of fixing the problem upstream.
The function wglGetProcAddress is found in opengl32.dll. So your HSOpenGLRaw seems not to be properly linked against that, hence the failure to locate the symbol.
If this happened in a *nix environment, a simple solution would be to LD_PRELOAD libGL.so. However, on Windows, loading a module into a process doesn't make its symbols automatically visible to the rest of the process, so that wouldn't work there.
This also explains why it works for standalone binaries: those are linked before runtime, so extra libraries can be passed to the linker, which will resolve the missing dependencies.
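For reference, on *nix the preload workaround would have looked something like this (the libGL.so path is an assumption and will vary by system):
$ LD_PRELOAD=/usr/lib/libGL.so ghci -package OpenGL   # *nix only; does not help on Windows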

Where does `haskell-mode` look for libraries? (`stack`, GHC for OSX)

I'm using Emacs 24.5 and the latest GHC for OSX (let's call it GFO),
$ which stack
/Applications/ghc-7.10.3.app/Contents/bin/stack
$ which ghc
/Applications/ghc-7.10.3.app/Contents/bin/ghc
When compiling a module (C-c C-l) that requires libraries not in the GFO distribution, e.g. vector or transformers, I obviously get a series of Could not find module ... errors.
Now, I know that these packages are available in the system (in ~/.stack/snapshots/x86_64-osx/lts-5.15/7.10.3/lib/x86_64-osx-ghc-7.10.3/), because another project compiled them (via stack build).
QUESTION
Where do I change the behaviour of haskell-mode when loading a module? Otherwise, what's the exact command that is issued with C-c C-l, and how do I make it aware of the additional context introduced by stack?
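I suspect the relevant knob is something like haskell-process-type, i.e. making haskell-mode start its REPL through stack instead of a bare ghci, but I haven't verified this; an untested sketch:
;; untested: ask haskell-mode to launch its REPL via `stack ghci' so that
;; C-c C-l loads modules with the packages from the project's stack snapshot
(setq haskell-process-type 'stack-ghci)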
Thank you in advance

Issues compiling libf2c w/ latest mingw-get-inst (3.16.11), gcc

I'm trying to port some very old fortran code to windows. I'd like to use mingw and f2c, which has no problem converting the code to usable C on OS X and Ubuntu. I used f2c.exe as distributed by netlib on a fresh install of mingw, and it translated the code fine. I have a "ported" version of libf2c that seems to still contain some unresolved references -- mostly file i/o routines (do_fio, f_open, s_wsfe, e_wsfe) and, peculiarly, one arithmetic routine (pow_dd). To resolve these issues, I tried to build libf2c from source, but ran into an issue during the make process. The make proceeds to dtime_.c, but then fails due to a dependency on sys/times.h, which is no longer a part of the mingw distro. There appears to be a struct defined in times.h that defines the size of a variable in dtime_.c, specifically t and t0 on lines 53 and 54 (error is "storage size of 't' isn't known"; same for t0).
The makefile was modified to use gcc, and make was invoked with no other options passed.
Might anyone know of a workaround for this issue? I feel confident that once I have a properly compiled libf2c, I'll be able to link it with gcc and the code will work like it does on linux and os X.
FOLLOW-UP: I was able to build libf2c.a by commenting out the time-related files in the makefile (my code does not use any time-related functions, so I don't think it will matter). I copied it to a non-POSIX search directory as shown by gcc -print-search-dirs, specifically C:\MinGW\lib\gcc\mingw32\3.4.5. That seems to have fixed the unresolved references, although the need to eliminate the time files does concern me. While my code is now working, the original question stands: how do you handle makefiles that call for sys/times.h in MinGW?
Are you sure the MinGW installation went correctly? As far as I can tell, the sys/times.h header is still there, in the package mingwrt-3.18-mingw32-dev.tar.gz. I'm not familiar with the GUI installer, but perhaps you have to tick a box for the mingwrt dev component.
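If reinstalling that component is somehow not an option, a crude stand-in header might unblock the build. This is a hypothetical sketch, not the real MinGW header; it assumes dtime_.c only needs struct tms and a times() call, which this stub fakes with clock():
/* Hypothetical stand-in for the missing <sys/times.h>; NOT the real header.
   Just enough for dtime_.c to compile, with a crude times() that charges
   all CPU time to the user field via clock(). */
#ifndef SYS_TIMES_STANDIN_H
#define SYS_TIMES_STANDIN_H
#include <time.h>

struct tms {
    clock_t tms_utime;   /* user CPU time */
    clock_t tms_stime;   /* system CPU time */
    clock_t tms_cutime;  /* user CPU time of terminated children */
    clock_t tms_cstime;  /* system CPU time of terminated children */
};

static clock_t times(struct tms *buf)
{
    clock_t c = clock();
    buf->tms_utime  = c;
    buf->tms_stime  = 0;
    buf->tms_cutime = 0;
    buf->tms_cstime = 0;
    return c;
}

#endif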
