I am on NixOS, but I think this question should apply to any platform that uses Nix.
I found by trial that stack can be used with a couple of options to build a project, but I don't fully understand the difference between them:
stack
stack --system-ghc
stack --nix
Questions:
If I am using Nix (NixOS in my case), is there any reason I would want not to use the --nix argument?
What is the Nix way to deal with a Haskell project? Should cabal (cabal2nix) be used instead of stack?
I found that stack rebuilds lots of libraries that were already installed by Nix; what is the reason for that?
As I understand it, the --nix option to stack uses a nix-shell to manage non-Haskell dependencies (instead of requiring them to be in the "global" environment that stack is run from). This is probably a good idea if you want to use the stack tool on Nix. Stack also usually installs its own GHC; the --system-ghc option prevents this, and asks it to use the GHC in the "global" environment from which it is run. I believe that with --nix, stack will ask Nix to handle GHC versioning as well, so on a Nix system I would recommend building with --nix and without --system-ghc.
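Concretely, the flag placement on the command line looks like this:
# recommended on a Nix system: let Nix provide GHC and the system libraries
stack --nix build
# by contrast, --system-ghc reuses whatever GHC is already on the PATH
stack --system-ghc build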
At least in my opinion, it is better to avoid the stack tooling when working with Nix. This is because stack, even when run with --nix, does not inform Nix about the Haskell packages that it wants to build: it builds them all itself inside ~/.stack, and doesn't share them with the Nix store.
If your project builds with the latest nixpkgs versions of Haskell packages, or with relatively few simple overrides on those, cabal2nix is a great solution that will make sure that Haskell libraries only get built once (by Nix). If you want to make sure that you are building your project with the same package versions as stack would, I would recommend stackage2nix or stack2nix. I personally have been using the nixpkgs-stackage overlay, which is related to stackage2nix: this gives you both LTS package sets in nixpkgs, and nixpkgs.haskell.packages.stackage.lib.callStackage2nix, which you can use in your default.nix/shell.nix like so: callStackage2nix (baseNameOf ./.) (cleanSource ./.).${(baseNameOf ./.)}, with whatever definition of cleanSource fits your project (this can use filterSource to remove some files which shouldn't really be considered part of the project, like autosaves, build directories, etc.).
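To sketch such a default.nix (assumptions: the nixpkgs-stackage overlay is already wired into your nixpkgs, and this cleanSource is only one possible filter):
with import <nixpkgs> {};
let
  name = baseNameOf ./.;
  # example filter: hide stack's build output from the source that Nix sees
  cleanSource = src: builtins.filterSource
    (path: type: baseNameOf path != ".stack-work") src;
in
  (haskell.packages.stackage.lib.callStackage2nix name (cleanSource ./.)).${name}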
With this setup, instead of using the stack tooling, for interactive work on the project you should use nix-shell to set up an environment in which Nix has built and "installed" all of your project's dependencies. Then you can use cabal-install to build just your project, without any dependency issues or (re)building of dependencies.
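In day-to-day use that amounts to something like:
# enter a shell where Nix has built and exposed all the dependencies
nix-shell
# inside that shell, build only your own package
cabal build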
As mentioned above, stack does not coordinate with Nix, and tries to build every Haskell dependency itself in ~/.stack. If you want to keep all of your Haskell packages in the Nix store (which I would recommend) while following the same build plans as Stack, you will need to use one of the linked tools, or something similar.
Consider two software projects, proj_a and proj_b, with the latter depending on the former; and with both using CMake.
When reading about modern CMake, one gets the message that the "appropriate" way to express dependencies is via target dependencies; and one should arrange it so that dependent projects are represented as (imported) targets you can depend on. More specifically, in our example, proj_b will idiomatically have:
find_package(proj_a)
# etc etc.
target_link_libraries(bar proj_a::foo)
and proj_a will need to have been installed, utilizing the CMake installation-and-export-related commands, someplace where proj_b's CMake invocation will search for proj_a-config.cmake.
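For concreteness, a minimal sketch of that install-and-export boilerplate in proj_a (target and file names are illustrative, not from a real project):
add_library(foo foo.cpp)
add_library(proj_a::foo ALIAS foo)
install(TARGETS foo EXPORT proj_a-targets)
install(EXPORT proj_a-targets
        FILE proj_a-config.cmake
        NAMESPACE proj_a::
        DESTINATION lib/cmake/proj_a)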
I like this approach and encourage others to adopt it. It offers flexibility in the choice of your own version of proj_a vs the system version; and also allows for non-CMake proj_a's via a Findproj_a.cmake script (which, again, can be system-level or part of proj_b).
So far so good, right? However, there are people who want to "take matters into their own hands" in terms of dependencies - and CMake officially condones this, with modules such as ExternalProject and, more recently, FetchContent: these allow proj_b's configuration stage to actually download a (built, or in our case source-form) version of proj_a.
The puzzling part to me is that, after proj_a is downloaded, say to an external/proj_a directory, CMake's default behavior will be to
add_subdirectory(external/proj_a)
that is, to use proj_a as a subproject of proj_b and build them together. This, while the idiomatic use above allows the maintainer of proj_a to "do their own thing" in their own CMakeLists.txt, and only keep things neat and tidy for others via what they export/install.
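A minimal sketch of that FetchContent flow (the repository URL and tag are made up):
include(FetchContent)
FetchContent_Declare(proj_a
  GIT_REPOSITORY https://example.com/proj_a.git
  GIT_TAG        v1.0)
# by default this effectively runs add_subdirectory() on the fetched tree
FetchContent_MakeAvailable(proj_a)
target_link_libraries(bar proj_a::foo)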
My questions:
Why does it make sense to add_subdirectory(), rather than to build, install, and perform the equivalent of find_package() to meet the dependency? Or rather, why should the former, rather than the latter, be the default?
Should I really have to write my project-level CMakeLists.txt to be compatible with being add_subdirectory()'ed?
Note: Just to give some concrete examples of how this use constrains proj_a:
Must use unique option names which can't possibly clash with super-project names. So no more WITH_TESTS or BUILD_STATIC_LIB - it has to be WITH_PROJ_A_TESTS and BUILD_PROJ_A_STATIC_LIB (see the sketch after this list).
You have to account for the parent project having searched for other dependencies already, and perhaps differently than how you would like to search for them.
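For instance, a sketch of the prefixed options the first point forces on proj_a:
option(WITH_PROJ_A_TESTS "Build proj_a's test suite" OFF)
option(BUILD_PROJ_A_STATIC_LIB "Build proj_a as a static library" ON)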
Following the discussion in comments, I decided to post a bug report about this:
#22904: Support FetchContent_MakeAvailable performing build+install+find_package rather than add_subdirectory
So maybe this will change and the question becomes moot.
Why does it make sense to add_subdirectory(), rather than to build, install, and perform the equivalent of find_package() to meet the dependency? Or rather, why should the former, rather than the latter, be the default?
FetchContent doesn't just have to be for project() dependencies. It can be used for fetching utility scripts too. I'm guessing it was designed with that kind of consideration in mind. If your utility script is just one file, you can just file(DOWNLOAD) and add_subdirectory() directly, but the utilities could be multiple files, as is the case with aminya/project_options. FetchContent uses a lot of the same machinery as ExternalProject, so it can do a lot of the useful things that ExternalProject does. For example, you can use FetchContent to fetch aminya/project_options as a remote git repo, or as one of its archive artifacts, e.g. v0.20.0.zip.
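A sketch of those two fetch styles (the exact URLs are assumptions based on GitHub's usual layout):
include(FetchContent)
# as a remote git repository:
FetchContent_Declare(project_options
  GIT_REPOSITORY https://github.com/aminya/project_options.git
  GIT_TAG        v0.20.0)
# ...or as a released archive artifact:
# FetchContent_Declare(project_options
#   URL https://github.com/aminya/project_options/archive/refs/tags/v0.20.0.zip)
FetchContent_MakeAvailable(project_options)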
Should I really have to write my project-level CMakeLists.txt to be compatible with being add_subdirectory()'ed?
It's your choice! The reasoning here can be highly objective, or subjective. It's up to you. Some people just like to put in a lot of effort to support whatever their users might want. Some people have a lot of historical configuration baggage and are still catching up to newer CMake. And as you mentioned at the end of your question post, there are certain adjustments that need to be made to accommodate cleanly allowing people to add_subdirectory() you as a dependency. One example of a project which chose "no" is glew (see issue #314 for an explanation).
Just to give another reference to some related work mentioned in responses to the Kitware/CMake ticket you raised, here's the ticket which tracked work on "FetchContent and find_package() integration".
I'd like to have functionality similar to what stack install (i.e. the --copy-bins flag) provides for executables, but for libraries.
Currently, I have to stack build and then manually find the libHS*-<version>-<fingerprint>.a files in .stack-work. That is problematic/uncomfortable for two reasons:
I have to rely on the internal folder structure of stack (reliable enough, though)
I have to manually get rid of the fingerprint and the version
Well, I could work around both, I guess, but I'd like to know if this might already be available/sensible to implement.
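For the record, the manual workaround currently looks roughly like this (a sketch that leans on stack's internal layout, as noted above):
stack build
# locate the built static libraries under stack's internal dist directory
find .stack-work -name 'libHS*.a'
# then copy them somewhere sensible and strip the version/fingerprint by hand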
Some background, which may or may not be relevant to the question rather than to its motivation:
I am playing around with https://hackage.haskell.org/package/dynamic-loader-0.0/docs/System-Plugins-DynamicLoader.html and want to provide as realistic an example as I can, so I plan to compile a package's object code into a *.a (containing the compilation of multiple modules) which I want to link in at runtime.
What I want to do already works for trivial single-module files, where I only need to use loadModule. Currently I'm tinkering with loadPackage.
I'm using Emacs 24.5 and the latest GHC for OSX (let's call it GFO),
$ which stack
/Applications/ghc-7.10.3.app/Contents/bin/stack
$ which ghc
/Applications/ghc-7.10.3.app/Contents/bin/ghc
When compiling (C-c C-l) a module which requires libraries that are not in the GFO distribution, e.g. vector, transformers, I obviously get a series of Could not find module ...
Now, I know that these packages are available in the system (in ~/.stack/snapshots/x86_64-osx/lts-5.15/7.10.3/lib/x86_64-osx-ghc-7.10.3/), because another project compiled them (via stack build).
QUESTION
Where do I change the behaviour of haskell-mode when loading a module? Otherwise, what's the exact command that is issued with C-c C-l, and how do I make it aware of the additional context introduced by stack?
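One direction I've been looking at, though I'm not sure it's the right knob (it assumes a reasonably recent haskell-mode): making it start its interactive process through stack, so that loading a module sees stack's package databases:
;; hypothetical fix: launch the interactive process via "stack ghci"
(setq haskell-process-type 'stack-ghci)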
Thank you in advance
I'd like to use an open-source library on Windows (e.g. Aquila, following http://aquila-dsp.org/articles/iteration-over-wave-file-data-revisited/). But I can't understand anything about "build systems"... Everyone just says things like "unzip the tar, run configure, make, make install" on Linux, but I want to use these libraries on Windows. I have several questions.
i) Why do I have to "install" just source code? Why can't I use the header files by copying them to the working directory and writing #include ".\aquila\global.h"?
ii) What are configure and make/make install? I can't understand them. I just know that configuring open source on Windows needs "CMake", and that it is a configuration tool... but what does it actually do?
iii) Though I've run cmake, mingw32-make, and mingw32-make install, my compiler says "undefined reference to ...". What does this mean, and what should I do about it?
You don't need to install sources. You do need to install the libraries that get built from that source code and that your code is going to use.
configure is the standard name for the script that does build configuration for the software about to be built. The usual way it is run (and how you will see it mentioned) is ./configure.
make is a build management tool (as the tag here on SO will tell you). One of the most common mechanisms for building code on Linux (etc.) is to use the autotools suite, which uses the aforementioned configure script to generate build configuration information for use by generated makefiles, which make then uses to build the software. make is also the way to run the default build target defined in a makefile (which is often the all target and which usually builds the appropriate library/binary/etc.).
make install is a specific, secondary, invocation of the make tool on the install target which (generally) installs the (in this case previously) built code into an appropriate location (in the autotools/configure universe the default location is generally under /usr/local).
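Put together, the canonical sequence on Linux is:
./configure
make
make install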
cmake is, again as the SO tag says, a build system that generates configuration files for other build tools (make, VS, etc.). This allows the developers to create the build configuration once and build on multiple platforms/etc. (at least in theory).
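On Windows with MinGW, the equivalent flow (and roughly what you already ran) is a sketch like this:
cmake -G "MinGW Makefiles" .
mingw32-make
mingw32-make install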
If running cmake worked correctly then it should have generated the correct information for whatever target system you told it to use (make or VS or whatever). Assuming that was make, that should have allowed mingw32-make to build the software correctly (assuming additionally that mingw32-make is not a distinct cmake target from make). If that is not working correctly then something is still missing from your system (and cmake probably should have caught that).
But to give any more detail you will need to give more detail about what errors you are actually getting and from what command.
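That said, "undefined reference" errors at link time most often mean the built library was never passed to the linker. A hypothetical command line (the include/lib paths and the -laquila name are assumptions; check what mingw32-make install actually produced):
g++ main.cpp -IC:/aquila/include -LC:/aquila/lib -laquila -o main.exe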
(Oh, and on Windows, and especially if you plan on building your software with VS (or some other non-mingw32-make tool) the chances of you needing to run mingw32-make install are incredibly small).
For Windows, use CMake (or the latest Ninja).
The process is not simple or straightforward, but it is achievable. You need to write a CMake configuration.
The build process is not simple and straightforward; that's one reason languages like Java exist (but that's another topic).
Rely on CMake to build the library, and you will get the open-source library for Windows.
You can distribute this as a library for Windows systems, or distribute it integrated with your own software; in either case, you have to build the open-source library for Windows.
Writing the CMake configuration helps, and it makes building for other platforms easier as well.
Now the question comes: is there any other way except CMake for a Windows build?
Would you love the flavor of writing assembly directly?
If the answer is obviously no, you will have to write CMake and generate a .sln for MSVC and other compilers.
Read the FAQ and documentation before building an open-source library, and fix the errors as they come up.
It is like handling burning iron, but it pays off if you're working on something meaningful. Most server libraries are open source (e.g. the age-old Apache httpd). So think about what you're doing.
There aren't many open-source libraries that work in your project without this effort, but this is the way to use the ones that exist.
What are the things I need in my install and uninstall targets in a Makefile for an OCaml library in order to make it play nicely with the rest of the installation, work seamlessly with ocamlfind and so on? Basically to be a "good citizen". I am not interested in GODI at the present time. Thanks!
META files for ocamlfind are easy to write (basically, look for a META in another OCaml project you know¹, copy it and make the corresponding changes), and they will give you ocamlfind integration, with in particular easy support for post-build installation and uninstallation (using ocamlfind install and ocamlfind remove). You should begin with that.
¹: for example I take inspiration from batteries's META.
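For reference, a minimal META can be as small as this (the package and archive names are placeholders):
description = "An example library"
version = "0.1"
requires = ""
archive(byte) = "mylib.cma"
archive(native) = "mylib.cmxa"
With that in place, ocamlfind install mylib META mylib.cma mylib.cmxa *.cmi installs the library, and ocamlfind remove mylib uninstalls it.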
The building part of the Makefile is more tricky; there are numerous solutions (OCamlMakefile, OMake, ocamlbuild, plain Makefile, etc.) with varying strengths and weaknesses. If your project is simple enough, I would recommend ocamlbuild, which takes care of a lot of the dependency tracking by itself.
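With ocamlbuild, building both byte and native archives of a library can be as small as this (the target names are placeholders):
ocamlbuild mylib.cma mylib.cmxa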
You may also use Oasis, which is a relatively new tool that builds on ocamlbuild and ocamlfind and seeks to provide a unified configuration file for pre-build configuration and various building and deployment tasks (for your program, your software libraries if any, and accompanying data or documentation...). It's not yet a mature project (and its little brother Oasis-DB isn't released yet), but I encourage you to give it a try if you have time. It's a bit more complex than META, as it does more in the end, so building the META first is still a good step.
Finally, you said you weren't interested in GODI (GODI is a very good system, and in some cases (e.g. BSD) it's a premium choice for a good OCaml installation), but you may still be interested in Godiva, a tool to help with building GODI packages. I have never used it myself, though.
I don't use makefiles, but ocamlbuild and a shell script, to install the software I distribute. The Debian people made packages for my software with these scripts without problems, so you may want to check them out, since they correspond to some of their requirements (e.g. separate targets for byte and native code).
You may also want to have a look at their packaging policy, though I don't know if this document is still up to date.
Don't forget to add a META file for ocamlfind. And you may also want to include an _oasis file for the upcoming oasis-db project (not yet done in the software I distribute).