Using pkg-config with Haskell Stack's Docker integration - haskell-stack

I'm trying to build a Haskell Stack project whose extra-deps includes opencv, which itself depends on OpenCV 3.0 (presently only buildable from source).
I'm following the Docker integration guidelines, and using my own image which builds upon fpco/stack-build:lts-9.20 and installs OpenCV 3.0 (Dockerfile.stack.opencv).
If I build my image I can confirm that opencv is installed and visible to pkg-config:
$ docker build -t stack-opencv -f Dockerfile.stack.opencv .
$ docker run stack-opencv pkg-config --modversion opencv
3.3.1
However, if I specify this image in my stack.yaml:
docker:
  image: stack-opencv
Attempting to stack build yields:
Configuring opencv-0.0.2.0...
setup: The pkg-config package 'opencv' version >=3.0.0 is required but it
could not be found.
I've run the build without the Docker integration, and it completes successfully.

The Dockerfile passes CMAKE_INSTALL_PREFIX=$HOME/usr to cmake.
When running docker build, the root user is used, and thus $HOME is set to /root.
However, when doing stack build, the stack user is used; that user does not have permission to read /root, so pkg-config cannot find opencv.
By removing the -D CMAKE_INSTALL_PREFIX=$HOME/usr flag from the cmake invocation, the default prefix (/usr/local) is used instead. That location is accessible to the stack user, so pkg-config can find opencv during stack build.
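For illustration, a minimal Dockerfile.stack.opencv sketch along those lines (the apt package list and OpenCV build options are assumptions, not taken from the original file):
FROM fpco/stack-build:lts-9.20
# Build OpenCV with cmake's default prefix (/usr/local) so that any user,
# including the one stack builds as, can see it via pkg-config.
RUN apt-get update && apt-get install -y cmake git pkg-config
RUN git clone --depth 1 --branch 3.3.1 https://github.com/opencv/opencv.git /tmp/opencv \
 && mkdir /tmp/opencv/build && cd /tmp/opencv/build \
 && cmake .. \
 && make -j"$(nproc)" && make install && ldconfig \
 && rm -rf /tmp/opencv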

Related

How to correctly build cabal project using hmatrix under Windows 10?

Using Windows 10 64-bit, Cabal-3.4.0.0, ghc-8.10.7.
I installed OpenBLAS in the MSYS2 environment with the command
pacman -S mingw-w64-x86_64-openblas
Then I successfully installed hmatrix-0.20.2 with the command
cabal install --lib hmatrix --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib"
I am trying to build a simple test project using cabal build cabalhmatrix, with Main:
module Main where

import Numeric.LinearAlgebra

main :: IO ()
main = do
  putStrLn $ show $ vector [1,2,3] * vector [3,0,-2]
But now I am getting output
Resolving dependencies...
Build profile: -w ghc-8.10.7 -O1
In order, the following will be built (use -v for more details):
- hmatrix-0.20.2 (lib) (requires build)
- cabalhmatrix-0.1.0.0 (exe:cabalhmatrix) (first run)
Starting hmatrix-0.20.2 (lib)
Failed to build hmatrix-0.20.2. The failure occurred during the configure
step.
Build log (
C:\cabal\logs\ghc-8.10.7\hmatrix-0.20.2-6dd2e8f2795550e4dd624770ac98c326dacc0cac.log
):
Warning: hmatrix.cabal:21:28: Packages with 'cabal-version: 1.12' or later
should specify a specific version of the Cabal spec of the form
'cabal-version: x.y'. Use 'cabal-version: 1.18'.
Configuring library for hmatrix-0.20.2..
cabal-3.4.0.0.exe: Missing dependencies on foreign libraries:
* Missing (or bad) C libraries: blas, lapack
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.If
the library files do exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
cabal-3.4.0.0.exe: Failed to build hmatrix-0.20.2 (which is required by
exe:cabalhmatrix from cabalhmatrix-0.1.0.0). See the build log above for
details.
What should I do to correctly build that package?
I guess I need to somehow pass the arguments --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib" to hmatrix during compilation, but I don't know how to do that. To be honest, I don't understand which program those arguments are actually for (cabal, ghc, ghc-pkg, or something else), or why cabal is trying to install hmatrix again. I can see hmatrix in the directory "C:\cabal\store\ghc-8.10.7\hmatrix-0.20.2-e917eca0fc7690010007a19f4f2a3602d86df0f0".
I created a cabal.project file:
packages: .

package hmatrix
  flags: +openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  extra-lib-dirs: C:\\ghcup\\msys64\\mingw64\\bin, C:\\ghcup\\msys64\\mingw64\\lib
After adding the location of libopenblas.dll to the PATH variable, the cabal project builds and runs.
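For example, a session-local way to do that from cmd (the directory is the MSYS2 mingw64 bin path already used above; adding it permanently via the system environment settings works as well):
set PATH=C:\ghcup\msys64\mingw64\bin;%PATH%
cabal build cabalhmatrix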
Even though there is the --lib flag, it's generally best to work under the assumption that Cabal doesn't do library installs. Never install a library; instead, just depend on it – and let Cabal install, update, etc. it whenever necessary.
But then how can you pass the necessary flags? With a cabal.project file.
packages: .

package hmatrix
  flags: openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  ...
Put this file in the working directory of your own project, together with cabalhmatrix.cabal. Running cabal build in that directory will then build hmatrix with the appropriate flags and library/include directories.
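For completeness, a minimal cabalhmatrix.cabal sketch that such a cabal.project would sit next to (the exact stanza contents are assumptions based on the question, not the asker's actual file):
cabal-version: 2.4
name: cabalhmatrix
version: 0.1.0.0

executable cabalhmatrix
  main-is: Main.hs
  -- hmatrix is declared as an ordinary dependency; cabal.project supplies
  -- the openblas flag and the extra include/lib directories.
  build-depends: base, hmatrix
  default-language: Haskell2010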

Build conda package from a local C++ program

I am trying to build (and later upload) a conda package which would contain my custom program that I have developed in C++.
Simplifying the problem, I have the following meta.yaml:
package:
  name: CoolName
  version: "1.0.0"

source:
  path: ./source

requirements:
  build:
    - make
and the following build.sh:
make
I have two questions here:
1) How and where should I copy the binary which is a result of the make compilation so that it is indeed recognized upon environment activation?
2) How should I specify g++ as a dependency? I would like this package to later be available for linux-64 and osx-64... In the build process (in the Makefile) I am using only g++.
Edit
I have modified my build script to have:
make
mkdir -p $PREFIX/bin
cp my_binary $PREFIX/bin/my_binary
And now the conda-build is successful. However, when I later try to install the package locally with conda install --use-local I get:
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
But this is not true: my binary is not installed anywhere and is not recognized...
How and where should I copy the binary which is a result of the make compilation so that it is indeed recognized upon environment activation?
As you mentioned in your edit, install it somewhere within ${PREFIX}.
How should I specify g++ as a dependency?
To use conda-supplied compilers (rather than your system compiler), use this:
requirements:
  build:
    - {{ compiler('cxx') }}
I would like to have this package be later available for linux-64 and osx-64... In the building process (in the Makefile) I am using only g++.
Note: On Mac, it will use clang++, not g++. Make sure your Makefile respects the ${CXX} environment variable instead of hard-coding g++.
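A minimal Makefile sketch along those lines (the target and source file names are assumptions; note the recipe line must start with a tab):
# Use the compiler exported by conda's build environment; fall back to g++.
CXX ?= g++
CXXFLAGS ?= -O2 -std=c++11

my_binary: main.cpp
	$(CXX) $(CXXFLAGS) $(LDFLAGS) -o $@ $^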
However, when I later try to install the package locally with conda install --use-local I get:
That is strange. conda install --use-local CoolName should do what you want. But here are some things to try:
Double-check the contents of the environment you're trying to install it into:
conda list
Try installing to a fresh environment:
conda create -n my-new-env --use-local CoolName
Delete any obsolete versions of the package you might have created before you successfully built the package:
# Inspect the packages you've created,
# and consider deleting all but the most recent one.
ls $(conda info --base)/conda-bld/linux-64/CoolName*.tar.bz2
...then try running conda install again.

"linking with arm-linux-gnueabihf-gcc failed" when cross-compiling a Rust application from macOS to a Raspberry Pi 2

I want to cross-compile my Rust application on macOS to a Raspberry Pi 2. I searched a lot, but did not find a working solution. The last solution I tried was following this answer, but I couldn't get it to work.
macOS version: 10.13.5 (High Sierra)
rustup version: 1.11.0
cargo version: 1.26.0
What I did:
I cloned raspberrypi/tools
Installed the arm-unknown-linux-gnueabihf and armv7-unknown-linux-gnueabihf targets via rustup
Created a .cargo/config file in the root of my project with the following content:
[target.arm-unknown-linux-gnueabihf]
linker = "/Users/user/Documents/Programming/RustProjects/hello-pi/../../Utils/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc"
Then I ran cargo build --target=arm-unknown-linux-gnueabihf
I get the following error:
linking with /Users/user/Documents/Programming/RustProjects/hello-pi/../../Utils/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc failed: exit code: 126
....
cannot execute binary file
It seems that I cannot run the ...gcc binary on my macOS machine. What would be the right way to cross-compile my Rust application from macOS to the ARM architecture for a Raspberry Pi 2?
Because the rust-std library relies on glibc for syscalls and other low-level functionality, cross-compiling a Rust binary also requires an appropriate C toolchain to be present. This is where crosstool-NG comes into play.
crosstool-NG is in the toolchain building business. You’re going to use it to build yourself a toolchain for linking against ARMv7-compatible glibc, which will in turn allow you to successfully build your Rust binary for the Pi.
Clone the repo to a good location and bootstrap it:
cd /Users/USER
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
./bootstrap
Configure the installation and run it. To set where the tool goes on install, run:
./configure --prefix=$PWD
make
make install
export PATH="${PATH}:${PWD}/bin"
If all things went as expected, you should be able to run ct-ng version and verify the tool’s ready to go.
Configure the tool to build your ARMv7 toolchain. Luckily, crosstool-NG comes with some preset configurations, namely armv7-rpi2-linux-gnueabihf. Run:
ct-ng armv7-rpi2-linux-gnueabihf
There should be some output indicating that it’s now configured for armv7-rpi2-linux-gnueabihf. You just need to tell ct-ng where the toolchain ought to go:
mkdir /Users/USER/ct-ng-toolchains
cd /Users/USER/ct-ng-toolchains
ct-ng menuconfig
It can be overwhelming, as there are a ton of options, but stick to the Paths and misc options ---> menu option. Highlight it and hit Enter.
Under *** crosstool-NG behavior ***, scroll down until you see this long string:
(${CT_PREFIX:-${HOME}/x-tools}/${CT_HOST:+HOST-${CT_HOST}/}${CT_TARGET}) Prefix directory
- Hit Enter, delete the contents, and replace it with /Users/USER/ct-ng-toolchains.
- When you’re finished, hit Enter to confirm, scroll over and save, and then exit the configurator.
Build your toolchain (this may take half an hour):
ct-ng build
If the build succeeded, you should see a great many binaries now in /Users/USER/ct-ng-toolchains/armv7-rpi2-linux-gnueabihf/bin, among them armv7-rpi2-linux-gnueabihf-gcc.
For cargo to build using your new cross-compiler, you must:
Add the bin folder listed above to your PATH:
export PATH="${PATH}:/Users/USER/ct-ng-toolchains/armv7-rpi2-linux-gnueabihf/bin"
Update (or create) your global /Users/USER/.cargo/config file with the following (alternatively, you can put it in your project's local .cargo/config):
[target.armv7-unknown-linux-gnueabihf]
linker = "armv7-rpi2-linux-gnueabihf-gcc"
Return to your Rust project and rerun cargo build:
cd /Users/USER/rust/hello
cargo build --target=armv7-unknown-linux-gnueabihf
The output should be something similar to:
Compiling hello v0.1.0 (file:///Users/USER/rust/hello)
Finished dev [unoptimized + debuginfo] target(s) in 0.85 secs
SCP your file over to the RPi and run the binary remotely:
scp target/armv7-unknown-linux-gnueabihf/debug/hello pi@192.168.1.43:
ssh pi@192.168.3.155 'chmod +x ~/hello && ~/hello'
Hello, world!
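As a side note (not from the original article), the same linker can also be supplied per invocation through Cargo's environment-variable override instead of .cargo/config:
export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=armv7-rpi2-linux-gnueabihf-gcc
cargo build --target=armv7-unknown-linux-gnueabihf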
Credit goes to Kappel Codes; I tried to summarize the article here, as I found this question hours before I found that article :)

Building and linking shared Tensorflow library on OSX El Capitan to call from Ruby via Swig

I'm trying to help build a Ruby wrapper around Tensorflow using Swig. Currently, I'm stuck at making a shared build, .so, and exposing its C/C++ headers to Ruby. So the question is: how do I build libtensorflow.so with the full Tensorflow library included, so it's available as a shared library on OSX El Capitan (note: /usr/lib/ is read-only on El Capitan)?
Background
In this ruby-tensorflow project, I need to package a Tensorflow .bundle file, but whenever I run irb -Ilib -rtensorflow or try to run the specs with rspec, I get errors that the basic numeric types are not defined, even though they are clearly defined here.
I'm guessing this happens because my .so file was not created properly or something is not linked as it should be. C++/Swig/Bazel are not my strong suits; I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at this point getting to that fun part!
What I've done:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
bazel build //tensorflow:libtensorflow.so (wait 10-15min on my machine)
Copied the generated libtensorflow.so (166.6 MB) to the project's ext/ folder
Ran ruby extconf.rb, make, and make install as described in the project
Ran rspec
In desperation, I've also gone through the official installation from source several times, but I don't know whether the last step, sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl, even creates a shared build or just exposes a Python interface.
Arafat, who made the original repository and wrote the instructions I've followed, says his libtensorflow.so is 4.5 GB on his Linux machine – over 20x the size of the shared build on my OSX machine. UPDATE 1: he says his libtensorflow.so build is 302.2 MB; 4.5 GB was the size of the entire tensorflow folder.
Any help or alternative approaches are very appreciated!
After more digging around, discovering otool (thanks Kristina), and better understanding what a .so file is, it turned out the solution didn't require much change to my setup:
Shared Build
# Clone source files
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
# Build library
bazel build //tensorflow:libtensorflow.so
# Copy the newly shared build/library to /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
Calling from Ruby using Swig
Follow the steps here, https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow, to run Swig, create a Makefile and make
When you run make you should see a line saying:
$ make
linking shared-object libtensorflow.bundle
If your shared build is not accessible, you'll see something like:
ld: library not found for -ltensorflow
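A quick way to check whether the linker can see the library at all, independent of Swig (a hedged sketch; the temporary file name is arbitrary):
# Minimal link check against /usr/local/lib/libtensorflow.so
echo 'int main(void) { return 0; }' > /tmp/tf_link_check.c
clang /tmp/tf_link_check.c -L/usr/local/lib -ltensorflow -o /tmp/tf_link_check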
Simple tutorial
For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/
I don't think you actually want a .so; I think you want a .dylib (see What are the differences between .so and .dylib on osx?). You're forcing Bazel to build a .so by specifying libtensorflow.so as the target. Build this instead:
bazel build //tensorflow
(//tensorflow is shorthand for //tensorflow:tensorflow, which is "build the tensorflow target." Specifying an exact file you want forces Bazel to build that file, if possible.)
Once you have a .dylib, you can check its contents with otool:
otool -L bazel-bin/tensorflow/libtensorflow.dylib
Not sure if this will solve all your problems, but worth a try.

Apache Mesos configured failed on OS X Yosemite

I am following the doc (http://mesos.apache.org/gettingstarted/) and trying to install Mesos on my mac. When I try to configure it, it gives me the error:
checking python extra linking flags... -u _PyMac_Error Python.framework/Versions/2.7/Python
checking consistency of all components of python development environment... no
configure: error: in `/Users/syang/Desktop/git/mesos/build':
configure: error:
Could not link test program to Python. Maybe the main Python library has been
installed in some non-standard library path. If so, pass it to configure,
via the LDFLAGS environment variable.
Example: ./configure LDFLAGS="-L/usr/non-standard-path/python/lib"
============================================================================
ERROR!
You probably have to install the development version of the Python package
for your distribution. The exact name of this package varies among them.
============================================================================
I use Python 2.7.8 and I am trying to install Mesos 0.23.0. I did some searching; it seems that after installing the command-line tools via Xcode, the linking problem should be resolved, but that does not appear to be the case for me. Is there anyone who has had a similar experience and can help me?
Thank you.
The easiest way of running Mesos on a local machine is to use https://github.com/bobrik/mesos-compose (Docker) or https://github.com/mesosphere/playa-mesos (Vagrant).
Things are a bit different when building it on OS X. You could use "brew install mesos" to install it directly. The formula at https://github.com/Homebrew/homebrew/tree/master/Library/Formula/mesos.rb also shows how to build Mesos on OS X.
I don't know whether you have resolved this issue, but for future reference I would like to suggest the steps below, based on this blog post: http://gwikis.blogspot.com/2015/08/building-mesos-0230-on-os-x-yosemite.html
$ cd mesos-0.x./build/
$ PYTHON=/usr/bin/python ../configure
Moreover, in case you receive errors like "libapr-1 is required for mesos to build" or "libsubversion-1 is required for mesos to build", you could do the following, assuming the apr and subversion libraries are installed with brew:
$ PYTHON=/usr/bin/python ../configure --with-svn=/usr/local/Cellar/subversion/1.8.13/ --with-apr=/usr/local/Cellar/apr/1.5.2/libexec/
To understand why the Python path is incorrect and the compile fails in the first place, please read through the blog post.
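If configure still cannot link against Python, the error message's own suggestion can be combined with the system interpreter, for example (the framework path below is an assumption for a stock OS X Python 2.7, not taken from the blog post):
$ PYTHON=/usr/bin/python ../configure \
    LDFLAGS="-L/System/Library/Frameworks/Python.framework/Versions/2.7/lib"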
