Missing dynamic (.so) library in fftw 2.1.5 installation

I am trying to run simulations using Gadget2, an astrophysics N-body simulation package. It requires a few libraries, including fftw-2.1.5. I have installed fftw using the guidelines given in the user manual:
./configure --prefix=<PATH> --enable-type-prefix --enable-mpi
make
make install
make clean
./configure --prefix=<PATH> --enable-float --enable-type-prefix --enable-mpi
make
make install
The two configure-and-make passes are needed to get both single- and double-precision libraries, according to this source. The install completed successfully, and I was also able to compile Gadget2.
But when I try to run Gadget2, I get the following error:
./Gadget2: error while loading shared libraries: libsrfftw_mpi.so.2: cannot open shared object file: No such file or directory
The file libsrfftw_mpi.so.2 is missing in the fftw lib folder, even though a few download sites for fftw packages say that it is part of the contents. What am I missing?

Set the variable below and run your command again:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<PATH from your install command>
Also don't forget to add --enable-shared to both configure commands.
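Putting it together, a rebuild along these lines should produce the missing .so files (a sketch only: the flags other than --enable-shared are the ones from the question, and <PATH>/lib is where the shared objects normally land under that prefix):
# Double precision, now with shared libraries
./configure --prefix=<PATH> --enable-type-prefix --enable-mpi --enable-shared
make
make install
make clean
# Single precision, also shared
./configure --prefix=<PATH> --enable-float --enable-type-prefix --enable-mpi --enable-shared
make
make install
# Let the runtime linker find libsrfftw_mpi.so.2 and friends, then run Gadget2 again
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<PATH>/lib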

Related

Get the Travis CI build environment working for a netcdf-dependent Fortran-based R package

This question could carry either of two subtitles:
What is the pathname to any installed libraries in the Travis CI environment?
or
How do I make my Makevars file portable for NetCDF libraries?
Background:
I am developing an R package that is supposed to work with a shared library I have written in Fortran. I want to check my builds with Travis CI, so my package is currently on GitHub.
Upon package installation, the Fortran source code should be compiled. I can do this locally, but Travis CI fails with the following message:
gfortran -fdefault-real-8 -c HANDLE_ERR.f90
HANDLE_ERR.f90:4: Error: Can't open included file 'netcdf.inc'
make: *** [HANDLE_ERR.o] Error 1
I understand this to mean that the compiler cannot find the NetCDF library, which I made sure was installed by adding this to my .travis.yml:
before_install:
- sudo apt-get install libnetcdf-dev -y
Example
I have created a minimal working example, which fails in Travis CI with the same error message (above) that I am getting on my big project.
See here for the travis build https://travis-ci.org/teatree1212/nctest
you can access my minimal working example repository from there, but here is the link as well: https://github.com/teatree1212/nctest/tree/master
The compilation works when I do it locally, because there I can specify the NetCDF library directories. I don't know where these are installed in the Travis build environment, so I think this is where my problem lies at the moment.
However, I would like to make this package portable rather than only make it work in the Travis container. Hence these two questions:
What is the pathname to any installed libraries in the Travis CI environment?
and more importantly:
How do I make my Makevars file portable for compiling with NetCDF libraries?
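One way to avoid hard-coding paths (a sketch, assuming the libnetcdf-dev package provides the nc-config helper; on some NetCDF versions the Fortran flags live in nf-config instead) is to ask NetCDF itself for its compile and link flags and feed those into Makevars:
# Query the local NetCDF installation instead of hard-coding directories
nc-config --fflags    # e.g. -I/usr/include, the directory containing netcdf.inc
nc-config --flibs     # e.g. -L/usr/lib -lnetcdff -lnetcdf
Substituting the output of these two commands for the hard-coded directories should work both locally and inside the Travis container.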

Building and linking shared Tensorflow library on OSX El Capitan to call from Ruby via Swig

I'm trying to help build a Ruby wrapper around Tensorflow using Swig. Currently, I'm stuck at making a shared build (.so) and exposing its C/C++ headers to Ruby. So the question is: how do I build libtensorflow.so with the full Tensorflow library included, so that it's available as a shared library on OSX El Capitan (note: /usr/lib/ is read-only on El Capitan)?
Background
In this ruby-tensorflow project, I need to package a Tensorflow .bundle file, but whenever I run irb -Ilib -rtensorflow or try to run the specs with rspec, I get errors that the basic numeric types are not defined, even though they are clearly defined here.
I'm guessing this happens because my .so file was not created properly or something is not linked as it should be. C++/Swig/Bazel are not my strong sides; I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at this point and can't get to that fun part!
What I've done:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
bazel build //tensorflow:libtensorflow.so (wait 10-15min on my machine)
Copied the generated libtensorflow.so (166.6 MB) to the project's ext folder
Ran the ruby extconf.rb, make, and make install steps described in the project
Ran rspec
In desperation, I've also gone through the official installation from source several times, but I don't know whether the last step there (sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl) even creates a shared build or just exposes a Python interface.
The guy, Arafat, who made the original repository and wrote the instructions I've followed, says his libtensorflow.so is 4.5 GB on his Linux machine, over 20X the size of the shared build on my OSX machine. UPDATE 1: he says his libtensorflow.so build is actually 302.2 MB; the 4.5 GB was the size of the entire tensorflow folder.
Any help or alternative approaches are very appreciated!
After more digging around, discovering otool (thanks Kristina), and better understanding what a .so file is, the solution didn't require much change to my setup:
Shared Build
# Clone source files
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
# Build library
bazel build //tensorflow:libtensorflow.so
# Copy the newly built shared library to /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
Calling from Ruby using Swig
Follow the steps here, https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow, to run Swig, create a Makefile and make
When you run make you should see a line saying:
$ make
linking shared-object libtensorflow.bundle
If your shared build is not accessible, you'll see something like:
ld: library not found for -ltensorflow
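In that case, one thing worth trying (a guess, assuming the library really was copied to /usr/local/lib as above) is to make the link-time search path explicit and rebuild:
# Tell the compiler/linker where the shared build lives, then rebuild the extension
export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH
make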
Simple tutorial
For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/
I don't think you actually want a .so; I think you want a .dylib (see What are the differences between .so and .dylib on osx?). You're forcing Bazel to build a .so by specifying libtensorflow.so as the target. Build this instead:
bazel build //tensorflow
(//tensorflow is shorthand for //tensorflow:tensorflow, which is "build the tensorflow target." Specifying an exact file you want forces Bazel to build that file, if possible.)
Once you have a .dylib, you can check its contents with otool:
otool -L bazel-bin/tensorflow/libtensorflow.dylib
Not sure if this will solve all your problems, but worth a try.

Libtool installation issue with make install

I use the following Autotools steps to install my packages:
./configure
make
make install prefix=/my/path
However, I get the following libtool warnings when installing my software package this way: "libtool: warning: remember to run 'libtool --finish /usr/local/lib'" and "libtool: warning: 'lib/my.la' has not been installed in '/usr/local/lib'". If I change to the following commands, the problem disappears:
./configure
make prefix=/my/path
make install prefix=/my/path
It looks like the first method doesn't pass the prefix through to libtool correctly. How can I avoid this problem?
Among the information that libtool archives record about the libraries they describe is the expected installation location. That information is recorded when the library is created. You can then install to a different location, but libtool will complain. Often, libtool's warning is harmless.
In order to avoid such a warning, you need to tell libtool the same installation location at build time that you do at install time. You present one way to do that in the question, but if you're using a standard Autotools build system then it is better to specify the installation prefix to configure:
./configure --prefix=/my/path
make
make install
Alternatively, if you're installing into a staging area, such as for building an RPM, then use DESTDIR at install time. libtool will still warn, but you'll avoid messing up anything else:
./configure
make
make install DESTDIR=/staging/area

Solving the "No package 'json' found" error

I'm on Mac OS X Mountain Lion and a newbie to autotools and other GNU build tools. I'm trying to build a custom version of json-c to use with a C project (axis2/c). After running the autotools and then the configure command, I get a failure with this output:
checking whether to use JSON... yes
checking for JSON... no
configure: error: Package requirements (json) were not met:
No package 'json' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables JSON_CFLAGS
and JSON_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
If I install json-c from MacPorts, configure runs properly. Unfortunately, the project needs a later version of json-c than what is available in MacPorts (the MacPorts version gets through the configure stage, but later results in a compilation error).
When I install json-c manually from source, I see that the libs are in /usr/local/lib and the header files in /usr/local/include/json-c. After removing any json-c files that came from MacPorts, I tried copying these to the respective locations in /opt/local/lib and /opt/local/include/json-c, but it still resulted in the same package-not-found error.
What does MacPorts do differently so that the package is 'found' when you run configure? Can I replicate that when I manually install json-c from source?
Thanks in advance.
MacPorts creates a .pc file under /opt/local/pkgconfig/; in this case it was json.pc. I edited that file to point to the locations in /usr/local, and configure then found and used the package I had manually built from source.
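An alternative that avoids editing the MacPorts file (a sketch, assuming the source build installed a json.pc under the default /usr/local prefix) is to point pkg-config at the source-built metadata directly, as the configure error message itself suggests:
# Let pkg-config see the manually installed json-c first, then re-run configure
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --modversion json    # should now report the version built from source
./configure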

Build shared libraries in ATLAS

I've read the entire ATLAS installation guide, and it says all you need to build shared (.so) libraries is to pass the --shared flag to the configure script. However, when I build, the only .so files that appear in my lib folder are libsatlas.so and libtatlas.so, though the guide says that there should be six others:
libatlas.so, libcblas.so, libf77blas.so, liblapack.so, libptcblas.so, libptf77blas.so
After installation some of the tests fail because these libraries are missing. Furthermore, FFPACK wants these libraries during installation.
Has anyone encountered this? What am I doing incorrectly?
In my experience, it's a lot more complex than that, see our EasyBuild implementation of the ATLAS build procedure at https://github.com/hpcugent/easybuild-easyblocks/blob/master/easybuild/easyblocks/a/atlas.py .
We needed to:
enable the -fPIC compiler option
run 'make shared cshared ptshared cptshared' in the 'lib' directory
We're not even using --shared for configure, probably because it doesn't do much.
If you want to build ATLAS (and whatever you will be linking it with) without headaches, look into EasyBuild.
(disclaimer: I'm a developer for EasyBuild)
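For reference, the manual route described above looks roughly like this (a sketch only: ATLAS is normally configured from a separate, empty build directory, -Fa alg is ATLAS's way of appending a flag to every compiler, and any other configure options are whatever you already use):
# Append -fPIC to all compilers, build, then build the shared libs by hand
../configure -Fa alg -fPIC
make build
cd lib
make shared cshared ptshared cptshared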
First, if you have incorrectly specified the --force-tids flag for configure, the parallel libs won't build. To check this you can run make ptcheck. I have a question regarding the specification of this flag here.
Then, if I examine my resulting ATLAS Makefile, it says "... only when atlas is built to one lib", and indeed only two "fat" libs are constructed: libsatlas.so and libtatlas.so.
I guess you can either link FFPACK against those libs or change the resulting ATLAS Makefile to contain the targets you need (which shouldn't be too hard, since the static libs are available).
I had to manually create links to the .so.3 files.
So the versioned library files existed, but not the files CMake was looking for.
Running
sudo ln -s libatlas.so.3 libatlas.so
sudo ln -s libcblas.so.3 libcblas.so
sudo ln -s liblapack_atlas.so.3 liblapack_atlas.so
(I didn't build cblas, atlas, or lapack myself but installed them with apt-get. I'm wondering why the links were not created automatically.)
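For what it's worth, on Debian/Ubuntu the unversioned .so symlinks normally come from the corresponding -dev packages, so installing those is an alternative to creating the links by hand (the package names below are assumptions for a typical setup):
# Hypothetical alternative: the -dev packages ship the unversioned .so symlinks
sudo apt-get install libatlas-base-dev liblapack-dev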
