Pybind11 module not working on deployment to Heroku

I'm trying to deploy an app built in Dash to Heroku. The app uses simulation code written in C++, which is imported as a Python module using pybind11. When I upload the compiled code, the Heroku logs show the following error:
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /app/simulation_module_name.so)
I presume that this means that code compiled with pybind11 on my machine is not compatible with the machines I am trying to deploy to.
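One way to confirm the mismatch (a sketch; heroku run bash opens a shell on a one-off dyno, and the module filename is the one from the error above):
# Open a shell on a one-off dyno and check which GLIBC the Heroku stack provides
heroku run bash
ldd --version
# On the local build machine, list the GLIBC versions the module actually requires
objdump -T simulation_module_name.so | grep GLIBC_ | sort -u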
My next attempt was to build the module directly on the Heroku servers using setup.py and CMake, but this requires functions from the Boost libraries, which are >1GB and so can't be uploaded over git. I also tried uploading only the relevant Boost header files, with the corresponding CMake, such that
set(include_dir "lib")
set(source_dir "src")
# Boost
list(APPEND include_dirs ${include_dir}/boost)
set(header_files
    ${include_dir}/boost/math/tools/minima.hpp
    ${include_dir}/boost/math/constants/constants.hpp
    ${include_dir}/boost/math/tools/roots.hpp
    ${include_dir}/boost/math/tools/tuple.hpp
)
# Pybind11
add_subdirectory(${include_dir}/pybind11)
include_directories(${source_dir} ${include_dirs})
pybind11_add_module(simulation_module_name ${header_files} "${source_dir}/simulation_module.cpp")
and
#include "minima.hpp"
#include "constants.hpp"
#include "roots.hpp"
But I received the error:
fatal error: minima.hpp: No such file or directory
remote: #include "minima.hpp"
remote: ^~~~~~~~~~~~
Are any of these three options possible?
Get the heroku dynos to run the pybind11 module I compiled on my machine
Upload the installed boost library (over the size limit) and compile on the heroku server
Use the header functions and cmake and compile on the heroku server (as I have tried but failed to do here)

Get the heroku dynos to run the pybind11 module I compiled on my machine
To do this, you will need to compile the shared library on a Linux system whose GLIBC is the same version as, or older than, the one on Heroku's stack. One possibility is to compile your binaries in a Docker image matching (or predating) whatever Linux distribution Heroku uses. Alternatively, look at the manylinux2010 or manylinux2014 images from the PyPA manylinux project, which are designed specifically to solve this cross-distro build/sharing problem and include many of the necessary build tools.
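For example, a minimal sketch of building a wheel inside a manylinux container (the image tag, Python version, and directory layout are assumptions; adjust them to your project):
# Run from the project root; /io is the container-side mount of the source tree
docker run --rm -v "$PWD":/io quay.io/pypa/manylinux2014_x86_64 \
    /opt/python/cp39-cp39/bin/pip wheel /io -w /io/wheelhouse
# Repair the wheel so any bundled shared objects get the manylinux tag
docker run --rm -v "$PWD":/io quay.io/pypa/manylinux2014_x86_64 \
    bash -c 'auditwheel repair /io/wheelhouse/*.whl -w /io/wheelhouse'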

Related

appImage-builder V1.0.3

I am trying to use the latest version of appImage-builder because AppImages of my application built with the old version of appImage-builder no longer run on Ubuntu 22.04. So I was asked to try the new appImage-builder and see whether it works.
Currently (June 2022), only versions below 1.0, which are based on Ubuntu 18.04, are available on Docker (which we previously used to build our AppImage).
The newer versions are available via github (https://github.com/AppImageCrafters/appimage-builder/releases).
However, I seem to be unable to execute:
appimage-builder --generate
or
appimage-builder --recipe AppImageBuilder.yml
Is there any documentation available on how to correctly use the .appimage version of appImage-builder? All I could find in https://appimage-builder.readthedocs.io/en/latest/ seems to refer to the docker version or a manually built version of appImage-builder.
Depending on the error message you get, there could be a couple of issues at play here.
If you get an error related to FUSE, you need to install the libfuse2 package with apt install libfuse2. AppImages rely on libfuse2, but Ubuntu has stopped shipping it by default since 22.04, in favor of libfuse3.
If you get an error along the lines of "file not found", it could be that you do not have AppImageLauncher installed. Sadly, with type 2 AppImages the design decision was made to modify the ELF header of the executable with 3 magic bytes at offset 8. This means that the Linux linker will not run the file. AppImageLauncher copies the file to a temporary directory and zeroes out the magic bytes in order to be able to execute it.
A good starting point for debugging issues like this is the strace command, which lets you see which system call likely caused the error. Keep in mind that if you try to execute a file and get "file not found", it might mean that the linker specified by the file cannot be found on the system, or that the ELF header is not valid. You can also run the executable through the linker directly, which might give you more clues, for example: /lib64/ld-linux-x86-64.so.2 <NAME-OF-YOUR-EXECUTABLE>.
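A few concrete commands for that kind of debugging (a sketch; MyApp.AppImage stands in for your file):
# Trace the failing launch and look at the last syscalls before the error
strace -f ./MyApp.AppImage 2>&1 | tail -n 40
# Type 2 AppImages carry the magic bytes 41 49 02 ("AI", 0x02) at offset 8
xxd -s 8 -l 3 ./MyApp.AppImage
# Check which program interpreter (dynamic linker) the binary requests
readelf -l ./MyApp.AppImage | grep interpreter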

Get Travis CI build environment working for a NetCDF-dependent Fortran build in an R package

The title should have two subtitles:
What is the pathname to any installed libraries in the Travis CI environment?
or
How do I make my Makevars file portable for NetCDF libraries?
Background:
I am developing an R package that is supposed to work with a shared library I have written in Fortran. I want to check my builds with Travis CI, so my package is currently on GitHub.
So upon package installation, the Fortran source code should be compiled. This works locally, but Travis CI errors with the following message:
gfortran -fdefault-real-8 -c HANDLE_ERR.f90
HANDLE_ERR.f90:4: Error: Can't open included file 'netcdf.inc'
make: *** [HANDLE_ERR.o] Error 1
I understand this as the compiler not finding the NetCDF library, which I made sure was installed by adding this to my .travis.yml:
before_install:
- sudo apt-get install libnetcdf-dev -y
Example
I have created a minimal working example, which fails in Travis CI with the same error message (above) that I get on my big project.
See here for the travis build https://travis-ci.org/teatree1212/nctest
You can access my minimal working example repository from there, but here is the link as well: https://github.com/teatree1212/nctest/tree/master
The compilation works when I do it locally, as I can specify the NetCDF library directories. I don't know where these are installed in the Travis build environment, so I think this is where my problem lies at the moment.
However, I would like to make this package portable and not only make it work in the Travis container. Therefore these two questions:
What is the pathname to any installed libraries in the Travis CI environment?
and more importantly:
How do I make my Makevars file portable for compiling with NetCDF libraries?
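One way to keep Makevars portable is to query the NetCDF helper scripts at build time instead of hard-coding paths. A minimal sketch, assuming the Fortran interface ships nf-config (older installs may only provide nc-config, which historically accepted the same flags):
# src/Makevars
PKG_FCFLAGS = `nf-config --fflags`
PKG_LIBS = `nf-config --flibs`
To see where the Travis image actually put netcdf.inc, running dpkg -L libnetcdf-dev in a before_script step will list every file the package installed.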

Building and linking shared Tensorflow library on OSX El Capitan to call from Ruby via Swig

I'm trying to help build a Ruby wrapper around Tensorflow using Swig. Currently, I'm stuck at making a shared build (.so) and exposing its C/C++ headers to Ruby. So the question is: how do I build a libtensorflow.so that includes the full Tensorflow library, so that it's available as a shared library on OSX El Capitan (note: /usr/lib/ is read-only on El Capitan)?
Background
In this ruby-tensorflow project, I need to package a Tensorflow .bundle file, but whenever I run irb -Ilib -rtensorflow or try to run the specs with rspec, I get errors that the basic numeric types are not defined, even though they are clearly defined here.
I'm guessing this happens because my .so file was not created properly or something is not linked as it should be. C++/Swig/Bazel are not my strong suits; I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at this point getting to that fun part!
What I've done:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
bazel build //tensorflow:libtensorflow.so (wait 10-15min on my machine)
Copied the generated libtensorflow.so (166.6 MB) to the /ext-folder
Run the ruby extconf.rb, make, and make install described in the project
Run rspec
In desperation, I've also gone through the official installation from source several times, but I don't know whether that last step (sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl) even creates a shared build or just exposes a Python interface.
The guy, Arafat, who made the original repository and wrote the instructions I've followed, says his libtensorflow.so is 4.5 GB on his Linux machine, over 20x the size of the shared build on my OSX machine. UPDATE 1: he says his libtensorflow.so build is 302.2 MB; 4.5 GB was the size of the entire tensorflow folder.
Any help or alternative approaches are very appreciated!
After more digging around, discovering otool (thanks Kristina) and better understanding what a .so-file is, the solution didn't require much change in my setup:
Shared Build
# Clone source files
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
# Build library
bazel build //tensorflow:libtensorflow.so
# Copy the newly built shared library to /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
Calling from Ruby using Swig
Follow the steps here, https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow, to run Swig, create a Makefile and make
When you run make, you should see a line saying:
$ make
linking shared-object libtensorflow.bundle
If your shared build is not accessible, you'll see something like:
ld: library not found for -ltensorflow
Simple tutorial
For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/
I don't think you actually want a .so; I think you want a .dylib (see What are the differences between .so and .dylib on osx?). You're forcing Bazel to build a .so by specifying libtensorflow.so as the target. Build this instead:
bazel build //tensorflow
(//tensorflow is shorthand for //tensorflow:tensorflow, which is "build the tensorflow target." Specifying an exact file you want forces Bazel to build that file, if possible.)
Once you have a .dylib, you can check its contents with otool:
otool -L bazel-bin/tensorflow/libtensorflow.dylib
Not sure if this will solve all your problems, but worth a try.
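If make still reports ld: library not found for -ltensorflow after switching to the .dylib, a couple of quick checks (a sketch; the paths mirror the steps above):
# Confirm the library sits in a default linker search path
ls -lh /usr/local/lib/libtensorflow.dylib
# After a successful make, confirm the Ruby extension actually resolved it
otool -L libtensorflow.bundle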

Building bazel from source on IBM power8

I have access to a large IBM Power8 machine (running Ubuntu), and would like to build Bazel on it. However, when I try to do it as their installation instructions suggest, I get:
me@machine:~/bazel-0.1.5$ ./compile.sh
INFO: You can skip this first step by providing a path to the bazel binary as second argument:
INFO: ./compile.sh compile /path/to/bazel
🍃 Building Bazel from scratch.
Compiling Java stubs for protocol buffers...
third_party/protobuf/protoc-linux-x86_32.exe -Isrc/main/protobuf/ --java_out=/tmp/bazel.T9C83cNa/src src/main/protobuf/android_studio_ide_info.proto
scripts/bootstrap/buildenv.sh: line 63: third_party/protobuf/protoc-linux-x86_32.exe: cannot execute binary file: Exec format error
pv@sardonis:~/bazel-0.1.5$ ^C
Clearly, part of the problem is that the script is trying to run the 32-bit protoc binary. I tried the following things, to no avail.
Replacing third_party/protobuf/protoc-linux-x86_32.exe with a copy of third_party/protobuf/protoc-linux-x86_64.exe. This gave the same error.
Replacing third_party/protobuf/protoc-linux-x86_32.exe by a symbolic link to /usr/local/bin/protoc, which came with my distribution (this is version libprotoc 3.0.0 according to protoc --version). However, this gave a large amount of errors: http://pastebin.com/HN0MQiC4
Following the instructions on http://www.cnblogs.com/rodenpark/p/5007744.html to compile Protobuf from source and then building Bazel with the modifications on http://www.cnblogs.com/rodenpark/p/5007846.html but this resulted in a similar large amount of errors: http://pastebin.com/KjkseaGx for reference.
So, I'm out of inspiration. How can I compile Bazel on the IBM Power8 machines?
(PS: I've posted this as a part of resolving installing TensorFlow on the IBM power8, so it's not a duplicate question, just one aspect in order to solve it stepwise.)
The version of protobuf you're using must match the protobuf runtime that is checked in. In this case, that's protobuf-java-3.0.0-beta-1.jar [1], so you have to use the compiler version 3.0.0-beta-1.
(I work on Bazel.)
[1] https://github.com/bazelbuild/bazel/tree/master/third_party/protobuf
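A sketch of putting that into practice: build the matching protoc from source on the Power8 machine and substitute it for the bundled x86 binary (the tag and paths are assumptions based on the attempts described in the question):
# Build protoc 3.0.0-beta-1 from source (the version matching the checked-in runtime)
git clone https://github.com/google/protobuf.git
cd protobuf
git checkout v3.0.0-beta-1
./autogen.sh && ./configure && make && sudo make install
# Replace the bundled x86 binary that compile.sh invokes with the native build
cp /usr/local/bin/protoc ~/bazel-0.1.5/third_party/protobuf/protoc-linux-x86_32.exe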

How to use Lua 5.2 with luasocket 3

I am trying to compile luasocket 3, which I found on GitHub, with Lua 5.2. The problem is, I'm not sure how to bind Lua and luasocket together. Do I need to compile luasocket as a DLL and then reference it somewhere in the Lua code, or should I just call it from the Lua console?
Try installing it using luarocks. If you don't have luarocks, install it following instructions on the site.
Then download the rockspec file (luasocket-scm-0.rockspec) from the luasocket repo and run
$ luarocks install *path to the rockspec file*
If everything goes OK, you'll be able to use luasocket from Lua like this:
local socket = require "socket"
-- now you can use socket.xxx functions
Usually you only need to reference the Lua include files (there are only 4 needed: luaconf.h, lua.h, lualib.h, and lauxlib.h) and the library/dll (-llua52 in your case). You don't say what compiler you are using, so it's difficult to be more specific, but I have scripts that build luasocket with Lua 5.2 on Windows using mingw (and using gcc on OSX/Linux). For example, to compile on Windows, you can get the build-win32.sh script and run it as: bash build-win32.sh 5.2 lua luasocket. It will fetch all the files needed (using wget) and compile everything in the deps/ folder; the resulting executable and libraries will be put in the ../bin folder.
You can also get compiled libraries from the same repository.
