How to confirm what libraries Octave is *actually* using at runtime

I've built Octave (successfully) using the ATLAS libraries (specifically the multithreaded library libtatlas.so).
All looks well during the configure and make process (after much debugging), but after building Octave I'm still seeing matrix multiplication run in a single thread (the ATLAS libraries should make that operation multithreaded).
Is there a way I can see what library Octave is actually using when it performs a matrix multiplication such as:
x = rand(10000,10000); y = rand(10000,10000);
t = time();
z = x * y;
printf("elapsed: %f seconds\n", time() - t);
I'm trying to determine if this is still a build issue (e.g. Octave didn't link in the right ATLAS libraries) or if this is an ATLAS issue (Octave uses the right libraries but ATLAS isn't behaving as expected).

If you are on a Linux platform, you can debug the library resolution most easily using ldd. If you simply run it on the application binary:
ldd <the binary file>
it will output a list of how the library dependencies have been resolved.
A more complex approach would be to set LD_DEBUG to libs before running the application:
env LD_DEBUG=libs <command to run application>
That will output information to the command line showing the whole shared library resolution and initialization process.
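As a concrete sketch (using /bin/ls as a stand-in binary here; substitute the path to your actual octave executable, which depends on your install):

```shell
# List the resolved shared-library dependencies; for Octave you would look
# for libtatlas.so / libblas / liblapack lines in this output.
ldd /bin/ls

# Trace the loader's full search and initialization process (printed on stderr):
env LD_DEBUG=libs /bin/ls >/dev/null
```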

Execute binaries / tests on host that are built by non-host toolchain

Scenario:
We would like to build our sources with an external/hermetic toolchain, so that all includes, libraries, and tools from the host are ignored
To do this, a new toolchain was introduced into the Bazel configuration. >>> works well
Using -Wl,-rpath=$ORIGIN/%{runtime_library_search_directories}/... the relative path from the binary to the libraries that should be loaded at runtime is defined >>> works quite well
When running bazel run, the binary is executed, but the host's dynamic linker (LD) is used
To get rid of this, a wrapper (written in Bash) is injected using --run_under. >>> dirty workaround
Problem:
--run_under can only be used once. We also need this option to execute tests within a specific environment, so it is not a viable option for us; IMHO it's also a somewhat dirty workaround.
We also tried -Wl,--dynamic-linker=<<PATH_TO_LD>>, but we were not able to obtain either a relative or an absolute path to LD when linking the executable.
Questions:
Is there ...
... any way to get the absolute/relative path to LD when linking?
... any other way of running a binary on Host using a toolchain?
... a possibility to do sandboxing/chroot so the correct LD of the toolchain is being used automatically?
Sidenotes:
Bazel 1.1.0 is used
the toolchain is GCC8 build from sources
the host is an Ubuntu 18.04.1 image running in docker

How to run a C program in android-x86 terminal?

I have a C program compiled with gcc on Ubuntu. I want to run that executable in the Android terminal. When I run it, I get either "file or directory not found" or "not executable: ELF32".
Is there any way, or any gcc flags (or another compiler), that would let me run my code in the Android terminal?
Android does not use the same system libraries as Ubuntu, so they will not be found.
There are two solutions:
Copy the libraries you need.
If you can place them in the same filesystem locations they have in Ubuntu then great, otherwise you'll need to run the ld-linux.so manually and tell it where to find the libraries. Or, you could relink the program such that it expects to find the dynamic linker and libraries in a non-standard place. You might also use a chroot, but that requires root, and you'd need to find a chroot binary that works.
Use a static link.
This usually just means passing -static to GCC. You get a much larger binary that should be entirely self-contained, with no dependencies. It requires that static versions of all your libraries are available on your build system. Also, some features (such as DNS lookup) always expect a shared library, so they won't work this way.
Even then, you should expect some Linux features not to work. Basically, anything that requires hardware features or configuration files in /etc is going to need a lot of effort. Various projects have done this already (search "linux chroot android").
I'm not sure what the "not executable:ELF32" message means, but you should check whether you're building 32 or 64-bit executables, and which the Android binaries are using (file <whatever> should tell you).

How to cross compile from Mac to Linux?

I wrote a little game using Rust, and I used cargo build --release to compile a release version on Mac.
I tried to share this with my friend who is using Ubuntu, but when he tried to run the binary, he got the following error:
cannot execute binary file: Exec format error
I searched for this but found no answers. Doesn't Rust claim to have "no runtime"? Shouldn't it be able to run anywhere in binary form?
Rust not having a runtime means that it doesn't have a lot of code running as part of the language (for example, a garbage collector or bytecode interpreter). It does still need to use operating system primitives (i.e. syscalls), and these are different on macOS and Linux.
What you want is a cross compiler. If you're using rustup, then installing a cross compiler should be simple:
# Install the toolchain to build Linux x86_64 binaries
rustup target add x86_64-unknown-linux-gnu
Then building is:
cargo build --release --target=x86_64-unknown-linux-gnu
Caveat: I don't have an OS X machine to test this on; please comment or edit to fix this if it works!
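One caveat worth adding: `rustup target add` installs only the target's standard library; you also need a linker that can produce Linux ELF binaries. Assuming you have installed a cross GCC (the exact binary name below is an assumption and depends on your toolchain package), you can tell Cargo to use it:

```toml
# .cargo/config.toml — select a cross linker for the Linux target.
# The linker name is a placeholder; match it to your installed cross-toolchain.
[target.x86_64-unknown-linux-gnu]
linker = "x86_64-unknown-linux-gnu-gcc"
```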
Well, it is because Rust has no runtime (unlike e.g. Java's JVM) that you can't just compile code on one OS and expect it to run on a different one; what you are looking for is cross-compilation. I haven't done it in Rust, but from what I can gather you can find relevant information on different cross-compilation Rust strategies on this GitHub repo.

How to use the eigs() function in octave 3.6.4 on Mac OS X

I am trying to use a toolbox which makes use of the Matlab's eigs() function. When I run this in Octave (3.6.4, installed via Homebrew on Mac OS X), the following is returned:
error: eigs: not available in this version of Octave
I have found a lot of potential solutions, mostly about getting the ARPACK(-ng) library to work with Octave. I have tried more methods than I can remember, but none seemed to work.
Does anybody know the current status of Octave using the eigs() function? Is this possible, preferably by using packages in Homebrew?
Thanks.
I think you're referring to the fact that as of 3.6, Octave no longer comes with eigs, and depends on an external arpack library. From the Octave release notes:
Summary of important user-visible changes for version 3.6:
---------------------------------------------------------
...
** The ARPACK library is no longer distributed with Octave.
If you need the eigs or svds functions you must provide an
external ARPACK through a package manager or by compiling it
yourself. If a pre-compiled package does not exist for your system,
you can find the current ARPACK sources at
http://forge.scilab.org/index.php/p/arpack-ng
So you'll need an arpack library installed before installing Octave, somewhere visible to Octave. For homebrew, that means under /usr/local/.
Octave's configure script has ARPACK detection logic, and it looks like it will detect ARPACK by default during the build process and build against it if present. So Homebrew's Octave should pick it up if you have it installed, even without special support for it in the formula.
There's no arpack formula in the current homebrew-science version, but there is an open pull request to add one: https://github.com/Homebrew/homebrew-science/pull/112. Go over there and comment to show support and maybe it'll get merged in soon. Once that's in, do brew install libarpack; brew install octave and your Octave may well pick up eigs. If it doesn't, then put in an issue against homebrew-science to add arpack support.

Combine wxLua with LuaJIT on Mac OS X

How do you build wxLua on Mac OS X (10.6.8) so that it uses LuaJIT2 instead of the standard Lua interpreter?
I have tried:
./configure --with-lua-prefix=/Users/finnw/LuaJIT-2.0.0-beta9
where /Users/finnw/LuaJIT-2.0.0-beta9 is the directory in which I built LuaJIT.
I have also tried copying src/libluajit.a to lib/liblua5.1.a and src/libluajit.so to lib/liblua5.1.so and various other combinations such as changing the extension from .so to .dylib
But still I always get Lua not LuaJIT (as can be verified by loading a script that requires the ffi module.)
How can I force it to link against LuaJIT2? And why does the configure --with-lua-prefix option not do what it claims to do?
The following works on Debian:
$ ./configure --with-lua-prefix=/path/to/luajit --enable-systemlua
which points at /path/to/luajit/include/lua5.1/*.h and /path/to/luajit/lib/liblua5.1.a.
--enable-systemlua ensures that it tries to find Lua at the prefix you specify, and will make configure fail rather than fall back on the Lua bundled with wxLua.
You'll also need to replace the two instances of luaI_openlib in wxlbind.cpp and wxlstate.cpp with luaL_openlib, as this is deprecated in 5.1 and not present in LuaJIT2.