I've read the entire ATLAS installation guide, and it says all you need to build shared (.so) libraries is to pass the --shared flag to the configure script. However, when I build, the only .so files that appear in my lib folder are libsatlas.so and libtatlas.so, though the guide says that there should be six others:
libatlas.so, libcblas.so, libf77blas.so, liblapack.so, libptcblas.so, libptf77blas.so
After installation some of the tests fail because these libraries are missing. Furthermore, FFPACK wants these libraries during installation.
Has anyone encountered this? What am I doing incorrectly?
In my experience, it's a lot more complex than that; see our EasyBuild implementation of the ATLAS build procedure at https://github.com/hpcugent/easybuild-easyblocks/blob/master/easybuild/easyblocks/a/atlas.py.
We needed to:
enable the -fPIC compiler option
run 'make shared cshared ptshared cptshared' in the 'lib' directory
We're not even using --shared for configure, probably because it doesn't do much.
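For reference, here is a condensed sketch of that sequence (flags and paths are illustrative, not copied verbatim from our easyblock; check the ATLAS installation guide for your version):
mkdir build && cd build
../configure -Fa alg -fPIC                 # append -fPIC to all compiler flag sets
make build                                 # the usual (long) ATLAS build
cd lib
make shared cshared ptshared cptshared     # serial and threaded, C and Fortran shared libs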
If you want to build ATLAS (and whatever you will be linking it with) without headaches, look into EasyBuild.
(disclaimer: I'm a developer for EasyBuild)
First, if you have incorrectly specified the --force-tids flag for configure, the parallel libs won't build. To check this, you can run make ptcheck. I have a question regarding the specification of this flag here.
Then, if I examine my resulting ATLAS Makefile, it says "... only when atlas is built to one lib", and indeed only two "fat" libs are constructed: libsatlas.so and libtatlas.so.
I guess you can either link FFPACK against those libs or change the resulting ATLAS Makefile to contain the targets you need (which won't be too hard, since the static libs are available).
I had to manually create links to the .so.3 files.
So the versioned library files existed, but not the files CMake was looking for.
Running:
sudo ln -s libatlas.so.3 libatlas.so
sudo ln -s libcblas.so.3 libcblas.so
sudo ln -s liblapack_atlas.so.3 liblapack_atlas.so
(I didn't build cblas, atlas, or lapack myself but installed them with apt-get. I wonder why the links were not created automatically.)
The program is part of the Xenomai test suite, cross-compiled on a Linux PC with a Linux+Xenomai ARM toolchain.
# echo $LD_LIBRARY_PATH
/lib
# ls /lib
ld-2.3.3.so libdl-2.3.3.so libpthread-0.10.so
ld-linux.so.2 libdl.so.2 libpthread.so.0
libc-2.3.3.so libgcc_s.so libpthread_rt.so
libc.so.6 libgcc_s.so.1 libstdc++.so.6
libcrypt-2.3.3.so libm-2.3.3.so libstdc++.so.6.0.9
libcrypt.so.1 libm.so.6
# ./clocktest
./clocktest: error while loading shared libraries: libpthread_rt.so.1: cannot open shared object file: No such file or directory
Is the .1 at the end part of the filename? What does that mean anyway?
Your library is a dynamic library.
You need to tell the operating system where it can locate it at runtime.
To do so, we will need to follow these easy steps:
Find where the library is placed if you don't know it.
sudo find / -name the_name_of_the_file.so
Check for the existence of the dynamic library path environment variable (LD_LIBRARY_PATH):
echo $LD_LIBRARY_PATH
If there is nothing to be displayed, add a default path value (or not, if you wish):
LD_LIBRARY_PATH=/usr/local/lib
We add the desired path, export it and try the application.
Note that the path should be the directory where path.so.something lives. So if path.so.something is in /my_library/path.so.something, it should be:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/my_library/
export LD_LIBRARY_PATH
./my_app
Here are a few solutions you can try:
ldconfig
As AbiusX pointed out: If you have just now installed the library, you may simply need to run ldconfig.
sudo ldconfig
ldconfig creates the necessary links and cache to the most recent
shared libraries found in the directories specified on the command
line, in the file /etc/ld.so.conf, and in the trusted directories
(/lib and /usr/lib).
Usually your package manager will take care of this when you install a new library, but not always, and it won't hurt to run ldconfig even if that is not your issue.
Dev package or wrong version
If that doesn't work, I would also check out Paul's suggestion and look for a "-dev" version of the library. Many libraries are split into dev and non-dev packages. You can use this command to look for it:
apt-cache search <libraryname>
This can also help if you simply have the wrong version of the library installed. Some libraries are published in different versions simultaneously, for example, Python.
Library location
If you are sure that the right package is installed, and ldconfig didn't find it, it may just be in a nonstandard directory. By default, ldconfig looks in /lib, /usr/lib, and directories listed in /etc/ld.so.conf and $LD_LIBRARY_PATH. If your library is somewhere else, you can either add the directory on its own line in /etc/ld.so.conf, append the library's path to $LD_LIBRARY_PATH, or move the library into /usr/lib. Then run ldconfig.
To find out where the library is, try this:
sudo find / -iname "*libraryname*.so*"
(Replace libraryname with the name of your library)
If you go the $LD_LIBRARY_PATH route, you'll want to put that into your ~/.bashrc file so it will run every time you log in:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/library
Update
While what I write below is true as a general answer about shared libraries, I think the most frequent cause of this sort of message is that you've installed a package, but not the -dev version of that package.
Well, it's not lying - there is no libpthread_rt.so.1 in that listing. You probably need to re-configure and re-build it so that it depends on the library you have, or install whatever provides libpthread_rt.so.1.
Generally, the numbers after the .so are version numbers, and you'll often find that they are symlinks to each other, so if you have version 1.0 of libfoo.so, you'll have a real file libfoo.so.1.0, and symlinks libfoo.so and libfoo.so.1 pointing to libfoo.so.1.0. And if you install version 1.1 without removing the other one, you'll have a libfoo.so.1.1, and libfoo.so.1 and libfoo.so will now point to the new one, but any code that requires that exact version can use the libfoo.so.1.0 file. Code that just relies on the version 1 API, but doesn't care whether it's 1.0 or 1.1, will specify libfoo.so.1. As orip pointed out in the comments, this is explained well here.
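To make this concrete, a hypothetical libfoo installation might look like this on disk (the names are illustrative, not from the question):
$ ls -l /usr/lib | grep libfoo
libfoo.so -> libfoo.so.1       # link-time name, used by gcc -lfoo
libfoo.so.1 -> libfoo.so.1.1   # soname, resolved by the loader at run time
libfoo.so.1.0                  # old minor version, kept for code pinned to it
libfoo.so.1.1                  # the actual current library file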
In your case, you might get away with symlinking libpthread_rt.so.1 to libpthread_rt.so. No guarantees that it won't break your code and eat your TV dinners, though.
You need to ensure that you specify the library path during linking when you compile your .c file:
gcc -I/usr/local/include xxx.c -o xxx -L/usr/local/lib -Wl,-R/usr/local/lib
The -Wl,-R part tells the resulting binary to also look for the library
in /usr/local/lib at runtime before trying to use the one in /usr/lib/.
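To verify the result, you can inspect the binary with standard binutils tools:
ldd ./xxx                                    # shows which shared objects will actually be loaded
readelf -d ./xxx | grep -iE 'rpath|runpath'  # shows the run path embedded by -Wl,-R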
Try adding LD_LIBRARY_PATH, which indicates the library search path, to your ~/.bashrc file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path_to_your_library
It works!
The linux.org reference page explains the mechanics, but doesn't explain any of the motivation behind it :-(
For that, see Sun Linker and Libraries Guide
In addition, note that "external versioning" is largely obsolete on Linux, because symbol versioning (a GNU extension) allows multiple incompatible versions of the same function to be present in a single library. This extension is how glibc has kept the same external version, libc.so.6, for the last 10 years.
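You can see these versioned symbols with objdump (the libc path below is an assumption; it varies by distribution):
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep GLIBC_ | head    # dynamic symbols with their version tags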
cd /home/<user_name>/
sudo vi .bash_profile
Add these lines at the end:
LD_LIBRARY_PATH=/usr/local/lib:<any other paths you want>
export LD_LIBRARY_PATH
Another possible solution depending on your situation.
If you know that libpthread_rt.so.1 is the same as libpthread_rt.so then you can create a symlink by:
ln -s /lib/libpthread_rt.so /lib/libpthread_rt.so.1
Then ls -l /lib should now show the symlink and what it points to.
I had a similar error, and it wasn't fixed by setting LD_LIBRARY_PATH in ~/.bashrc.
What solved my issue was adding a .conf file and loading it.
Open a terminal and switch to the root user (su). Then:
gedit /etc/ld.so.conf.d/myapp.conf
Add your library path (e.g. /usr/local/lib) to this file and save.
You must run the following command to activate the path:
ldconfig
Verify your new library path:
ldconfig -v | less
If this shows your library files, then you are good to go.
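Alternatively, query the linker cache directly:
ldconfig -p | grep <libraryname>    # lists cached libraries whose names match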
Running:
sudo ldconfig
was enough to fix my issue.
I had this error when running my application with Eclipse CDT on Linux x86.
To fix this:
In Eclipse:
Run as -> Run configurations -> Environment
Set the path
LD_LIBRARY_PATH=/my_lib_directory_path
I wanted to add: if your libraries are in a non-standard path, run ldconfig followed by the path.
For instance I had to run:
sudo ldconfig /opt/intel/oneapi/mkl/2021.2.0/lib/intel64
to make R compile against Intel MKL
All I had to do was run:
sudo apt-get install libfontconfig1
I ran it from the folder /usr/lib/x86_64-linux-gnu, and it worked perfectly.
Try to install lib32z1:
sudo apt-get install lib32z1
If you are running your application on Microsoft Windows, the path to dynamic libraries (.dll) needs to be defined in the PATH environment variable.
If you are running your application on UNIX, the path to dynamic libraries (.so) needs to be defined in the LD_LIBRARY_PATH environment variable.
The error occurs because the system cannot find the library file mentioned. Take the following steps:
Running locate libpthread_rt.so.1 will list the paths of all files with that name. Let's suppose one of them is /home/user/loc.
Copy the path and run cd /home/USERNAME. Replace USERNAME with the name of the current active user with which you want to run the file.
Run vi .bash_profile and, at the end of the LD_LIBRARY_PATH value, just before the final ., add /lib:/home/user/loc:. Save the file.
Close the terminal and restart the application. It should run.
I use Ubuntu 18.04
Installing the corresponding -dev package worked for me:
sudo apt install libgconf2-dev
Before installing the above package, I was getting the below error:
turtl: error while loading shared libraries: libgconf-2.so.4: cannot open shared object file: No such file or directory
I got this error, and I think it's the same reason as yours:
error while loading shared libraries: libnw.so: cannot open shared object
file: No such file or directory
Try this. Fix permissions on files:
sudo su
cd /opt/Popcorn (or wherever it is)
chmod -R 555 * (755 if not ok)
chown -R root:root *
A similar problem can be found here.
I've tried the solution mentioned there, and it actually works.
The solutions in the previous answers may work, but the following is an easy way to fix it: reinstall the package that provides the library. For libwbclient on Fedora:
dnf reinstall libwbclient
You can read about libraries here:
https://domiyanyue.medium.com/c-development-tutorial-4-static-and-dynamic-libraries-7b537656163e
I'm working on a Windows 7 computer at work and want to use the libpostal package. Unfortunately, it's apparently not available for Windows, so I'm trying to configure it through Cygwin and I'm SO close. The last step is to install snappy from Google. Again, not available on Windows...
My assumption (based on nothing) is that I can just download the tarball and build it from source, right? I tried that, and I think it worked? But a) I don't know how to tell, and b) if it did, I don't know how to tell ./configure in libpostal to find it.
In order to build it from source, I downloaded the tarball and saved it in the folder that Cygwin reads as my home, which is C:\cygwin64\home\brittenb\. From there, I ran bash autogen.sh, which created the ./configure that I needed. So I ran that and while some responses to the checks were no, it seemed to run fine. I then ran make and make install. Nothing seemed out of place, so my assumption is that it did what it was supposed to do. I just have no idea where to go from here.
Here is the output from ls after I run everything:
aclocal.m4 snappy.cc
AUTHORS snappy.h
autogen.sh snappy.lo
autom4te.cache snappy.o
ChangeLog snappy.pc
compile snappy.pc.in
config.guess snappy_unittest.cc
config.h snappy_unittest.exe
config.h.in snappy_unittest-snappy_unittest.o
config.log snappy_unittest-snappy-test.o
config.status snappy-c.cc
config.sub snappy-c.h
configure snappy-c.lo
configure.ac snappy-c.o
COPYING snappy-internal.h
depcomp snappy-sinksource.cc
format_description.txt snappy-sinksource.h
framing_format.txt snappy-sinksource.lo
INSTALL snappy-sinksource.o
install-sh snappy-stubs-internal.cc
libsnappy.la snappy-stubs-internal.h
libtool snappy-stubs-internal.lo
ltmain.sh snappy-stubs-internal.o
m4 snappy-stubs-public.h
Makefile snappy-stubs-public.h.in
Makefile.am snappy-test.cc
Makefile.in snappy-test.h
missing stamp-h1
NEWS testdata
README test-driver
ls /usr/local/bin shows nothing, but ls /usr/local/include shows:
snappy.h snappy-c.h snappy-sinksource.h snappy-stubs-public.h
So... my question: did it work? Why does ./configure in libpostal say it can't find snappy? Thanks in advance.
The snappy dependency has been removed as of libpostal release 1.0.0. I made changes to the source and to the make and config files so that it will build on MinGW.
Get it in my repository:
https://github.com/BenK10/libpostal_windows
Note that this is not the complete source, since not everything had to be changed. I would suggest merging my changes with the official libpostal distribution to make sure you've got everything. Also, there are some extra DLLEXPORTs in some source files that I haven't removed yet, and the part of the Makefile that builds executables like address_parser.exe was removed, because some porting is necessary to build those programs on Windows. You can write your own using the DLL you'll get from the Windows build and the original source as a reference.
Check the return code from make install ($?). If it is zero, make install succeeded.
snappy looks like a library, so maybe it doesn't install anything in /usr/local/bin. The library is probably installed into /usr/local/lib.
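A quick way to check both things (assuming the default /usr/local prefix):
make install && echo "install OK"     # a zero exit status means the install succeeded
ls /usr/local/lib | grep -i snappy    # the installed library files (e.g. libsnappy.la) should appear here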
I'm trying to help build a Ruby wrapper around Tensorflow using Swig. Currently, I'm stuck at making a shared build, .so, and exposing its C/C++ headers to Ruby. So the question is: How do I build a libtensorflow.so shared build including the full Tensorflow library so it's available as a shared library on OSX El Capitan (note: /usr/lib/ is read-only on El Capitan)?
Background
In this ruby-tensorflow project, I need to package a Tensorflow .bundle file, but whenever I run irb -Ilib -rtensorflow or try to run the specs with rspec, I get errors that the basic numeric types are not defined, but they are clearly defined here.
I'm guessing this happens because my .so file was not created properly or something is not linked as it should be. C++/Swig/Bazel are not my strong suits; I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at this point in getting to that fun part!
What I've done:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
bazel build //tensorflow:libtensorflow.so (wait 10-15min on my machine)
Copied the generated libtensorflow.so (166.6 MB) to the ext folder
Ran the ruby extconf.rb, make, and make install steps described in the project
Ran rspec
In desperation, I've also gone through the official installation from source several times, but I don't know if the last step, sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl, even creates a shared build or just exposes a Python interface.
The guy, Arafat, who made the original repository and wrote the instructions I've followed, says his libtensorflow.so is 4.5 GB on his Linux machine - over 20X the size of the shared build on my OSX machine. UPDATE1: he says his libtensorflow.so build is 302.2 MB; 4.5 GB was the size of the entire tensorflow folder.
Any help or alternative approaches are very appreciated!
After more digging around, discovering otool (thanks Kristina), and better understanding what a .so file is, the solution didn't require much change to my setup:
Shared Build
# Clone source files
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
cd tensorflow
# Build library
bazel build //tensorflow:libtensorflow.so
# Copy the newly built shared library to /usr/local/lib
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib
Calling from Ruby using Swig
Follow the steps here, https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow, to run Swig, create a Makefile and make
When you run make you should see a line saying:
$ make
linking shared-object libtensorflow.bundle
If your shared build is not accessible, you'll see something like:
ld: library not found for -ltensorflow
Simple tutorial
For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/
I don't think you actually want a .so, I think you want a .dylib (see What are the differences between .so and .dylib on osx?). You're forcing Bazel to build a .so by specifying libtensorflow.so as the target, build this instead:
bazel build //tensorflow
(//tensorflow is shorthand for //tensorflow:tensorflow, which is "build the tensorflow target." Specifying an exact file you want forces Bazel to build that file, if possible.)
Once you have a .dylib, you can check its contents with otool:
otool -L bazel-bin/tensorflow/libtensorflow.dylib
Not sure if this will solve all your problems, but worth a try.
I'm currently learning how to use the autoconf/automake toolchain. I seem to have a general understanding of the workflow here - basically you have a configure.ac file, from which autoconf generates an executable configure script. The generated configure script is then executed by the end user to generate Makefiles, so the program can be built/installed.
So the installation for a typical end-user is basically:
./configure
make
make install
make clean
Okay, now here's where I'm confused:
As a developer, I've noticed that the auto-generated configure script sometimes won't run, and will error with:
config.status: error: cannot find input file: `somedir/Makefile.in'
This confuses me, because I thought the configure script is supposed to generate the Makefile.in. So Googling around for some answers, I've discovered that this can be fixed with an autogen.sh script, which basically "resets" the state of the autoconf environment. A typical autogen.sh script would be something like:
aclocal \
&& automake --add-missing \
&& autoconf
Okay fine. But as an end-user who's downloaded countless tarballs throughout my life, I've never had to use an autogen.sh script. All I did was uncompress the tarball, and do the usual configure/make/make install/make clean routine.
But as a developer who's now using autoconf, it seems that configure doesn't actually run unless you run autogen.sh first. So I find this very confusing, because I thought the end-user shouldn't have to run autogen.sh.
So why do I have to run autogen.sh first - in order for the configure script to find Makefile.in? Why doesn't the configure script simply generate it?
In order to really understand the autotools utilities you have to remember where they come from: they come from an open source world where there are (a) developers who are working from a source code repository (CVS, Git, etc.) and creating a tar file or similar containing source code and putting that tar file up on a download site, and (b) end-users who are getting the source code tar file, compiling that source code on their system and using the resulting binary. Obviously the folks in group (a) also compile the code and use the resulting binary, but the folks in group (b) don't have or need, often, all the tools for development that the folks in group (a) need.
So the use of the tools is geared towards this split, where the people in group (b) don't have access to autoconf, automake, etc.
When using autoconf, people generally check in the configure.ac file (input to autoconf) into source control but do not check in the output of autoconf, the configure script (some projects do check in the configure script of course: it's up to you).
When using automake, people generally check in the Makefile.am file (input to automake) but do not check in the output of automake: Makefile.in.
The configure script basically looks at your system for various optional elements that the package may or may not need, where they can be found, etc. Once it finds this information, it can use it to convert various XXX.in files (typically, but not solely, Makefile.in) into XXX files (for example, Makefile).
So the steps generally go like this: write configure.ac and Makefile.am and check them in. To build the project from source code control checkout, run autoconf to generate configure from configure.ac. Run automake to generate Makefile.in from Makefile.am. Run configure to generate Makefile from Makefile.in. Run make to build the product.
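Expressed as commands, the developer-side bootstrap looks roughly like this (a sketch; many projects wrap it in an autogen.sh script or just use autoreconf --install):
aclocal                  # collect the m4 macros that configure.ac uses
autoconf                 # configure.ac  -> configure
automake --add-missing   # Makefile.am   -> Makefile.in (also copies in helper scripts)
./configure              # Makefile.in   -> Makefile
make                     # build the product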
When you want to release the source code (if you're developing an open source product that makes source code releases) you run autoconf and automake, then bundle up the source code with the configure and Makefile.in files, so that people building your source code release just need make and a compiler and don't need any autotools.
Because the order of running autoconf and automake (and libtool if you use it) can be tricky there are scripts like autogen.sh and autoreconf, etc. which are checked into source control to be used by developers building from source control, but these are not needed/used by people building from the source code release tar file etc.
Autoconf and automake are often used together but you can use autoconf without automake, if you want to write your own Makefile.in.
For this error:
config.status: error: cannot find input file: `somedir/Makefile.in'
In the Makefile.am in the directory where configure.ac is located, add a line with the subdirectory somedir:
SUBDIRS = somedir
Inside somedir, put a Makefile.am describing that directory, then run automake --add-missing. You also need to list somedir/Makefile in the AC_CONFIG_FILES macro in configure.ac, so that config.status knows to generate it.
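A minimal sketch of the two top-level files involved (package name and version are hypothetical):
# configure.ac (top level)
AC_INIT([myapp], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile somedir/Makefile])   # somedir/Makefile listed here
AC_OUTPUT

# Makefile.am (top level)
SUBDIRS = somedir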
A better description can be found in section 7.1, "Recursing subdirectories", of the automake manual:
https://www.gnu.org/software/automake/manual/automake.html
I am trying to install a rubygem which keeps on trying to read a library which is not available.
grep: /usr/lib64/libgdbm.la: No such file or directory
/bin/sed: can't read /usr/lib64/libgdbm.la: No such file or directory
libtool: link: `/usr/lib64/libgdbm.la' is not a valid libtool archive
In order to work around this, I installed my own libgdbm and provided the path to it in the Makefile's LDFLAGS, but to no avail.
Any help is much appreciated.
This rubygem seems to do dirty stuff, since any clean library search (-L or pkg-config) would have resulted in a message like "library/package gdbm not found". And especially the grep-and-sed procedure on the .la file seems really dirty. Make sure Santa knows that the author of this gem gets no presents this year.
The gem probably has the path to the libtool archive hardcoded. First of all, try to grep for /usr/lib64/libgdbm.la in the Makefile of the gem. Change the hardcoded path, and make sure the installation script has no write permissions on any system directories, because it seems to run wild with seds.
Libtool requires intimate knowledge of your compiler suite and operating system in order to be able to create shared libraries and link against them properly. When you install the libtool distribution, a system-specific libtool script is installed into your binary directory.
However, when you distribute libtool with your own packages, you do not always know the compiler suite and operating system that are used to compile your package.
For this reason, libtool must be configured before it can be used. This idea should be familiar to anybody who has used a GNU configure script. configure runs a number of tests for system features, then generates the Makefiles (and possibly a config.h header file), after which you can run make and build the package.
Libtool adds its own tests to your configure script in order to generate a libtool script for the installer's host machine. For this, you can use the LT_INIT macro in configure.ac.
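For example (a sketch), in configure.ac, after the compiler checks:
LT_INIT    # enables libtool support; configure will then generate a ./libtool script for the host machine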
So, in short: if your package has a configure file, run it before running make.
make distclean   # clean up all the previously generated files
autoconf         # or autoreconf, to generate the configure script from configure.ac (or the older configure.in)
automake         # to generate a new Makefile.in from Makefile.am
./configure      # to generate a new Makefile and the libtool script