how to set path in bashrc for multiple locations - bash

I am installing the same library (possibly with different release versions) in two different locations, and I am exporting the path in .bashrc for both. On Linux, which one is used when a program loads the library?
For example:
mylib_version1 is installed in /home/PATH1/lib,
mylib_version2 is installed in /home/PATH2/lib
in bashrc I do,
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/PATH1/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/PATH2/lib
Which path is actually used by some other program when it loads this library? How does the ordering in .bashrc work?
Similarly, what happens when PATH1 is just /usr/local/lib (which I don't export in .bashrc)
and PATH2 is some user-defined path?
What I experience with some programs is: if I install one copy in /usr/local/bin
and another copy using some prefix like /home/PATH/bin, and export it in .bashrc like
export PATH=$PATH:/home/PATH/bin
it is always the one from /usr/local/bin that gets used.
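A quick way to check which copy actually wins (a hedged sketch; myprog and mylib stand in for the real names) is to ask the shell and the dynamic loader directly:
# PATH is searched left to right; the first match wins
echo "$PATH"
type -a myprog                         # lists every myprog found, in search order
# LD_LIBRARY_PATH is also searched left to right, before the default system dirs
echo "$LD_LIBRARY_PATH"
ldd "$(type -p myprog)" | grep mylib   # shows which copy of the library gets loaded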

If I understand you correctly, you have library.so with 2 versions and you have a binary that might use library.so version 1 or 2.
To deal with that problem you must first understand how the library versioning mechanism works. All versions should be placed in the same place, so you might have something like this:
/usr/lib/library.so.1.0.0
/usr/lib/library.so.2.0.0
Your binary would be linked against the correct library based on the version it was linked with during the build process.
Please read more information regarding libraries here
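A minimal sketch of what that usually looks like on disk (the library name and version numbers are just illustrative):
$ ls -l /usr/lib/libmylib*
libmylib.so -> libmylib.so.2.0.0        # dev symlink used by -lmylib at link time
libmylib.so.1 -> libmylib.so.1.0.0      # soname symlink used at run time
libmylib.so.1.0.0                       # real file, version 1
libmylib.so.2 -> libmylib.so.2.0.0
libmylib.so.2.0.0                       # real file, version 2
$ ldd ./myprog | grep libmylib          # shows which soname the binary resolved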

Related

combining pkg-config with module environment

This question may not make much sense if my understanding of both pkg-config and environment modules is somewhat incorrect, but I'll ask anyway as I could not find anything specific on this topic. There might be an entirely better solution available; if that is the case, I am all ears!
A while back I started using modules to easily load my development environment as needed (i.e. using commands like module load foo etc.). More recently, I have adopted the meson build system for my projects. In meson, libraries are treated as dependencies, which are found using pkg-config in the background. So now I have two ways of discovering libraries and setting up their lib and include directories.
As an example, I have the following (simplified) module script for library foo (I am using lmod which is based on lua):
prepend_path("LD_LIBRARY_PATH", "/opt/foo/lib")
prepend_path("CPATH", "/opt/foo/include")
I could also have a pkg-config file (*.pc) doing something similar, like this (that is, if my understanding of pkg-config is correct):
prefix=/opt/foo
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${exec_prefix}/lib

Name: foo
Description: the foo library
Version: 1.0
Cflags: -I${includedir}
Libs: -L${libdir} -lfoo
Now both seem to be doing pretty much the same thing (in terms of setting up my environment), but simply using modulefiles will not allow meson to find my dependencies and I still have to use pkg-config (which requires basically creating two files, either manually or dynamically, but that sounds like a maintenance burden and also not very clean). Equally, I could create the pkg-config file and add the location of that file into the PKG_CONFIG_PATH, i.e. something like
prepend_path("LD_LIBRARY_PATH", "/opt/foo/lib")
prepend_path("CPATH", "/opt/foo/include")
prepend_path("PKG_CONFIG_PATH", /path/to/*.pc/file)
but again this requires two files (a .pc file and a modulefile). I rather like the module environment and don't want to ditch it, so is there a better / cleaner way of doing things, where I just load a modulefile which will allow pkg-config (and thus meson in turn) to know about the dependency?
As of today, there is no bridge between the environment modules system and the pkg-config tools. The best thing I think could be done to keep the module system is to have a script that queries every pkg-config file available and creates the corresponding modulefile, and to run that script regularly to keep things in sync.
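A minimal sketch of such a generator script, assuming lmod-style lua modulefiles and a destination directory of your choosing (MODULEFILE_DIR is just a placeholder name):
#!/bin/bash
# Generate one lua modulefile per package known to pkg-config.
MODULEFILE_DIR="${MODULEFILE_DIR:-$HOME/modulefiles/pkgconfig}"
mkdir -p "$MODULEFILE_DIR"

pkg-config --list-all | while read -r pkg _desc; do
    libdir=$(pkg-config --variable=libdir "$pkg")
    includedir=$(pkg-config --variable=includedir "$pkg")
    cat > "$MODULEFILE_DIR/$pkg.lua" <<EOF
-- generated from the $pkg .pc file
prepend_path("LD_LIBRARY_PATH", "$libdir")
prepend_path("CPATH", "$includedir")
EOF
done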

./configure to use specific compiler

My Linux environment is a high-performance compute cluster that does not allow users to install to /usr/bin/ or use sudo. I'm trying to use ./configure (from the protocol buffers distribution) to install to my home directory. When configure searches for the C++ compiler it does not find the compilers located in the bin directory, because they are named things like 'g++34' instead of 'g++'. I want to point the configure script at this specific compiler, but can't seem to get the command right. To be clear, the directory where the compiler lives is searched; the compiler is just named differently (using an alias hasn't worked either).
How do you pass arguments to a configure script to point it at a specific compiler?
Just use:
./configure CC=gcc34 CXX=g++34
etc. If you have a really old version of configure you might have to do it via the environment instead:
CC=gcc34 CXX=g++34 ./configure
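Since the question also mentions installing to a home directory, a hedged sketch combining the two (the paths and version suffixes are just examples):
./configure --prefix="$HOME/local" CC=/opt/gcc34/bin/gcc34 CXX=/opt/gcc34/bin/g++34
make && make install
# full paths to the compilers also work if they are not on PATH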

correct usage of rpath (relative vs absolute)

When building a binary or library, specifying the rpath, i.e.
-Wl,rpath,<path/to/lib>
tells the linker where to find the required library at runtime of the binary.
What is the UNIX philosophy regarding absolute and relative paths here? Is it better to use an absolute path so the lib can be found from everywhere? Or is it better to make it relative so copying an entire directory or renaming a higher-level path won't render the binary unusable?
Update
Using $ORIGIN is usually the preferred way of building binaries. For libraries I like to put in an absolute path, because otherwise you won't be able to link against the library. A symbolic link will change $ORIGIN to point to the path of the link and not to the link target.
In the case of rpath, it makes no sense to use a relative path, since a relative path will be relative to the current working directory, NOT relative to the directory where the binary/library was found. So it simply won't work for executables found in $PATH or libraries in pretty much any case.
Instead, you can use the $ORIGIN "special" path to have a path relative to the executable with -Wl,-rpath,'$ORIGIN' -- note that you need quotes around it to avoid having the shell interpret it as a variable, and if you try to do this in a Makefile, you need $$ to avoid having make interpret the $ as well.
What is the UNIX philosophy regarding absolute and relative paths here?
Using relative path makes an executable that only works when invoked from a particular directory, which is almost never what you want. E.g. if the executable is in /app/foo/bin/exe and has DT_RUNPATH of lib/, and a dependent library is in /app/foo/lib/libfoo.so, then the exe would only run when invoked from /app/foo, and not when invoked from any other directory.
Using absolute path is much better: you can do cd /tmp; /app/foo/bin/exe and have the executable still work. This is still however less than ideal: you can't easily have multiple versions of the binary (important during development), and you dictate to end-users where they must install the package.
On systems that support $ORIGIN, using a DT_RUNPATH of $ORIGIN/../lib would give you an executable that works when installed anywhere and invoked from any directory, so long as the relative layout of bin/ and lib/ is preserved.
From The Linux Programming Interface:
Using $ORIGIN in rpath
Suppose that we want to distribute an application that uses some of
its own shared libraries, but we don’t want to require the user to
install the libraries in one of the standard directories. Instead, we
would like to allow the user to unpack the application under an
arbitrary directory of their choice and then immediately be able to
run the application. The problem is that the application has no way of
determining where its shared libraries are located, unless it requests
the user to set LD_LIBRARY_PATH or we require the user to run some
sort of installation script that identifies the required directories.
Neither of these approaches is desirable. To get around this
problem, the dynamic linker is built to understand a special string,
$ORIGIN (or, equivalently, ${ORIGIN}), in an rpath
specification. The dynamic linker interprets this string to mean “the
directory containing the application.” This means that we can, for
example, build an application with the following command:
$ gcc -Wl,-rpath,'$ORIGIN'/lib ...
This presumes that at run time the application’s shared libraries will
reside in the subdirectory lib under the directory that contains the
application executable. We can then provide the user with a simple
installation package that contains the application and associated
libraries, and the user can install the package in any location and
then run the application (i.e., a so-called “turn-key application”).
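A small sketch tying this together (the layout and paths are placeholders; in a Makefile the $ in $ORIGIN would be written as $$ so that make does not expand it, as noted above):
# layout: /opt/app/bin/exe and /opt/app/lib/libfoo.so
gcc -o bin/exe main.c -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
readelf -d bin/exe | grep -E 'RPATH|RUNPATH'   # should show $ORIGIN/../lib
cd /tmp && /opt/app/bin/exe                    # still finds libfoo.so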

Installing AUBit4GL via binary package

I am trying to experiment with AUBIT4GL, an Informix clone. I am running into a problem with the process as the steps outlined in the manual and the instructions given in the ./etc/aubitrc file seem to be a tad incomplete.
My questions are:
What is the purpose of the ./configure script and the makefile in the distribution directory, given that the software is distributed as a binary package and the install instructions make no reference to them?
Where is the TARGET_OS environment variable set, and why is there no reference to this setting in the install instructions, when failing to define it causes the aubit program to fail?
Is anyone else besides me using this software, or has anyone attempted to?
If you're running on Linux - always try compiling from source.
If you must run from the binary - you don't need to do anything with ./configure or make.
Just point $AUBITDIR to the code.
set PATH to include $AUBITDIR/bin
set LD_LIBRARY_PATH to include $AUBITDIR/lib
and you should be good to go.
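In .bashrc terms, that boils down to something like this (the install location is just an example):
export AUBITDIR=/opt/aubit4gl            # wherever you unpacked the binary package
export PATH=$PATH:$AUBITDIR/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$AUBITDIR/lib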
For Windows - it's pretty much the same - except compiling from source is a massive pain - so use the binary ;)
There, you need to have PATH include both %AUBITDIR%/bin and %AUBITDIR%/lib.
In both cases you'll likely need to make some configuration settings (what type of database, what UI, etc.)
If you're using Informix on Linux, setting:
export A4GL_UI=TUI
export A4GL_SQLTYPE=esql
will probably be enough (if they are not defaulted in the $AUBITDIR/etc/aubitrc)

ImageMagick deployment on Mac OS

I'm trying to deploy ImageMagick with my own software. On Windows I've just included all the core DLLs along with the coder DLLs next to the exe, and it works well.
But on Mac OS I have trouble with the coders. I installed ImageMagick via MacPorts and found it with the help of CMake. CMake does all the job of copying and fixing up all the core libs I've linked against. Then I copied all the coder libs and fixed them up as well, but when I start my application it just can't find any coder. So I'd like to know what I am missing there.
Note: if I don't fix up any paths it works fine. It is only my deployment that is in trouble. Maybe I should include some kind of config file?
P.S. I have all the ImageMagick libs, including the coder .so files, next to the executable in the bundle's MacOS sub-folder.
How about setting the MAGICK_CODER_MODULE_PATH in your bundle?
see here: http://www.imagemagick.org/script/resources.php
EDIT:
To improve the information:
Originally when embedding IM in our own app bundle we had three problems:
our app and the IM dylibs not finding referenced IM dylibs,
IM not finding its config files,
IM not finding coders (the No Decode Delegate error)
We tried changing the hardcoded paths in the dylibs using the install_name_tool but finally when doing some tests with moving the IM around to different directories and testing
convert -debug configuration
we found out that all three of the above problems could be solved just by setting and exporting at least these three environment variables in the terminal console before running convert:
DYLD_LIBRARY_PATH
MAGICK_CONFIGURE_PATH
MAGICK_CODER_MODULE_PATH
With this experience, we returned to our bundle and at first tried to use the Info.plist file to set these variables, but it didn't seem to work - probably because there were problems with making the paths to IM inside the bundle relative.
Finally we created a simple sh script and put it into our bundle and configured this bundle to run this script instead of the main app:
#!/bin/sh
CURR_DIR="$( cd -P "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMAGE_MAGICK_PATH=$CURR_DIR/../Resources/ImageMagick
export DYLD_LIBRARY_PATH=$IMAGE_MAGICK_PATH/lib
export MAGICK_CONFIGURE_PATH=$IMAGE_MAGICK_PATH/lib/ImageMagick-6.8.0/config
export MAGICK_CODER_MODULE_PATH=$IMAGE_MAGICK_PATH/lib/ImageMagick-6.8.0/modules-Q16/coders
# run application
exec "$CURR_DIR/OurAppName"
The key thing to make it work was properly getting the CURR_DIR of the app bundle (thanks to this post).
And as our tests showed, setting the environment variables this way makes them visible only in this application's execution context - i.e. when we started our app from the bundle, opened a terminal and typed
env
the above three variables were missing from the output.
Hope this will help others save a couple of days of research and pulling hair out of their heads ;)
I've found a full solution for deploying ImageMagick in a bundle with the help of CMake. If you don't use CMake then @Tomasz's answer will also be of help.
So let's start:
First of all you need to know what ImageMagick is trying to locate, and where, when it is used from your own code. To find out, you can use the MAGICK_DEBUG environment variable, which can be set to the debug events you want traced. It really helps when you debug ImageMagick.
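For example (the specific event names here are an assumption on my part; the debug output goes to stderr):
export MAGICK_DEBUG=all          # or a narrower list such as configure,module,coder
convert logo: /tmp/test.png      # run any operation and watch where IM looks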
Prerequisites:
I assume that you used FIND_PACKAGE and FIXUP_BUNDLE to find ImageMagick and set its binary paths inside the bundle. The only thing left is to deploy the coders. I also assume that you've installed ImageMagick from MacPorts.
We need to get the ImageMagick version string to correctly locate the coders:
STRING(REGEX REPLACE "-.+" "" ImageMagick_SHORT_VERSION ${ImageMagick_VERSION_STRING})
Now ImageMagick_SHORT_VERSION contains the version with anything after the dash (the release suffix) stripped off.
Then we need to copy all the coders to some predefined folder (I've used an ImageMagick/coders subfolder under the MacOS part of the bundle):
FILE(COPY /opt/local/lib/ImageMagick-${ImageMagick_SHORT_VERSION}/modules-Q16/coders/ DESTINATION ${PATH_TO_YOUR_BUNDLE}/Contents/MacOS/ImageMagick/coders/)
Now we need to fix up all the *.so libs we have, so we list them and pass the list to fixup_bundle:
FILE(GLOB IMAGEMAGICK_CODERS ${PATH_TO_YOUR_BUNDLE}/Contents/MacOS/ImageMagick/coders/*.so)
Now we should update the *.la files which accompany the coder *.so files. To achieve this I've used a script:
INSTALL(SCRIPT LaScript.cmake COMPONENT Runtime)
Script content:
SET(TARGET_BINARY_DIR "${PATH_TO_YOUR_BUNDLE}")
FILE(GLOB IMAGEMAGICK_CODERS_LA ${TARGET_BINARY_DIR}/Contents/MacOS/ImageMagick/coders/*.la)
FOREACH(file ${IMAGEMAGICK_CODERS_LA})
FILE(READ ${file} FILE_CONTENT)
STRING(REGEX REPLACE "dependency_libs='.*'" " " MODIFIED_FILE_CONTENT ${FILE_CONTENT})
STRING(REGEX REPLACE "libdir='.*'" " " MODIFIED_FILE_CONTENT ${MODIFIED_FILE_CONTENT})
FILE(WRITE ${file} ${MODIFIED_FILE_CONTENT})
ENDFOREACH()
We are almost ready; the only thing left to be done is to change the way we launch the application. But let's digress a little bit and find out where ImageMagick searches for the coders:
It tries to get the content of the MAGICK_CODER_MODULE_PATH environment variable.
Then it checks whether the MAGICKCORE_CODER_PATH macro is defined (and in fact it is!) and uses its value.
Then it will try to use the MAGICK_HOME environment variable and MAGICKCORE_CODER_RELATIVE_PATH to get the path to the modules, but we don't care since we will stop at #2 anyway! (NOTE: this is true for a MacPorts installation.)
So the only way we can interfere with the search is to set the MAGICK_CODER_MODULE_PATH environment variable. (Well, we could also edit libMagickCore and replace MAGICKCORE_CODER_PATH with some static path we need, but that is too brittle a way to do things, and it won't save us if someone sets MAGICK_CODER_MODULE_PATH anyway.)
We shouldn't set it system-wide since we could break a user's existing installation, so we have two options:
Use LSEnvironment to set the MAGICK_CODER_MODULE_PATH to some predefined location
Use a script to launch our app and set this variable inside it.
I've chosen the latter since it is more flexible.
I have the following script:
#!/bin/bash
working_dir="${0%/*}"
export MAGICK_CODER_MODULE_PATH=$working_dir/ImageMagick/coders
executable="${working_dir}/ApplicationName"
"$executable"
and set CFBundleExecutable to the name of the script.
That's all, and I hope it will help someone save some time.
You should follow the Mac OS X-specific build instructions, but specify --enable-shared in the configure options (see this document for details).
I guess that your application can't find the coders because they have been statically linked into the ImageMagick tools. This is usually done to address portability issues. To make the coders available to your application, you should build them as shared objects.
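A hedged sketch of such a build from source; --enable-shared is the option mentioned above, while the --prefix location and --with-modules (to build the coders as loadable modules) are assumptions on my part:
./configure --prefix="$HOME/imagemagick" --enable-shared --with-modules
make && make install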
