I am trying to compile a project with a large code base and (possibly) not fully up-to-date CMakeLists.txt files. The software has several components: you build the core of the application first and then proceed to build various extensions. The core uses Boost as one of its many dependencies.
I successfully configured and built the core component. I am now building the GUI extension. Configure succeeds, but make fails while linking against Boost with the following errors:
/usr/bin/ld: cannot find -lBoost::filesystem
/usr/bin/ld: cannot find -lBoost::system
I can fix this by manually invoking gcc with -lBoost::filesystem replaced by -lboost_filesystem.
Clearly something went wrong with the configuration. When I inspect the variables with ccmake I can confirm that CMake is pointing to the right Boost directory. After investigating the CMakeLists.txt files I found that ${Boost_FILESYSTEM_LIBRARY} is referenced in the core sources, but not in the extension, e.g.
SET(COMMON_LIBS
Registry
...
${Boost_FILESYSTEM_LIBRARY}
${Boost_SYSTEM_LIBRARY}
)
...
TARGET_LINK_LIBRARIES(Launcher ResourcesManager ${LIBBATCH_LIBRARIES} ${LIBXML2_LIBS} ${Boost_FILESYSTEM_LIBRARY} ${Boost_SYSTEM_LIBRARY})
...
TARGET_LINK_LIBRARIES(SalomeLauncher Launcher ${COMMON_LIBS})
Could you please point me in the right direction? In particular, do problems like this indicate issues with Boost, with the application kernel, or with the application extension? Any hint at this stage would be useful.
Motivation and setup
I am trying to compile SALOME on Arch Linux with cmake version 3.17.1.
I am going to post a less manual solution in case anyone comes across this and cannot fix their CMakeLists.txt. I ended up using sed to automate the build:
cmake \
-DCMAKE_BUILD_TYPE=Release \
...
../ \
&& find -name link.txt -exec sed -i \
-e 's/Boost::filesystem/boost_filesystem/' \
-e 's/Boost::system/boost_system/' {} \; \
&& make -j && make install
This should obviously not be necessary, but if you cannot debug your build system, it should work.
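For completeness: seeing -lBoost::filesystem on the link line usually means the name Boost::filesystem reached the linker as a plain library name, i.e. the corresponding imported target was never defined in the extension's CMake run. A quick, rough way to check where each module asks for Boost (the KERNEL/GUI paths below are illustrative):
# which modules call find_package(Boost ...) and with which COMPONENTS?
grep -rn "find_package(Boost" /path/to/KERNEL /path/to/GUI
# where do the Boost::* target names or Boost_*_LIBRARY variables show up?
grep -rn "Boost::filesystem\|Boost_FILESYSTEM_LIBRARY" /path/to/GUI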
Related
I'm very new to Yesod and I'm having trouble building Yesod statically
so I can deploy to Heroku.
I have changed the default .cabal file to reflect static compilation
if flag(production)
cpp-options: -DPRODUCTION
ghc-options: -Wall -threaded -O2 -static -optl-static
else
ghc-options: -Wall -threaded -O0
And it no longer builds. I get a whole bunch of warnings and then a
slew of undefined references like this:
Linking dist/build/personal-website/personal-website ...
/usr/lib/ghc-7.0.3/libHSrts_thr.a(Linker.thr_o): In function
`internal_dlopen':
Linker.c:(.text+0x407): warning: Using 'dlopen' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwent':
HsUnix.c:(.text+0xa1): warning: Using 'getpwent' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwnam_r':
HsUnix.c:(.text+0xb1): warning: Using 'getpwnam_r' in statically
linked applications requires at runtime the shared libraries from the
glibc version used for linking
/usr/lib/libpq.a(thread.o): In function `pqGetpwuid':
(.text+0x15): warning: Using 'getpwuid_r' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(ip.o): In function `pg_getaddrinfo_all':
(.text+0x31): warning: Using 'getaddrinfo' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__63.o): In function `sD3z_info':
(.text+0xe4): warning: Using 'gethostbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__164.o): In function `sFKc_info':
(.text+0x12d): warning: Using 'getprotobyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__155.o): In function `sFDs_info':
(.text+0x4c): warning: Using 'getservbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(fe-misc.o): In function `pqSocketCheck':
(.text+0xa2d): undefined reference to `SSL_pending'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x31): undefined reference to `ERR_get_error'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x41): undefined reference to `ERR_reason_error_string'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x2f8): undefined reference to `SSL_check_private_key'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x3c0): undefined reference to `SSL_CTX_load_verify_locations'
(... snip ...)
If I compile with just -static and without -optl-static, everything builds fine, but the application crashes when it tries to start on Heroku.
2011-12-28T01:20:51+00:00 heroku[web.1]: Starting process with command
`./dist/build/personal-website/personal-website -p 41083`
2011-12-28T01:20:51+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: error while loading shared libraries: libgmp.so.10:
cannot open shared object file: No such file or directory
2011-12-28T01:20:52+00:00 heroku[web.1]: State changed from starting
to crashed
I tried adding libgmp.so.10 to LD_LIBRARY_PATH as suggested here, and then got the following error:
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by ./dist/build/personal-website/personal-website)
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by /app/dist/build/personal-website/libgmp.so.10)
2011-12-28T01:31:25+00:00 heroku[web.1]: State changed from starting
to crashed
2011-12-28T01:31:25+00:00 heroku[web.1]: Process exited
It seems that the version of libc I'm compiling against is different. I also tried adding libc to the batch of libraries the same way I did for libgmp, but this results in a segmentation fault when the application starts on the Heroku side.
Everything works fine on my PC. I'm running 64-bit Arch Linux with GHC 7.0.3. The blog post on the official Yesod blog made it look pretty easy, but I'm stumped at this point. Anyone have any ideas? If there's a way to get this thing working without building statically, I'm open to that too.
EDIT
Per Employed Russian's answer I did the following to fix this.
First, I created a new directory lib under the project directory and copied the missing shared libraries into it. You can find out which libraries are missing by running ldd path/to/executable locally and heroku run ldd path/to/executable, and comparing the output.
I then did heroku config:add LD_LIBRARY_PATH=./lib so that when the application is started the dynamic linker will also look for libraries in the new lib directory.
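A rough sketch of those two steps (the executable path is the one from the logs above; which libraries you copy, and from where, is whatever the ldd comparison reports on your machine):
ldd dist/build/personal-website/personal-website > local-libs.txt
heroku run ldd dist/build/personal-website/personal-website > heroku-libs.txt
diff local-libs.txt heroku-libs.txt   # anything "not found" on Heroku needs copying
mkdir -p lib
cp /usr/lib/libgmp.so.10 lib/         # repeat for each missing library
heroku config:add LD_LIBRARY_PATH=./lib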
Finally, I created an Ubuntu 11.10 virtual machine and built and deployed to Heroku from there; it has an old enough glibc that the binary works on the Heroku host.
Edit:
I've since written a tutorial on the Yesod wiki
I have no idea what Yesod is, but I know exactly what each of your other errors means.
First, you should not try to link statically. The warning you get is exactly right: if you link statically, and use one of the routines for which you are getting the warning, then you must arrange to run on a system with exactly the same version of libc.so.6 as the one you used at build time.
Contrary to popular belief, static linking produces less, not more, portable executables on Linux.
Your other (static) link errors are caused by a missing OpenSSL static library (libssl.a / libcrypto.a) at link time.
But let's assume that you are going to go the "sane" route, and use dynamic linking.
For dynamic linking, Linux (and most other UNIXes) support backward compatibility: an old binary continues to work on newer systems. But they don't support forward compatibility (a binary built on a newer system will generally not run on an older one).
But that's what you are trying to do: you built on a system with glibc-2.14 (or newer), and you are running on a system with glibc-2.13 (or older).
The other thing you need to know is that glibc is composed of some 200+ binaries that must all match exactly. Two key binaries are /lib/ld-linux.so and /lib/libc.so.6 (but there are many more: libpthread.so.0, libnsl.so.1, etc. etc). If some of these binaries came from different versions of glibc, you usually get a crash. And that is exactly what you got, when you tried to place your glibc-2.14 libc.so.6 on the LD_LIBRARY_PATH -- it no longer matches the system /lib/ld-linux.
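You can see the mismatch concretely by asking the binary which glibc symbol versions it requires and checking what the target system actually has (a quick diagnostic; the binary path is the one from your question):
# on the build machine: glibc versions the binary requires
objdump -T dist/build/personal-website/personal-website | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# on the target (e.g. via heroku run): glibc version actually installed
ldd --version | head -n1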
So what are the solutions? There are several possibilities (in increasing difficulty):
You could copy ld-2.14.so (the target of /lib/ld-linux symlink) to the target system, and invoke it explicitly:
/path/to/ld-2.14.so --library-path <whatever> /path/to/your/executable
This generally works, but can confuse an application that looks at argv[0], and breaks for applications that re-exec themselves.
You could build on an older system.
You could use appgcc (this option has disappeared, see this for description of what it used to be).
You could set up a chroot environment matching the target system, and build inside that chroot.
You could build yourself a Linux-to-older-Linux cross-compiler.
You have several issues.
You should not build production binaries on bleeding edge distributions. The libraries on the production system will not be forward compatible.
You should not link glibc statically - it will still try to load additional libraries at runtime, for example CPU-optimised variants or the NSS modules behind getpwent and getaddrinfo. That is what your first warnings are about.
The last linker errors look like they are caused by a missing OpenSSL library on the link command line.
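If you do keep the static link, you can confirm that the undefined SSL_*/ERR_* symbols really live in OpenSSL's static archives (assuming they are installed in the usual /usr/lib location) and then add -lssl -lcrypto after -lpq on the link line:
nm /usr/lib/libssl.a    | grep -w SSL_pending
nm /usr/lib/libcrypto.a | grep -w ERR_get_error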
But all in all - downgrade your distribution.
I had similar problems deploying to Heroku (which uses glibc-2.11): I had an application that required glibc-2.14, but I did not have access to the source and could not rebuild it. I tried many things and nothing worked.
My workaround was to launch the service on Amazon Elastic Beanstalk and just provide an API interface.
I found the information provided here useful as well, but I think the various descriptions miss a critical issue I also ran into while forcing an updated version of Vagrant to start working again.
It's the dependency references internal to a complicated install, like Yesod on Heroku: those internal references need to be preserved.
This is the script I wrote to make problems go away (at least, hopefully, for a little while):
#!/bin/bash
cd $HOME/
GLIBC_VERSION="2.17"
GLIBC_PREFIX="/usr/glibc/"
VAGRANT_VERSION="2.2.19"
# Install the basic build system utilities.
yum groupinstall -y "Development tools"
yum install -y curl patchelf
# Grab the tarball with the GNU libc source code.
curl -Lfo glibc-${GLIBC_VERSION}.tar.gz "https://ftp.gnu.org/gnu/glibc/glibc-${GLIBC_VERSION}.tar.gz"
echo "a3b2086d5414e602b4b3d5a8792213feb3be664ffc1efe783a829818d3fca37a glibc-${GLIBC_VERSION}.tar.gz" | sha256sum -c || exit 1
# Extract the secrets and get ready to rumble.
tar xzvf glibc-${GLIBC_VERSION}.tar.gz
# The configure script requires an independent build directory.
mkdir -p glibc-build && cd glibc-build
# Configure glibc with a GLIBC_PREFIX so it doesn't conflict with distro libc files.
../glibc-${GLIBC_VERSION}/configure --prefix="${GLIBC_PREFIX}" --libdir="${GLIBC_PREFIX}/lib" \
--libexecdir="${GLIBC_PREFIX}/lib" --enable-multi-arch
# Compile and then install GNU libc.
make -j8 && make install
# Download and install Vagrant.
curl -Lfo vagrant_${VAGRANT_VERSION}_x86_64.rpm "https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}_x86_64.rpm"
echo "990e8d2159032915f21c0f1ccdcbca1a394f7937e06e43dc1dabe605d208dc20 vagrant_${VAGRANT_VERSION}_x86_64.rpm" | sha256sum -c || exit 1
yum install -y vagrant_${VAGRANT_VERSION}_x86_64.rpm
# Patch the binaries and shared libraries inside the Vagrant directory, so they use the new version of GNU libc.
(find /opt/vagrant/ -type f -exec file {} \; )| grep "dynamically linked" | awk -F':' '{print $1}' | while read FILE ; do
patchelf --set-rpath /opt/vagrant/embedded/lib:/opt/vagrant/embedded/lib64:/usr/glibc/lib:/usr/lib64:/lib64:/lib --set-interpreter /usr/glibc/lib/ld-linux-x86-64.so.2 "${FILE}"
done
The script should be pretty easy to understand and to adapt to whatever MacGuffin you want to make work, provided you understand it.
The only tricky part is the rpath you pass to patchelf. You need to make sure you preserve the search paths and the precedence your software requires, or you end up fixing one problem only to create another equally frustrating roadblock.
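A quick way to double-check what a patched file actually ended up with (the embedded ruby binary is just an example; use any file the loop above touched):
patchelf --print-interpreter /opt/vagrant/embedded/bin/ruby
patchelf --print-rpath /opt/vagrant/embedded/bin/ruby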
P.S. Don't forget to update the hashes for any file you download. In particular, if you compile/install a different version of GNU libc, you will need to update that hash to match the version you want to use.
I want to build my own operating system, but how do I install i686-elf-gcc on Manjaro?
I found a tool (https://github.com/lordmilko/i686-elf-tools), but it can only be run on Ubuntu.
A simple solution would be to build the compiler yourself. I went through the same thing recently. If you are into operating system development, you won't be able to avoid looking at the compiler in more detail and building cross compilation tools anyway.
Building your own compiler
The build process can be roughly divided into 4 steps:
Install all dependencies necessary for the build. If I remember correctly, you can get everything from the official package sources in Arch Linux. Make sure that these packages/tools are present: make, bison, flex, gmp, mpc, mpfr, texinfo, libisoburn, mtools.
Download the source code of binutils (GNU's assembler and binary tools) and gcc (the GNU compiler collection). I recommend using the newest versions at the bottom of the respective pages.
Decide where your new compiler should be installed. Although it sounds tempting, it should not end up in any system directory, but rather somewhere in your home folder. I used $HOME/tools/crc to store my cross-compilation tools. You can add it to your $PATH later on for convenience.
Do the actual build. First of all: the build takes a while and needs quite a few command line switches. Do not omit any of them - the build may still pass, but problems will surface later. Just follow the instructions below.
The actual build process
The first thing to do is to compile binutils, because it is needed for the gcc build. For convenience set a few shell variables to minimize error sources:
# This is where the tools will end up
export PREFIX="$HOME/tools/crc"
# Prefix of the produced tools (for example i686-elf-gcc)
export TARGET=i686-elf
# Add the new installation to the PATH variable temporarily
# since it is required for the gcc build
export PATH="$PREFIX/bin:$PATH"
Now create a new directory somewhere and extract both the gcc and binutils source code archives in there. You should end up with two subdirectories like yourdir/binutils-x.y.z and yourdir/gcc-x.y.z. It is recommended to do the build in an empty directory, so create yourdir/build-binutils and yourdir/build-gcc as well. Notice: These directories are not placed inside the source directories!
Building binutils
cd into the yourdir/build-binutils directory and run the following commands. Replace the x.y.z part with your version.
../binutils-x.y.z/configure \
--target=$TARGET \
--prefix="$PREFIX" \
--with-sysroot \
--disable-nls \
--disable-werror
make
make install
Now check the installation with which -- $TARGET-as. This will return the location of i686-elf-as, which is the assembler we just built.
Building gcc
cd into the yourdir/build-gcc directory. The process is pretty much the same as with binutils above:
../gcc-x.y.z/configure \
--target=$TARGET \
--prefix="$PREFIX" \
--disable-nls \
--enable-languages=c,c++ \
--without-headers
make all-gcc
make all-target-libgcc
make install-gcc
make install-target-libgcc
Verify the build
Check the installation by invoking i686-elf-gcc --version. If you used the same values as I did, this can be done with $HOME/tools/crc/bin/$TARGET-gcc --version.
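As a further smoke test, the new compiler should happily produce a freestanding 32-bit object file (test.c here is just a throwaway example):
echo 'int add(int a, int b) { return a + b; }' > test.c
$HOME/tools/crc/bin/i686-elf-gcc -ffreestanding -c test.c -o test.o
$HOME/tools/crc/bin/i686-elf-objdump -d test.o   # should report elf32-i386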
I have a Cortex-M0 project using a custom Makefile that builds and debugs successfully on a first machine.
Now I am trying to move the project to a second Mac.
Same version of Eclipse.
On build I get a linker error:
EclipseApr2019/gcc-arm-none-eabi-5_2-2015q4/bin/../lib/gcc/arm-none-eabi/5.2.1/../../../../arm-none-eabi/bin/ld: cannot find -lg
My makefile looks like this (extract):
# echo "path="$(TOOLS)
$(TOOLS)arm-none-eabi-gcc -n -v -mcpu=cortex-m0 -mthumb -g -nostartfiles -T STM32F031C6_simple.ld main.c StartUp_simple.s -o $(NAME).elf
I have tried to append the ARM gcc tools directory to the PATH variable in the Project, but no luck.
I would add a -L option to the link stage in the makefile, but I do not know why this library is being pulled in or where it is. My code only does a series of shifts and reads/writes to registers on the MCU. The build on the first machine worked fine without specifying a library location like this.
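For reference, the toolchain itself can report which libg.a it would pick up and where it searches (run with the same CPU flags as in the makefile; it prints just "libg.a" when the file cannot be found):
arm-none-eabi-gcc -mcpu=cortex-m0 -mthumb -print-file-name=libg.a
arm-none-eabi-gcc -print-search-dirs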
Given that I have a custom makefile and am not generating one automatically, there are no tool settings (or library search path) available under Properties > C/C++ Build > Settings.
What is library "g" that the linker is pulling in?
Where is it?
Under Eclipse, how can I point the linker to the library?
Why didn't I need to do that before?
What is some general advice for designing an Eclipse project with a custom makefile to make it most portable between machines?
Thank you.
Eclipse IDE for C/C++ Developers
Version: 2019-03 (4.11.0)
My build system has libtiff installed in this path:
/usr/lib/x86_64-linux-gnu/libtiff.so.5.2.4
And I have built a custom libtiff in a local path:
/home/user/libtiff/usr/local/lib/libtiff.so.3.8.2
I want to build a binary linked against the libtiff installed in my local path. To do that, I use this command:
cc -o binary \
obj1.o ... objn.o \
-L /home/user/libtiff/usr/local/lib/ \
-Wl,-rpath,L/home/user/libtiff/usr/local/lib/ \
-ltiff
The problem is that after linking and generating the binary, ldd shows the binary is not using the local libtiff, but the library installed on the build system:
$ ldd binary | grep libtiff
libtiff.so.5 => /usr/lib/x86_64-linux-gnu/libtiff.so.5 (0x00007fbaf9ad6000)
I don't understand why the linker is not using the local library.
I have read some related posts talking about setting LD_LIBRARY_PATH, LD_PRELOAD or LIBRARY_PATH, but none of them works as expected.
Modifying /etc/ld.so.conf is not a nice option.
Remove the spurious L in front of the root slash:
-Wl,-rpath,L/home/user/libtiff/usr/local/lib/
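so that the option reads:
-Wl,-rpath,/home/user/libtiff/usr/local/lib/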
@yugr, thank you for the -verbose tip. It helped me to fix the issue. The problem was with another library compiled locally (spandsp) that depends on libtiff. The configure script of spandsp was deciding to use libtiff.so.5 (from the build system) instead of libtiff.so.3 (compiled locally). That was because LDFLAGS was not properly defined before executing the configure script. Defining LDFLAGS as -L/home/user/usr/local/lib/ -Wl,-rpath-link,/home/user/usr/local/lib/ fixed the issue. Thank you very much for your interest in helping with this issue! :)
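For anyone hitting the same thing, the shape of that fix was simply (paths exactly as in the comment above, run before configuring spandsp):
export LDFLAGS="-L/home/user/usr/local/lib/ -Wl,-rpath-link,/home/user/usr/local/lib/"
./configure
make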
I have compiled gdc together with gcc using the android build-gcc.sh script, and have included a new stub in build/core/definitions.mk to deal with D language files as a part of the build process. I know things are compiling OK at this point, but my problem is linking:
When I build a project, I get this error:
ld: crtbegin_so.o: No such file: No such file or directory
This is true for regular c-only projects as well. Now I ran a quick find in my build directory, and found that the file (crtbegin_so.o) does exist within the sysroot I specified when I compiled gcc (or rather, when build-gcc.sh built it).
What are some things I could look for to find a solution to this problem?
Would copying the files locally and linking directly to them be a decent solution in the
interim?
Why would ld (or collect2) be trying to include these for a gdc (D Language) linkage?
The issue arises on NDK r7c for linux as well.
I found that the toolchain ignores the platform location ($NDK_ROOT/platforms/android-8/arch-arm/usr/lib/) and searches for it in the toolchain path, which is incorrect.
However, as the toolchain also searches for the file in the current directory, one solution is to symlink the correct platform crtbegin_so.o and crtend_so.o into the source directory:
cd src && ln -s $NDK_ROOT/platforms/android-8/arch-arm/usr/lib/crtbegin_so.o
cd src && ln -s $NDK_ROOT/platforms/android-8/arch-arm/usr/lib/crtend_so.o
Thus your second point should work out (where you can do a symlink, instead of a copy)
NOTE 1: This assumes that the code is being compiled for API 8 (Android 2.2) using the NDK. Please alter the path as per your requirements.
NOTE 2: Configure flags used:
./configure \
--host=arm-linux-androideabi \
CC=arm-linux-androideabi-gcc \
CPPFLAGS="-I$NDK_ROOT/platforms/android-8/arch-arm/usr/include/" \
CFLAGS="-nostdlib" \
LDFLAGS="-Wl,-rpath-link=$NDK_ROOT/platforms/android-8/arch-arm/usr/lib/ -L$NDK_ROOT/platforms/android-8/arch-arm/usr/lib/" \
LIBS="-lc"
I have found that adding --sysroot=$(SYSROOT) to the compiler options fixes the error:
cannot open crtbegin_so.o: No such file or directory
From my makefile:
CC = $(CROSS_COMPILE)gcc --sysroot=$(SYSROOT) -fvisibility=hidden $(INC) $(LIB) -shared
Note: this assumes that setenv-android.sh has been run to set up the environment:
$ . ./setenv-android.sh
In my case, quotes were missing from the sysroot path.
When I changed
--sysroot=${ANDROID_NDK}\platforms\android-17\arch-arm
to
--sysroot="${ANDROID_NDK}\platforms\android-17\arch-arm"
the project was compiled and linked successfully.
I faced the same issue in two separate cases:
while building Boost for Android
while using the android-cmake project
Once I switched to a standalone toolchain the issue was gone. Here is an example of the command which prepares a standalone toolchain:
$NDK_ROOT/build/tools/make-standalone-toolchain.sh --platform=android-9 --install-dir=android-toolchain --ndk-dir=$NDK_ROOT --system=darwin-x86_64 --toolchain=arm-linux-androideabi-4.9
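After that, the compilers under android-toolchain/bin can be used directly, since a standalone toolchain has its sysroot baked in (hello.c is just a placeholder source file):
export PATH="$PWD/android-toolchain/bin:$PATH"
arm-linux-androideabi-gcc -o hello hello.c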
Boost specific
For Boost you need to specify --sysroot several times in your jam file:
<compileflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm
<linkflags>--sysroot=$NDK_ROOT/platforms/android-9/arch-arm