I have a virtual Debian system which I use for development. Today I wanted to try llvm/clang. After installing clang I can't compile my old C projects (with gcc).
This is the error:
/usr/bin/ld: cannot find crt1.o: No such file or directory
/usr/bin/ld: cannot find crti.o: No such file or directory
collect2: ld returned 1 exit status
I uninstalled clang, but it still doesn't work. Does anyone have an idea how I can fix this?
Debian / Ubuntu
The problem is that you likely only have gcc for your current architecture, and that's 64-bit. You need the 32-bit support files as well. To get them, install:
sudo apt install gcc-multilib
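To confirm the 32-bit support files landed, a quick smoke test (hello.c here is just a throwaway example file) might look like:
echo 'int main(void){return 0;}' > hello.c
gcc -m32 hello.c -o hello32   # should now find crt1.o/crti.o without errors
file hello32                  # expect something like "ELF 32-bit LSB executable"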
What helped me was creating a symbolic link:
sudo ln -s /usr/lib/x86_64-linux-gnu /usr/lib64
It seems that while you were playing with llvm/clang, you (or the package manager) removed the previously existing standard C library development package (eglibc on Debian), or perhaps you never had it installed in the first place. Now that you have reverted to gcc, you need to reinstall it.
You can do so like this on Debian:
aptitude install libc-dev
Ubuntu:
apt-get install libc-dev
On Ubuntu, if libc-dev is unavailable (I cannot find it on packages.ubuntu.com), you can install libc6-dev directly.
Or on Red Hat-like systems:
yum install glibc-devel
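Either way, on Debian/Ubuntu you can check which package ships the startup files; the output below is what an amd64 box typically reports, so yours may differ:
$ dpkg -S crt1.o
libc6-dev: /usr/lib/x86_64-linux-gnu/crt1.o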
NB: Although this was briefly answered in the comments, here is an answer on record for anyone who encounters the problem later and doesn't look through the comments, or finds the comment insufficiently explicit.
This is a bug reported on Launchpad, but there is a workaround:
Run this to see where the files are located:
$ find /usr/ -name crti*
/usr/lib/x86_64-linux-gnu/crti.o
Then add this path to the LIBRARY_PATH variable:
$ export LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LIBRARY_PATH
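If that fixes the build, you may want to make it persist across shells; one way (assuming an amd64 system and a login shell that reads ~/.profile) is:
$ echo 'export LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LIBRARY_PATH' >> ~/.profile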
After reading http://wiki.debian.org/Multiarch/LibraryPathOverview, which jeremiah posted, I found the gcc flag that works without the symlink:
gcc -B/usr/lib/x86_64-linux-gnu hello.c
So, you can just add -B/usr/lib/x86_64-linux-gnu to the CFLAGS variable in your Makefile.
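A minimal sketch of such a Makefile, assuming an amd64 multiarch system and a throwaway hello.c, might look like:
# hypothetical Makefile fragment; adjust the -B path for your architecture
CFLAGS += -B/usr/lib/x86_64-linux-gnu

hello: hello.c
	$(CC) $(CFLAGS) -o $@ $<   # recipe line must start with a tab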
If you're using Debian's Testing version, called 'wheezy', then you may have been bitten by the move to multiarch. More about Debian's multiarch here: http://wiki.debian.org/Multiarch
Basically, various architecture-specific libraries are being moved from their traditional places in the file system to new architecture-specific places, and this is why /usr/bin/ld is confused.
You will find crt1.o in both /usr/lib64/ and /usr/lib/i386-linux-gnu/ now, and you'll need to tell your toolchain about that. Here is some documentation on how to do that: http://wiki.debian.org/Multiarch/LibraryPathOverview
Note that merely creating a symlink will only give you one architecture, and you'd essentially be disabling multiarch. While this may be what you want, it might not be the optimal solution.
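If you're not sure which multiarch directory your toolchain expects, recent Debian/Ubuntu gcc builds can tell you directly (dpkg-architecture comes with the dpkg-dev package):
$ gcc -print-multiarch
x86_64-linux-gnu
$ dpkg-architecture -qDEB_HOST_MULTIARCH
x86_64-linux-gnu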
To get RHEL 7 64-bit to compile gcc 4.8 32-bit programs, you'll need to do two things.
Make sure all the 32-bit gcc 4.8 development tools are completely installed:
sudo yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686
Compile programs using the -m32 flag
gcc pgm.c -m32 -o pgm
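You can verify the result really is a 32-bit binary with file; the output below is abridged and will vary by system:
$ file pgm
pgm: ELF 32-bit LSB executable, Intel 80386 ...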
Stolen from here: How to Compile 32-bit Apps on 64-bit RHEL? - I only had to do step 1.
As explained in crti.o file missing, it's better to use gcc -print-search-dirs to find out all the search paths. Then create a link as explained above (sudo ln -s) to point to the location of crt1.o.
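For example (output trimmed; the exact paths depend on your gcc build), the libraries line is where the linker looks for crt1.o:
$ gcc -print-search-dirs | grep '^libraries'
libraries: =/usr/lib/gcc/x86_64-linux-gnu/4.6/:...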
This worked for me with Ubuntu 16.04
$ export LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
If you are building from source, configuring with
./configure --disable-multilib
also works.
On Alpine Linux that would mean that you need musl-dev:
apk add musl-dev
Although in my case the messages were:
/usr/lib/gcc/x86_64-alpine-linux-musl/11.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find Scrt1.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/11.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find crti.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/11.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared: No such file or directory
collect2: error: ld returned 1 exit status
Which are also caused by missing musl-dev.
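A quick way to confirm the fix on Alpine (t.c is just a throwaway file):
echo 'int main(void){return 0;}' > t.c
gcc t.c -o t && echo OK   # links cleanly once musl-dev provides the startup files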
Ran into this on CentOS 5.4. I noticed that lib64 contained the crt*.o files, but lib did not. I installed glibc-devel through yum, which installed the i386 bits, and this resolved my issue.
I got the same compilation error when I was cross-compiling with i686-cm-linux-gcc.
The following compilation option solved my problem:
$ i686-cm-linux-gcc a.c --sysroot=/opt/toolchain/i686-cm-linux-gcc
Note: the sysroot should point to the compiler directory where usr/include is available.
In my case the toolchain is installed in the /opt/toolchain/i686-cm-linux-gcc directory and usr/include is available in the same directory.
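A quick sanity check for any sysroot is to confirm that usr/include actually exists beneath it, e.g.:
$ ls /opt/toolchain/i686-cm-linux-gcc/usr/include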
I solved it as follows:
1) Try to locate the crt1.o and crti.o files, using find /usr/ -name crt1.o
On my computer they are in: /usr/lib/i386-linux-gnu
2) Add that path to the LIBRARY_PATH environment variable (to see which variables are set, type the env command in the terminal):
LIBRARY_PATH=/usr/lib/i386-linux-gnu:$LIBRARY_PATH
export LIBRARY_PATH
I had the same problem today. I solved it by installing the recommended packages:
libc6-dev-mipsel-cross, libc-dev-mipsel-cross
This worked:
sudo apt-get install libc6-dev-mipsel-cross
One magic command:
sudo apt install build-essential
Fixed everything for me even on Raspberry Pi.
In my case, the crti.o error was caused by the execution path configuration in MATLAB.
For instance, you cannot compile a file if you have not set the path of your execution directory first.
To do this: File > Set Path, add your directory and save.
Use gcc -B lib_path_containing_crt?.o (pass the directory that contains the crt*.o files via the -B flag).
In my case (Ubuntu 16.04) I had no crti.o at all:
$ find /usr/ -name crti*
So I installed the libc6-dev developer package:
sudo apt-get install libc6-dev
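After the install, the same find should turn the file up (the path below is what an amd64 system typically shows):
$ find /usr/ -name crti*
/usr/lib/x86_64-linux-gnu/crti.o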
After I successfully set up my gcc/g++ environment under my Linux installation, I decided to do the same for my Windows 11 machine. For that purpose I decided to use MSYS2. With the help of that handy tool I quickly installed MinGW as well as the corresponding libraries.
One library which gives me a headache (under Windows) is pkg-config. But before installing pkg-config, I installed gtk-3.0 first, with the following command:
pacman -S mingw-w64-x86_64-gtk3
After that I installed pkg-config with the following command:
pacman -S mingw-w64-x86_64-pkg-config
After that, I tried to get all include and library flags for gtk3:
pkg-config --cflags gtk+-3.0
However after entering that command, the following error message occurs:
Package gtk+-3.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtk+-3.0.pc'
to the PKG_CONFIG_PATH environment variable
Package 'gtk+-3.0', required by 'virtual:world', not found
The thing is that this exact command works like a charm under my Linux installation but somehow pkg-config can't find the package in the pkg-config search path. Why is that the case? Is that a known problem within the MSYS2 environment?
I would appreciate every tip I can get from you.
Thank you in advance!
EDIT: It looks like I just had to start the MinGW64 shell and not the MSYS2 one. Within that environment the files can be found and no error occurs. Thanks @HolyBlackCat!
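For reference, once you're in the right shell, a typical GTK3 build line (gtk_hello.c is just a placeholder name) looks like:
gcc $(pkg-config --cflags gtk+-3.0) -o gtk_hello gtk_hello.c $(pkg-config --libs gtk+-3.0)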
The following answer is outdated and shouldn't be followed; it is kept here for the record.
I just solved it by myself. I found out that I had to copy all .pc files from msys64\mingw64\lib\pkgconfig to the path I get from echoing PKG_CONFIG_PATH:
echo $PKG_CONFIG_PATH
gives me
/usr/lib/pkgconfig:/usr/share/pkgconfig:/lib/pkgconfig
So I just copied the files to /usr/lib/pkgconfig - problem solved!
Thank you anyway! :)
When attempting to compile RNNLib, I got an error in NetcdfDataset.hpp:26:24 saying that netcdfcpp.h could not be found. I looked around and found a 2011 bug report about this, but it claimed the issue had been fixed. I have tried everything I can think of, including rebuilding NetCDF (a dependency of RNNLib) with various different flags, and have been unable to fix this. Can anyone give me a hand?
I had some trouble on a virtual machine building rnnlib.
I had to install the C and C++ version of NetCDF to get it to work.
The C version can be installed via sudo apt-get install libnetcdf-dev
I had to install the C++ version by building it.
Hope it will help. It's quite a difficult lib to install.
Maybe this helps someone: you can avoid some of the pain by installing packages from APT and still get the correct version mentioned by user3620756, which contains the netcdfcpp.h header file. This works through a legacy package, available on Ubuntu 16.04 (Xenial universe, see the APT repository).
First install libnetcdf for C, then install libnetcdf-cxx-legacy-dev, which depends on libnetcdf-c++4 and installs the required C++ libraries along the way:
sudo apt install libnetcdf-dev libnetcdf-cxx-legacy-dev
The newest version doesn't have this netcdfcpp.h file anymore.
I had to use ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-cxx-4.2.tar.gz to get it working.
I have also followed the same process and it worked for me:
"The newest version doesn't have this netcdfcpp.h file anymore. I had to use ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-cxx-4.2.tar.gz to get it working."
After downloading the archive, I had to build it from inside the netcdf folder. I used these simple commands for the task:
./configure
make
sudo make install
But in the file named "NetcdfDataset.hpp" I had to give the complete path of the netcdfcpp.h file. In my case the path of the include file is:
#include "/Volumes/Macintosh_HD_2/WordSpottingProj/trunk/CODE C++/rnnlib_source_forge_version/netcdf-cxx-4.2/cxx/netcdfcpp.h"
I had this problem in the context of trying to use a makefile that called for netcdfcpp.h:
$ make -f makefile_MAC
c++ -O2 -o burn7.x burn7.cpp -I/opt/local/include -L/opt/local/lib -lm -lnetcdf_c++
burn7.cpp:31:10: fatal error: 'netcdfcpp.h' file not found
#include <netcdfcpp.h>
^
1 error generated.
make: *** [burn7.x] Error 1
I'm on a Mac, so I used Homebrew to install the NetCDF package, but version 4.3.3.1 didn't appear to have netcdfcpp.h:
brew install homebrew/science/netcdf
However, I found that installing it with an additional flag resulted in this version being included:
brew install homebrew/science/netcdf --with-cxx-compat
I assume the same is true of other installation/compilation methods, rather than the file having been removed from versions since 4.2 as other answers state. Maybe it was a default option before and now it isn't?
I am trying to compile libgphoto2 with libxml2 support, following the guidelines here. Everything is OK until I try to run ./configure:
./configure --prefix=/tmp/gphoto2/local --with-libxml2=yes
That appears to me to be correct syntax; however, the output contains:
LIBXML2 to support Olympus ..: no
I have checked this on two different systems (Linux Mint 11 x64 and Ubuntu 13.04) and found the same problem.
Can anyone give me a clue or solution?
Is there any problem with the syntax?
Is there a common problem with the configure --with-PACKAGE[=yes] option?
Is there a common problem with LIBXML2 used in compilation?
Thanks for any help!
This problem appears on Debian Wheezy (Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2 x86_64 GNU/Linux) with the latest libgphoto2 release, 2.5.2.
The libxml2-dev package is installed:
Package: libxml2-dev
State: installed
Automatically installed: no
Multi-Arch: same
Version: 2.8.0+dfsg1-7+nmu1
I'm not totally familiar with configure scripts, but the configure.ac file has this line:
AC_CHECK_HEADER(libxml/parser.h,[
which I assume looks for libxml/parser.h. The libxml2-dev package delivers the file
/usr/include/libxml2/libxml/parser.h
so it looks like libgphoto2 expects the libxml2 headers in a different place.
I tried various solutions, but only the following worked.
As root, I symlinked libxml2 to the place libgphoto2 was looking:
ln -s /usr/include/libxml2/libxml /usr/include/libxml
After compiling libgphoto2 and gphoto2, this enabled gphoto2 to talk to my Olympus E-510.
A bug was raised on the gphoto SourceForge site (https://sourceforge.net/p/gphoto/bugs/953/) and a patch has been provided.
Just found another way; thanks for your help.
After digging into the config.log file created by ./configure, I found the libxml2 error (which I had wrongly assumed would stop the configure script):
conftest.c:75:27: fatal error: libxml/parser.h: No such file or directory
But I knew it was there, it just wasn't being found! So I checked and found the header under
/usr/include/libxml2
I also found elsewhere that the libxml2 package comes with a script (xml2-config) that reports library and include flags:
$ xml2-config --cflags
-I/usr/include/libxml2
Then I just needed to add the output to the CFLAGS environment variable when configuring:
$ CFLAGS="-I/usr/include/libxml2" ./configure --prefix=/tmp/gphoto2/local --with-libxml2=yes
And everything else was just ok!
Usually, a --with-some-package=yes option checks for the existence of header files for some-package on your system. If it doesn't find the required header files, then it still outputs "no" to the terminal. Have you installed your distribution's libxml2-devel (or similarly named) package?
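You can emulate roughly what such a configure probe does with a one-liner; if the preprocessor can't see the header, configure will report "no" as well (the path assumes libxml2-dev is installed):
echo '#include <libxml/parser.h>' | gcc -E - -I/usr/include/libxml2 > /dev/null && echo found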
I have a project which requires OpenCL. I have installed CUDA and OpenCL on my machine, but when I make my project the following error occurs:
CL/cl.h: No such file or directory
I know that I can create a symbolic link (on my Unix (Ubuntu) system) to fix the problem:
ln -s /usr/include/nvidia-current/CL
But I consider this a quick fix and not the correct solution. I would like to handle this in my Makefile (I guess) so that a simple make command would compile. How could I do this?
You need to pass an appropriate -I option to the compiler (by setting CPPFLAGS or CFLAGS, for example). -I/usr/include/nvidia-current sounds like it'd work.
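A minimal sketch of a Makefile along those lines, assuming the nvidia-current include directory and an illustrative program name:
# hypothetical Makefile fragment; adjust the -I path to wherever CL/cl.h lives
CPPFLAGS += -I/usr/include/nvidia-current
LDLIBS   += -lOpenCL

myprog: myprog.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -o $@ $< $(LDLIBS)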
I saw this thread: compile opencl program using CL/cl.h file.
I installed CUDA 7.5 and added the link below in /usr/include; it works for my OpenCL program. It looks like the CUDA installer forgot to create this link during installation.
ln -s /usr/local/cuda-7.5/include/CL /usr/include
Are you using an Ubuntu or Debian distro? Then you can now use this package:
sudo apt-get install opencl-headers
1) I need gcc-4.1 for Matlab mex usage, but I can't get it installed fully with apt-get install:
The following packages have unmet dependencies:
libstdc++6-4.1-dev : Depends: gcc-4.1-base (= 4.1.2-27ubuntu1) but 4.1.2-29ubuntu1 is to be installed
Depends: g++-4.1 (= 4.1.2-27ubuntu1) but it is not going to be installed
E: Broken packages
2) I now only have gcc-4.1-base and -multilib installed. When compiling a mex file:
/usr/bin/ld: cannot find -lstdc++
collect2: ld returned 1 exit status
Something is wrong with libstdc++6-4.1-dev.
So, is there an easier fix than compiling it myself?
Thanks
I assume you use the x64 version of Ubuntu and your Matlab version is also 64-bit. There are two ways that may solve the problem mentioned in 2):
Open mexopts.sh (located in the yourhome/.matlab/MATLAB VERSION/ directory),
and comment out CLIBS="$CLIBS -lstdc++" for glnxa64.
Check whether libstdc++.so exists in the /usr/lib directory. If not, create a symbolic link /usr/lib/libstdc++.so pointing to MATLABROOT/sys/os/glnxa64/libstdc++.so.6.0.xx (xx is a number that may change with the Matlab version).
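For the second option, the command would look something like this; the MATLAB root and the exact .so version are illustrative, so check what your installation actually ships:
sudo ln -s /usr/local/MATLAB/R2011a/sys/os/glnxa64/libstdc++.so.6.0.13 /usr/lib/libstdc++.so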
I wouldn't compile it myself. I remember how long that takes (it's one of the longest parts of building any Linux system)...
So I presume you don't have a fully functional GCC right now? I got this to install from apt-get in Ubuntu 10.10 x64...
Okay, so you have broken dependencies, eh? I know this is not elegant, but try downloading the deb files manually (http://packages.ubuntu.com/maverick/gcc-4.1 for 10.10 or http://packages.ubuntu.com/lucid/gcc-4.1 for 10.04), save them to a folder, cd into the folder from Terminal, and run this for each package:
dpkg -i package.deb
There is a more elegant way to do this, but I just don't know it...
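One slightly less manual variant: if dpkg -i leaves unmet dependencies behind, apt can usually resolve them afterwards (the folder name is just an example):
cd ~/Downloads/gcc41-debs
sudo dpkg -i *.deb
sudo apt-get -f install   # pulls in whatever dpkg reported as missing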