So I'm trying to build a cross-compiler toolchain based on the latest GCC (gcc-5.1.0). GCC requires GMP, so I downloaded GNU MP 6.0 (gmp-6.0.0).
The instructions for building GMP suggest (for my purposes) passing the option --enable-mpbsd, which is documented as follows:
The meaning of the new configure options:
--enable-cxx
This parameter enables C++ support
--enable-mpbsd
This builds the Berkeley MP compatibility library
However, when I run configure, it warns me:
configure: WARNING: unrecognized options: --enable-mpbsd
This suggests that the option was introduced in 5.x and dropped again in 6.x, or replaced by something else ...?!
The exact command line I use is (just for completeness):
./configure --prefix=$PREFIX --enable-shared --enable-static --enable-mpbsd --enable-fft --enable-cxx --host=x86_64-pc-freebsd6
PS: for now I intend to disregard this warning and proceed anyway. I'll report back whether this still turns out as a functional toolchain.
--enable-mpbsd
This builds the Berkeley MP compatibility library
This was potentially useful 20 years ago, but it hasn't been for a long time, which is why it was removed from GMP. Linux From Scratch is wrong to recommend the use of that option; it was never required (though it didn't hurt). Please contact them so they can update their instructions.
By the way, you do not need --enable-shared --enable-static --enable-fft; they are the defaults.
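For example, keeping only the options that still do something, a trimmed-down version of the command line above would look roughly like this (same prefix and host triplet as in the question):
./configure --prefix=$PREFIX --enable-cxx --host=x86_64-pc-freebsd6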
Related
I have gcc 4.8.5 installed on a Red Hat 7.5 machine.
I wish to compile a software package on this machine.
In order to compile this package, I need to run "make".
However, when I run this, I see the following error message "error: ‘make_unique’ is not a member of ‘std’".
My understanding (possibly incorrect) is that this message originates from the fact that 4.8.5 uses C++11 and "make_unique" requires C++14. So I presume the way to compile this is to specify that C++14 should be used when I run "make".
How do I do this?
I have tried to set the C++ to 14 as follows:
cmake -G "Unix Makefiles" -D CMAKE_CXX_STANDARD=14
And then I ran "make".
But this gave the same error message.
You are using a compiler that does not support C++14. As stated in the documentation:
GCC supports the original ISO C++ standard (1998) and contains experimental support for the second ISO C++ standard (2011).
That is no surprise: GCC 4.8 was originally released in 2013, so it would be odd if it implemented a standard introduced a year later.
To use C++14 or even C++17, you can compile with RH Devtoolset 8. The resulting software will run on the target OS without any additional effort because of the way the DTS compiler works: symbols available in the OS-shipped libstdc++ are resolved dynamically, while the C++11-and-above parts are linked into your executable from the specially built static libstdc++.a provided by DTS.
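As a rough sketch (the package names are assumed to be the standard devtoolset-8 ones; which repositories provide them depends on your RHEL/CentOS setup):
sudo yum install devtoolset-8-gcc devtoolset-8-gcc-c++
scl enable devtoolset-8 -- bash    # opens a shell with the DTS g++ first in PATH
make                               # re-run the failing build inside that shell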
I'm currently building Binutils 2.32 for the armv7l-unknown-linux-gnueabihf target, with this configure command:
chronos@localhost ~/Downloads/tarballs/binutils-2.32 $ ./configure --prefix=/usr/local/opt/arm-cross --target=armv7l-unknown-linux-gnueabihf --enable-shared --enable-host-shared --disable-static --enable-plugins --enable-gold=default --enable-ld --with-system-zlib
I ran make -j3 && make install, and no errors occurred.
However, when I added /usr/local/opt/arm-cross/bin to my PATH and ran armv7l-unknown-linux-gnueabihf-objdump, this error occurred:
armv7l-unknown-linux-gnueabihf-objdump: can't set BFD default target to `armv7l-unknown-linux-gnueabihf': invalid bfd target
How do I fix this? I searched on Stack Overflow and Google and couldn't come up with anything.
You configured with --enable-shared --enable-host-shared --disable-static. This means that you need to make sure that the binutils programs can find the shared objects they need. Therefore, in addition to PATH, you have to set LD_LIBRARY_PATH, or otherwise make the BFD library available to your custom binutils build.
This could however affect how other installed binutils versions find their BFD library, so it may be easier to link your version statically.
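A sketch of both approaches, assuming the library directory matches the --prefix used above:
# Option 1: make the freshly installed shared BFD/opcodes libraries visible
export PATH=/usr/local/opt/arm-cross/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/opt/arm-cross/lib:$LD_LIBRARY_PATH
armv7l-unknown-linux-gnueabihf-objdump --version
# Option 2: reconfigure and rebuild with static linking instead
./configure --prefix=/usr/local/opt/arm-cross --target=armv7l-unknown-linux-gnueabihf --disable-shared --enable-static --enable-plugins --enable-gold=default --enable-ld --with-system-zlib
make -j3 && make install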
My ultimate goal is to get the Python package graph_tool working on my system, and also in ipynb if possible. I have already run brew install graph-tool, as indicated here, but that is still insufficient.
So I follow the conda instructions here and make decent progress, until I hit a compiler issue during configuration.
$ conda create -n py36env python=3.6.3 anaconda
(py36env) $ conda install -c conda-forge cgal
... and so forth with the other required libraries
(py36env) Tams-MacBook-Pro:graph-tool-2.25 tamtam$ ./configure --prefix=/Users/tamtam/anaconda3/envs/py36env/ --with-python-module-path=/Users/tamtam/anaconda3/envs/py36env/lib/python3.6/site-packages
.
.
.
checking whether g++ supports C++14 features by default... no
checking whether g++ supports C++14 features with -std=gnu++14... no
checking whether g++ supports C++14 features with -std=gnu++1y... no
configure: error: *** A compiler with support for C++14 language features is required.
I'm not very familiar with compiler things, but I check my system as follows:
(py36env) Tams-MacBook-Pro:graph-tool-2.25 tamtam$ conda list | grep gcc
gcc 4.8.5 8
And outside of the conda env:
$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/Users/tamtam/anaconda3/envs/py36env/bin/../libexec/gcc/x86_64-apple-darwin11.4.2/4.8.5/lto-wrapper
Target: x86_64-apple-darwin11.4.2
Configured with: ./configure --prefix=/Users/ray/mc-x64-3.5/conda-bld/gcc-4.8_1477649012852/
Thread model: posix
gcc version 4.8.5 (GCC)
And with Homebrew
$ brew list --versions | grep gcc
gcc 7.1.0 7.2.0
$ /usr/local/bin/gcc-7 --version
gcc-7 (Homebrew GCC 7.2.0) 7.2.0
$ echo $PATH
/Users/tamtam/anaconda3/bin:/Users/tamtam/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
So my questions are (edits included below):
1) What's the difference between the conda gcc 4.8.5 and Homebrew gcc 7.2 on my system? I'm confused because they're each up-to-date packages.
EDIT: Nowhere online that I perused explicitly states this, but my understanding (an educated guess, because it's 3am and I need sleep) is that conda gcc 4.8.5 is simply equivalent to brew gcc 4.8.5, and that Anaconda is just way behind on providing an 'official' gcc supporting C++14. In the meantime I ran conda install -c salford_systems gcc-5 and ./configure seems to find itself a C++14 compiler. YAY (but this also led me to a new problem T.B.A.)
NOTE: I had also tried running the ./configure script with both conda gcc 4.8.5 and brew gcc 7.2.0 (switching of the default gcc compiler guided here), but the same error of ./configure not finding a C++14 compiler came up (so ignore this)
2) I also have Xcode 8.3.3, which comes with clang; that is what gcc is symlinked to... ?? No idea about the setup/relation
3) How can I adjust my system (for example, paths) so that the configuration process will detect a C++14 compiler? EDIT: resolved per 1)
(This solution is also included within the edits at the bottom of my question.)
Nowhere online that I perused explicitly states this, but my understanding (an educated guess, because it's 3am and I need sleep) is that conda gcc 4.8.5 is simply equivalent to brew gcc 4.8.5, and that Anaconda is just way behind on providing an 'official' gcc supporting C++14. In the meantime I ran conda install -c salford_systems gcc-5 and ./configure seems to find itself a C++14 compiler. YAY (but this also led me to a new problem)
NOTE: I had also tried running the ./configure script with both conda gcc 4.8.5 and brew gcc 7.2.0 (switching of the default gcc compiler guided here), but the same error of ./configure not finding a C++14 compiler arises (so ignore this)
As a rule of thumb, if a product provides its own build tools (compiler, C headers, static libraries, include/library paths etc) instead of using the system's ones, you should use those when building things to use within its environment.
That's because, depending on include files, compiler, compiler version, compiler flags, predefined macros etc the same sources and libraries built with different toolchains can be binary incompatible. This goes double for C++ where there are no ABI standards at all. (In C, there are de facto ones for most platforms where there are competing compilers due to the need to do system calls.)
Now, what you have on your system:
The system's stock gcc (if any) is at /usr/bin (so includes are at /usr/include, static libraries at /usr/lib etc)
Homebrew's gcc 7.2 at /usr/local/bin (includes are probably at /usr/local/include)
Anaconda also has a gcc package that, if installed, would reside somewhere like /Users/<login>/anaconda3/envs/<env name>/bin
If an environment provides its own toolchain, it also provides some way to make build scripts select that toolchain when building things. A common way is to set the relevant environment variables (PATH, INCLUDE, LIB), either directly or through some tool that the environment provides. Homebrew executables reside in /usr/local/bin, which is always present in the UNIX PATH as per the FHS; Anaconda adds itself to PATH whenever its environment is activated.
So, as you yourself guessed, since Anaconda provides its own gcc packages (and thus enforces the compatibility of their output with its binary packages via package metadata - if there are known incompatibilities, the installation will fail due to requirement conflicts), you need to install one that meets graph-tool's requirements -- which is gcc 5, available via the gcc-5 package from the salford_systems channel. Then specify --prefix as you already have.
You may need to install other dependencies, too, if you didn't already or you need optional features that they're required for.
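Putting it together, the sequence looks roughly like this (the compiler paths inside the env are an assumption -- check what the salford_systems package actually installs):
source activate py36env
conda install -c salford_systems gcc-5
export CC="$CONDA_PREFIX/bin/gcc" CXX="$CONDA_PREFIX/bin/g++"   # assumed install location
./configure --prefix=/Users/tamtam/anaconda3/envs/py36env/ --with-python-module-path=/Users/tamtam/anaconda3/envs/py36env/lib/python3.6/site-packages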
The question is about a specific combination of versions but is relevant more generally.
I've just dist-upgraded from Kubuntu 12.04 to 14.04. Now, when I want to compile CUDA code (with CUDA 6.5), I get:
#error -- unsupported GNU version! gcc 4.9 and up are not supported!
I installed gcc-4.8 (and 4.7), and tried to use the symlinks-in-/usr/local/cuda/bin solution suggested here:
CUDA incompatible with my gcc version
but this doesn't work. What should I do?
This solution is relevant to multiple combinations of CUDA and GCC versions.
You can tell CUDA's nvcc to use a specific version of gcc. So, suppose you want gcc 4.7 for use with CUDA 6. You run:
sudo apt-get install gcc-4.7 g++-4.7
and then add the following switch to your nvcc command-line:
nvcc --compiler-bindir /usr/bin/gcc-4.7 # rest of the command line here
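For reference, a complete sketch of an invocation (the source and output names are just placeholders; --compiler-bindir also has the short form -ccbin):
nvcc -ccbin /usr/bin/gcc-4.7 -o my_kernel_test my_kernel_test.cu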
If you're building with CMake, add an appropriate setting before looking for CUDA to your CMakeLists.txt, e.g.:
set(CUDA_HOST_COMPILER /usr/bin/gcc-4.7) # -> ADD THIS LINE <-
find_package(CUDA)
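Alternatively, you can leave CMakeLists.txt untouched and set the same cache variable on the cmake command line (a sketch, assuming an out-of-source build directory one level below the source):
cmake -DCUDA_HOST_COMPILER=/usr/bin/gcc-4.7 ..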
Also, it seems clang can compile CUDA as well, maybe that's worth experimenting with (although you would have to build it appropriately).
Note: Some Linux (or other OS) distributions don't have packages for multiple versions of gcc in the same release of the OS distribution. I would advise against trying to install a package from another release of the distribution on an older release; consider building gcc from source instead. That's not entirely trivial, but it is quite doable - and of course it's your only option if you don't have root access to your machine.
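A minimal sketch of such a from-source build (the version and the install prefix are only examples):
tar xf gcc-4.7.4.tar.bz2 && cd gcc-4.7.4
./contrib/download_prerequisites        # fetches matching GMP/MPFR/MPC into the tree
mkdir ../gcc-build && cd ../gcc-build
../gcc-4.7.4/configure --prefix=$HOME/opt/gcc-4.7 --enable-languages=c,c++ --disable-multilib
make -j4 && make install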
Switch back to a supported config. They are listed in the getting started document for any recent CUDA distribution.
For your particular configuration you have currently listed, you might have better luck with CUDA 7 RC, which is now available to registered developers.
I had a similar issue with CUDA Toolkit 7.5 and gcc 5.2.1.
I modified the host_config.h file in /usr/local/cuda/include/:
Just remove the lines where it checks the gcc version. That solved my problem.
Credit goes to Darren Garvey (https://groups.google.com/forum/#!topic/torch7/WaNmWZqMnzw)
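For example, a crude sketch of that edit (back up the header first; the exact wording of the version check differs between CUDA releases, so inspect the file rather than relying on this pattern):
sudo cp /usr/local/cuda/include/host_config.h /usr/local/cuda/include/host_config.h.bak
sudo sed -i 's|^#error -- unsupported GNU version.*|// &|' /usr/local/cuda/include/host_config.h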
Very often you will find that CUDA has had newer releases by the time you encounter this problem. For example, the original formulation of the question was about CUDA 6 and GCC 4.9; CUDA 7 supported GCC 4.9, CUDA 8 supports GCC 5.x, and so on.
I am attempting to compile gcc 4.4.0 on OpenSolaris 2009.06.
Currently on the box (which is an AMD 64-bit machine), I have gcc 3.4.6 installed.
I unpacked the gcc 4.4.0 tarball.
I set the following env variables:
export CXX=/usr/local/bin/g++
export CC=/usr/local/bin/gcc
Then I ran "configure && make" and this is the error message that I got:
checking for i386-pc-solaris2.11-gcc... /export/home/me/wd/gcc/gcc-4.4.0/host-i386-pc-solaris2.11/gcc/xgcc -B/export/home/me/wd/gcc/gcc-4.4.0/host-i386-pc-solaris2.11/gcc/ -B/usr/local/i386-pc-solaris2.11/bin/ -B/usr/local/i386-pc-solaris2.11/lib/ -isystem /usr/local/i386-pc-solaris2.11/include -isystem /usr/local/i386-pc-solaris2.11/sys-include -m64
checking for suffix of object files... configure: error: in `/export/home/me/wd/gcc/gcc-4.4.0/i386-pc-solaris2.11/amd64/libgcc':
configure: error: cannot compute suffix of object files: cannot compile
See `config.log' for more details.
Does anyone have any suggestions as to how to work around this error?
/Edit:
Content of the config.log is posted here: link text
Normally the GCC build is bootstrapped, i.e. first the system compiler is used to build the GCC C compiler, and then the freshly built compiler is used to recompile GCC once again (and then once more). The configure line shows that it is not the system compiler but the already-built GCC compiler that is used for the configure test there.
Since it fails, the problem is that the freshly built GCC is somehow "stillborn" here. If config.log does not help you, I'd suggest asking at gcc-help@gcc.gnu.org.
EDIT: Ah-ha, I think it is the assembler. You are using the GNU assembler, but the unsupported options look like they were meant for the Sun assembler. This should be solved by adding the --with-gnu-as configure option (and then possibly having to specify the assembler's path explicitly with --with-as=/usr/gnu/bin/as).
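In other words, something along these lines (using the path suggested above; adjust it if GNU as lives elsewhere on your system):
./configure --with-gnu-as --with-as=/usr/gnu/bin/as && make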
You can also take a look at Solaris-specific GCC build instructions.
There's a readily available build for gcc4, which you can try updating. Its current version is 4.3.3. To get started, install pkg-get from OpenCSW and check out the build from the subversion repository:
svn co https://gar.svn.sourceforge.net/svnroot/gar/csw/mgar/pkg/gcc4/trunk/ gcc4
cd gcc4
gmake package