I have a problem compiling LLVM and Clang with the bpf and x86 targets on a Debian machine. The GCC version is 6.2, and Python is present on the system. The compilation has been running for more than 24 hours already. Now it hangs at
[ 96%] Linking CXX executable ../../bin/opt
Should I just keep waiting, or is there something I can do about it?
Most probably your linker is running out of RAM. A few suggestions:
Release builds tend to require less RAM for linking than Debug ones
Use gold, not bfd ld (see the configuration sketch after this list)
Add more RAM :)
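For illustration, a leaner configuration might look roughly like this (a sketch only; the option names are the standard LLVM/CMake ones, but check them against your LLVM version, and the source path is a placeholder):
# Release build, only the needed targets, gold instead of bfd ld,
# and at most one link job at a time (LLVM_PARALLEL_LINK_JOBS needs the Ninja generator)
$ cmake -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_TARGETS_TO_BUILD="X86;BPF" \
    -DLLVM_USE_LINKER=gold \
    -DLLVM_PARALLEL_LINK_JOBS=1 \
    ../llvm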
I tried almost a hundred things to make this work, but nothing seems to be working.
I recently acquired a MacBook Pro M1 Max (so arm64 architecture); the system ships with clang/g++ by default.
I wanted to install the Boost library. Using Homebrew, version 1.80 was installed, but I need to work on a project using version 1.65.1 (I tried compiling my project against 1.80 and tons of undefined symbols and errors were raised from the Boost library even though I have all of them, so I'm guessing I need to install the exact version required),
so I decided to build and compile Boost myself, following the Boost guide:
https://www.boost.org/doc/libs/1_65_1/more/getting_started/unix-variants.html
Following section 5.1, I tried to use the bootstrap script, which fails with the Darwin toolset (apparently some Clang warning is treated as an error). I then worked around it by changing the Boost source code like this:
https://github.com/boostorg/build/commit/48e9017139dd94446633480661e5447c7e0d8b1b
But there are still a lot of issues during the compilation.
I don't know what to do to be able to compile with clang, and I don't even know whether this will be compiled for the arm64 architecture.
Anyway, I installed the gcc compiler and tried the gcc toolset:
./bootstrap --with-toolset=gcc
The bootstrap works, but then running the b2 script causes a segmentation fault instantly, on every command I tried (even the --help option raised an exception...).
Why is building Boost so complicated on an ARM chipset?
What can I do to build Boost (with either clang or gcc, as an arm or cross-compiled universal library)?
I'm desperate at this point.
Thanks for the help.
I tried everything:
with clang (darwin)
with gcc
with options to add arm64 as the architecture (see the sketch below)
changing the Boost source code as a fix
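For context, the kind of b2 invocation I mean is roughly the following - the toolset name and flag values are just an illustration, not necessarily what Boost 1.65.1 accepts, since it predates Apple Silicon:
$ ./b2 toolset=clang cxxflags="-arch arm64" linkflags="-arch arm64" \
      link=static,shared stage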
I came across several SO questions lately regarding specific aspects of improving the turn-around time of CMake-enabled C++ projects (like "At what level should I distribute my build process?" or "cmake rebuild_cache for just a subdirectory?"), and I was wondering whether there is more general guidance that makes use of the specific possibilities CMake offers. If cross-platform compile-time optimization isn't possible, I'm mainly interested in Visual Studio or GNU toolchain based approaches.
And I'm already aware of, and investing in, the generally recommended areas to speed up C++ builds:
Change/Optimize/fine-tune the toolchain
Optimize your code base/software architecture (e.g. by reducing dependencies and using well-defined sub-projects - unit tests)
Invest in better hardware (SSD, CPU, memory)
as recommended here, here or here. So my focus in this question is on the first point.
Plus I know of the recommendations to be found in CMake's Wiki:
CMake: building with all your cores
CMake Performance Tips
The former just covers the basics (parallel make); the latter mostly deals with how to speed up the parsing of CMake files.
Just to make this a little more concrete: if I take my CMake example from here with 100 libraries, using MSYS/GNU I get the following time measurements:
$ cmake --version
cmake version 3.5.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
$ time -p cmake -G "MSYS Makefiles" ..
-- The CXX compiler identification is GNU 4.8.1
...
-- Configuring done
-- Generating done
-- Build files have been written to: [...]
real 27.03
user 0.01
sys 0.03
$ time -p make -j8
...
[100%] Built target CMakeTest
real 113.11
user 8.82
sys 33.08
So I have a total of ~140 seconds and my goal - for this admittedly very simple example - would be to get this down to about 10-20% of what I get with the standard settings/tools.
Here's what I had good results with using CMake and Visual Studio or GNU toolchains:
Exchange GNU make for Ninja. It's faster, makes use of all available CPU cores automatically, and has good dependency management. Just be aware of:
a.) You need to set up the target dependencies in CMake correctly. If the build gets to a point where it has a dependency on another artifact, it has to wait until that artifact has been compiled (synchronization points).
$ time -p cmake -G "Ninja" ..
-- The CXX compiler identification is GNU 4.8.1
...
real 11.06
user 0.00
sys 0.00
$ time -p ninja
...
[202/202] Linking CXX executable CMakeTest.exe
real 40.31
user 0.01
sys 0.01
b.) Linking is always such a synchronization point. So you can make more use of CMake's Object Libraries to reduce them, but it makes your CMake code a little uglier.
$ time -p ninja
...
[102/102] Linking CXX executable CMakeTest.exe
real 27.62
user 0.00
sys 0.04
Split less frequently changed or stable code parts into separate CMake projects and use CMake's ExternalProject_Add() or - if you e.g. switch to binary delivery of some libraries - find_library().
Think of a different set of compiler/linker options for your daily work (but only if you also have some test time/experience with the final release build options):
a.) Skip the optimization parts
b.) Try incremental linking
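As a hedged sketch, such a development configuration could be as simple as the following (flag values are examples for daily work, not release settings):
# unoptimized Debug build for fast turn-around
$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS="-O0" ..
# with MSVC, Debug configurations typically already link incrementally (/INCREMENTAL)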
If you often make changes to the CMake code itself, think about rebuilding CMake from sources optimized for your machine's architecture. CMake's officially distributed binaries are just a compromise to work on every possible CPU architecture.
When I use MinGW64/MSYS to rebuild CMake 3.5.2 with e.g.
cmake -DCMAKE_BUILD_TYPE:STRING="Release" \
      -DCMAKE_CXX_FLAGS:STRING="-march=native -m64 -Ofast -flto" \
      -DCMAKE_EXE_LINKER_FLAGS:STRING="-Wl,--allow-multiple-definition" \
      -G "MSYS Makefiles" ..
I can accelerate the first part:
$ time -p [...]/MSYS64/bin/cmake.exe -G "Ninja" ..
real 6.46
user 0.03
sys 0.01
If your file I/O is very slow, then - since CMake works with dedicated binary output directories - make use of a RAM disk. If you are still using a hard drive, consider switching to a solid state disk.
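On a Linux host that could look like the following (a sketch; mount point, size and paths are assumptions, and on Windows you would need a RAM-disk driver instead):
# put the binary output directory on a tmpfs RAM disk
$ sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk
$ mkdir /mnt/ramdisk/build && cd /mnt/ramdisk/build
$ cmake -G Ninja /path/to/source && ninja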
Depending on your final output file, exchange the GNU standard linker for the Gold linker. Even faster than Gold is lld from the LLVM project. You have to check whether it already supports the features you need on your platform.
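With a GNU or Clang toolchain, the linker can usually be switched through the compiler driver, for example (assuming ld.gold is installed; -fuse-ld=lld works with Clang, GCC support depends on the version):
$ cmake -G Ninja -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=gold" ..
# or, if lld is available:
$ cmake -G Ninja -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=lld" ..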
Use Clang/C2 instead of the Visual C++ compiler. Performance recommendations for the Visual C++ compiler itself are provided by the Visual C++ team, see https://blogs.msdn.microsoft.com/vcblog/2016/10/26/recommendations-to-speed-c-builds-in-visual-studio/
Incredibuild can reduce the compilation time.
References
CMake: How to setup Source, Library and CMakeLists.txt dependencies?
Replacing ld with gold - any experience?
Is the lld linker a drop-in replacement for ld and gold?
For speeding up the CMake configure time see: https://github.com/cristianadam/cmake-checks-cache
LLVM + Clang got a ~3x speedup.
I need to debug a multi-threaded program that keeps throwing horrible segmentation faults, and I chose Valgrind to do so. The problem, though, is that the code is cross-compiled and runs on an ARMv5 machine. I tried to build Valgrind for that architecture, but configure failed because that version is not supported:
$ CC=arm-linux-gnueabi-gcc ./configure --prefix=/opt/valgrind \
--host=armv5-none-linux-gnueabi --target=arm-none-linux-gnueabi \
--build=i386-ubuntu-linux
(...)
checking for a supported CPU... no (armv5)
configure: error: Unsupported host architecture. Sorry
Is there a way to solve this issue? Could it somehow be possible to compile for ARMv7 (which I read is fully supported) and use it on my platform? I found this question, but it was asked two years ago and the answer points to a patch for older versions of Valgrind.
Even if you manage to compile Valgrind for an ARMv5 instruction set CPU, you cannot run it, since Valgrind only runs on ARMv7 CPUs.
Valgrind cross compilation for ARMv5tel
ARM support seems to have been added in "Release 3.6.0 (21 October 2010)":
http://valgrind.org/docs/manual/dist.news.html
But it has to run on an ARMv7 CPU, even if it supports older instruction sets.
I compiled Valgrind for an ARMv5 and it does not run; it throws "Illegal instruction".
https://community.nxp.com/message/863066?commentID=863066#comment-863066
In the configure file, change "armv7*" to "arm"; then the compilation will succeed.
Using MinGW and CMake, I've compiled LLVM, Clang and Compiler-RT, both from SVN and using the released source code (3.2).
I've modified InitHeaderSearch.cpp (in tools/clang/lib/frontend) to find GCC 4.7.2 headers.
I've set the compile options to Release and disabled assertions.
Clang seems to work properly, but it takes 4-5 seconds to start: even typing "clang --version" in the console does this. Compiling a project takes a lot of time.
What am I missing? I've used rubenvb's old MinGW+Clang build (GCC 4.6), and it didn't have this problem. Is there any compilation flag I need to use?
This issue is discussed here http://lists.cs.uiuc.edu/pipermail/cfe-dev/2012-April/020651.html
AFAIK the problem is caused by a large relocation table and an inefficient MinGW implementation (http://sourceforge.net/p/mingw/bugs/1747/).
Adding the -static flag to the linker flags should resolve this issue. You should invoke cmake with
-DCMAKE_EXE_LINKER_FLAGS=-static -DCMAKE_MODULE_LINKER_FLAGS=-static
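Put together, the full invocation could look roughly like this (a sketch; the generator, build type and source path are assumptions):
$ cmake -G "MSYS Makefiles" \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_EXE_LINKER_FLAGS=-static \
    -DCMAKE_MODULE_LINKER_FLAGS=-static \
    ../llvm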
I have trouble understanding the gcc compiler provided by OS X 10.6 Snow Leopard, mainly because of my lack of experience with 64-bit environments.
$ cat >foo.c
main() {}
$ gcc foo.c -o foo
$ file foo
foo: Mach-O 64-bit executable x86_64
$ lipo -detailed_info foo
input file foo is not a fat file
Non-fat file: foo is architecture: x86_64
However, my architecture is reported as Intel i386 (I have one of the latest Intel Core 2 Duo MacBooks)
$ arch
i386
and the compiler targets i686-apple-darwin10
$ gcc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5646)
Of course, if I compile for 32 bits I get a 32-bit executable.
$ gcc -m32 foo.c -o foo
$ file foo
foo: Mach-O executable i386
but I don't get the big picture. The default setup for the compiler is to produce x86_64 executables, even though arch says I have a 32-bit machine (why? the Core 2 is 64-bit); even though (I guess) I am running a 32-bit kernel; even though I have a compiler targeting the i686-apple-darwin platform. Why? How can they run? Should I compile for 64 or 32 bits?
This question comes from my attempt to compile gcc 4.2.3 on the Mac, but I am having a bunch of issues with gmp, mpfr and libiberty getting (in some cases) compiled for x86_64. Should I compile everything as x86_64? If so, what's the target (not i686-apple-darwin10, I guess)?
Thanks for the help
The default compiler on Snow Leopard is gcc 4.2, and its default architecture is x86_64. The typical way to build Mac software is to build multiple architectures in separate passes, then use lipo to combine the results. (lipo only combines single-arch files into a multiple-arch file, or strips archs out of a multi-arch file. It has no utility on single-arch files, as you discovered.)
The bitness of the compiler has nothing to do with anything. You can build 32-bit binaries with a 64-bit compiler, and vice versa. (What you think of as the "target" of the compiler actually describes the compiler's own executable, which is a different thing.)
The bitness of the kernel has nothing to do with anything. You can build and run 64-bit binaries when booted on a 32-bit kernel, and vice versa.
What matters is whether, when you link, you have the appropriate architectures available. You can't link 32-bit builds against 64-bit binaries or vice versa. So the important thing is to see what the architectures of your link libraries are, make sure they're coherent, then build your binary for the same architecture so you can link against the libraries you have.
i686-apple-darwin10.0.0 contains an x86_64 folder which is not understood by most versions of autotools. In other words, I'd say that the gcc compiler is unfortunately nothing short of a joke on Snow Leopard. Why you would bundle 32-bit and 64-bit libraries into i686-apple-darwin10.0.0 is beyond me.
$ ls /usr/lib/gcc
i686-apple-darwin10 powerpc-apple-darwin10
You would need to change all your autotools configure files to handle looking in *86-darwin directories and then look for 64-bit libraries, I'd imagine.
As with your system, my Mac mini says it's i386 even though it's obviously a 64-bit platform - another mistake, since it ships with 64-bit hardware.
$ arch
i386
Apple toolchains support multiple architectures. If you want to create a fat binary that contains x86 and x86_64 code, then you have to pass the parameters -arch i386 -arch x86_64 to gcc. The compiler will compile your code twice, for both platforms, in one go.
Adding -arch i386 -arch x86_64 to CFLAGS may allow you to compile gmp, mpfr, and whatnot for multiple archs in one go. Building libusb that way worked for me.
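For example, compiling the trivial foo.c from the question as a fat binary and then inspecting it might look like this (exact tool output may differ):
$ gcc -arch i386 -arch x86_64 foo.c -o foo
$ lipo -info foo
Architectures in the fat file: foo are i386 x86_64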
This answer is wrong, but see comments below
The real question is... how did you get a 32-bit version of OS X? I wasn't aware that Snow Leopard had a 32-bit version, as all of Apple's Intel chips are Core 2 or Xeon, which support the x86_64 architecture.
Oh, and Snow Leopard only works on Intel chips.
Edit: Apparently Snow Leopard starts in 32-bit mode.