Using gcc along with ccache

I am considering using ccache with gcc-compiled code on a team-wide basis (the same ccache cache would be shared by all developers on the same machine).
Since we are talking about a commercial product, correctness of compilation is a top priority.
Here come the questions:
Is compilation using ccache safe and reproducible?
Are there edge cases in which ccache mistakenly assumes a cache hit?
If I check out the source code and compile it, I expect to receive the same products
(exactly the same libraries/binaries) each time I repeat a fresh compilation.
This is a must for a commercial product.
Are there open source or commercial products that use ccache as an integral part of their
build system? That would make it easier to convince my colleagues to use ccache.
Thanks

According to its manual, ccache determines whether it has compiled some object before based on the following:
the pre-processor output from running the compiler with -E
the command line options
the real compiler's size and modification time
any stderr output generated by the compiler
If some PHB is still worried about any assumed risk you take on because of ccache, use it only for development builds and build the final product with the compiler directly, without any front end. Or you could clear the cache before building the final product.
Update: I don't know of products using ccache as an integral part of their build system, but it is really trivial to integrate into any environment where you can set the compiler's path. E.g., for autoconf:
CC="ccache gcc" ./configure
And after looking at the author's name, I'd say it's a pretty safe assumption that it has been widely used within the Samba team.
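For the shared, machine-wide cache described in the question, ccache's CCACHE_DIR and CCACHE_UMASK environment variables can point all developers at one cache directory. This is a configuration sketch; the /srv/ccache path is a hypothetical example, not something ccache requires:

```shell
# Configuration sketch (the /srv/ccache path is hypothetical):
export CCACHE_DIR=/srv/ccache   # one cache directory shared by the whole team
export CCACHE_UMASK=002         # make new cache entries group-writable
CC="ccache gcc" ./configure
make
```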
Update in response to Ringding's comment about the usage of stderr: From ccache's point of view, one interesting bit of information is the C compiler's version and configuration string. gcc writes that to standard error:
$ gcc -v 2>err
$ cat err
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 4.3.4-2' --with-bugurl=file:///usr/share/doc/gcc-4.3/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.3 --program-suffix=-4.3 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-targets=all --with-tune=generic --enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
Thread model: posix
gcc version 4.3.4 (Debian 4.3.4-2)
I'd bet that ccache uses this or a similar output. But, hey, you can always look at its source code. :-)

I am personally familiar only with ccache, which is very simple to use, and I find it extremely useful for my large-scale private projects.
However, as for team-wide use, I have no experience yet.
You may also be interested in AO (Audited Objects):
In general:
it provides a more robust mechanism and can use a distributed environment for caching
ccache speeds up only compilation, while AO speeds up linking too
it is not limited to C/C++
Not long after I posted this answer (1.5 years ago...), I managed to convince our build and R&D managers to integrate ccache into the automated build system, and they are grateful for it. The company employs more than 200 developers, so it really works. As for the linking phase, it is still an issue.

Related

Cross compiling FFTW for ARM Neon

I am trying to compile FFTW3 to run on ARM NEON (more precisely, on a Cortex-A53). The build env is x86_64-pokysdk-linux, the host env is aarch64-poky-linux. I am using the aarch64-poky-linux-gcc compiler.
I used the following command at first:
./configure --prefix=/build_env/neon/neon_install_8 --host=aarch64-poky-linux --enable-shared --enable-single --enable-neon --with-sysroot=/opt/poky/2.5.3/sysroots/aarch64-poky-linux "CC=/opt/poky/2.5.3/sysroots/x86_64-pokysdk-linux/usr/bin/aarch64-poky-linux/aarch64-poky-linux-gcc -march=armv8-a+simd -mcpu=cortex-a53 -mfloat-abi=softfp -mfpu=neon"
The compiler did not support -mfloat-abi=softfp or -mfpu=neon. It also did not let me define the path to the sysroot this way.
Then I used the following command:
./configure --prefix=/build_env/neon/neon_install_8 --host=aarch64-poky-linux --enable-shared --enable-single --enable-neon "CC=/opt/poky/2.5.3/sysroots/x86_64-pokysdk-linux/usr/bin/aarch64-poky-linux/aarch64-poky-linux-gcc" "CFLAGS=--sysroot=/opt/poky/2.5.3/sysroots/aarch64-poky-linux -mcpu=cortex-a53 -march=armv8-a+simd"
This command succeeded with this config log and this config.h. Then I ran make followed by make install. I then copied my shared library file into my host env and used fftwf_ instead of fftw_ in my code base. The final step was to recompile the program. I ran a test and compared the times for both algorithms using <sys/resource.h>. I also called fftw[f]_forget_wisdom() for both algorithms so that the comparison would be fair. However, I am not getting a speedup. I expected that using a SIMD architecture (NEON in our case) would accelerate the FFTW library.
I would really appreciate it if anyone could point out something I am doing wrong, so that I can try a fix and see if I get the performance boost I am looking for.

Run a program built with gcc8 on a production environment without gcc8

My development/production environments are all CentOS 7.7.
In order to compile my program with gcc-8.3.0, I installed "devtoolset-8" on my development env, but it cannot be used in the same way as the gcc-4.8.5 that originally shipped with CentOS 7.
Every time I need to compile a program, I must use "scl enable devtoolset-8 -- bash" to switch to gcc8 instead of gcc4.8.5.
When the program is deployed onto the production env, there is no gcc8, nor libstdc++.so.6.0.25, so it cannot run.
I guess libstdc++.so.6.0.25 is released together with gcc8? I can neither install "devtoolset-8" on the production env, nor build gcc8 from source there.
The version of libstdc++ that can be installed from the official yum repo of CentOS is libstdc++.so.6.0.19, hence my program cannot be loaded on the production env.
How can I make such programs run?
Thanks!
To avoid having to copy or ship a separate libstdc++.so and instead link statically against the C++ runtime (as suggested in a comment), you can link your C++ programs with -static-libstdc++. Also specifying -static-libgcc makes sure the program does not depend on a recent enough version of libgcc_s.so on the system, although that is rarely a problem.
There can also be the issue of the target system having a version of glibc that is too old relative to the build system. In that case, one can build a gcc of any recent version on the older system, so that the resulting C++ executables as well as libstdc++ are linked against the older glibc. Linking C++ programs with -static-libstdc++ again helps avoid depending on the program being able to find libstdc++.so at run time.
Finally, the C++ program could also be linked with -static, so that it does not depend on any dynamic libraries at all.

GCC built from source in different location is incorrectly using same shared libs as native GCC

I'm a student doing research involving extending the TM capabilities of gcc. My goal is to make changes to gcc source, build gcc from the modified source, and, use the new executable the same way I'd use my distro's vanilla gcc.
I built and installed gcc in a different location (not /usr/bin/gcc), specifically because the modified gcc will be unstable, and because our project goal is to compare transactional programs compiled with the two different versions.
Our changes to gcc source impact both /gcc and /libitm. This means we are making a change to libitm.so, one of the shared libraries that get built.
My expectation:
when compiling myprogram.cpp with /usr/bin/g++, the version of libitm.so that will get linked should be the one that came with my distro;
when compiling it with ~/project/install-dir/bin/g++, the version of libitm.so that will get linked should be the one that just got built when I built my modified gcc.
But in reality it seems both native gcc and mine are using the same libitm, /usr/lib/x86_64-linux-gnu/libitm.so.1.
I only have a rough grasp of gcc internals as they apply to our project, but this is my understanding:
Our changes tell one compiler pass to conditionally insert our own "function builtin" instead of one it would normally use, and this is / becomes a "symbol" which needs to link to libitm.
When I use the new gcc to compile my program, that pass detects those conditions and successfully inserts the symbol, but then at runtime my program gives a "relocation error" indicating the symbol is not defined in the file it is searching in: ./test: relocation error: ./test: symbol _ITM_S1RU4, version LIBITM_1.0 not defined in file libitm.so.1 with link time reference
readelf shows me that /usr/lib/x86_64-linux-gnu/libitm.so.1 does not contain our new symbols while ~/project/install-dir/lib64/libitm.so.1 does; if I re-run my program after simply copying the latter libitm over the former (backing it up first, of course), it does not produce the relocation error anymore. But naturally this is not a permanent solution.
So I want the gcc I built to use the shared libs that were built along with it when linking. And I don't want to have to tell it where they are every time - my feeling is that it should know where to look for them since I deliberately built it somewhere else to behave differently.
This sounds like the kind of problem any amateur gcc developer would have when trying to make a dev environment and still be able to use both versions of gcc, but I had difficulty finding similar questions. I am thinking this is a matter of lacking certain config options when I configure gcc before building it. What is the right configuration to do this?
My small understanding of the instructions for building and installing gcc led me to do the following:
cd ~/project/
mkdir objdir
cd objdir
../source-dir/configure --enable-languages=c,c++ --prefix=/home/myusername/project/install-dir
make -j2
make install
I only have those config options because they seemed like the ones closest related to "only building the parts I need" and "not overwriting native gcc", but I could be wrong. After the initial config step I just re-run make -j2 and make install every time I change the code. All these steps do complete without errors, and they produce the ~/project/install-dir/bin/ folder, containing the gcc and g++ which behave as described.
I use ~/project/install-dir/bin/g++ -fgnu-tm -o myprogram myprogram.cpp to compile a transactional program, possibly with other options for programs with threads.
(I am using Xubuntu 16.04.3 (64 bit), within VirtualBox on Windows. The installed /usr/bin/gcc is version 5.4.0. Our source at ~/project/source-dir/ is a modified version of 5.3.0.)
You’re running into build- versus run-time linking differences. When you build with -fgnu-tm, the compiler knows where the library it needs is found, and it tells the linker where to find it; you can see this by adding -v to your g++ command. However when you run the resulting program, the dynamic linker doesn’t know it should look somewhere special for the ITM library, so it uses the default library in /usr/lib/x86_64-linux-gnu.
Things get even more confusing with ITM on Ubuntu because the library is installed system-wide, but the link script is installed in a GCC-private directory. This doesn’t happen with the default GCC build, so your own GCC build doesn’t do this, and you’ll see libitm.so in ~/project/install-dir/lib64.
To fix this at run-time, you need to tell the dynamic linker where to find the right library. You can do this either by setting LD_LIBRARY_PATH (to /home/.../project/install-dir/lib64), or by storing the path in the binary using -Wl,-rpath=/home/.../project/install-dir/lib64 when you build it.

How to cross compile GCC on x86 for Sun4v?

I am writing a programming assignment in C++. The instructor of this course requires all code to be compiled and run on the UNIX server. The server is a SunOS machine. I wrote all my code on my personal laptop with GCC 5.2, which supports most C++11 features. However, when I uploaded my code to the server and tried to compile it, I was surprised to find that the g++ version on the server is 4.2.1, which was released in mid-2007. Many of the C++11 features are not supported. Even the -std argument is not accepted.
I tried to download the source code of the latest GCC and compile it on the server. Unfortunately there is a disk quota of 500 MB per account. I am wondering whether it is possible to cross compile GCC on my x86 machine and upload the binary onto the server so that I can compile my C++ code.
By the way, I have contacted the IT department about updating the software but they responded that they do not have such plans in the near future.
I did research on the Internet about cross compilation and found a couple of tutorials, but they are not easy to follow. In addition to binaries, there are also a lot of dependencies like headers and libraries. So before I give up and modify my code to fit the old compiler, can anyone give me some suggestions?
Thank you.
uname -a returns the following result:
SunOS 5.10 Generic_147147-26 sun4v sparc SUNW,T5240
Of course it's possible, and it's how you usually do things when writing operating systems.
First of all, you need binutils in the toolbox, too. Once you have all the Holy Sources, let's prepare!
export PREFIX="$HOME/opt" # All the stuff will get installed here!
export TARGET=sparc-sun-solaris # I'm not *100%* sure if this is correct, check it yourself
export PATH="$PREFIX/bin:$PATH" # If you forget this/close the terminal, you're doomed!
Now, let's get with the little monster... Shall binutils be built!
cd $HOME/src # Or where you have the sources
mkdir binutils-build
cd binutils-build
../binutils-src/configure --target=$TARGET --prefix="$PREFIX" --disable-nls
make
make install
--disable-nls disables the support for native natural languages (a.k.a: the compiler prints errors in your own language!), and just uses English for messages. That's not a must, but it certainly speeds up the process of building binutils.
Now, compiling GCC itself is a very fragile process, and it can fail anywhere, anyhow, so be prepared! This process is long (it can take up to an hour on some machines), but trust me, LLVM+Clang is worse ;).
cd $HOME/src
cd gcc-src
./contrib/download_prerequisites # Get GMP, MPFR, and MPC
cd ..
mkdir gcc-build
cd gcc-build
../gcc-src/configure --target=$TARGET --prefix="$PREFIX" --disable-nls --enable-languages=c,c++
make all-gcc
make all-target-libgcc
make install-gcc
make install-target-libgcc
If you don't run into issues while compiling (trust me, you will, unless you're too lucky for this world), you'll have a toolchain that runs on your machine but compiles for SunOS/SPARC! BTW, --enable-languages=c,c++ means that GCC will support compiling C and C++ code. Nothing less, nothing more. Try it out with...
sparc-sun-solaris-g++ --version
Now, if you want a compiler for the server that also runs on the server, you will have to make a bit of a mess with a Canadian cross. Basically, what you have to do is...
export PREFIX="$HOME/some-holy-directory" # This path *must* be the same for both your machine and the target server!
export HOST=$TARGET
And then repeat the compilation process, remembering to add the option --host=$HOST to both configure scripts! Once done, you must move that some-holy-directory to exactly the same location on the server. If it doesn't fit into the 500 MB, well, ask your teacher if you can at least compile assignments on your own machine and then upload them to the server. Otherwise, you're left with C++98.
BTW: Please note that cross-compiling GCC itself is an even more fragile process. This whole post is theoretical, because I won't do all these steps just for the sake of doing it. Please comment if you hit any major issues, or if someone spots an error in the steps ;).
Edit: Apparently, you'll also have to build the target C library (on Solaris that is the system libc, not glibc) and all that funky stuff too...
I hope this has shed some light on the matter!

Is `--enable-mpbsd` no longer required when building GMP?

So I'm trying to build a cross-compiler toolchain from the latest GCC (gcc-5.1.0). GCC requires GMP, so I downloaded GNU MP 6.0 (gmp-6.0.0).
Instructions for building GMP suggest (for my purpose) to pass the parameter --enable-mpbsd which is documented as follows:
The meaning of the new configure options:
--enable-cxx
This parameter enables C++ support
--enable-mpbsd
This builds the Berkeley MP compatibility library
However, when I run configure, it warns me:
configure: WARNING: unrecognized options: --enable-mpbsd
This suggests that the option was introduced in 5.x and deprecated again in 6.x, or replaced by something else...?!
The exact command line I use is (just for completeness):
./configure --prefix=$PREFIX --enable-shared --enable-static --enable-mpbsd --enable-fft --enable-cxx --host=x86_64-pc-freebsd6
PS: for now I intend to disregard this warning and proceed anyway. I'll report back on whether this still produces a functional toolchain.
--enable-mpbsd
This builds the Berkeley MP compatibility library
This was potentially useful 20 years ago, but it hasn't been for a long time, which is why it was removed from GMP. Linux From Scratch is wrong to recommend that option; it was never required (though it didn't hurt). Please contact them so they can update their instructions.
By the way, you do not need --enable-shared --enable-static --enable-fft, they are the default.
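Under that assumption (the removed flag dropped, and the default options omitted), the configure line from the question reduces to the following configuration sketch:

```shell
# Minimal equivalent of the original configure invocation for GMP 6
# (shared, static, and fft are enabled by default, so they need not be listed)
./configure --prefix=$PREFIX --enable-cxx --host=x86_64-pc-freebsd6
```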