I'm trying to use ccache to speed up my rebuilds and I noticed this in the log:
[2015-04-17T10:41:45.845545 27682] Compiler option -M is unsupported
[2015-04-17T10:41:45.845584 27682] Failed; falling back to running the real compiler
In my experience you need something like the -M flag in order to have make (or its equivalent) trigger rebuilds correctly. It seems weird that ccache would be tripped up by an option that must appear in almost every project's build. Am I missing something? Is there a preferred alternative?
This is with ccache 3.2.1.
Edit: I tried with -MM as well, no luck.
It is correct that ccache currently doesn't support the compiler options -M and -MM (and it never has supported them).
Some reasons why the options in question are unsupported:
The options tell the compiler to let the preprocessor output make rules instead of the preprocessed source code. This is not a good match for how ccache works; it needs to get hold of the "real" preprocessed output for each compiler invocation (see https://ccache.dev/manual/3.7.11.html#_how_ccache_works).
Simply put, nobody has implemented support for those options.
It would most likely be possible to implement support by letting ccache run the compiler twice: once without -M/-MM to retrieve the preprocessed source code (with which the result should be associated) and once with -M/-MM to retrieve the result (the make rules).
However, I (speaking as the ccache maintainer for the last six years) have not heard anybody missing support for -M/-MM until now, so my impression is that -M/-MM actually aren't used much.
Am I missing something? Is there a preferred alternative?
Yes, I would say that the standard way is to use -MD/-MMD (which are supported by ccache) instead of -M/-MM. -MD/-MMD are superior because they produce both the .o and the .d file in one go, whereas -M/-MM only produce the .d file, so the compiler must in that case be invoked twice by the Makefile for each source code file. See for instance http://www.microhowto.info/howto/automatically_generate_makefile_dependencies.html for how to use -MD/-MMD.
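For reference, here is a minimal Makefile sketch of that pattern; the file names, the SRCS variable, and the extra -MP flag are illustrative rather than taken from the question:

SRCS := foo.c bar.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

# -MMD writes foo.d next to foo.o as a side effect of compilation;
# -MP adds phony targets so a deleted header does not break the build.
CFLAGS += -MMD -MP

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# Pull in the generated dependency files, if they exist yet.
-include $(DEPS)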
I have a project where I want to link one of the system libraries statically. The project uses the GNU build system.
In configure.ac I have:
AC_CHECK_LIB(foobar, foobar_init)
On the development machine this library is installed in /usr/lib/x86_64-linux-gnu. It is detected, but it is linked dynamically, which causes issues because it is not present on some machines. Linking it statically (-Wl,-Bstatic etc.) works fine, but I don't know how to set this up in autotools. I tried forcing this into the project's Makefile.am link flags, but it still prefers the dynamic library.
I also tried using --enable-static with ./configure, but it seems to have no effect on system libraries.
If you want to link the whole program statically, then you should pass the --disable-shared option to configure. You might or might not also need to pass --enable-static, depending on the default value of that option (which you can influence via your configure.ac file). You really should consider doing this.
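A minimal invocation, assuming the package's defaults do not already enable static libraries, would be:

./configure --disable-shared --enable-static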
You should also consider making this the installer's problem, not the build system's. Let it be the installer's responsibility to ensure that all the shared libraries needed by the program are provided by the systems where it is installed. This is very common; in fact, it is one of the inspirations for package-management systems such as yum / dnf and apt, and their underlying packaging formats.
If you insist on linking only one library statically, while linking everything else dynamically, then you'll need to jump through a few more hoops. The objective will be to emit link options that cause just that library to be linked statically, without changing other libraries' linking. With the GNU toolchain, and supposing that the program is otherwise being linked dynamically, that would be this combination of options:
-Wl,-Bstatic -lfoobar -Wl,-Bdynamic
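On a full GNU-toolchain link command that combination would appear roughly as follows; the object files and the trailing -lm are placeholders for whatever else the program links:

gcc -o myprog main.o util.o -Wl,-Bstatic -lfoobar -Wl,-Bdynamic -lm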
Now consider the documentation of the AC_CHECK_LIB() macro:
Macro: AC_CHECK_LIB (library, function, [action-if-found],
[action-if-not-found], [other-libraries])
[...] action-if-found is a list of shell commands to run if the link with the library succeeds; action-if-not-found is a list of shell
commands to run if the link fails. If action-if-found is not
specified, the default action prepends -llibrary to LIBS and defines
'HAVE_LIBlibrary' (in all capitals). [...]
Note in particular the default behavior when the optional arguments are not provided (your present case) -- that's not quite what you want, at least not by itself. I suggest providing at least an alternative behavior for the action-if-found case, and you could also consider making configure fail in the action-if-not-found case. The latter is left as an exercise; implementing just the former might look like this:
AC_CHECK_LIB([foobar], [foobar_init], [
    LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
    AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])
])
You should also pay attention to the order of your AC_CHECK_LIB() invocations. As its docs go on to say:
This macro is intended to support building LIBS in a right-to-left
(least-dependent to most-dependent) fashion such that library
dependencies are satisfied as a natural side effect of consecutive
tests. Linkers are sensitive to library ordering so the order in which
LIBS is generated is important to reliable detection of libraries.
If you find that you still aren't getting what you want, then have a look at the link commands that make actually executes. You need to understand what's wrong about them before you can determine how to fix the problem.
With all that said, I observe that the above treatment is basically a hack, and it makes your build system much less resilient. It introduces dependencies on GNU toolchain options (which some other toolchains may nevertheless accept), and it assumes dynamic linking is being performed overall. It may be possible to resolve those issues with additional Autoconf code, but I urge you to instead go with one of the first two alternatives I described.
I am learning about compilers and want to make changes of my own to the GCC parser and lexer. Is there a testing tool, or some other approach, that lets me change the GCC code and test the result?
I tried changing the lexical analysis file, but now I am stuck because I don't know how to compile these files. I tried compiling them with another GCC compiler, but that just shows errors. I even tried configure and make, but doing that with every change does not seem efficient.
The purpose of these changes is just learning, and I have to stick with GCC because that is the only compiler my instructor allows.
I even tried configure and make, but doing that with every change does not seem efficient.
That is exactly what you should be doing. (Well, you don't need to re-configure after every change, just run make again.) However, by default GCC configures itself in bootstrap mode, which means not only does your host compiler compile GCC, that compiled GCC then compiles GCC again (and again). That is overkill for your purposes, and you can prevent that from happening by adding --disable-bootstrap to the configuration options.
Another option that can help significantly reduce build times is enabling only the languages you're interested in. Since you're experimenting, you'll likely be very happy if you create something that works for C or for C++, even if for some obscure reason Java happens to break. Testing other languages becomes relevant when you make your changes available for a larger audience, but that isn't the case just yet. The configuration option that covers this is --enable-languages=c,c++.
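Putting those two options together, a typical out-of-tree build might look something like this; the source directory and install prefix are just placeholders:

mkdir build && cd build
../gcc-src/configure --disable-bootstrap --enable-languages=c,c++ --prefix=$HOME/gcc-test
make -j$(nproc)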
Most of the configuration options are documented on the Installing GCC: Configuration page. Thoroughly testing your changes is documented on the Contributing to GCC page, but that's likely something for later: to start with, you should make your own simpler tests pass by simply trying code that exercises your new feature.
You make changes (which are made "permanent" by saving the files you modify), compile the code, and run the test suite.
You typically write additional tests or remove those that are invalidated by your changes and that's it.
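For instance, from the GCC build directory you can run just a slice of the C testsuite rather than the whole thing; the dg.exp filter shown here is only an illustration:

make check-gcc RUNTESTFLAGS="dg.exp=my-feature-*.c"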
If your changes don't contribute anything "positive" to the compiler, upstream will probably never accept them, and the only "permanence" you can get is the modifications in your local copy.
With GCC and clang, I've been able to use SCons 'TryCompile' feature to build a simple configure check to determine if the currently configured compiler supports a given compile flag. Basically, clone the env, add the flag in question to CFLAGS, CCFLAGS, or CXXFLAGS, as appropriate, execute TryCompile, and if the TryCompile succeeds, then the flag is supported and we can add it to the real env.
This works perfectly with gcc, because unknown flags are errors and the compiler exits with a non-zero status.
With clang, it works reasonably well too: clang by default treats unknown flags as warnings, but if you pass it -Werror it will turn unknown flags into errors. So my wrapper around TryCompile just always passes -Werror along with the flag to be tested if it knows we are using clang.
However, this all falls over with the Microsoft toolchain because, as far as I can discover, there is no way to convince the compiler to treat unknown flags as errors: they are always warnings, even if you pass the flag that turns warnings into errors. Since the compiler exits cleanly whether or not the flag is accepted, TryCompile always succeeds. See this question for details on the various attempts I have made to get MSVC to exit with a non-zero status.
Any ideas on how I can make this work? Is there another SCons facility that I'm overlooking that can do this job for me? Should I interpose on TryCompile on MS platforms and parse the compiler output rather than examining the exit status? I'm really happy with using TryCompile for configure-time flag detection with clang and gcc, but if I can't get MSVC to cooperate I'm going to need to abandon this whole approach, and I'm pretty loath to do that since it has been working so well so far.
Leave it to Windows to rain on the parade once again :) Apparently the Windows compiler returns success regardless of whether it recognizes the flags it is given.
I can think of several options that you could try.
First of all, SCons provides Multi-Platform Configuration (Autoconf Functionality), which may help you achieve the same result. It doesn't include anything for compiler options, but it does at least include the following:
Checking for the Existence of Header Files
Checking for the Availability of a Function
Checking for the Availability of a Library
Checking for the Availability of a typedef
Adding Your Own Custom Checks
Another option would be to build some sort of dictionary of the Microsoft compilation options. You would probably need one dictionary per compiler version. This particular option would probably take a long time to prepare and probably wouldn't be worth it.
Another option would be to use the Object() or Program() builder instead of TryCompile(), and try to catch the failure and react accordingly. I'm not sure whether SCons allows you to catch a compilation failure as an exception and carry on, but it's worth checking into.
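To illustrate the custom-check route, here is a minimal SConstruct sketch of a flag check along the lines the question already describes. The CheckFlag name and the -Wshadow test flag are made up for illustration, and, as the question points out, this alone will still report success with cl.exe, which only warns about unknown options and exits cleanly:

def CheckFlag(context, flag):
    # Announce the check in the configure output.
    context.Message('Checking whether the compiler accepts %s... ' % flag)
    # Temporarily add the flag, try a trivial compile, then restore the flags.
    saved = list(context.env['CCFLAGS'])
    context.env.Append(CCFLAGS=[flag])
    ok = context.TryCompile('int main(void) { return 0; }\n', '.c')
    context.env.Replace(CCFLAGS=saved)
    context.Result(ok)
    return ok

env = Environment()
conf = Configure(env, custom_tests={'CheckFlag': CheckFlag})
if conf.CheckFlag('-Wshadow'):
    env.Append(CCFLAGS=['-Wshadow'])
env = conf.Finish()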
There is a bug in RHEL5's gcc-4.3.2 with which we are stuck. As a work-around we have extracted the missing object and put it in an object file. Adding this object file to every link makes the problem go away.
While adding it directly to LDFLAGS seems like a good solution, this doesn't work since e.g. libtool cannot cope with non-la files in there.
A slightly more portable solution seems to be to directly patch the gcc spec to add this to every link. I came up with
*startfile:
+ %{shared-libgcc:%{O*:%{!O0:/PATH/TO/ostream-inst.o}}}
where ostream-inst.o is added to the list of startfiles used in the link when compiling a shared library with optimizations.
Trying to compile boost with this spec gives some errors though, since its build passes some objects directly to ld inside --start-group/--end-group.
How should I update that spec to cover that case as well, or even better, all cases?
Have a look at Specifying subprocesses and the switches to pass to them and GCC Command Options.
If this helps you, that's great.
I know this is not the answer you want to hear (since you specified otherwise in your question), but you are running into trouble here and are likely to run into more since your compiler is buggy. You should find a way of replacing it, since you'll find yourself writing even more work-around code the next time some obscure build system comes along. There's not only bjam out there.
Sorry I can't help you more. You might try simply writing a .lo file by hand (it's a two-liner, after all) and inserting it into your LDFLAGS.
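For reference, a hand-written .lo file is essentially just those two lines of object references; the file names and paths here are illustrative, and your libtool version may be pickier about the surrounding comments:

# ostream-inst.lo - hand-written libtool object file
pic_object='.libs/ostream-inst.o'
non_pic_object='ostream-inst.o'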
If it is a bug in GCC 4.3, did you try building a newer GCC from source and using that? GCC 4.6.2 is coming out right now. Did you consider using it?
When using enable_language in CMake, it always searches for compilers in a certain default sequence. I wonder how I can change this sequence. For example, if my system has both ifort (icc) and gfortran (g++) installed, and I want it to use ifort (icc) instead of gfortran (g++), how can I set this up?
CLARIFICATION: I know we can switch the compiler explicitly by changing the variable CMAKE_Fortran_COMPILER, but what I want to do is modify the default sequence in which cmake searches for available compilers when the user does not specify a preference.
From what I have found so far, a work-around is to set CMAKE_Fortran_COMPILER before project(xxx), so that the variable never gets overridden later, but this is clearly not the best way, since I will still need gfortran if there turns out to be no ifort available.
By the way, what's the best place to look for this kind of information? The documentation does not seem to be very complete.
Thanks!
The right place to look is the CMake FAQ, which answers your question.
Omegaice's answer will work, as will CC=/path/to/icc cmake ..., see also this discussion thread.
Setting CMAKE_Fortran_COMPILER before the project call is strongly discouraged (as the FAQ will tell you).
Note that manually calling enable_language is no different from specifying the languages with the project call (or indeed not specifying them, in which case they default to C and CXX), since that calls enable_language internally.
You can probably specify which compiler to use by running ccmake .. -DCMAKE_Fortran_COMPILER=<executable> (where <executable> is either the name of the compiler or the full path to it) instead of setting it in the CMakeLists.txt.
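For example, either of the following should steer the first configure run toward the Intel tools; ifort/icc/icpc are just the compilers mentioned in the question, and you should start from an empty build directory since the compiler choice is cached on the first run:

FC=ifort CC=icc CXX=icpc cmake ..
cmake -DCMAKE_Fortran_COMPILER=ifort ..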