gcc macro when -fprofile-generate is used

Does gcc define a macro of some sort when the flag -fprofile-generate is specified? Basically, I want to disable multithreading when I'm profiling, since it seems to corrupt the .gcda files.

This unanswered question is quite old, but I was having similar issues, so I hope this can be useful to someone.
You should try enabling the -fprofile-correction GCC compiler flag when using the profile information generated by a multi-threaded application. According to the GCC documentation for this flag:
Profiles collected using an instrumented binary for multi-threaded programs may be inconsistent due to missed counter updates. When this option is specified, GCC uses heuristics to correct or smooth out such inconsistencies. By default, GCC emits an error message when an inconsistent profile is detected.
It gets rid of the errors indicating that the .gcda files are corrupted by correcting the inconsistent profile values that multi-threading introduces.
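As for the original question about a macro: I am not aware of a documented GCC-predefined macro tied to -fprofile-generate, so a common workaround is to define your own macro on the command line and branch on it. A minimal sketch, assuming a hypothetical PROFILING_BUILD macro that you pass yourself with -DPROFILING_BUILD:

    /* PROFILING_BUILD is an assumed, user-defined name; it is not something
     * GCC predefines.  A possible build sequence:
     *   gcc -O2 -fprofile-generate -DPROFILING_BUILD -pthread app.c   (training build)
     *   gcc -O2 -fprofile-use -fprofile-correction -pthread app.c     (optimized build)
     */
    #include <pthread.h>

    void run_workload(void);                 /* hypothetical work function */

    static void *worker(void *arg)
    {
        (void)arg;
        run_workload();
        return NULL;
    }

    void start_workers(int nthreads)
    {
    #ifdef PROFILING_BUILD
        (void)nthreads;                      /* profiling run: stay single-threaded */
        run_workload();
    #else
        enum { MAX_THREADS = 16 };
        pthread_t tid[MAX_THREADS];
        if (nthreads > MAX_THREADS)
            nthreads = MAX_THREADS;
        for (int i = 0; i < nthreads; ++i)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < nthreads; ++i)
            pthread_join(tid[i], NULL);
    #endif
    }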

Related

Is there any way to blacklist a folder from address sanitizer when using GCC?

So I have enabled -fsanitize=address to help write a good program.
However, quite a few issues were being reported in libraries that I did not write (for example, /lib64/...so).
I looked into the -fsanitize-blacklist option, but it seems to be available only for Clang, not GCC.
I know you can blacklist specific functions in your source code. But to be honest, that is not ideal, as I wouldn't know ahead of time which functions will cause issues.
Is there any way to prevent GCC from processing address sanitizer for files under a specific folder?
Please help :(
Unfortunately -fsanitize-blacklist has been rejected by the GCC maintainers several times, and there is no equivalent option. You could add Clang support, or use -fsanitize-recover=address together with export ASAN_OPTIONS=log_path=path/to/logs to collect errors from all libraries and then filter for the ones that are relevant.
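If only a handful of your own call sites reach into those libraries, the per-function blacklisting mentioned in the question can be written with GCC's no_sanitize_address attribute, guarded by the __SANITIZE_ADDRESS__ macro that GCC defines when -fsanitize=address is active. A rough sketch (third_party_parse is a made-up stand-in for a library call, not a real API):

    /* Sketch only: the attribute skips ASan instrumentation for this wrapper
     * function; it does not silence reports that originate inside the library
     * itself, which is why the log_path filtering approach above is usually
     * the more practical route. */
    #if defined(__SANITIZE_ADDRESS__)
    #  define NO_ASAN __attribute__((no_sanitize_address))
    #else
    #  define NO_ASAN
    #endif

    extern int third_party_parse(const char *input);   /* hypothetical library call */

    NO_ASAN int parse_wrapper(const char *input)
    {
        return third_party_parse(input);
    }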

Ccache doesn't work with gcc -M flag?

I'm trying to use ccache to speed up my rebuilds and I noticed this in the log:
[2015-04-17T10:41:45.845545 27682] Compiler option -M is unsupported
[2015-04-17T10:41:45.845584 27682] Failed; falling back to running the real compiler
In my experience you need something like the -M flag in order to have make or its equivalent trigger rebuilds correctly. It seems weird that ccache would be tripped up by an option that must be in almost every project's build. Am I missing something? Is there a more preferred option?
This is w/ ccache-3.2.1.
Edit: I tried with -MM as well, no luck.
It is correct that ccache currently doesn't support the compiler options -M and -MM (and it never has supported them).
Some reasons why the options in question are unsupported:
The options tell the compiler to let the preprocessor output make rules instead of the preprocessed source code. This is not a good match for how ccache works; it needs to get hold of the "real" preprocessed output for each compiler invocation (see https://ccache.dev/manual/3.7.11.html#_how_ccache_works).
Simply put, nobody has implemented support for the mentioned options.
It would most likely be possible to implement support by letting ccache run the compiler command twice: once without -M/-MM to retrieve the preprocessed source code (with which the result should be associated) and once with -M/-MM to retrieve the result (the make rules).
However, I (speaking as the ccache maintainer for the last six years) have not heard anybody missing support for -M/-MM until now, so my impression is that -M/-MM actually aren't used much.
Am I missing something? Is there a more preferred option?
Yes, I would say that the standard way is to use -MD/-MMD (which are supported by ccache) instead of -M/-MM. -MD/-MMD are superior because they produce both the .o and the .d file in one go, whereas -M/-MM only produce the .d file, so the compiler must in that case be invoked twice by the Makefile for each source code file. See for instance http://www.microhowto.info/howto/automatically_generate_makefile_dependencies.html for how to use -MD/-MMD.
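To make the -MD/-MMD behaviour concrete, here is a small hedged example; example.c and config.h are made-up file names:

    /* example.c */
    #include "config.h"          /* hypothetical project header */

    int main(void)
    {
        return 0;
    }

    /* Compiling with
     *   gcc -MMD -MP -c example.c -o example.o
     * produces example.o and, as a side effect, example.d containing roughly:
     *   example.o: example.c config.h
     * The Makefile can pick the .d files up with "-include *.d", so no second
     * compiler invocation is needed just to generate the dependency rules. */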

Using SCons TryCompile to examine compiler flag support on Windows

With GCC and clang, I've been able to use SCons 'TryCompile' feature to build a simple configure check to determine if the currently configured compiler supports a given compile flag. Basically, clone the env, add the flag in question to CFLAGS, CCFLAGS, or CXXFLAGS, as appropriate, execute TryCompile, and if the TryCompile succeeds, then the flag is supported and we can add it to the real env.
This works perfectly with gcc, because unknown flags are errors and the compiler exits with a non-zero status.
With clang, it works reasonably well too: clang by default treats unknown flags as warnings, but if you pass it -Werror it will turn unknown flags into errors. So my wrapper around TryCompile just always passes -Werror along with the flag to be tested if it knows we are using clang.
However, this all falls over with the Microsoft toolchain because, as far as I can discover, there is no way to convince the compiler to treat unknown flags as errors: they are always warnings, even if you pass the flag that turns warnings into errors. Since the compile exits cleanly whether or not the flag is accepted, TryCompile always succeeds. See this question for details on the various attempts I have made to get MSVC to exit with a non-zero status.
Any ideas on how I can make this work? Is there another SCons facility that I'm overlooking that can do this job for me? Should I interpose on TryCompile on MS platforms and parse the compiler output rather than examining the exit status? I'm really happy with using TryCompile for configure-time flag detection with clang and gcc, but if I can't get MSVC to cooperate I'm going to need to abandon this whole approach, and I'm pretty loath to do that since it is working so well so far.
Leave it to Windows to rain on the parade once again :) Obviously the Windows compiler always returns success, regardless of what happens.
I can think of several options that you could try.
First of all, SCons provides Multi-Platform Configuration (Autoconf Functionality), which may help you achieve the same result. It doesn't include anything for compiler options, but does at least include the following:
Checking for the Existence of Header Files
Checking for the Availability of a Function
Checking for the Availability of a Library
Checking for the Availability of a typedef
Adding Your Own Custom Checks
Another option would be to build some sort of dictionary of the Microsoft compilation options. You would probably need one dictionary per compiler version. This particular option would probably take a long time to prepare and probably wouldn't be worth it.
Another option would be to use the Object() or Program() builder instead of the TryCompile() builder, and try to catch the failure and react accordingly. I'm not sure if SCons allows you to catch compilation failures as an exception and carry on, but it's worth checking into.

Disable all gcc warnings

I'm working on a project that will read compiler error messages of a particular variety and do useful things with them. The sample codebase I'm testing this on (a random open-source application), and hence rebuilding frequently, contains a few bits that generate warnings, which are of no interest to me.
How do I disable all warnings from GCC, so I can just see error messages if there are any?
-w is the GCC option that disables all warning messages.

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However while the Winarm version runs the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gcc binaries and the other tools (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference; enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat .ld files the same way. Both also use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an on-chip debugger, there may be differences between the OpenOCD debugger implementations, and the commands sent to configure the debugger could also differ.
If you are producing a hex file, it could differ because the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
