Is there any information on type-based alias analysis (the strict-aliasing rule) improving performance, for either the GCC or Clang compiler, on SPEC2006, SPEC2017, or another commonly used benchmark suite?
I had a project for my university where I had to improve the compiler that we had written throughout the course in Racket. I was recently reading about GCC's -O optimizations, and -O3 and -Ofast were able to significantly speed up the runtime of my program, which is written in C. Is there some quick way I could boost performance at the cost of compile time? My class only cared about program runtime, but the testing used I/O, so I couldn't have the compiler run the program and just compile in the answer. We are using raco to compile the compiler, so I was wondering if there are any raco or even NASM options or optimizations I could have used. Thanks!
The short answer is no. There is no flag you can give raco to get it to optimize your code. (At least not at the time of writing this answer.)
However, if you use Pycket (a Racket compiler & runtime built on PyPy), your code may run faster. If memory serves, Pycket is not 100% compatible with Racket, but it does a pretty good job on everything but the most complex programs.
I am focusing on the CPU/memory consumption of programs compiled by GCC.
Is code compiled with -O3 always so greedy in terms of resources?
Is there any scientific reference or specification that shows the difference in memory/CPU consumption across the different optimization levels?
People working on this problem often focus on the impact of these optimizations on execution time, compiled code size, and energy. However, I can't find much work on resource consumption (when enabling optimizations).
Thanks in advance.
No, there is no absolute way, because optimization in compilers is an art (it is not even well defined, and may be undecidable or intractable).
But some guidelines first:
be sure that your program is correct and has no bugs before optimizing anything, so do debug and test your program
have well designed test cases and representative benchmarks (see this).
be sure that your program has no undefined behavior (and this is tricky, see this), since GCC will optimize strangely (but very often correctly, according to the C99 or C11 standard) if your code has UB; use the -fsanitize= options (e.g. -fsanitize=undefined, -fsanitize=address), together with gdb and valgrind ...., during the debugging phase.
profile your code (on various benchmarks), in particular to find out which parts are worth optimization effort; often (but not always) most of the CPU time is spent in a small fraction of the code (rule of thumb: 80% of the time is spent in 20% of the code; for some applications, like the gcc compiler itself, this is not true: run gcc -ftime-report to ask GCC to show the time spent in its various modules).... Most of the time, "premature optimization is the root of all evil" (but there are exceptions to this aphorism).
improve your source code (e.g. use restrict and const carefully and correctly, add some pragmas or function or variable attributes, perhaps use wisely some GCC builtins such as __builtin_expect, __builtin_prefetch -see this-, __builtin_unreachable...)
use a recent compiler. The current version (October 2015) of GCC is 5.2 (and GCC 8 in June 2018), and continuous progress on optimization is made; you might consider compiling GCC from its source code to have a recent version.
enable all warnings (gcc -Wall -Wextra) in the compiler, and try hard to avoid all of them; some warnings may appear only when you ask for optimization (e.g. with -O2)
Usually, compile with -O2 -march=native (or perhaps -mtune=native; I assume that you are not cross-compiling, and if you are, pass the appropriate -march option ...) and benchmark your program with that.
Consider link-time optimization by compiling and linking with -flto and the same optimization flags. E.g., put CC='gcc -flto -O2 -march=native' in your Makefile (then remove -O2 -march=native from your CFLAGS there)...
Try also -O3 -march=native; usually (but not always: you might sometimes get slightly faster code with -O2 than with -O3, but this is uncommon) you get a tiny improvement over -O2.
If you want to optimize the generated program size, use -Os instead of -O2 or -O3; more generally, don't forget to read the section Options That Control Optimization of the documentation. I guess that both -O2 and -Os would optimize the stack usage (which is very related to memory consumption). And some GCC optimizations are able to avoid malloc (which is related to heap memory consumption).
you might consider profile-guided optimizations, -fprofile-generate, -fprofile-use, -fauto-profile options
dive into the documentation of GCC; it has numerous optimization & code generation options (e.g. -ffast-math, -Ofast ...) and parameters, and you could spend months trying more of them; beware that some of them do not strictly conform to the C standard!
recent GCC and Clang can emit DWARF debug information (somewhat "approximate" if strong optimizations have been applied) even when optimizing, so passing both -O2 and -g can be worthwhile (you would still be able, with some pain, to use the gdb debugger on the optimized executable)
if you have a lot of time to spend (weeks or months), you might customize GCC using MELT (or some other plugin) to add your own new (application-specific) optimization passes; but this is difficult (you'll need to understand GCC internal representations and organization) and probably rarely worthwhile, except in very specific cases (those when you can justify spending months of your time for improving optimization)
you might want to understand the stack usage of your program, so use -fstack-usage
you might want to understand the emitted assembler code, so use -S -fverbose-asm in addition to the optimization flags (and look into the produced .s assembler file)
you might want to understand the internal workings of GCC, so use the various -fdump-* flags (you'll get hundreds of dump files!).
Of course the above todo list should be used in an iterative and agile fashion.
For memory leaks bugs, consider valgrind and several -fsanitize= debugging options. Read also about garbage collection (and the GC handbook), notably Boehm's conservative garbage collector, and about compile-time garbage collection techniques.
Read about the MILEPOST project in GCC.
Consider also OpenMP, OpenCL, MPI, multi-threading, etc... Notice that parallelization is a difficult art.
Notice that even GCC developers are often unable to predict the effect (on CPU time of the produced binary) of such and such optimization. Somehow optimization is a black art.
Perhaps gcc-help@gcc.gnu.org might be a good place to ask more specific and focused questions about optimizations in GCC.
You could also contact me on basileatstarynkevitchdotnet with a more focused question... (and mention the URL of your original question)
For scientific papers on optimizations, you'll find lots of them. Start with ACM TOPLAS, ACM TACO etc... Search for iterative compiler optimization etc.... And define better what resources you want to optimize for (memory consumption means next to nothing....).
What is compiler-feedback-based optimization (not linker-feedback-based)? How can I get this feedback file for the ARM GCC compiler?
Read the chapter of the GCC documentation dedicated to optimizations (and also the section about ARM in GCC: ARM options)
You can use:
link-time optimization (LTO), by compiling and linking with -flto in addition to other optimization flags (so make CC='gcc -flto -O2'): the linking phase then also performs optimizations (the compiler links files containing not only object code but also the intermediate GIMPLE internal compiler representation)
profile-guided optimization (PGO, with -fprofile-generate, -fprofile-use, -fauto-profile, etc.): you first generate code with profiling instrumentation, run some representative benchmarks to collect profiling information, and then compile a second time using that profiling information.
You could mix both approaches and give a lot of other optimization flags. Be sure to be consistent with them.
On x86 & x86-64 (and ARM natively) you might also use -mtune=native and there are lots of other -mtune possibilities.
Some people call profile-based optimization compiler feedback optimization (because dynamic runtime profile information is given back into the compiler). I prefer the "profile-guided optimization" term. See also this old question.
Can someone answer a simple question: why do we pass -O3 (one of the -O optimization levels) to gcc when compiling a C program? Does it increase the speed of compilation, or reduce the time spent in compilation?
Thanks!!!
It can potentially increase the performance of the generated code.
In principle, compilation usually takes longer because this requires (much) more analysis by the compiler.
For typical modern C++ code, the effect of -O2 and higher can be very dramatic (an order of magnitude, depending on the nature of the program).
Precisely which optimizations are performed at the various optimization levels is documented in the manual pages: http://linux.die.net/man/1/gcc
Keep in mind, though, that the various optimizations can potentially make latent bugs manifest, because compilers are allowed to exploit Undefined Behaviour¹ to achieve more efficient target code.
Undefined Behaviour lurks in places where the language standard(s) do not specify exactly what needs to happen. These can be extremely subtle.
So I recommend against using anything higher than -O2, unless you have rigid quality controls in place that guard against such hidden undefined behaviours (think of valgrind/purify, static analysis tools, and (stress) testing in general).
¹ A very profound blog post about undefined behaviour in optimizing compilers is here: http://blog.regehr.org/archives/213 . In particular, it lets you take the perspective of a compiler writer, whose only objective is to generate the fastest possible code that still satisfies the specification.
Does gcc (C, C++ and Fortran compilers in particular) support interprocedural analysis to improve performance?
If yes, which are the relevant flags?
http://gcc.gnu.org/wiki/InterProcedural says the gcc is going to implement IPA, but that page is quite outdated.
Yes, it does. Take a look at the options starting with -fipa here. Recent gfortran versions (4.5+) support an even more sophisticated kind of optimization: link-time optimization (LTO), which performs interprocedural optimization across files. The corresponding compiler flag is -flto.
P.S. I wrote a small series of posts about LTO at my blog. You're welcome! :-)