Testing FPU on ARM processor - gcc

I am using a Wandboard-Quad, which contains an i.MX6 ARM processor. This processor has an FPU that I would like to utilize. Before I do, I want to test how much improvement I will get. I have a benchmark algorithm and have tried it with no optimization and with -mfpu=vfp, and there appears to be no improvement; I do get an improvement at optimization level 3 (-O3).
I am using the arm-linux-gnueabi libraries. Any thoughts on what is incorrect, and how can I tell whether I am using the FPU?
Thanks,
Adam

Look at the assembler output (compile with the -S flag) and see whether any FPU instructions are being generated. That's probably the easiest check.
Beyond that, there is a chance that your algorithm uses floating point so rarely that any gain is masked by loading and unloading the FPU registers. In that case, the -O3 optimizations in the rest of your code would show gains independent of the FPU usage.
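For example (a minimal sketch; the file and function names are invented), you could compile a small kernel to assembly and grep for VFP instructions:

    /* fpu_check.c -- illustrative only.  Compile to assembly with e.g.
     *   arm-linux-gnueabi-gcc -O2 -mfpu=vfp -mfloat-abi=softfp -S fpu_check.c
     * and look in fpu_check.s for VFP instructions such as vadd.f64,
     * vmul.f64 or vmla.f64.  A soft-float build instead emits calls to
     * library helpers like __aeabi_dadd and __aeabi_dmul.  Note that
     * -mfpu= generally only takes effect together with
     * -mfloat-abi=softfp or -mfloat-abi=hard. */
    double dot(const double *a, const double *b, int n)
    {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += a[i] * b[i];   /* should map to VFP multiply/accumulate */
        return acc;
    }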

The -mfpu option only has an effect when GCC is performing vectorization. Vectorization itself requires a reasonable optimization level (at minimum -O2 with the -ftree-vectorize option on). So try -O3 -ftree-vectorize -mfpu=vfp to utilize the FPU, and measure the difference against plain -O3.
Also see the ARM GCC docs for cases where -funsafe-math-optimizations may be required.
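A rough sketch of such a comparison (the loop and file name are illustrative, not taken from the question):

    /* saxpy.c -- illustrative only.  Compile both ways and compare the
     * assembly or a timed run:
     *   gcc -O3 -S saxpy.c
     *   gcc -O3 -ftree-vectorize -mfpu=vfp -S saxpy.c
     * As noted above, -funsafe-math-optimizations may also be needed
     * before GCC will vectorize floating-point loops on ARM. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* a classic vectorizable loop */
    }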

Without any optimisation, the output from GCC is so inefficient that you might not actually be able to measure the difference between software and hardware floating point.
To see the benefits that the FPU adds, you need to test with a consistent optimisation level, and then use either -msoft-float or -mhard-float.
This forces the compiler to link against different libraries and to make function calls for the floating-point operations rather than using native instructions. It is still possible that the underlying library uses hardware floating point, but I wouldn't worry about that too much.
You can select different sets of FP instructions using -mfpu=. For i.MX6 I think you want -mfpu=neon, as that should enable all applicable floating-point instructions (not just the NEON ones).
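For instance (a hypothetical benchmark, not the poster's algorithm), you might compile the same kernel twice and time it:

    /* float_bench.c -- illustrative only.  Build at one optimisation level
     * with the two float options and compare run times, e.g.:
     *   gcc -O2 -msoft-float              float_bench.c -o bench_soft
     *   gcc -O2 -mhard-float -mfpu=neon   float_bench.c -o bench_hard
     * (on newer toolchains the equivalent spellings are -mfloat-abi=soft
     * and -mfloat-abi=hard; the hard-float ABI also needs hard-float
     * libraries, i.e. an arm-linux-gnueabihf toolchain). */
    #include <stdio.h>

    int main(void)
    {
        double x = 0.0;
        for (int i = 1; i <= 10000000; i++)
            x += 1.0 / i;      /* keeps the FPU (or the soft-float library) busy */
        printf("%f\n", x);     /* prevents the loop from being optimised away */
        return 0;
    }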

Related

GCC Auto Vectorization

In the gcc compiler, is there a way to enable auto vectorization only? I know that the -ftree-vectorize flag enables auto vectorization, but it requires at least the -O2 optimization level. Is there a way to enable auto vectorization without using the -O2 optimization flag?
Thanks in advance.
You could actually get decent auto vectorization with -ftree-vectorize combined with -O1, for example: Godbolt.
With -O0, however, vectorized code won't be generated, even for very simple examples.
I suspect that gcc's tree vectorizer isn't even called at -O0, or is called and bails out, but that would have to be verified in the gcc source code.
Generally, -O0 and auto vectorization don't mix very well. In compilers, optimizations happen in phases, where each optimization phase prepares the ground for the next one.
For auto vectorization to occur, at least on non-trivial examples, the compiler has to perform some optimizations beforehand. For example, loops that contain jumps usually cannot be vectorized unless the branches are eliminated and replaced with predicated instructions by an optimization called if-conversion, resulting in a flat block of code that can be vectorized more conveniently.
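For example (an illustrative kernel, not from the question), you can check a trivial loop like this at the two optimization levels:

    /* add.c -- illustrative only.  Compare the generated assembly:
     *   gcc -O1 -ftree-vectorize -S add.c
     *   gcc -O0 -ftree-vectorize -S add.c
     * Packed (SIMD) instructions should appear only in the first case.
     * On newer GCC, -fopt-info-vec reports which loops were vectorized. */
    void add(int *restrict c, const int *restrict a, const int *restrict b, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }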
Footnote: I came across this nice presentation about GCC auto vectorization, which you may find interesting; it gives a good introduction to auto vectorization with gcc, the relevant compiler flags, and the basic concepts.

GCC optimization levels. Which is better?

I am focusing on the CPU and memory consumption of programs compiled with GCC.
Is code compiled with -O3 always so greedy in terms of resources?
Is there any scientific reference or specification that shows the difference in memory/CPU consumption between the different optimization levels?
People working on this problem usually focus on the impact of these optimizations on execution time, compiled code size, and energy; I can't find much work on resource consumption (with optimizations enabled).
Thanks in advance.
No, there is no absolute answer, because optimization in compilers is an art (it is not even well defined, and might be undecidable or intractable).
But some guidelines first:
be sure that your program is correct and has no bugs before optimizing anything, so debug and test your program first
have well designed test cases and representative benchmarks (see this).
be sure that your program has no undefined behavior (this is tricky; see this), since GCC will optimize strangely (but very often correctly, according to the C99 or C11 standards) if your code has UB; use the -fsanitize= options (and gdb and valgrind ....) during the debugging phase.
profile your code (on various benchmarks), in particular to find out which parts are worth the optimization effort; often (but not always) most of the CPU time is spent in a small fraction of the code (rule of thumb: 80% of the time is spent in 20% of the code; on some applications, such as the gcc compiler itself, this is not true; check with gcc -ftime-report, which asks gcc to report the time spent in its various modules).... Most of the time "premature optimization is the root of all evil" (but there are exceptions to this aphorism).
improve your source code (e.g. use restrict and const carefully and correctly, add some pragmas or function or variable attributes, perhaps use some GCC builtins wisely: __builtin_expect, __builtin_prefetch -see this-, __builtin_unreachable...); a short sketch appears after this list
use a recent compiler. The current version (October 2015) of GCC is 5.2 (and GCC 8 as of June 2018), and continuous progress is being made on optimization; you might consider compiling GCC from its source code to get a recent version.
enable all warnings (gcc -Wall -Wextra) in the compiler, and try hard to avoid all of them; some warnings may appear only when you ask for optimization (e.g. with -O2)
Usually, compile with -O2 -march=native (or perhaps -mtune=native; I assume you are not cross-compiling, and if you are, add the appropriate -march option ...) and benchmark your program with that
Consider link-time optimization by compiling and linking with -flto plus the same optimization flags, e.g. put CC= gcc -flto -O2 -march=native in your Makefile (and then remove -O2 -march=native from your CFLAGS there)...
Also try -O3 -march=native; usually (but not always: you might occasionally get slightly faster code with -O2 than with -O3, but this is uncommon) you will get a tiny improvement over -O2
If you want to optimize the generated program size, use -Os instead of -O2 or -O3; more generally, don't forget to read the section Options That Control Optimization of the documentation. I guess that both -O2 and -Os would optimize the stack usage (which is very related to memory consumption). And some GCC optimizations are able to avoid malloc (which is related to heap memory consumption).
you might consider profile-guided optimizations, -fprofile-generate, -fprofile-use, -fauto-profile options
dive into the documentation of GCC, it has numerous optimization & code generation arguments (e.g. -ffast-math, -Ofast ...) and parameters and you could spend months trying some more of them; beware that some of them are not strictly C standard conforming!
recent GCC and Clang can emit DWARF debug information (somehow "approximate" if strong optimizations have been applied) even when optimizing, so passing both -O2 and -g could be worthwhile (you still would be able, with some pain, to use the gdb debugger on optimized executable)
if you have a lot of time to spend (weeks or months), you might customize GCC using MELT (or some other plugin) to add your own new (application-specific) optimization passes; but this is difficult (you'll need to understand GCC internal representations and organization) and probably rarely worthwhile, except in very specific cases (those when you can justify spending months of your time for improving optimization)
you might want to understand the stack usage of your program, so use -fstack-usage
you might want to understand the emitted assembler code, use -S -fverbose-asm in addition of optimization flags (and look into the produced .s assembler file)
you might want to understand the internal workings of GCC: use the various -fdump-* flags (you'll get hundreds of dump files!).
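A small sketch of the source-level hints mentioned in the list above (the function is invented for illustration):

    /* Illustrative only: restrict promises GCC that the buffers do not
     * alias, __builtin_expect marks the error path as unlikely, and
     * __builtin_prefetch hints that data will be needed soon. */
    #include <stddef.h>

    int scale(float *restrict dst, const float *restrict src, float k, size_t n)
    {
        if (__builtin_expect(dst == NULL || src == NULL, 0))
            return -1;                        /* unlikely error path */
        for (size_t i = 0; i < n; i++) {
            __builtin_prefetch(&src[i + 64]); /* hint only; measure before keeping it */
            dst[i] = k * src[i];
        }
        return 0;
    }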
Of course the above todo list should be used in an iterative and agile fashion.
For memory-leak bugs, consider valgrind and the various -fsanitize= debugging options. Read also about garbage collection (and the GC handbook), notably Boehm's conservative garbage collector, and about compile-time garbage collection techniques.
Read about the MILEPOST project in GCC.
Consider also OpenMP, OpenCL, MPI, multi-threading, etc... Notice that parallelization is a difficult art.
Notice that even GCC developers are often unable to predict the effect (on CPU time of the produced binary) of such and such optimization. Somehow optimization is a black art.
Perhaps gcc-help@gcc.gnu.org might be a good place to ask more specific, precise, and focused questions about optimizations in GCC
You could also contact me on basileatstarynkevitchdotnet with a more focused question... (and mention the URL of your original question)
For scientific papers on optimizations, you'll find lots of them. Start with ACM TOPLAS, ACM TACO, etc. Search for iterative compiler optimization and the like, and define better what resources you want to optimize for ("memory consumption" by itself means next to nothing....).

How can I determine whether my program is using SSE2 (via gcc optimization)?

I have a C++ program which is compiled under gcc (gcc version 4.5.1) with the -O3 flag. I'm thinking about whether it would be worthwhile to make an SSE2 version of this program (or at least of the busiest parts of it). However, I'm worried that the compiler has already done this through automatic vectorization.
Question: How do I determine (a) whether or not my program is using SSE/SSE2 and (b) how much time is spent using SSE/SSE2 (i.e. profiling)?
The easiest way to tell if you are gaining any benefit from compiler vectorization is to run the code with and without the -ftree-vectorize flag and compare the results.
-O3 will automatically enable that option. So you might want to try it under -O2 instead.
To see which loops were vectorized, which were not, and why, you can add the -ftree-vectorizer-verbose option.
The last option, of course, is to look at the assembly. It's very easy to identify vectorized code in assembly.
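As a rough illustration (the kernel is invented), vectorized and scalar code are easy to tell apart:

    /* scale.cpp -- illustrative only.  Compare:
     *   g++ -O2 -fno-tree-vectorize -S scale.cpp
     *   g++ -O3 -S scale.cpp
     * Vectorized output uses packed SSE instructions (mulps, movaps, paddd)
     * on the %xmm registers; note that on x86-64 scalar float code also
     * uses %xmm registers, but with the scalar forms (mulss, addss).
     * On GCC 4.5, -ftree-vectorizer-verbose=2 prints which loops were
     * vectorized and why others were not. */
    void scale(float *a, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] *= 2.0f;
    }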

GCC support for XMM registers badly broken?

Whenever I examine the assembly code produced by GCC for code that uses the __m128i type, I see what looks like a catastrophe: there are tons of redundant instructions that serve no purpose.
And yet, as an assembly programmer I'd rather use asm{}, but GCC prevents me from using XMM registers in asm {}.
Is there some trick to getting GCC to use XMM or do I need to wait for a future release?
I've got 4.3.4.
Are you compiling with optimisation enabled, e.g. -O3 ? If so then gcc usually generates pretty decent SSE code from intrinsics. Most intrinsics map to exactly one SSE instruction. Can you give an example that you consider to be particularly inefficient ?
Also, I'm not sure what you mean about "GCC prevents me from using XMM registers in asm {}" - again, if you provide a specific example then perhaps there's an easy solution.
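As a point of comparison (a made-up snippet, not the asker's code), a pure-intrinsics function like this typically compiles to just a handful of instructions under -O2/-O3, while at -O0 every intermediate __m128i gets spilled to the stack, which may be the "catastrophe" being seen:

    /* Illustrative only.  With optimisation this is essentially
     * movdqu / paddd / movdqu; without it, GCC stores every temporary
     * to memory.  If you do want inline asm, GCC's extended asm accepts
     * XMM operands through the "x" register constraint. */
    #include <emmintrin.h>

    void add4(int *dst, const int *a, const int *b)
    {
        __m128i va = _mm_loadu_si128((const __m128i *)a);
        __m128i vb = _mm_loadu_si128((const __m128i *)b);
        _mm_storeu_si128((__m128i *)dst, _mm_add_epi32(va, vb));
    }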

Compile time comparison between Windows GCC and MSVC compiler

We are working on reducing compile times on Windows and are therefore considering all options. I've tried to find a comparison of compile times using GCC (MinGW or Cygwin) and the MSVC compiler (cl) on Google, without any luck. Of course, making such a comparison would not be too hard, but I'd rather avoid reinventing the wheel if I can.
Does anyone know of such a comparison, or does anyone have some hands-on experience?
Input much appreciated :)
Comparing compilers is not trivial:
It may vary from processor to processor: GCC may optimize better for an i7 and MSVC for a Core 2 Duo, or vice versa. Performance may be affected by caches etc. (to unroll loops or not to unroll loops, that is the question ;) ).
It depends very largely on how the code is written. Certain (mutually equivalent) idioms may be handled better by one compiler than the other.
It depends on how the code is used.
It depends on flags. For example, gcc -O3 is known to often produce slower code than -O2 or -Os.
It depends on what assumptions can be made about the code. Can you allow strict aliasing or not (-fno-strict-aliasing/-fstrict-aliasing in gcc)? Do you need full IEEE 754 semantics, or can you bend the floating-point calculation rules (-ffast-math)? (A small strict-aliasing sketch follows this answer.)
It also depends on particular processor extensions. Do you enable MMX/SSE or not? Do you use intrinsics or not? Does the code need to remain i386-compatible or not?
Which version of gcc? Which version of msvc?
Do you use any of the gcc/msvc extensions?
Do you use microbenchmarking or macrobenchmarking?
And in the end you may find that the difference was smaller than the statistical error ;)
Even if a single application is used, the result may be inconclusive (function A performs better with gcc but B with msvc).
PS. I would say Cygwin will be the slowest, as it adds an additional level of indirection between POSIX and WinAPI.
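A small sketch of the strict-aliasing point above (the function is made up):

    /* Illustrative only.  With -fstrict-aliasing (the default at -O2 and
     * higher) the compiler may assume that *f and *i never overlap,
     * because they have incompatible types, so it can return 1.0f
     * directly; with -fno-strict-aliasing it must reload *f after the
     * int store.  If the pointers really do alias, this code has
     * undefined behaviour in standard C/C++ anyway. */
    float read_after_write(float *f, int *i)
    {
        *f = 1.0f;     /* store through float* */
        *i = 42;       /* store through int*   */
        return *f;     /* may be folded to 1.0f under strict aliasing */
    }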
