Is there any reliable data for benchmarking a math library? - performance

I'm going to test the performance of some math functions like pow, exp, log, etc. Is there any reliable test data for that?
As those functions are highly optimized in existing modern system libraries such as glibc's libm or OpenJDK's, general random inputs may lead to quick convergence or trigger a fast path.

Call them in a loop to test throughput or latency, depending on whether the input to the next call depends on the output of the previous one or not. Probably with data from a small to medium-sized array of random values for the throughput test.
You want your compiler to produce an asm loop that does minimal work beyond the function call, so use appropriate techniques for whatever language and compiler you choose. (Idiomatic way of performance evaluation?)
You might disassemble or single-step through their execution to look for data-dependent branching to figure out which ranges of inputs might be faster or slower. (Or for open-source math libraries like glibc, the commented source could be good to look at.)
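For illustration, here is a minimal C sketch of the two measurement modes (the functions under test, the array size and the repeat count are placeholders; the result is kept live in a sink so the optimizer cannot drop the calls):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1 << 20)   /* random-input array size (placeholder) */
#define REPS 100         /* passes over the array (placeholder)   */

int main(void) {
    static double in[N];
    double sink = 0.0;
    for (int i = 0; i < N; ++i)
        in[i] = (double)rand() / RAND_MAX * 10.0;  /* pick a range that exercises
                                                      the code paths you care about */

    /* Throughput: every call is independent of the previous one. */
    clock_t t0 = clock();
    for (int r = 0; r < REPS; ++r)
        for (int i = 0; i < N; ++i)
            sink += exp(in[i]);
    clock_t t1 = clock();

    /* Latency: each result feeds the next call (a dependent chain).
       Note it converges towards a narrow input range, which is exactly
       the fast-path concern from the question. */
    double x = 1.0;
    clock_t t2 = clock();
    for (long i = 0; i < (long)N * REPS; ++i)
        x = log(x + 2.0);
    clock_t t3 = clock();

    printf("throughput: %.3f s  latency: %.3f s  (sink=%g, x=%g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, sink, x);
    return 0;
}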

Related

Do compilers take the "status quo" when optimizations produced worse results?

To my knowledge, when using optimizations there is a risk of hitting the "maybe it will be worse" case (i.e. the performance will be degraded, or the code size will be higher, or both). However, are compilers able to detect such cases and return to the "status quo" (i.e. fall back to the original non-optimized code) when an optimization produced worse results? Can someone give (if possible) particular examples of what compilers (for example, gcc, Clang (LLVM), etc.) do in this case?
In JIT compilers there is a mechanism called deoptimization. Normally the compiler will optimize heavily based on some assumption, but during execution that assumption may fail. For example, the compiler may assume the input of a function is always an integer and produce highly efficient code for integer manipulation, but if the input is suddenly an array or a string (and such things happen in dynamic languages), the code has to be reverted. See V8's TurboFan speculative optimization for an example.
For non-JIT compilers there is no way to deoptimize at runtime, but the compiler may create multiple execution paths. Your question is also not fully logical: how would the compiler know that it created suboptimal code? It can only use the same cost model it used to perform the optimization in the first place. That's probably why you were downvoted.
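Real JIT deoptimization rewrites live stack frames and has no direct C equivalent, but the guard-plus-fallback shape of speculative code can be sketched in C (the types and functions below are made up for illustration):

#include <string.h>

typedef enum { TAG_INT, TAG_STRING } tag_t;
typedef struct { tag_t tag; union { int i; const char *s; } u; } value_t;

/* generic path: handles every type the language allows */
static int add_generic(value_t a, value_t b) {
    int x = (a.tag == TAG_INT) ? a.u.i : (int)strlen(a.u.s);
    int y = (b.tag == TAG_INT) ? b.u.i : (int)strlen(b.u.s);
    return x + y;
}

/* speculative path: compiled under the assumption "both are integers",
   with a guard; when the guard fails, fall back to the generic code
   (the moral equivalent of deoptimizing) */
static int add_speculated(value_t a, value_t b) {
    if (a.tag == TAG_INT && b.tag == TAG_INT)
        return a.u.i + b.u.i;   /* tight integer-only code */
    return add_generic(a, b);
}

int main(void) {
    value_t one = { TAG_INT, { .i = 1 } };
    value_t two = { TAG_INT, { .i = 2 } };
    value_t str = { TAG_STRING, { .s = "abc" } };
    return add_speculated(one, two) + add_speculated(one, str) == 7 ? 0 : 1;
}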

Swap two variables: which way is faster?

Let's say we have two integers a and b. Which way is faster for swapping their values?
c = a;
a = b;
b = c;
or
a = a + b;
b = a - b;
a = a - b;
or bitwise XOR
a = a ^ b;
b = a ^ b;
a = a ^ b;
I'll test the performance differences when I'm able to, but I'd like to know now. Is the bitwise version the fastest?
Firstly, you cannot quantify the speed of an algorithm independently of the programming language, the compiler, and the platform on which it is run. An algorithm is a mathematical abstraction.
Having said that:
for a typical programming language,
and a typical compiler, and
a typical execution platform,
the first version will typically be faster, because it will typically compile to fewer native instructions that take fewer clock cycles to execute. The first version only requires loads and stores. The other two versions have (at least) the same number of loads and stores, plus some additional arithmetic or bit-manipulation instructions.
However, even that is not cut-and-dried.
The 2nd and 3rd examples perform the swap without using a temporary variable. This is something you might do if using an extra temporary variable were expensive. That might be the case on a machine which doesn't provide enough general-purpose registers and where the relative cost of loading/saving to memory is large. In some circumstances, the native-code equivalents could be optimal.
However ... and this is the real point ... the best strategy is to leave this kind of decision to the compiler. Unless you are prepared to put a huge amount of effort into micro-optimizing, the compiler is likely to be able to do a better job than you can. Indeed, writing code in "cunning ways" is liable to make it harder for the compiler to optimize. (In the 3rd case, for example, the compiler would need to figure out that the sequence is actually swapping 2 variables before it can substitute the optimal instruction sequence. Chances are that the optimizer won't be able to do that.)
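Rather than guessing, you can compare the generated code directly. A minimal C sketch (hypothetical function names; compile with e.g. gcc -O2 -S and read the assembly):

void swap_temp(int *a, int *b) {
    int c = *a;     /* plain temporary-variable swap */
    *a = *b;
    *b = c;
}

void swap_xor(int *a, int *b) {
    *a ^= *b;       /* XOR trick: no temporary, but it silently */
    *b ^= *a;       /* zeroes the value if a and b point to the */
    *a ^= *b;       /* same variable                            */
}

Besides any speed difference, note that the XOR and add/subtract versions break (they produce zero) when the two operands are actually the same variable, which is one more reason to write the obvious version and let the compiler do the rest.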

If or function pointers in Fortran

As is so common with Fortran, I'm writing a massively parallel scientific code. At the beginning of my code I read my configuration file, which tells me which type of solver I want to use. Now that means that in a subroutine (during the main run) I have
if (solver.eq.1) then
  call solver1()
elseif (solver.eq.2) then
  call solver2()
else
  call solver3()
endif
Edit to avoid some confusion: This if is inside my time integration loop and I have one that is inside 3 nested loops.
Now my question is: wouldn't it be more efficient to use function pointers instead, since the solver variable will not change during execution, except in the initialisation procedure?
Obviously function pointers are F2003. That shouldn't be a problem as long as I use gfortran 4.6. But I'm mainly using a BlueGene/P; there is an F2003 compiler there, so I suppose it's going to work as well, although I couldn't find any conclusive evidence on the web.
Knowing nothing about Fortran, this is my answer: the main problem with branching is that a CPU potentially cannot speculatively execute code across a branch. To mitigate this problem, branch prediction was introduced (and it is very sophisticated in modern CPUs).
Indirect calls through a function pointer can be a problem for the prediction unit of the CPU. If it can't predict where the call will actually go, this will stall the pipeline.
I am quite sure the CPU will correctly predict your branch as always taken or always not taken, because that is a trivial case for the predictor.
Maybe the CPU can speculate across the indirect call, maybe it can't. This is why you need to test which is better.
If it cannot, you will certainly notice in your benchmark.
In addition, maybe you can hoist the if test out of your inner loop so it is evaluated far less often. This would make the actual performance of the branch irrelevant.
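Not Fortran, but a minimal C sketch of what such hoisting means (solver, nsteps and the solver routines are placeholders for the poster's code):

static void solver1(void) { /* stands in for the real solvers */ }
static void solver2(void) { }
static void solver3(void) { }

int main(void) {
    const int solver = 2;        /* read once from the configuration file */
    const int nsteps = 1000000;

    /* before: the test is re-evaluated on every iteration */
    for (int step = 0; step < nsteps; ++step) {
        if (solver == 1)      solver1();
        else if (solver == 2) solver2();
        else                  solver3();
    }

    /* after: test once, then run a specialised loop without the branch */
    if (solver == 1)      for (int step = 0; step < nsteps; ++step) solver1();
    else if (solver == 2) for (int step = 0; step < nsteps; ++step) solver2();
    else                  for (int step = 0; step < nsteps; ++step) solver3();

    return 0;
}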
If you only plan to use the function pointers once, at initialisation, and you are running code on a BlueGene, isn't your concern for efficiency misdirected? Generally, any initialisation which works is OK; if it takes 1 second instead of 1 millisecond it's probably going to have zero impact on total execution time.
Code initialisation routines for clarity, ease of modification, that sort of thing.
EDIT
My guess is that using function pointers rather than your current code will have no impact on execution speed. But it's just a guess (an educated one, perhaps) and I'll be very interested in any data you gather on this question.
If your solver routines take a non-trivial time to run, then the trivial runtime of the IF statements is likely to be immaterial. If the solver routines have a runtime comparable to the IF statement, then the total runtime is very short, so why do you care? This seems an optimization unlikely to pay off.
The first rule of runtime optimization is to profile your code to see which portions are consuming the runtime. Otherwise you are likely to optimize portions that are unimportant, which will accomplish nothing.
For what it's worth, someone else recently had a very similar concern: Fortran Subroutine Pointers for Mismatching Array Dimensions
After a brief search I couldn't find the answer to the question, so I ran a little benchmark myself (see this link for the Makefile & dependencies). The benchmark consists of:
Draw a random number to select method a, b, or c, all of which perform a simple addition on their single integer argument
Call the chosen method 100 million times, using either a procedure pointer or if-statements
Repeat the above 5 times
The result with gfortran 4.8.5 on an E5-2630 v3 CPU @ 2.40GHz is:
Time per call (proc. pointer): 1.89 ns
Time per call (if statement): 1.89 ns
In other words, there is not much of a performance difference!
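The benchmark above is Fortran (see the linked Makefile); a rough C analogue of the same comparison, with made-up method names and a volatile accumulator to keep the loops from being folded away, would look like this:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int method_a(int x) { return x + 1; }   /* three trivial "solvers" */
static int method_b(int x) { return x + 2; }
static int method_c(int x) { return x + 3; }

int main(void) {
    const long calls = 100000000L;             /* 100 million calls */
    const int choice = rand() % 3;             /* chosen once, like the config file */
    int (*fp)(int) = (choice == 0) ? method_a
                   : (choice == 1) ? method_b : method_c;
    volatile int acc = 0;

    clock_t t0 = clock();
    for (long i = 0; i < calls; ++i)           /* dispatch through the pointer */
        acc = fp(acc);
    clock_t t1 = clock();

    for (long i = 0; i < calls; ++i) {         /* dispatch through if statements */
        if (choice == 0)      acc = method_a(acc);
        else if (choice == 1) acc = method_b(acc);
        else                  acc = method_c(acc);
    }
    clock_t t2 = clock();

    printf("pointer: %.2f ns/call  if: %.2f ns/call  (acc=%d)\n",
           1e9 * (double)(t1 - t0) / CLOCKS_PER_SEC / (double)calls,
           1e9 * (double)(t2 - t1) / CLOCKS_PER_SEC / (double)calls, acc);
    return 0;
}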

Does profile-guided optimization done by compiler notably hurt cases not covered with profiling dataset?

This question is not specific to C++; AFAIK certain runtimes like the Java RE can do profile-guided optimization on the fly, and I'm interested in that too.
MSDN describes PGO like this:
I instrument my program and run it under a profiler, then
the compiler uses the data gathered by the profiler to automatically reorganize branching and loops in such a way that branch misprediction is reduced and the most often run code is placed compactly to improve its locality
Now obviously the profiling result will depend on the dataset used.
With normal manual profiling and optimization I'd find some bottlenecks, improve those bottlenecks, and likely leave all the other code untouched. PGO seems to improve often-run code at the expense of making rarely-run code slower.
Now what if that slowed-down code is often run on another dataset that the program will see in the real world? Will the program's performance degrade compared to a program compiled without PGO, and how bad is the degradation likely to be? In other words, does PGO really improve my code's performance for the profiling dataset and possibly worsen it for other datasets? Are there any real examples with real data?
Disclaimer: I have not done more with PGO than read up on it and try it once with a sample project for fun. A lot of the following is based on my experience with "non-PGO" optimizations and educated guesses. TL;DR below.
This page lists the optimizations done by PGO. Let's look at them one by one (grouped by impact):
Inlining – For example, if there exists a function A that frequently calls function B, and function B is relatively small, then profile-guided optimizations will inline function B in function A.
Register Allocation – Optimizing with profile data results in better register allocation.
Virtual Call Speculation – If a virtual call, or other call through a function pointer, frequently targets a certain function, a profile-guided optimization can insert a conditionally-executed direct call to the frequently-targeted function, and the direct call can be inlined.
These apparently improve the prediction of whether or not certain optimizations pay off. There is no direct tradeoff for non-profiled code paths. (The virtual-call case is sketched below.)
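For the virtual-call case, a hand-written C sketch of what the speculation turns an indirect call into (hot_target, cold_target and handler_t are made-up names):

#include <stdio.h>

typedef void (*handler_t)(int);

static void hot_target(int x)  { printf("hot %d\n", x); }   /* the callee the profile saw most */
static void cold_target(int x) { printf("cold %d\n", x); }

/* original code: always an indirect call through the pointer */
static void generic_dispatch(handler_t fp, int x) { fp(x); }

/* after speculation: a guarded direct call (now inlinable),
   with the indirect call kept as the rare fallback */
static void speculative_dispatch(handler_t fp, int x) {
    if (fp == hot_target)
        hot_target(x);
    else
        fp(x);
}

int main(void) {
    generic_dispatch(hot_target, 1);
    speculative_dispatch(hot_target, 2);   /* fast, predicted, direct path */
    speculative_dispatch(cold_target, 3);  /* guard fails: indirect fallback */
    return 0;
}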
Basic Block Optimization – Basic block optimization allows commonly executed basic blocks that temporally execute within a given frame to be placed in the same set of pages (locality). This minimizes the number of pages used, thus minimizing memory overhead.
Function Layout – Based on the call graph and profiled caller/callee behavior, functions that tend to be along the same execution path are placed in the same section.
Dead Code Separation – Code that is not called during profiling is moved to a special section that is appended to the end of the set of sections. This effectively keeps this section out of the often-used pages.
EH Code Separation – The EH code, being exceptionally executed, can often be moved to a separate section when profile-guided optimizations can determine that the exceptions occur only on exceptional conditions.
All of this may reduce the locality of non-profiled code paths. In my experience, the impact would be noticeable or severe if such a code path has a tight loop that exceeds the L1 code cache (and maybe even thrashes L2). That sounds exactly like a path that should have been included in a PGO profile :)
Dead Code separation can have a huge impact - both ways - because it can reduce disk access.
If you rely on exceptions being fast, you are doing it wrong.
Size/Speed Optimization – Functions where the program spends a lot of time can be optimized for speed.
The rule of thumb nowadays is to "optimize for size by default, and only optimize for speed where needed (and verify it helps)". The reason is again the code cache: in most cases, smaller code will also be faster code. So this kind of optimization automates what you should do manually. Compared to a global speed optimization, it would slow down non-profiled code paths only in very atypical cases ("weird code" or a target machine with unusual cache behavior).
Conditional Branch Optimization – With the value probes, profile-guided optimizations can find if a given value in a switch statement is used more often than other values. This value can then be pulled out of the switch statement. The same can be done with if/else instructions where the optimizer can order the if/else so that either the if or else block is placed first depending on which block is more frequently true.
I would file that under "improved prediction", too, unless you feed PGO the wrong information.
The typical case where this can pay off a lot is run-time parameter/range validation and similar paths that should never be taken in a normal execution.
The breaking case would be:
if (x > 0) DoThis(); else DoThat();
in a relevant tight loop, while profiling only the x > 0 case.
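For illustration, the "pull a hot value out of the switch" transformation written out by hand in C (the opcodes and handlers are made up):

enum op { OP_ADD, OP_SUB, OP_OTHER };

static int do_add(int x)   { return x + 1; }
static int do_sub(int x)   { return x - 1; }
static int do_other(int x) { return x; }

/* as written in the source */
static int dispatch(enum op opcode, int x) {
    switch (opcode) {
        case OP_ADD: return do_add(x);
        case OP_SUB: return do_sub(x);
        default:     return do_other(x);
    }
}

/* what the optimizer effectively emits when the profile says OP_ADD
   dominates: test the hot value first, keep the rest behind it */
static int dispatch_pgo(enum op opcode, int x) {
    if (opcode == OP_ADD)
        return do_add(x);
    switch (opcode) {
        case OP_SUB: return do_sub(x);
        default:     return do_other(x);
    }
}

int main(void) {
    return dispatch(OP_ADD, 1) == dispatch_pgo(OP_ADD, 1) ? 0 : 1;
}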
Memory Intrinsics – The expansion of intrinsics can be decided better if it can be determined if an intrinsic is called frequently. An intrinsic can also be optimized based on the block size of moves or copies.
Again, mostly better information, with a small possibility of penalizing untested data.
Example: this is all an "educated guess", but I think it's quite illustrative for the entire topic.
Assume you have a memmove that is always called on well-aligned, non-overlapping buffers with a length of 16 bytes.
A possible optimization is to verify these conditions and use inlined MOV instructions for this case, calling the general memmove (which handles alignment, overlap and odd lengths) only when the conditions are not met.
The benefits can be significant in a tight loop that copies structs around, as you improve locality and reduce the expected-path instruction count, likely with more opportunities for pairing/reordering.
The penalty is comparatively small, though: in the general case without PGO, you would either always call the full memmove or inline the full memmove implementation. The optimization adds a few instructions (including a conditional jump) to something rather complex; I'd assume a 10% overhead at most. In most cases, those 10% will be below the noise due to cache access.
However, there is a very slight chance of a significant impact if the unexpected branch is taken frequently and the additional instructions for the expected case, together with the instructions for the default case, push a tight loop out of the L1 code cache.
Note that you are already at the limits of what the compiler could do for you. The additional instructions can be expected to be a few bytes, compared to a few K of code cache. A static optimizer could meet the same fate, depending on how well it can hoist invariants - and how much you let it.
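A hand-written C sketch of the specialization described above (this is not what any particular compiler emits; copy16_aligned and move_bytes are made-up names):

#include <stdint.h>
#include <string.h>

/* fast path assumed by the profile: exactly 16 bytes, aligned, non-overlapping */
static void copy16_aligned(void *dst, const void *src) {
    uint64_t a, b;                        /* becomes a couple of plain MOVs */
    memcpy(&a, src, 8);
    memcpy(&b, (const char *)src + 8, 8);
    memcpy(dst, &a, 8);
    memcpy((char *)dst + 8, &b, 8);
}

static void move_bytes(void *dst, const void *src, size_t n) {
    uintptr_t d = (uintptr_t)dst, s = (uintptr_t)src;
    /* guard derived from the profile data */
    if (n == 16 && d % 16 == 0 && s % 16 == 0 &&
        (s + 16 <= d || d + 16 <= s)) {
        copy16_aligned(dst, src);         /* inlined fast path */
    } else {
        memmove(dst, src, n);             /* general fallback: overlap, odd sizes */
    }
}

int main(void) {
    char a[16] = "0123456789abcde", b[16];
    move_bytes(b, a, sizeof b);
    return b[0] == '0' ? 0 : 1;
}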
Conclusion:
Many of the optimizations are neutral.
Some optimizations can have a slight negative impact on non-profiled code paths
The impact is usually much smaller than the possible gains
Very rarely, a small impact can be amplified by other contributing pathological factors
A few optimizations (namely, the layout of code sections) can have a large impact, but again the possible gains significantly outweigh that
My gut feeling would further claim that
A static optimizer, on the whole, would be at least equally likely to create a pathological case
It would be pretty hard to actually destroy performance even with bad PGO input.
At that level, I would be much more afraid of PGO implementation bugs/shortcomings than of failed PGO optimizations.
PGO can most certainly affect the run time of the code that is run less frequently. After all, you are modifying the locality of some functions/blocks, and that will make the blocks that are now together more cache friendly.
What I have seen is that teams identify their high-priority scenarios. Then they run those to train the optimization profiler and measure the improvement. You don't want to run all the scenarios under PGO, because if you do you might as well not run any.
As with everything related to performance, you need to measure before you apply it. Measure your most common scenarios to see if they improved at all by using PGO training, and also measure the less common scenarios to see if they regressed at all.

Does more human-logical source code tend to produce more optimized compiled code?

I'm working on a large performance-critical project that is very branch heavy. In the process of designing algorithms for this product, my employer often reminds me to write code that is more "human logical", or written in a manner that more closely aligns with the way we logically think.
While this makes sense to me from a few different perspectives (e.g. ease of understanding/remembering, code maintenance, etc.), I'm also wondering whether this approach could also ever be expected to lead to a more optimized compiled output.
Could this be the case due to the fact that compilers are written by humans, and optimizers are often designed to recognize familiar code blocks?
I would love to hear some thoughts on why this could/not be the case.
Consider two different kinds of code, library code and application code.
Library code (like a string class library) is likely to own the program counter a lot of the time, like this:
while(some test){
massage some data, while seldom calling sub-functions
}
That kind of code will benefit from compiler optimization.
(So to answer your question, people write benchmark functions like this, and the compiler-writers use those as test cases.)
On the other hand, application code tends to look like this:
if (some test){
do a bunch of things, including many function calls
} else if (some other test){
do a bunch of things, including many function calls
} else {
do a bunch of things, including many function calls
}
In this case, the time you save by branch prediction or cycle-shaving might be 1 time unit, say, while the do a bunch of things... might spend from 10^2 to 10^8 time units, with or without I/O.
So the benefit of compiler optimization of this code tends to be completely lost in the noise.
That's not to say it can't be optimized.
It's just that the compiler can't do it - it's your job.
If you want to make the latter kind of code run fast, the best way is to find out which lines of code are on the call stack a high percent of time, and if possible, finding a way to avoid doing them.
(Here's an example of a 43x speedup.)
What is "human logical" probably varies from human to human.
For instance, if I am a newbie performing tasks according to written instructions, I will (usually), over time, learn some tasks by heart, whereas for others I will keep returning to the instructions, simply because those tasks are not performed often enough, are too boring, or both. Others in the same situation may or may not behave similarly, and it is not certain that the tasks they learn will be the ones I learn.
For programming it works similarly. Some may construct a loop in one manner and perform a test inside it for the sake of readability, while I might do the test outside for performance reasons. Which is more wrong and which is more right?
There is a widespread belief that compilers will optimize anything. This is true, but as I've written (drastically) in another post, GIGO (Garbage In = Garbage Out) applies. Compilers don't operate in a vacuum: given a set of rules, they'll perform safe optimizations on source code to the extent of their constructors' imagination and competence in code optimization. Bloated source code will become optimized bloated machine code. In the same manner, lean-and-mean source code will become optimized lean-and-mean machine code. In critical places it is possible to feed the compiler source code that it "feels" (yes, they do have personalities) absolutely comfortable optimizing, and the resulting machine code will fly.
We've all experienced poorly performing software. If we're lucky we've experienced software that performs incredibly well. One developer can learn to write a piece of code that performs well in the same amount of time that another writes code that performs poorly.

Resources