Compile time optimization of Math.pow - performance

I've read that it's possible to optimize multiplication by a known constant at compile time by generating code which makes clever use of bit shifting and compiler-generated magic constants.
I'm interested in possibilities for optimizing exponentiation in a similar hacky manner. I know about exponentiation by squaring, so I guess you could aggressively optimize
pow(CONSTANT, n)
by embedding precomputed successive squares of CONSTANT into the executable. I'm not sure whether this is actually a good idea.
But when it comes to
pow(n, CONSTANT)
I can't think of anything. Is there a known way to do this efficiently? Do the minds of StackOverflow have ideas on either problem?

Assuming pow(a,b) is implemented as exp(b * log(a)) (which it probably is), if a is a constant then you can precompute its log. If b is a constant, it only helps if it is also an integer.
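For example, a minimal sketch of the constant-base case (assuming the compiler doesn't already do this for you, and accepting the small accuracy loss of going through exp/log; LOG_BASE is just a name made up for illustration):
    #include <math.h>

    /* pow(2.5, n) with log(2.5) precomputed; only one exp() call remains. */
    #define LOG_BASE 0.9162907318741551   /* log(2.5), baked in at compile time */

    double pow_const_base(double n)
    {
        return exp(n * LOG_BASE);
    }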

Exponentiation by squaring is ideal for the second case: just unroll the loop for the known exponent and hard-code the resulting sequence of squarings and multiplications. But only if CONSTANT is an integer, of course.
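For example (a sketch, not what any particular compiler emits): pow(x, 13) with 13 = 8 + 4 + 1 unrolls into three squarings plus two extra multiplications.
    /* x^13 = x^8 * x^4 * x, squaring loop unrolled by hand for the fixed exponent. */
    double pow13(double x)
    {
        double x2 = x * x;    /* x^2 */
        double x4 = x2 * x2;  /* x^4 */
        double x8 = x4 * x4;  /* x^8 */
        return x8 * x4 * x;   /* x^13 */
    }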

You could use addition-chain exponentiation, which is a smart way to reduce the number of multiplications for a known integer exponent: https://en.wikipedia.org/wiki/Addition-chain_exponentiation
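For example, the chain 1, 2, 3, 6, 12, 15 computes x^15 in five multiplications, one fewer than plain square-and-multiply (a sketch; pow15 is just an illustrative name):
    /* x^15 via the addition chain 1 -> 2 -> 3 -> 6 -> 12 -> 15 (5 multiplications). */
    double pow15(double x)
    {
        double x2  = x * x;     /* x^2  */
        double x3  = x2 * x;    /* x^3  */
        double x6  = x3 * x3;   /* x^6  */
        double x12 = x6 * x6;   /* x^12 */
        return x12 * x3;        /* x^15 */
    }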

Related

Big Theta of Factorial Multiplied by a Coefficient

For a function with a run time of (cn)!, where c is a coefficient >= 0 and c != 1, would the tight bound of the run time be Θ(n!) or Θ((cn)!)? Right now, I believe it would be Θ((cn)!), since the two would differ by a factor of at least n whenever cn != n.
Thanks!
Edit: A more specific example to clarify what I'm asking:
Will (7n)!, (5n/16)! and n! all be Θ(n!)?
You can use Stirling's approximation to show that if c > 1 then (cn)! is asymptotically larger than c^n * n!, which is not O(n!) since the quotient diverges. As a more elementary approach, consider this example for c = 2: (2n)! = (2n)(2n-1)...(n+1) * n! > n! * n!, and (n! * n!)/n! = n! diverges, so (2n)! is NOT O(n!).
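For reference, a sketch of the Stirling step for c > 1, using n! ~ sqrt(2*pi*n) * (n/e)^n:
\[
\frac{(cn)!}{n!} \;\sim\; \sqrt{c}\,\frac{(cn/e)^{cn}}{(n/e)^{n}}
\;=\; \sqrt{c}\; c^{cn}\left(\frac{n}{e}\right)^{(c-1)n}
\;\ge\; c^{n}\left(\frac{n}{e}\right)^{(c-1)n} \;\longrightarrow\; \infty ,
\]
so the quotient grows even faster than c^n, and (cn)! cannot be O(n!).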
Will (7n)!, (5n/16)! and n! all be Θ(n!)?
I think there are two answers to your question.
The shorter one is from the purely theoretical point of view. Of those three, only n! lies in the class Θ(n!). The second, (5n/16)!, lies in O(n!) (note big-O instead of big-Theta), and (7n)! grows faster than n!, so it lies in Θ((7n)!) rather than Θ(n!).
There is also a longer but more practical answer. And to get to it we first need to understand what is the big deal with this whole big-O and big-Theta business in the first place?
The thing is that for many practical tasks there are many algorithms and not all of them are equally or even similarly efficient. So the practical question is: can we somehow capture this difference in performance in an easy to understand and compare way? And this is the problem that big-O/big-Theta are trying to solve. The idea behind this method is that if we look at some algorithm with some complicated real formula for the exact time, there is only 1 term that grows faster than all others and thus dominates the time as the problem gets bigger. So let's compress this big formula to that dominant term. Then we can compare those terms and if they are different, we can easily say which is the better algorithm (7*n^2 is clearly better than 2*n^3).
Another idea is that the term "operation" is usually not that well defined at the level people usually think about algorithms. Which "operation" actually maps to a single CPU instruction and which to a few depends on many factors, such as the particular hardware. Also, the instructions themselves can take different amounts of time to execute. Moreover, sometimes the algorithm's working time is dominated by memory access rather than CPU instructions, and those components are not easily additive. The moral of this story is that if two algorithms differ only in a scalar coefficient, you can't really compare them purely theoretically; you need to compare some implementations in some particular environment. This is why algorithmic complexity measures typically boil down to something like O(n^k) where k is a constant.
There is one more consideration: practicality. If the algorithm is polynomial, there is a huge practical difference between the cases a=3 and a=4 in O(n^a). But if it is something like O(2^(n^a)), then it does not matter much what exactly a is, as long as a > 1. This is because 2^n grows fast enough to make it impractical for almost any realistic n, irrespective of a. So in practical terms it is often a good enough approximation to put all such algorithms into a single "exponential algorithms" bucket and say they are all impractical, even though there is a huge difference between them. This is where some mathematically unconventional notations like 2^O(n) come from.
From this last practical perspective the difference between Θ(n!) and Θ((7n)!) is also negligible: both are totally impractical because both lie beyond even the exponential bucket of 2^O(n) (see Stirling's formula, which shows that n! grows a bit faster than (n/e)^n). So it makes sense to put all such algorithms in another bucket of "factorial complexity" and mark them as impractical as well.

Why is division more expensive than multiplication?

I am not really trying to optimize anything, but I remember hearing this from programmers all the time, so I took it as truth. After all, they are supposed to know this stuff.
But I wonder why is division actually slower than multiplication? Isn't division just a glorified subtraction, and multiplication is a glorified addition? So mathematically I don't see why going one way or the other has computationally very different costs.
Can anyone please clarify the actual reason/cause of this, instead of what I heard from other programmers I asked before, which was simply: "because".
A CPU's ALU (Arithmetic-Logic Unit) executes algorithms, though they are implemented in hardware. Classic multiplication algorithms include the Wallace tree and the Dadda tree. More information is available here. More sophisticated techniques are available in newer processors. Generally, processors strive to parallelize bit-pair operations in order to minimize the clock cycles required. Multiplication algorithms can be parallelized quite effectively (though more transistors are required).
Division algorithms can't be parallelized as efficiently. The most efficient division algorithms are quite complex (the Pentium FDIV bug demonstrates the level of complexity). Generally, they require more clock cycles per bit. If you're after more technical details, here is a nice explanation from Intel. Intel actually patented their division algorithm.
But I wonder why is division actually slower than multiplication? Isn't division just a glorified subtraction, and multiplication is a glorified addition?
The big difference is that in a long multiplication you just need to add up a bunch of numbers after shifting and masking. In a long division you have to test for overflow after each subtraction.
Let's consider a long multiplication of two n-bit binary numbers.
shift (no time)
mask (constant time)
add (naively looks like time proportional to n²)
But if we look closer it turns out we can optimise the addition by using two tricks (there are further optimisations but these are the most important).
We can add the numbers in groups rather than sequentially.
Until the final step we can add three numbers to produce two rather than adding two to produce one. While adding two numbers to produce one takes time proportional to n, adding three numbers to produce two can be done in constant time because we can eliminate the carry chain.
So now our algorithm looks like
shift (no time)
mask (constant time)
add numbers in groups of three to produce two until there are only two left (time proportional to log(n))
perform the final addition (time proportional to n)
In other words we can build a multiplier for two n-bit numbers in time roughly proportional to n (and space roughly proportional to n²). As long as the CPU designer is willing to dedicate the logic, multiplication can be almost as fast as addition.
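A sketch of that three-numbers-to-two step ("carry-save" addition) as C code on 64-bit words; every output bit depends on only three input bits, so there is no carry chain to wait for:
    #include <stdint.h>

    /* Reduce three addends to two with the same total (modulo the word size):
       a + b + c == sum + carry. Each bit is computed independently, so the
       whole step has constant logic depth regardless of the width n. */
    void carry_save_add(uint64_t a, uint64_t b, uint64_t c,
                        uint64_t *sum, uint64_t *carry)
    {
        *sum   = a ^ b ^ c;                           /* per-bit sum without carries  */
        *carry = ((a & b) | (b & c) | (a & c)) << 1;  /* per-bit carries, shifted left */
    }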
In long division we need to know whether each subtraction overflowed before we can decide what inputs to use for the next one, so we can't apply the same parallelising tricks as we can with long multiplication.
There are methods of division that are faster than basic long division, but they are still slower than multiplication.

why is integer factorization a non-polynomial time?

I am just a beginner in computer science. I have learned something about running time, but I can't be sure that what I understood is right. So please help me.
So integer factorization is currently not a polynomial-time problem, but primality testing is. Assume the number to be checked is n. If we run a program just to decide whether every number from 1 to sqrt(n) can divide n, and if the answer is yes, then store the number. I think this program is polynomial time, isn't it?
One possible way that I am wrong would be that a factorization program should find all prime factors, instead of just the first one discovered. So maybe this is the reason why.
However, in public key cryptography, finding a prime factor of a large number is essential to attacking the cryptosystem. Since usually the large number (the public key) is just the product of two primes, finding one prime factor means finding the other. This should be polynomial time. So why is it difficult or impossible to attack?
Casual descriptions of complexity like "polynomial factoring algorithm" generally refer to the complexity with respect to the size of the input, not the interpretation of the input. So when people say "no known polynomial factoring algorithm", they mean there is no known algorithm for factoring N-bit natural numbers that runs in time polynomial with respect to N. Not polynomial with respect to the number itself, which can be up to 2^N.
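To make that concrete, here is the trial-division idea from the question as a C sketch; the loop runs up to sqrt(n) times, which is about 2^(N/2) iterations for an N-bit n, i.e. exponential in the input size N even though it is polynomial in the value n:
    #include <stdint.h>

    /* Returns the smallest divisor >= 2 of n, or n itself if n is prime.
       Up to sqrt(n) iterations: about 2^(N/2) for an N-bit input. */
    uint64_t smallest_factor(uint64_t n)
    {
        for (uint64_t d = 2; d <= n / d; ++d) {   /* d <= n/d avoids overflowing d*d */
            if (n % d == 0)
                return d;
        }
        return n;
    }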
The difficulty of factorization is one of those beautiful mathematical problems that's simple to understand and takes you immediately to the edge of human knowledge. To summarize (today's) knowledge on the subject: we don't know why it's hard, not with any degree of proof, and the best methods we have run in more than polynomial time (but also in significantly less than exponential time). The result that primality testing is even in P is pretty recent; see the linked Wikipedia page.
The best heuristic explanation I know for the difficulty is that primes are randomly distributed. One of the easier-to-understand results is Dirichlet's theorem. This theorem says that every arithmetic progression contains infinitely many primes; in other words, you can think of primes as being dense with respect to progressions, meaning you can't avoid running into them. This is the simplest of a rather large collection of such results; in all of them, primes appear in ways very much analogous to random numbers.
The difficulty of factoring is thus analogous to the impossibility of reversing a one-time pad. In a one-time pad, there's a bit we don't know XORed with another bit we don't know. We get zero information about an individual bit from the result of the XOR. Replace "bit" with "prime" and XOR with multiplication, and you have the factoring problem. It's as if you've multiplied two random numbers together, and you get very little information from the product (instead of zero information).
If we run a program just to decide whether every number from 1 to sqrt(n) can divide n, and if the answer is yes, then store the number.
Even ignoring that the divisibility test will take longer for bigger numbers, this approach takes about sqrt(2) ≈ 1.4 times as long if you just add a single (binary) digit to n, and twice as long if you add two digits.
I think that is essentially the definition of exponential runtime: make n a fixed number of bits longer and the algorithm takes twice as long.
But note that this observation applies only to the algorithm you proposed. It is still unknown if integer factorization is polynomial or not. The cryptographers sure hope that it is not, but there are also alternative algorithms that do not depend on prime factorization being hard (such as elliptic curve cryptography), just in case...

Complexity of arbitrary matrix multiplications

I've got a simple question concerning the implementation of matrix multiplications. I know that there are algorithms for matrices of equal size (n x n) that have a complexity of O(n^2.xxx). But if I have two matrices A and B of different sizes (p x q, q x r), what would be the minimal complexity of the implementation to date? I would guess it is O(pqr) since I would implement a multiplication with 3 nested loops with p, q and r iterations. In particular, does anyone know how the Eigen library implements a multiplication?
A common technique is to pad the (p x q) and (q x r) matrices with zeros so that both become (n x n). Then you can apply Strassen's algorithm.
You're correct about it being O(pqr) for exactly the reasons you stated.
I'm not sure how Eigen implements it, but there are many ways to optimize matrix multiplication algorithms, such as optimizing cache performance by tiling and being aware of whether the language you're using is row major or column major (you want the inner loops to access memory in as small steps as possible to prevent cache misses). Some other optimization techniques are detailed here.
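A minimal sketch of that triple loop in C, with the loops ordered i-k-j so the innermost loop walks B and C with unit stride (row-major storage assumed, as discussed above):
    /* C = A * B, where A is p x q, B is q x r, C is p x r, all row-major.
       Classic O(p*q*r) algorithm; the i-k-j order avoids strided access to B. */
    void matmul(const double *A, const double *B, double *C, int p, int q, int r)
    {
        for (int i = 0; i < p; ++i)
            for (int j = 0; j < r; ++j)
                C[i * r + j] = 0.0;

        for (int i = 0; i < p; ++i)
            for (int k = 0; k < q; ++k) {
                double aik = A[i * q + k];
                for (int j = 0; j < r; ++j)
                    C[i * r + j] += aik * B[k * r + j];
            }
    }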
As Yu-Had Lyu mentioned, you can pad them with zeroes, but unless p, q and r are close, the complexity degenerates (you pay the time to perform and process the padding).
To answer your other question about how Eigen implements it:
The way numerics packages implement matrix multiplication is normally by using the typical O(pqr) algorithm, but heavily optimised in "non-mathematical" ways: blocking for better cache locality, using special processor instructions (SIMD etc.)
Some packages (MATLAB, Octave, ublas) use two libraries called BLAS and LAPACK which provide linear algebra primitives (like matrix multiplication) heavily optimised in this way (sometimes using hardware-specific optimisations).
AFAIK, Eigen simply uses blocking and SIMD instructions.
Few common numeric libraries (Eigen included) use Strassen's algorithm. The reason for this is actually very interesting: while the complexity is better (O(n^(log2 7))), the hidden constants behind the big-O are very large due to all the additions performed; in other words, the algorithm is only useful in practice for very large matrices.
Note: There is an even more asymptotically efficient algorithm than Strassen's: the Coppersmith–Winograd algorithm, with O(n^2.3727), but its constants are so large that it is unlikely it will ever be used in practice. It is in fact believed that there exists an algorithm that runs in O(n^2) (which is the trivial lower bound, since any algorithm needs to at least read the n^2 elements of the matrices).

How to compute exact complexity of an algorithm?

Without resorting to asymptotic notation, is tedious step counting the only way to get the time complexity of an algorithm? And without a step count for each line of code, can we arrive at a big-O representation of any program?
Details: trying to find out the complexity of several numerical analysis algorithms to decide which will be best suited for solving a particular problem.
E.g. from among the Regula-Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (putting in the value of 'n' or whatever arguments there are) which method is less complex.
The only way (not the "easy" way or the hard way, but the only reasonable way) to find the exact complexity of a complicated algorithm is to profile it. A modern implementation of an algorithm has a complex interaction with numerical libraries and with the CPU and its floating point unit. For instance in-cache memory access is much faster than out-of-cache memory access, and on top of that there may be more than one level of cache. Counting steps is really much more suitable to the asymptotic complexity that you say isn't enough for your purpose.
But, if you did want to count steps automatically, there are also ways to do that. You can add a counter increment command (like "bloof++;" in C) to every line of code, and then display the value at the end.
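A sketch of what that instrumentation might look like (the counter name bloof comes from the suggestion above; sum_of_squares is just an illustrative example):
    #include <stdio.h>

    static long long bloof = 0;   /* incremented once per counted "step" */

    long long sum_of_squares(int n)
    {
        long long s = 0;                bloof++;
        for (int i = 1; i <= n; i++) {
            s += (long long)i * i;      bloof++;
        }
        return s;
    }

    int main(void)
    {
        sum_of_squares(1000);
        printf("steps counted: %lld\n", bloof);   /* prints n + 1 = 1001 here */
        return 0;
    }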
You should also know about the more refined time complexity expression, f(n)*(1+o(1)), that is also useful for analytical calculations. For instance n^2+2*n+7 simplifies to n^2*(1+o(1)). If the constant factor is what bothers you about usual asymptotic notation O(f(n)), this refinement is a way to keep track of it and still throw out negligible terms.
The 'easy way' is to simulate it. Try your algorithms with lots of values of n and lots of different data, plot the results, then match the curve on the graph to an equation.
Your results may not be strictly correct, and they're only as valid as your ability to generate good test data, but for most cases this will work.
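A sketch of such an experiment in C, timing a stand-in function work(n) (hypothetical, deliberately O(n^2)) for doubling values of n so the results can be plotted and fitted to a curve:
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the algorithm being measured. */
    static void work(int n)
    {
        volatile long long s = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += (long long)i * j;
    }

    int main(void)
    {
        for (int n = 1000; n <= 16000; n *= 2) {
            clock_t t0 = clock();
            work(n);
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("%d\t%f\n", n, secs);   /* plot n against time, then fit */
        }
        return 0;
    }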
E.g. from among the Regula-Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (putting in the value of 'n' or whatever arguments there are) which method is less complex.
I don't think it's possible to answer this question in general for nonlinear solvers. You could count the exact number of computations per iteration, but you're never going to know in general how many iterations it will take for each solver to converge. There are other complications, like needing the Jacobian for Newton's method, which could make computing the complexity even more difficult.
To sum up, the most efficient nonlinear solver is always dependent on the problem you're solving. If the variety of problems you're solving is very limited, doing a bunch of experiments with different solvers and measuring the number of iterations and CPU time will probably give you more useful information.
