Why is naive multiplication n^2 time? - big-o

I've read that operations such as addition and subtraction are linear time, and that "grade-school" long multiplication is O(n^2) time. Why is this true?
Isn't addition O(floor(log n)), where n is the smaller operand? The same argument goes for subtraction, and for multiplication: if we write a program that does long multiplication instead of adding integers together repeatedly, shouldn't the complexity be floor(log a) * floor(log b), where a and b are the operands?

The answer depends on what "n" is. When they say that addition is O(n) and multiplication (with the naïve algorithm) is O(n^2), n is the length of the number, either in bits or some other unit. This definition is used because arbitrary-precision arithmetic is implemented as operations on lists of "digits" (not necessarily base 10).
If n is the number being added or multiplied, the complexities would be log n and (log n)^2 for positive n, as long as the numbers are stored in log n space.

The naive approach to multiplication of (for example) 273 x 12 is expanded out (using the distributive rule) as (200 + 70 + 3) x (10 + 2) or:
200 x 10 + 200 x 2
+ 70 x 10 + 70 x 2
+ 3 x 10 + 3 x 2
The idea of this simplification is to reduce the multiplications to something that can be done easily. In primary-school arithmetic, that means working with single digits, assuming you know the times tables from zero to nine. For bignum libraries where each "digit" may be a value from 0 to 9999 (for ease of decimal printing), the same rule applies: numbers less than 10,000 can be multiplied in roughly constant time.
Hence, if n is the number of digits, the complexity is indeed O(n^2), since the number of "constant" operations tends to rise with the product of the "digit" counts.
This is true even if your definition of digit varies slightly (such as being a value from 0 to 9999 or even being one of the binary digits 0 or 1).
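To make the counting concrete, here is a minimal Python sketch of grade-school multiplication on digit lists (my own illustration, not from the answer above); the nested loops are exactly the n*m single-digit products the argument counts.

def long_multiply(a_digits, b_digits):
    # digits are least significant first, e.g. 273 -> [3, 7, 2]
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):           # n digits ...
        for j, b in enumerate(b_digits):       # ... times m digits = n*m "constant" operations
            result[i + j] += a * b
    carry = 0
    for k in range(len(result)):               # propagate carries back to single digits
        carry += result[k]
        result[k] = carry % 10
        carry //= 10
    return result

print(long_multiply([3, 7, 2], [2, 1]))        # [6, 7, 2, 3, 0], i.e. 273 * 12 = 3276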

Related

What is the runtime complexity of isPrime(n) if you iterate up to sqrt(n)

What would be the Big O of the following method?
import math

def is_prime(num):
    i = 2
    while i <= math.isqrt(num):   # the loop runs about sqrt(num) times
        if num % i == 0:
            return False
        i += 1
    return True
My thought is O(sqrt(n)), which isn't a typical answer.
Below are a couple of tables to clarify my reasoning:
In this table, every time N is quadrupled, the number of iterations only doubles.
N       iterations = sqrt(N)
4       2
16      4
64      8
256     16
1024    32
To contrast this behavior with a linear function, if we looped while i <= num/2 instead, the table would be:
N       iterations = N/2
4       2
16      8
64      32
256     128
1024    512
Now every time N is quadrupled, the number of iterations also quadruples.
I.e. the runtime varies directly with N.
You are right - checking up to √n gives you a much better complexity. And if you define num = n and assume the modulo operator is constant time, then your time complexity is correct.
The reason this is not a 'typical answer', as you state, is that typically we measure time complexity on the size of the input. To encode num in binary, we need log2(num) bits (and similarly with any other nontrivial base) so the actual input size is n = log2(num).
If we define n this way, then you will find that num = O(2^n), so your overall time complexity becomes O(√(2^n)), i.e. O(2^(0.5n)).
However, this is not more correct than your expression, it is just a more common (and more useful) way to express it, and it might clarify why your answer doesn't seem typical when you search.
Ultimately what matters is that you define your n. If you don't, it is probably assumed by the reader that it's the logarithm of num and they might think you falsely claim your algorithm is the fastest in the world.
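A quick sanity check of both viewpoints (a throwaway Python sketch, not part of the answer): for the worst case, the loop runs about sqrt(num) times, and sqrt(num) grows like 2^(n/2) when n is the bit length.

import math

for num in [4, 16, 64, 256, 1024]:
    n = num.bit_length()
    print(num, math.isqrt(num), 2 ** (n // 2))   # the last two columns agree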

Is this number a power of two?

I have a number (in base 10) represented as a string with up to 10^6 digits. I want to check whether this number is a power of two. One thing I can think of is a binary search on the exponent, using FFT-based multiplication and fast exponentiation, but it is quite long and complex to code. Let n denote the length of the input (i.e., the number of decimal digits). What's the most efficient algorithm for solving this problem, as a function of n?
There are either two or three powers of 2 for any given size of a decimal number, and it is easy to guess what they are, since the size of the decimal number is a good approximation of its base 10 logarithm, and you can compute the base 2 logarithm by just multiplying by an appropriate constant (log2(10)). So a binary search would be inefficient and unnecessary.
Once you have a trial exponent, which will be on the order of three million, you can use the squaring exponentiation algorithm with about 22 bignum decimal multiplications. (And up to 21 doublings, but those are relatively easy.)
Depending on how often you do this check, you might want to invest in fast bignum code. But if it is infrequent, simple multiplication should be ok.
If you don't expect the numbers to be powers of 2, you could first do a quick computation mod 10^9 to see if the last 9 digits match. That will eliminate all but a tiny percentage of random numbers. Or, for an even faster but slightly weaker filter, using 64-bit arithmetic check that the last 20 digits are divisible by 2^20 and not by 10.
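A rough Python sketch of that strategy (the function name is mine, and Python's built-in bignums stand in for dedicated fast bignum code): estimate the exponent from the digit count, then compare 2^k with the input for the few candidate exponents.

import math

def is_power_of_two_decimal(s):
    # s: the number as a decimal string
    # (for very long inputs you may have to raise Python's int/str conversion limit)
    k_estimate = len(s) * math.log2(10)                  # the true exponent is close to this
    n = int(s)
    for k in range(max(0, int(k_estimate) - 5), int(k_estimate) + 2):
        if (1 << k) == n:                                # Python builds 2^k with a cheap shift
            return True
    return False

print(is_power_of_two_decimal(str(1 << 3000)))           # True
print(is_power_of_two_decimal("123456789012345678901"))  # False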
Here is an easy probabilistic solution.
Say your number is n, and we want to find k such that n = 2^k. Obviously, k = log2(n) = log10(n) * log2(10). We can estimate log10(n) ~ len(n) and find k' = len(n) * log2(10) with a small error (say, |k - k'| <= 5; I didn't check, but this should be enough). You'll probably need this step in any solution that comes to mind; it was mentioned in other answers as well.
Now let's check whether n = 2^k for this known k. Select a random prime P from 2 to k^2 and compare n mod P with 2^k mod P. If the remainders are not equal, then n is definitely not 2^k. But what if they are equal? I claim that the false-positive rate is bounded by 2 log(k) / k.
Why is that so? If n = 2^k (mod P), then P divides D = n - 2^k. The number D has length about k bits (because n and 2^k have similar magnitude, by the first part) and thus cannot have more than k distinct prime divisors. There are around k^2 / log(k^2) primes less than k^2, so the probability that you've picked a prime divisor of D at random is less than k / (k^2 / log(k^2)) = 2 log(k) / k.
In practice, primes up to 10^9 (or even up to log(n)) should suffice, but you have to do a bit deeper analysis to prove the probability.
This solution does not require any long arithmetics at all, all calculations could be made in 64-bit integers.
P.S. In order to select a random prime from 1 to T, you may use the following logic: select a random number from 1 to T and increment it by one until it is prime. In this case the distribution over primes is not uniform and the analysis above is not completely correct, but it can be adapted to this kind of randomness as well.
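Here's a sketch of that test in Python (helper names are mine; the prime selection is the crude increment-until-prime from the P.S.):

import math
import random

def _is_prime(p):
    # naive trial division, good enough for p below ~10^9 in a sketch
    if p < 2:
        return False
    return all(p % q for q in range(2, math.isqrt(p) + 1))

def _random_prime(limit):
    p = random.randrange(2, limit)
    while not _is_prime(p):
        p += 1
    return p

def probably_power_of_two(s, trials=5):
    # s: decimal string. A False answer is always correct;
    # a True answer may be wrong with small probability, as analysed above.
    k_estimate = round(len(s) * math.log2(10))
    survivors = set(range(max(k_estimate - 5, 0), k_estimate + 6))   # candidate exponents
    for _ in range(trials):
        p = _random_prime(10**9)
        n_mod_p = 0
        for ch in s:                                   # n mod p, digit by digit, no long arithmetic
            n_mod_p = (n_mod_p * 10 + ord(ch) - ord('0')) % p
        survivors = {k for k in survivors if pow(2, k, p) == n_mod_p}
        if not survivors:
            return False
    return True

print(probably_power_of_two(str(2**1000)))   # True
print(probably_power_of_two("1000000"))      # almost surely False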
I am not sure how easy this is to apply, but I would do it the following way:
1) Look at the number in binary. If the number is a power of two, it looks like
1000000....
with exactly one 1 and the rest zeros. Checking such a number would be easy. The question now is how the number is stored; for example, it could have leading zeroes that complicate the search for the 1:
...000010000....
If there are only a small number of leading zeroes, just search from left to right. If the number of zeroes is unknown, we will have to...
2) Binary-search for the 1:
2a) Cut the number in the middle.
2b) If both halves or neither half is zero (hopefully you can check whether a block is zero in reasonable time), stop and return false (false = not a power of 2).
Otherwise continue with the non-zero half.
Stop when the non-zero part equals 1 and return true.
Estimation: if the number has n decimal digits, it has about n*log2(10) ≈ 3.3*n binary digits. Each split checks both halves for zero, which is linear in their length, and the lengths halve at every step, so the total work is about n + n/2 + n/4 + ... = O(n).
Assumptions:
1) You can access the binary representation of the number.
2) You can check whether a block of the number is zero in reasonable time.
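A toy version of that binary search over a list of bits (my own sketch under the answer's assumptions; most significant bit first, leading zeros allowed):

def is_single_one(bits):
    # returns True iff exactly one bit is 1, i.e. the value is a power of two
    while len(bits) > 1:
        mid = len(bits) // 2
        left_zero = not any(bits[:mid])     # linear scans, but the pieces shrink geometrically
        right_zero = not any(bits[mid:])
        if left_zero == right_zero:         # both halves zero, or both non-zero: not a power of two
            return False
        bits = bits[mid:] if left_zero else bits[:mid]
    return bits == [1]

print(is_single_one([0, 0, 1, 0, 0, 0]))    # True
print(is_single_one([0, 1, 0, 1]))          # False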

Algorithm on exponential with irrational base

I know there is an O(log n) algorithm for calculating a^n, where a is an integer and n is a huge integer (the result probably needs to be reduced modulo a prime MOD).
I am wondering whether there is still an O(log n) algorithm to calculate
(a+sqrt(b))^n + (a-sqrt(b))^n (mod MOD)
The irrational part sqrt(b) doesn't look easy to handle in the exponentiation. All I can think of is calculating the (a+sqrt(b))^n and (a-sqrt(b))^n parts separately, adding them together, and then taking the modulus, but if n is huge the intermediate values overflow easily. Any ideas?
You can do that by computing (in Z_M[x] / ⟨x² - b⟩)
(a+x)^n+(a-x)^n mod (M, x^2-b)
where again you can use halving-and-squaring for the powers; the intermediate results are now linear polynomials over the modular integers. Actually, you only need one of the two powers: the result is twice its constant coefficient.
Alternatively, these power combinations are the solution of the linear recursion of order 2
u[n+2] - 2*a*u[n+1] + (a^2-b)*u[n] = 0
where
u[0]=2 and u[1]=2*a
so that you can use fast matrix exponentiation of the system matrix of this recursion, again obtaining an O(log(n)) algorithm (disregarding bitsize).
Example: As per the comment, take a=3, b=8, n=2 (and integers mod M=10^9+7, example is not large enough for that to matter)
In the first variant, compute u[n]=(a+x)^n mod (M, x^2-b), so
u[0]=1
u[1]=3+x
u[2]=(3+x)^2 mod (x^2-8) = 9+6x+x^2 = 9+6x+8 = 17+6x
and twice the constant term is 2*17=34
In the second variant, the recursion is (with 2*a=6, a^2-b=1)
u[n+2]-6*u[n+1]+u[n]=0
so that the first sequence elements are
u[0]=2
u[1]=6
u[2]=6*u[1]-u[0]=34
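Here's the first variant as a short Python sketch (names are mine): square-and-multiply on linear polynomials c + d*x, reducing x^2 to b and all coefficients mod M; twice the constant coefficient of (a+x)^n is the answer.

M = 10**9 + 7

def poly_mul(p, q, b, m):
    # (p0 + p1*x) * (q0 + q1*x) mod (x^2 - b), coefficients mod m
    p0, p1 = p
    q0, q1 = q
    return ((p0 * q0 + p1 * q1 * b) % m, (p0 * q1 + p1 * q0) % m)

def power_sum(a, b, n, m=M):
    # (a + sqrt(b))^n + (a - sqrt(b))^n mod m
    result = (1, 0)            # the polynomial 1
    base = (a % m, 1)          # the polynomial a + x
    while n:
        if n & 1:
            result = poly_mul(result, base, b, m)
        base = poly_mul(base, base, b, m)
        n >>= 1
    return (2 * result[0]) % m

print(power_sum(3, 8, 2))      # 34, matching the worked example above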
If you expand (a+sqrt(b))^n + (a-sqrt(b))^n you get
( a^n + nC1 a^(n-1) √b + nC2 a^(n-2) b + nC3 a^(n-3) b √b + ... )
+ ( a^n - nC1 a^(n-1) √b + nC2 a^(n-2) b - nC3 a^(n-3) b √b + ... )
= 2 a^n + 0 + 2 nC2 a^(n-2) b + 0 + ... + 2 nC4 a^(n-4) b^2 + ...
so the terms involving the possibly irrational parts cancel. (nC2 etc are binomial coefficients).
The RHS of the above could be calculated fairly efficiently using integer arithmetic as you can relate each term in the sequence to the previous one. However there are n/2 terms so the calculation is O(n).
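As a reference implementation of that O(n)-term sum (my own sketch; only sensible for moderate n, which is exactly why the O(log n) approaches above exist):

from math import comb

def power_sum_binomial(a, b, n, m=10**9 + 7):
    # (a + sqrt(b))^n + (a - sqrt(b))^n: only the even powers of sqrt(b) survive,
    # each with a factor of 2, so everything stays an integer
    total = 0
    for j in range(0, n + 1, 2):
        total = (total + comb(n, j) * pow(a, n - j, m) * pow(b, j // 2, m)) % m
    return (2 * total) % m

print(power_sum_binomial(3, 8, 2))   # 34 again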
As we know the result will be an integer, we can try running the exponentiation-by-squaring algorithm while keeping track of the integer and fractional components. Write a + sqrt(b) = x + y, where x is an integer and y is the fractional part.
Squaring this gives x^2 + 2xy + y^2. Even though we are only interested in the integer part, there is a problem: 2xy + y^2 also has an integer part. To calculate that integer part correctly we need to know many digits of y, and for higher powers we need more and more digits of y to get the integer part right.
I don't think normal floating-point multiplication would be accurate enough to calculate the terms for very large n.

Exponentiation algorithm analysis

The following text is provided about exponentiation:
The obvious algorithm to compute x to the power n uses n-1 multiplications. A recursive algorithm can do better. n <= 1 is the base case of the recursion. Otherwise, if n is even, we have x^n = x^(n/2) · x^(n/2), and if n is odd, x^n = x^((n-1)/2) · x^((n-1)/2) · x.
Specifically, a 200-digit number is raised to a large power (usually
another 200-digit number), with only the low 200 or so digits retained
after each multiplication. Since the calculations require dealing with
200-digit numbers, efficiency is obviously important. The
straightforward algorithm for exponentiation would require about 10 to
power of 200 multiplications, whereas recursive algorithm presented
requires only about 1,200.
My question regarding the above text:
1. How did the author arrive at about 10^200 multiplications for the simple algorithm and only about 1,200 for the recursive one?
Thanks!
Because the complexity of the first algorithm is linear in N while that of the second is logarithmic in N.
A 200-digit number is about 10^200, and log2(10^200) ≈ 664; with up to two multiplications per bit, that gives the roughly 1,200 quoted.
The exponent has 200 digits, so it is about 10^200. With naive exponentiation you would have to do about that many multiplications.
On the other hand, if you use recursive exponentiation, the number of multiplications depends on the exponent's number of bits. Since the exponent is almost 10^200, it has log2(10^200) = 200*log2(10) ≈ 664 bits, roughly 600. The factor of 2 in the 1,200 comes from the fact that whenever a bit is 1 you have to do two multiplications.
Here are the two possible algorithms:
# both compute a^N
def simple_exp(a, N):
    if N == 0:           # base case (missing in the original sketch)
        return 1
    return a * simple_exp(a, N - 1)
so it's N multiplications, so for a^(10^200) it's about 10^200
def optimized_algo(a, N):
    if N == 0:
        return 1
    half = optimized_algo(a, N // 2)   # compute the half power once and reuse it
    if N % 2 == 0:
        return half * half             # 1 multiplication
    else:
        return a * half * half         # 2 multiplications
Here, for a^(10^200), you have between log2(N) and 2*log2(N) multiplications (since 2^(log2(N)) = N),
and log2(10^200) = 200 * log2(10) ≈ 664.39,
and 2*log2(10^200) ≈ 1328.77,
so the number of operations lies between 664 and 1328

Why is squaring a number faster than multiplying two random numbers?

Multiplying two binary numbers takes O(n^2) time (with n being the number of bits), yet squaring a number can somehow be done more efficiently. How can that be?
Or is it not possible? This is insanity!
There exist algorithms more efficient than O(N^2) to multiply two numbers (see Karatsuba, Pollard, Schönhage–Strassen, etc.)
The two problems "multiply two arbitrary N-bit numbers" and "Square an arbitrary N-bit number" have the same complexity.
We have
4*x*y = (x+y)^2 - (x-y)^2
So if squaring N-bit integers takes O(f(N)) time, then the product of two arbitrary N-bit integers can be obtained in O(f(N)) too (that is, one N-bit sum, one N-bit difference, two N-bit squarings, one 2N-bit subtraction, and one 2N-bit shift).
And obviously we have
x^2 = x * x
So if multiplying two N-bit integers takes O(f(N)), then squaring an N-bit integer can be done in O(f(N)).
Any algorithm computing the product (resp the square) provides an algorithm to compute the square (resp the product) with the same asymptotic cost.
As noted in other answers, the algorithms used for fast multiplication can be simplified in the case of squaring. The gain will be on the constant in front of the f(N), and not on f(N) itself.
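For concreteness, a tiny Python sketch of that reduction (square_fn is my own stand-in for whatever fast squaring routine is available):

def multiply_via_squares(x, y, square_fn=lambda t: t * t):
    # 4*x*y = (x+y)^2 - (x-y)^2: two squarings, one addition, one subtraction, one shift
    return (square_fn(x + y) - square_fn(x - y)) >> 2

print(multiply_via_squares(273, 12))   # 3276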
Squaring an n-digit number may be faster than multiplying two random n-digit numbers. Googling, I found this article. It is about arbitrary-precision arithmetic, but it may be relevant to what you're asking. In it the authors say this:
In squaring a large integer, i.e. X^2 = (x_{n-1}, x_{n-2}, ..., x_1, x_0)^2, many cross-product terms of the form x_i * x_j and x_j * x_i are equivalent. They need to be computed only once and then left-shifted in order to be doubled. An n-digit squaring operation is performed using only (n^2 + n)/2 single-precision multiplications.
Like others have pointed out, squaring can only be about 1.5X or 2X faster than regular multiplication between arbitrary numbers. Where does the computational advantage come from? It's symmetry. Let's calculate the square of 1011 and try to spot a pattern that we can exploit. u0:u3 represent the bits in the number from the most significant to the least significant.
1011 // u3 * u0 : u3 * u1 : u3 * u2 : u3 * u3
1011 // u2 * u0 : u2 * u1 : u2 * u2 : u2 * u3
0000 // u1 * u0 : u1 * u1 : u1 * u2 : u1 * u3
1011 // u0 * u0 : u0 * u1 : u0 * u2 : u0 * u3
If you consider the elements ui * ui for i = 0, 1, ..., 3 to form the diagonal and ignore them, you'll see that the elements ui * uj for i ≠ j are repeated twice.
Therefore, all you need to do is calculate the product sum for elements below the diagonal and double it, with a left shift. You'd finally add the diagonal elements. Now you can see where the 2X speed up comes from. In practice, the speed-up is about 1.5X because of the diagonal and extra operations.
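The same idea as a short Python sketch (my own illustration): accumulate the below-diagonal products once, double them with a shift, and add the diagonal squares.

def square_digits(u, base=2):
    # u: list of digits, least significant first; returns the square as an integer
    n = len(u)
    cross = 0
    for i in range(n):
        for j in range(i):                                   # only the below-diagonal pairs
            cross += u[i] * u[j] * base ** (i + j)
    diag = sum(u[i] * u[i] * base ** (2 * i) for i in range(n))
    return (cross << 1) + diag                               # double the cross terms, add the diagonal

print(square_digits([1, 1, 0, 1]))                           # 1011 in binary is 11, and 11^2 = 121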
I believe you may be referring to exponentiation by squaring. This technique isn't used for multiplying, but for raising to a power x^n, where n may be large. Rather than multiplying x by itself n times, one performs a series of squaring and multiplying operations that can be mapped onto the binary representation of n. The number of multiplications (which are more expensive than additions for large numbers) is reduced from n to about log(n) with respect to the naive exponentiation algorithm.
Do you mean multiplying a number by a power of 2? This is usually quicker than multiplying any two random numbers since the result can be calculated by simple bit shifting. However, bear in mind that modern microprocessors dedicate lots of brute force silicon to these types of calculations and most arithmetic is performed with blinding speed compared to older microprocessors
I have it!
2 * 2
is more expensive than
2 << 1
(The caveat being it only works for one case.)
Suppose you want to expand out the multiplication (a+b)×(c+d). It splits up into four individual multiplications: a×c + a×d + b×c + b×d.
But if you want to expand out (a+b)², then it only needs three multiplications (and a doubling): a² + 2ab + b².
(Note also that two of the multiplications are themselves squares.)
Hopefully this just begins to give an insight into some of the speedups that are possible when performing a square over a regular multiplication.
First of all great question! I wish there were more questions like this.
So it turns out that the method I came up with is O(n log n) for general multiplication, counting arithmetic operations only. You can represent any number X as
X = x_{n-1} 2^{n-1} + ... + x_1 2^1 + x_0 2^0
Y = y_{m-1} 2^{m-1} + ... + y_1 2^1 + y_0 2^0
where
x_i, y_i \in {0,1}
then
XY = sum_{k=0}^{m+n} r_k 2^k
where
r_k = sum_{i=0}^{k} x_i y_{k-i}
which is just a straightforward application of the FFT to find the values of r_k for all k in O((n+m) log(n+m)) time.
Then for each r_k you must determine how big the overflow is and add it up accordingly. For squaring a number this means O(n log n) arithmetic operations.
You can add up the r_k values more efficiently using the Schönhage–Strassen algorithm to obtain a O(n log n log log n) bit operation bound.
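A rough Python/NumPy sketch of that pipeline (my own; it uses a floating-point FFT, so it is only safe for modest sizes, whereas a real implementation would use a number-theoretic transform as in Schönhage–Strassen):

import numpy as np

def multiply_via_fft(x_bits, y_bits):
    # x_bits, y_bits: lists of 0/1, least significant bit first
    size = len(x_bits) + len(y_bits)
    n = 1
    while n < size:                       # FFT length: next power of two, so no wrap-around
        n *= 2
    fx = np.fft.rfft(np.array(x_bits, dtype=float), n)
    fy = np.fft.rfft(np.array(y_bits, dtype=float), n)
    r = np.rint(np.fft.irfft(fx * fy, n)).astype(int)   # r[k] = sum_i x_i * y_{k-i}
    bits, carry = [], 0
    for coeff in r:                       # propagate the "overflow" to get real bits
        carry += int(coeff)
        bits.append(carry & 1)
        carry >>= 1
    while carry:
        bits.append(carry & 1)
        carry >>= 1
    return bits                           # bits of the product, least significant first

print(multiply_via_fft([1, 1], [1, 0, 1]))   # 3 * 5: [1, 1, 1, 1, 0, 0, 0, 0] -> 15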
The exact answer to your question is already posted by Eric Bainville.
However, you can get a much better bound than O(n^2) for squaring a number simply because there exist much better bounds for multiplying integers!
If you assume numbers of fixed length equal to the machine word size, and that the number to be squared is already in memory, a squaring operation requires only one load from memory instead of two, so it could be faster.
For arbitrary length integers, multiplication is typically O(N²) but there are algorithms which reduce this for large integers.
If you assume the simple O(N²) approach to multiplying a by b, then for each bit in a you have to shift b and, if that bit is one, add it to an accumulator, so you end up doing on the order of N shifts and N additions.
Note that
( x - y )² = x² - 2 xy + y²
Hence
x² = ( x - y )² + 2 xy - y²
If each y is the largest power of two not greater than x, this gives a reduction to a lower square, two shifts and two additions. As N is reduced on each iteration, you may get an efficiency gain ( the symmetry means it visits each point in a triangle rather than a rectangle ), but it's still O(N²).
There may be another better symmetry to exploit.
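An iterative sketch of that reduction in Python (my own illustration): at each step y is the largest power of two not exceeding x, so both 2*x*y and y^2 are just shifts.

def square_via_shifts(x):
    # x^2 = (x - y)^2 + 2*x*y - y^2, with y the largest power of two <= x;
    # accumulate the shift terms and continue with the smaller value x - y
    total = 0
    while x:
        k = x.bit_length() - 1                    # y = 2**k
        total += (x << (k + 1)) - (1 << (2 * k))  # 2*x*y - y^2, both just shifts
        x -= 1 << k
    return total

print(square_via_shifts(273))                     # 74529 = 273^2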
a^2 = (a+b)*(a-b) + b^2, e.g. 66^2 = (66+6)*(66-6) + 6^2 = 72*60 + 36 = 4356
For a^n just use the power rule, e.g. 66^4 = 4356^2.
I would analyze the problem in terms of n-bit multiplication.
For a number A, let the bits be A(n-1) A(n-2) ... A(1) A(0), and for a number B, let the bits be B(n-1) B(n-2) ... B(1) B(0).
For the square of A, the unique bit products generated are
A(0) with A(0) ... A(n-1),
A(1) with A(1) ... A(n-1), and so on,
so the total number of operations is
OP = n + (n-1) + (n-2) + ... + 1 = (n^2 + n)/2,
so the asymptotic notation is still O(n^2).
For the multiplication of A and B, n^2 unique bit products are generated, so the asymptotic notation is O(n^2) as well.
The square root of 2^n is 2^(n/2), i.e. 2^(n >> 1), so if your number is a power of two everything is totally simple once you know the power. Multiplying is even simpler: 2^4 * 2^8 is 2^(4+8). There's no sense in the statements you've made.
If you have a binary number A, it can (always; proof left to the eager reader) be expressed as (2^n + B), and this can be squared as 2^(2n) + 2^(n+1)·B + B^2. We can then repeat the expansion until B equals zero. I haven't looked too hard at it, but intuitively it feels as if you should be able to make a squaring function take fewer algorithmic steps than a general-purpose multiplication.
I think that you are completely wrong in your statements
Multiplying two binary numbers takes
n^2 time
Multiplying two 32-bit numbers takes exactly one clock cycle. On a 64-bit processor, I would assume that multiplying two 64-bit numbers takes exactly one clock cycle. It wouldn't even surprise me if a 32-bit processor could multiply two 64-bit numbers in one clock cycle.
yet squaring a number can be done more efficiently somehow.
Squaring a number is just multiplying the number with itself, so that is just a simple multiplication. There is no "square" operation in the CPU.
Maybe you are confusing "squaring" with "multiplying by a power of 2". Multiplying by 2 can be implemented by shifting all the bits one position to the left. Multiplying by 4 shifts all the bits two positions to the left; by 8, three positions. But this trick only applies to powers of two.
