I know there is an O(log n) algorithm for calculating a^n, where a is an integer and n is a huge integer (the result probably needs to be reduced modulo a prime MOD).
I am wondering whether there is still an O(log n) algorithm to calculate
(a+sqrt(b))^n + (a-sqrt(b))^n (mod MOD)
The irrational part sqrt(b) does not look easy to handle in the exponentiation. All I can do is calculate the a+sqrt(b) and a-sqrt(b) parts separately and add them together before taking the modulus, but if n is huge, this easily overflows. Any ideas?
You can do that by computing (in Z_M[x] / ⟨x²-b⟩)
(a+x)^n+(a-x)^n mod (M, x^2-b)
where again you can use modular halving-and-squaring for the powers; the intermediate results are now linear polynomials (over modular integers). Actually, you only need one of the powers: the result is twice the constant coefficient.
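Here is a minimal Python sketch of this variant (the names mulmod/powmod are mine; a pair (c0, c1) stands for the polynomial c0 + c1*x):

def mulmod(p, q, b, M):
    # multiply p0 + p1*x by q0 + q1*x, reducing x^2 -> b and coefficients mod M
    p0, p1 = p
    q0, q1 = q
    return ((p0 * q0 + p1 * q1 * b) % M, (p0 * q1 + p1 * q0) % M)

def powmod(p, n, b, M):
    # halving-and-squaring in Z_M[x] / (x^2 - b)
    result = (1, 0)
    while n > 0:
        if n & 1:
            result = mulmod(result, p, b, M)
        p = mulmod(p, p, b, M)
        n >>= 1
    return result

a, b, n, M = 3, 8, 2, 10**9 + 7
c0, c1 = powmod((a, 1), n, b, M)     # (a + x)^n in the quotient ring
print(2 * c0 % M)                    # twice the constant coefficient: 34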
Alternatively, these power combinations are the solution of the linear recursion of order 2
u[n+2] - 2*a*u[n+1] + (a^2-b)*u[n] = 0
where
u[0]=2 and u[1]=2*a
so that you can use fast matrix exponentiation of the system matrix of this recursion, again obtaining an O(log(n)) algorithm (disregarding bitsize).
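A sketch of this second variant in Python (mat_pow and u are my names; the system matrix of the recursion is [[2a, -(a^2-b)], [1, 0]]):

def mat_mul(X, Y, M):
    # 2x2 matrix product mod M
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % M for j in range(2)]
            for i in range(2)]

def mat_pow(X, n, M):
    # fast exponentiation of a 2x2 matrix mod M
    R = [[1, 0], [0, 1]]
    while n > 0:
        if n & 1:
            R = mat_mul(R, X, M)
        X = mat_mul(X, X, M)
        n >>= 1
    return R

def u(n, a, b, M):
    # u[n+2] = 2*a*u[n+1] - (a^2 - b)*u[n], with u[0] = 2, u[1] = 2*a
    if n == 0:
        return 2 % M
    P = mat_pow([[2 * a % M, -(a * a - b) % M], [1, 0]], n - 1, M)
    return (P[0][0] * (2 * a) + P[0][1] * 2) % M

print(u(2, 3, 8, 10**9 + 7))   # 34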
Example: As per the comment, take a=3, b=8, n=2 (and integers mod M=10^9+7; the example is not large enough for that to matter).
In the first variant, compute u[n]=(a+x)^n mod (M, x^2-b), so
u[0]=1
u[1]=3+x
u[2] = (3+x)^2 mod (x^2-8) = 9 + 6x + x^2 mod (x^2-8) = 9 + 6x + 8 = 17 + 6x
and twice the constant term is 2*17=34
In the second variant, the recursion is (with 2*a=6, a^2-b=1)
u[n+2]-6*u[n+1]+u[n]=0
so that the first sequence elements are
u[0]=2
u[1]=6
u[2]=6*u[1]-u[0]=34
If you expand (a+sqrt(b))^n + (a-sqrt(b))^n you get
( a^n + nC1 a^(n-1) √b + nC2 a^(n-2) b + nC3 a^(n-3) b √b + ... )
+( a^n - nC1 a^(n-1) √b + nC2 a^(n-2) b - nC3 a^(n-3) b √b + ... )
= 2 a^n + 0 + 2 nC2 a^(n-2) b + 0 + ... + 2 nC4 a^(n-4) b^2 + ...
so the terms involving the possibly irrational parts cancel. (nC2 etc. are binomial coefficients.)
The RHS of the above can be calculated fairly efficiently using integer arithmetic, as each term in the sequence can be related to the previous one. However, there are n/2 terms, so the calculation is O(n).
As we know the result will be an integer, we can try running through the exponentiation-by-squaring algorithm, keeping track of the integer and fractional components. Write a+sqrt(b) = x + y, where x is an integer and y is the fractional part.
Squaring this gives x^2 + 2xy + y^2. Even though we are only interested in the integer part, we have a problem: 2xy + y^2 has an integer part too. To calculate that integer part correctly we need to know many digits of y, and for higher powers we need more and more digits of y.
I don't think normal floating point multiplication would be good enough to calculate the terms for very large n.
What is the time complexity of multiplying two complex numbers?
For example, (35 + 12i) * (45 + 23i).
The asymptotic complexity is the same as for multiplying the components.
(35 + 12i) * (45 + 23i) == 35*45 + 45*12i + 35*23i - 12*23
== (35*45 - 12*23) + (45*12 + 35*23)i
You just have 4 real multiplications and 2 real additions.
So, if real multiplication is O(1), so is complex multiplication.
If real multiplication is not constant (as is the case for arbitrary precision values), then neither is complex multiplication.
If you multiply two complex numbers (a + bi) and (c + di), the calculation works out to (ac - bd) + (ad + bc)i, which requires a total of four multiplications, plus one subtraction and one addition. Additions and subtractions take less time than multiplications, so the main cost is the four multiplications done here. Since four is a constant, this doesn't change the big-O runtime of doing the multiplications compared to the real-number case.
Let's imagine you have two numbers n1 and n2, each of which is d digits long. If you use the grade-school method for multiplying these numbers together, you'd do the following:
1. for each digit d1 of n2, in reverse:
       let carry = 0
       for each digit d2 of n1, in reverse:
           let product = d1 * d2 + carry
           write down product mod 10
           set carry = product / 10, rounding down
2. add up all d of the d-digit numbers you wrote in step 1
That first loop runs in time Θ(d^2), since each digit of n2 is paired and multiplied with each digit of n1, doing O(1) work apiece. The result is d different d-digit numbers. Adding up those numbers takes time Θ(d^2), since you have to scan each digit of each number exactly once. Overall, this takes time Θ(d^2).
Notice that this runtime is a function of how many digits are in n1 and n2, rather than of n1 and n2 themselves. The number of digits in a number n is Θ(log n), so this runtime is actually O((log max{n1, n2})^2) if you're multiplying two numbers n1 and n2.
This is not the fastest way to do multiplications, though for a while it was conjectured to be. Karatsuba's algorithm runs in time O((log max{n1, n2})^(log_2 3)), where the exponent is approximately 1.585. There are more modern algorithms that run even faster than this, and it's an open problem whether multiplication can be done in time O(log max{n1, n2}) with no exponent!
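For illustration, a direct (unoptimized) Python transcription of the grade-school procedure above, using little-endian digit lists:

def grade_school_multiply(n1, n2):
    d1 = [int(c) for c in str(n1)[::-1]]   # digits, least significant first
    d2 = [int(c) for c in str(n2)[::-1]]
    partials = []
    for shift, x in enumerate(d2):
        row, carry = [0] * shift, 0
        for y in d1:
            carry, digit = divmod(x * y + carry, 10)
            row.append(digit)
        row.append(carry)
        partials.append(row)
    # add up the partial products, digit column by digit column
    result, carry = [], 0
    for pos in range(max(len(p) for p in partials)):
        s = carry + sum(p[pos] for p in partials if pos < len(p))
        carry, digit = divmod(s, 10)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        result.append(digit)
    return int(''.join(map(str, reversed(result))))

print(grade_school_multiply(1234, 5678))   # 7006652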
Multiplying two complex numbers only requires three real multiplications.
Let p = a * c, q = b * d, and r = (a + b) * (c + d).
Then (a + bi) * (c + di) = (p - q) + i(r - p - q).
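As a quick Python sketch of this three-multiplication trick:

def complex_mul3(a, b, c, d):
    # (a + bi) * (c + di) with three real multiplications
    p = a * c
    q = b * d
    r = (a + b) * (c + d)
    return (p - q, r - p - q)   # (real, imaginary)

print(complex_mul3(35, 12, 45, 23))   # (1299, 1345), i.e. 1299 + 1345i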
See also Complex numbers product using only three multiplications.
I'm looking to calculate the following sum efficiently:
sum (i=0..max) (i * A mod B)
One may assume that max, A < B and that A and B are co-prime (otherwise an easy reduction is possible). Numbers are large, so simple iteration is way too inefficient.
So far I haven't been able to come up with a polynomial-time algorithm (i.e., polynomial in log(B)), best I could find is O(sqrt(max)). Is this a known hard problem, or does anyone know of a polynomial-time algorithm?
To be clear, the "mod B" only applies to the i*A, not to the overall sum. So e.g.
sum(i=0..3) (i*7 mod 11) = 0 + 7 + 3 + 10 = 20.
If the mod applied to the whole sum (which, per the clarification above, it does not), you could shift things around a bit to get
A*(sum(i=0..max) i) mod B
which simplifies to
A*(max*(max+1)/2) mod B
so you would only need one (possibly big-int) multiplication (assuming max itself isn't too big) followed by one (big-int) mod operation. For the per-term reduction actually asked about, however, this gives a different value (9 rather than 20 in the example above).
I've read that operations such as addition and subtraction take linear time, and that "grade-school" long multiplication takes n^2 time. Why is this true?
Isn't addition O(floor(log n)) time, where n is the smaller operand? The same argument goes for subtraction, and for multiplication: if we write a program that does long multiplication instead of repeated addition, shouldn't the complexity be O(floor(log a) * floor(log b)), where a and b are the operands?
The answer depends on what is "n." When they say that addition is O(n) and multiplication (with the naïve algorithm) is O(n^2), n is the length of the number, either in bits or some other unit. This definition is used because arbitrary precision arithmetic is implemented as operations on lists of "digits" (not necessarily base 10).
If n is the number being added or multiplied, the complexities would be log n and (log n)^2 for positive n, as long as the numbers are stored in log n space.
The naive approach to multiplication of (for example) 273 x 12 is expanded out (using the distributive rule) as (200 + 70 + 3) x (10 + 2) or:
200 x 10 + 200 x 2
+ 70 x 10 + 70 x 2
+ 3 x 10 + 3 x 2
The idea of this simplification is to reduce the multiplications to something that can be done easily. For your primary-school math, that would be working with digits, assuming you know the times tables from zero to nine. For bignum libraries, where each "digit" may be a value from 0 to 9999 (for ease of decimal printing), the same rules apply, since numbers less than 10,000 can be multiplied in (effectively) constant time.
Hence, if n is the number of digits, the complexity is indeed O(n^2), since the number of "constant" operations rises with the product of the "digit" counts.
This is true even if your definition of digit varies slightly (such as being a value from 0 to 9999 or even being one of the binary digits 0 or 1).
I have a series
S = i^(m) + i^(2m) + ............... + i^(km) (mod m)
0 <= i < m, k may be very large (up to 100,000,000), m <= 300000
I want to find the sum. I cannot apply the geometric progression (GP) formula, because the result would then have a denominator, and I would have to find a modular inverse, which may not exist (if the denominator and m are not coprime).
So I made an alternate algorithm, assuming that these powers form a cycle of length much smaller than k (because it is a modular equation, I expected something like 2, 7, 9, 1, 2, 7, 9, 1, ...) and that this cycle repeats throughout the series. So instead of iterating from 0 to k, I would just find the sum of the numbers in one cycle, count the number of cycles in the series, and multiply. I first computed i^m (mod m) and then multiplied by this number again and again, taking the modulus at each step, until I reached the first element again.
But when I actually coded the algorithm, for some values of i I got cycles of very large size, so the computation took a long time to terminate; hence my assumption is incorrect.
So is there any other pattern we can find out? (Basically I don't want to iterate over k.)
So please give me an idea of an efficient algorithm to find the sum.
This is the algorithm for a similar problem I encountered
You probably know that one can calculate the power of a number in logarithmic time. You can also do so for calculating the sum of the geometric series. Since it holds that
1 + a + a^2 + ... + a^(2*n+1) = (1 + a) * (1 + (a^2) + (a^2)^2 + ... + (a^2)^n),
you can recursively calculate the geometric series on the right hand to get the result.
This way you do not need division, so you can take the remainder of the sum (and of intermediate results) modulo any number you want.
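A small recursive Python sketch of this idea (geo_sum is my name; it returns 1 + a + ... + a^n mod m without any division):

def geo_sum(a, n, m):
    # 1 + a + a^2 + ... + a^n (mod m)
    if n == 0:
        return 1 % m
    if n % 2 == 1:
        # odd top exponent: 1 + a + ... + a^n = (1 + a)(1 + a^2 + ... + (a^2)^((n-1)/2))
        return (1 + a) * geo_sum(a * a % m, (n - 1) // 2, m) % m
    # even top exponent: peel off the a^n term
    return (geo_sum(a, n - 1, m) + pow(a, n, m)) % m

print(geo_sum(3, 4, 7))   # (1 + 3 + 9 + 27 + 81) mod 7 = 121 mod 7 = 2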
As you've noted, doing the calculation for an arbitrary modulus m is difficult because many values might not have a multiplicative inverse mod m. However, if you can solve it for a carefully selected set of alternate moduli, you can combine them to obtain a solution mod m.
Factor m into p_1, p_2, p_3 ... p_n such that each p_i is a power of a distinct prime
Since each p is a distinct prime power, they are pairwise coprime. If we can calculate the sum of the series with respect to each modulus p_i, we can use the Chinese Remainder Theorem to reassemble them into a solution mod m.
For each prime power modulus, there are two trivial special cases:
If i^m is congruent to 0 mod p_i, the sum is trivially 0.
If i^m is congruent to 1 mod p_i, then the sum is congruent to k mod p_i.
For other values, one can apply the usual formula for the sum of a geometric sequence:
S = sum(j=0 to k, (i^m)^j) = ((i^m)^(k+1) - 1) / (i^m - 1)
TODO: Prove that (i^m - 1) is coprime to p_i, or find an alternate solution for when they have a nontrivial GCD. Hopefully the fact that p_i is a prime power and also a divisor of m will be of some use... If p_i is a divisor of i, the first special case applies. If p_i is prime (as opposed to a prime power), then either the special case i^m = 1 applies, or (i^m - 1) has a multiplicative inverse.
If the geometric sum formula isn't usable for some p_i, you could rearrange the calculation so you only need to iterate from 1 to p_i instead of 1 to k, taking advantage of the fact that the terms repeat with a period no longer than p_i.
(Since your series doesn't contain a j=0 term, the value you want is actually S-1.)
This yields a set of congruences mod p_i, which satisfy the requirements of the CRT.
The procedure for combining them into a solution mod m is described in the above link, so I won't repeat it here.
This can be done via the method of repeated squaring, which is O(log(k)) time, or O(log(k)log(m)) time, if you consider m a variable.
In general, a[n] = 1 + b + b^2 + ... + b^(n-1) mod m can be computed by noting that:
a[j+k] == b^j * a[k] + a[j]
a[2n] == (b^n + 1) * a[n]
The second is just a corollary of the first.
In your case, b=i^m can be computed in O(log m) time.
The following Python code implements this:

def geometric(n, b, m):
    # returns 1 + b + b^2 + ... + b^(n-1) (mod m)
    T = 1           # partial sum contributed by the bits of n consumed so far
    e = b % m       # current power of b: b^(2^step)
    total = 0
    while n > 0:
        if n & 1 == 1:
            total = (e * total + T) % m
        T = ((e + 1) * T) % m
        e = (e * e) % m
        n = n // 2
        # print('{} {} {}'.format(total, T, e))
    return total
This bit of magic has a mathematical reason: the operation on pairs defined as
(a, r) # (b, s) = (ab, as + r)
is associative, and the first rule above basically means that:
(b,1)#(b,1)#... n times ... #(b,1)=(b^n,1+b+b^2+...+b^(n-1))
Repeated squaring always works when operations are associative. In this case, the # operator is O(log(m)) time, so repeated squaring takes O(log(n)log(m)).
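To make that concrete, here is a generic repeated-squaring helper over an arbitrary associative operation (a sketch; pow_assoc and the pair encoding are my naming):

def pow_assoc(op, x, n, identity):
    # repeated squaring over any associative binary operation
    result = identity
    while n > 0:
        if n & 1:
            result = op(result, x)
        x = op(x, x)
        n >>= 1
    return result

m = 1000
op = lambda p, q: (p[0] * q[0] % m, (p[0] * q[1] + p[1]) % m)   # (a,r)#(b,s)
print(pow_assoc(op, (3, 1), 4, (1, 0)))   # (81, 40): 3^4 and 1+3+9+27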
One way to look at this is via matrix exponentiation:
[[b,1],[0,1]]^n == [[b^n, 1+b+...+b^(n-1)], [0,1]]
You can use a similar method to compute (a^n-b^n)/(a-b) modulo m because matrix exponentiation gives:
[[b,1],[0,a]]^n == [[b^n,a^(n-1)+a^(n-2)b+...+ab^(n-2)+b^(n-1)],[0,a^n]]
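For example, a sketch of that last variant (power_diff_quotient is my name), reading (a^n - b^n)/(a - b) mod m off the top-right entry:

def power_diff_quotient(a, b, n, m):
    # (a^n - b^n)/(a - b) mod m via [[b,1],[0,a]]^n
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) % m for j in range(2)]
                for i in range(2)]
    R, X = [[1, 0], [0, 1]], [[b % m, 1], [0, a % m]]
    while n > 0:
        if n & 1:
            R = mul(R, X)
        X = mul(X, X)
        n >>= 1
    return R[0][1]

print(power_diff_quotient(5, 2, 3, 1000))   # (125 - 8) / 3 = 39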
Based on the approach of @braindoper, a complete algorithm which calculates
1 + a + a^2 + ... +a^n mod m
looks like this in Mathematica:
geometricSeriesMod[a_, n_, m_] :=
  Module[{q = a, exp = n, factor = 1, sum = 0, temp},
    While[And[exp > 0, q != 0],
      If[EvenQ[exp],
        temp = Mod[factor*PowerMod[q, exp, m], m];
        sum = Mod[sum + temp, m];
        exp--];
      factor = Mod[Mod[1 + q, m]*factor, m];
      q = Mod[q*q, m];
      exp = Floor[exp/2];
    ];
    Return[Mod[sum + factor, m]]
  ]
Parameters:
a is the "ratio" of the series. It can be any integer (including zero and negative values).
n is the highest exponent of the series. Integers >= 0 are allowed.
m is the integer modulus != 0.
Note: The algorithm performs a Mod operation after every arithmetic operation. This is essential if you transcribe this algorithm to a language with a limited word length for integers.
Multiplying two binary numbers takes n^2 time (with n being the number of bits), yet squaring a number can somehow be done more efficiently. How could that be?
Or is it not possible? This is insanity!
There exist algorithms more efficient than O(N^2) to multiply two numbers (see Karatsuba, Pollard, Schönhage–Strassen, etc.)
The two problems "multiply two arbitrary N-bit numbers" and "Square an arbitrary N-bit number" have the same complexity.
We have
4*x*y = (x+y)^2 - (x-y)^2
So if squaring N-bit integers takes O(f(N)) time, then the product of two arbitrary N-bit integers can be obtained in O(f(N)) too. (that is 2x N-bit sums, 2x N-bit squares, 1x 2N-bit sum, and 1x 2N-bit shift)
And obviously we have
x^2 = x * x
So if multiplying two N-bit integers takes O(f(N)), then squaring a N-bit integer can be done in O(f(N)).
Any algorithm computing the product (resp the square) provides an algorithm to compute the square (resp the product) with the same asymptotic cost.
As noted in other answers, the algorithms used for fast multiplication can be simplified in the case of squaring. The gain is in the constant in front of f(N), not in f(N) itself.
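A toy Python demonstration of the reduction from multiplication to squaring (square is passed in to emphasize that any squaring routine would do):

def mul_via_squares(x, y, square):
    # 4*x*y = (x + y)^2 - (x - y)^2, so two squarings yield the product
    return (square(x + y) - square(x - y)) // 4

print(mul_via_squares(12, 34, lambda t: t * t))   # 408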
Squaring an n-digit number may be faster than multiplying two random n-digit numbers. Googling, I found this article. It is about arbitrary-precision arithmetic, but it may be relevant to what you're asking. In it the authors say this:

In squaring a large integer, i.e. X^2 = (x_{n-1}, x_{n-2}, ..., x_1, x_0)^2, many cross-product terms of the form x_i * x_j and x_j * x_i are equivalent. They need to be computed only once and then left-shifted in order to be doubled. An n-digit squaring operation is performed using only (n^2 + n)/2 single-precision multiplications.
Like others have pointed out, squaring can only be about 1.5X or 2X faster than regular multiplication between arbitrary numbers. Where does the computational advantage come from? It's symmetry. Let's calculate the square of 1011 and try to spot a pattern that we can exploit. u0:u3 represent the bits in the number from the most significant to the least significant.
1011 // u3 * u0 : u3 * u1 : u3 * u2 : u3 * u3
1011 // u2 * u0 : u2 * u1 : u2 * u2 : u2 * u3
0000 // u1 * u0 : u1 * u1 : u1 * u2 : u1 * u3
1011 // u0 * u0 : u0 * u1 : u0 * u2 : u0 * u3
If you consider the elements ui * ui for i = 0, 1, ..., 3 to form the diagonal and ignore them, you'll see that each element ui * uj with i ≠ j appears twice.
Therefore, all you need to do is calculate the product sum for elements below the diagonal and double it, with a left shift. You'd finally add the diagonal elements. Now you can see where the 2X speed up comes from. In practice, the speed-up is about 1.5X because of the diagonal and extra operations.
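A small Python sketch of this (square_digits is my name): the off-diagonal products are computed once and doubled, so only (n^2 + n)/2 digit multiplications occur:

def square_digits(x, base=10):
    d = []
    while x:                     # little-endian digits of x
        x, r = divmod(x, base)
        d.append(r)
    n = len(d)
    total = 0
    for i in range(n):
        total += d[i] * d[i] * base**(2 * i)          # diagonal terms
        for j in range(i + 1, n):
            total += 2 * d[i] * d[j] * base**(i + j)  # doubled cross terms
    return total

print(square_digits(123))   # 15129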
I believe you may be referring to exponentiation by squaring. This technique isn't used for multiplying, but for raising to a power x^n, where n may be large. Rather than multiplying x by itself n times, one performs a series of squaring and multiplying operations which can be mapped onto the binary representation of n. The number of multiplication operations (which are more expensive than additions for large numbers) is reduced from n to log(n) with respect to the naive exponentiation algorithm.
Do you mean multiplying a number by a power of 2? This is usually quicker than multiplying two arbitrary numbers, since the result can be calculated by simple bit shifting. However, bear in mind that modern microprocessors dedicate lots of brute-force silicon to these types of calculations, and most arithmetic is performed at blinding speed compared to older microprocessors.
I have it!
2 * 2
is more expensive than
2 << 1
(The caveat being it only works for one case.)
Suppose you want to expand out the multiplication (a+b)×(c+d). It splits up into four individual multiplications: a×c + a×d + b×c + b×d.
But if you want to expand out (a+b)², then it only needs three multiplications (and a doubling): a² + 2ab + b².
(Note also that two of the multiplications are themselves squares.)
Hopefully this just begins to give an insight into some of the speedups that are possible when performing a square over a regular multiplication.
First of all great question! I wish there were more questions like this.
So it turns out that the method I came up with is O(n log n) for general multiplication, counting arithmetic operations only. You can represent any number X as
X = x_{n-1} 2^{n-1} + ... + x_1 2^1 + x_0 2^0
Y = y_{m-1} 2^{m-1} + ... + y_1 2^1 + y_0 2^0
where
x_i, y_i \in {0,1}
then
XY = sum_{k=0}^{m+n} r_k 2^k
where
r_k = sum_{i=0}^{k} x_i y_{k-i}
which is just a straightforward application of the FFT to find the values of r_k for each k, in (n+m) log(n+m) time.
Then for each r_k you must determine how big the overflow is and add it up accordingly. For squaring a number this means O(n log n) arithmetic operations.
You can add up the r_k values more efficiently using the Schönhage–Strassen algorithm to obtain a O(n log n log log n) bit operation bound.
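A rough numpy-based sketch of this approach (floating-point FFT, so it is only reliable for moderately sized inputs; serious implementations use exact number-theoretic transforms instead):

import numpy as np

def fft_multiply(x, y):
    a = [int(c) for c in str(x)[::-1]]   # base-10 digits, least significant first
    b = [int(c) for c in str(y)[::-1]]
    size = 1
    while size < len(a) + len(b):
        size *= 2
    # r_k = sum_i x_i * y_{k-i} for every k, via one FFT convolution
    r = np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)
    # evaluating sum_k r_k 10^k propagates the overflow in each r_k
    return sum(int(round(c)) * 10**k for k, c in enumerate(r))

print(fft_multiply(12345, 6789))   # 83810205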
The exact answer to your question is already posted by Eric Bainville.
However, you can get a much better bound than O(n^2) for squaring a number simply because there exist much better bounds for multiplying integers!
If you assume numbers of fixed length equal to the machine word size, and that the number to be squared is already in memory, a squaring operation requires only one load from memory, so it could be faster.
For arbitrary length integers, multiplication is typically O(N²) but there are algorithms which reduce this for large integers.
If you assume the simple O(N^2) approach to multiply a by b, then for each bit in a you have to shift b and add it to an accumulator if that bit is one. Each shift and addition costs O(N) bit operations, and there are N bits in a, giving O(N^2) overall.
Note that
( x - y )² = x² - 2 xy + y²
Hence
x² = ( x - y )² + 2 xy - y²
If each y is the largest power of two not greater than x, this reduces the problem to a smaller square, two shifts, and two additions. As N is reduced on each iteration, you may get an efficiency gain (the symmetry means it visits each point in a triangle rather than a rectangle), but it's still O(N^2).
There may be another better symmetry to exploit.
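A recursive Python sketch of this identity (with y = 2^k the top set bit of x, so 2*x*y and y^2 are pure shifts):

def square(x):
    # x^2 = (x - y)^2 + 2*x*y - y^2, with y the largest power of two <= x
    if x == 0:
        return 0
    k = x.bit_length() - 1               # y = 2^k
    return square(x ^ (1 << k)) + (x << (k + 1)) - (1 << (2 * k))

print(square(66))   # 4356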
Note that a^2 = (a+b)*(a-b) + b^2, e.g. 66^2 = (66+6)(66-6) + 6^2 = 72*60 + 36 = 4356.
For a^n, just use the power rule: 66^4 = 4356^2.
I would approach the problem via N-bit multiplication.
For a number A, let the bits be A(n-1) A(n-2) ... A(1) A(0), and for a number B let the bits be B(n-1) B(n-2) ... B(1) B(0).
For the square of A, the unique bit products generated are
A(0) with A(0) ... A(n-1),
A(1) with A(1) ... A(n-1), and so on,
so the total number of operations is
OP = n + (n-1) + (n-2) + ... + 1
Therefore OP = (n^2 + n)/2, and the asymptotic notation is O(n^2).
For the multiplication of A and B, n^2 unique bit products are generated, so the asymptotic notation is again O(n^2).
The square root of 2^n is 2^(n/2), i.e. 2^(n >> 1), so if your number is a power of two everything is totally simple once you know the exponent. Multiplying is even simpler: 2^4 * 2^8 is 2^(4+8). There's no sense in the statements you've made.
If you have a binary number A, it can (always, proof left to the eager reader) be expressed as (2^n + B), this can be squared as 2^2n + 2^(n+1)B + B^2. We can then repeat the expansion, until such a point that B equals zero. I haven't looked too hard at it, but intuitively, it feels as if you should be able to make a squaring function take fewer algorithmical steps than a general-purpose multiplication.
I think that you are completely wrong in your statement that
multiplying two binary numbers takes n^2 time
Multiplying two 32-bit numbers takes exactly one clock cycle. On a 64-bit processor, I would assume that multiplying two 64-bit numbers takes exactly one clock cycle. It wouldn't even surprise me if a 32-bit processor could multiply two 64-bit numbers in one clock cycle.
yet squaring a number can be done more efficiently somehow.
Squaring a number is just multiplying the number with itself, so that is just a simple multiplication. There is no "square" operation in the CPU.
Maybe you are confusing "squaring" with "multiplying by a power of 2". Multiplying by 2 can be implemented by shifting all the bits one position to the left. Multiplying by 4 shifts all the bits two positions to the left; by 8, three positions. But this trick only applies to powers of two.