How is exponentiation by squaring faster?

Suppose you want to calculate 5^65537. Instead of multiplying out 5 · 5 · ... · 5 (65537 factors), it is recommended to compute (5^(2^16))*5 by repeated squaring. This results in 16 squarings and one multiplication.
But my question is: aren't you offsetting the smaller number of multiplications by having to square very large numbers? How is this faster when you go down to basic bit-level multiplication in computers?
After reading the comments, I have got this doubt:
How is the cost of each multiplication not dependent on the size of the operands? When multiplying, the number of bits of the multiplier increases, which increases the number of additions and the number of left shifts.

Count the multiplication operations:
5^65537 computed naively: 65536 multiplications.
(5^(2^16))*5 by repeated squaring: 16 squarings + 1 final multiplication = 17 multiplications.
From this, you can see that this is much less work, despite the multiplications working on larger numbers. The algorithm is called Square and Multiply.
In practice, cryptosystems that need to calculate large powers like this use a technique called modular exponentiation to avoid massive intermediate numbers.
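As a rough illustration of square-and-multiply (the function name and the example modulus below are my own, not from the question), here is a minimal Python sketch; the optional modulus shows how modular exponentiation keeps every intermediate value small:

    def pow_square_multiply(base, exponent, modulus=None):
        # Right-to-left binary exponentiation: one squaring per exponent bit,
        # plus one multiplication for each bit that is set.
        result = 1
        while exponent > 0:
            if exponent & 1:
                result = result * base if modulus is None else (result * base) % modulus
            base = base * base if modulus is None else (base * base) % modulus
            exponent >>= 1
        return result

    # 5^65537 via 16 squarings + 1 multiplication; reducing mod 2^61 - 1 keeps the numbers tiny
    assert pow_square_multiply(5, 65537, 2**61 - 1) == pow(5, 65537, 2**61 - 1)

Without the modulus the answers still match, but the intermediate squares grow to tens of thousands of digits, which is exactly the asker's concern.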

Related

Why is exponentiation not atomic?

In calculating the efficiency of algorithms, I have read that the exponentiation operation is not considered to be an atomic operation (like multiplication).
Is it because exponentiation is the same as the multiplication operation repeated several times over?
In principle, you can pick any set of "core" operations on numbers that you consider to take a single time unit to evaluate. However, there are a couple of reasons why we typically don't count exponentiation as one of them.
Perhaps the biggest has to do with how large of an output you produce. Suppose you have two numbers x and y that are each d digits long. Then their sum x + y has (at most) d + 1 digits - barely bigger than what we started with. Their product x*y has at most 2d digits - larger than what we started with, but not by a huge amount. On the other hand, the value x^y has roughly y*d digits, which can be significantly bigger than what we started with. (A good example of this: think about computing 100^100, which has about 200 digits!) This means that simply writing down the result of the exponentiation would require a decent amount of time to complete.
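A quick sanity check in Python (my own example numbers, not from the answer) shows these output sizes directly:

    x, y = 10**99 + 7, 10**99 + 9     # two 100-digit numbers
    print(len(str(x + y)))            # 100  (at most d + 1 digits)
    print(len(str(x * y)))            # 199  (at most 2d digits)
    print(len(str(100**100)))         # 201  (roughly y*d digits for x^y)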
This isn't to say that you couldn't consider exponentiation to be a constant-time operation. Rather, I've just never seen it done.
(Fun fact: some theory papers don't consider multiplication to be a constant-time operation, since the complexity of a hardware circuit to multiply two b-bit numbers grows quadratically with the size of b. And some theory papers don't consider addition to be constant-time either, especially when working with variable-length numbers! It's all about context. If you're dealing with "smallish" numbers that fit into machine words, then we can easily count addition and multiplication as taking constant time. If you have huge numbers - say, large primes for RSA encryption - then the size of the numbers starts to impact the algorithm's runtime and implementation.)
This is a matter of definition. For example, in hardware design and big-integer processing, multiplication is not considered an atomic operation (see e.g. this analysis of the Karatsuba algorithm).
At the level relevant for general-purpose software design, on the other hand, multiplication can be considered a fairly fast operation on fixed-width numbers implemented in hardware. Exponentiation, by contrast, is rarely implemented in hardware, and an upper bound on its complexity can only be given in terms of the exponent rather than the number of digits.

Fermat vs Mersenne as modulus

So there are some number theory applications where we need to do modulo with big numbers, and we can choose the modulus. There are two families that admit huge optimizations - Fermat and Mersenne numbers.
So let's call an N-bit sequence a chunk. N is often not a multiple of the word size.
For Fermat, we have M=2^N+1, so 2^N=-1 mod M, so we take the chunks of the dividend and alternate adding and subtracting.
For Mersenne, we have M=2^N-1, so 2^N=1 mod M, so we sum the chunks of the dividend.
In either case, we will likely end up with a number that takes up 2 chunks. We can apply this algorithm again if needed and finally do a general modulo algorithm.
Fermat will make the result smaller on average due to the alternating addition and subtraction. A negative result isn't that computationally expensive: you just keep track of the sign and fix it in the final modulo step. But I'd think bignum subtraction is a little slower than bignum addition.
Mersenne sums all chunks, so the result is a little larger, but that can be fixed with a second iteration of the algorithm at next to no extra cost.
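A rough Python sketch of one reduction pass for each case (the function names and chunking here are my own simplification, not a tuned bignum routine):

    def mersenne_reduce_step(x, n):
        # One pass of x mod (2^n - 1): sum the n-bit chunks, since 2^n = 1 (mod M).
        mask = (1 << n) - 1
        total = 0
        while x:
            total += x & mask
            x >>= n
        return total        # may still be >= M; repeat, then do a final compare/subtract

    def fermat_reduce_step(x, n):
        # One pass of x mod (2^n + 1): alternately add and subtract chunks, since 2^n = -1 (mod M).
        mask = (1 << n) - 1
        total, sign = 0, 1
        while x:
            total += sign * (x & mask)
            sign = -sign
            x >>= n
        return total        # may be negative or >= M; fix the sign/size at the end

In both cases a second pass plus one final adjustment gives the fully reduced value, which matches the "apply this algorithm again if needed" step above.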
So in the end, which is faster?
Schönhage–Strassen uses Fermat moduli. There might be factors other than performance that make Fermat better than Mersenne there - or maybe it's just straight up faster.
If you need a prime modulus, you're going to make the decision based on the convenience of the size.
For example, 2^31-1 is often convenient on 64-bit architectures, since it fits pretty snugly into 32 bits and the product of two of them fits into a 64-bit word, either signed or unsigned.
On 32-bit architectures, 2^16+1 has similar advantages. It doesn't quite fit into 16 bits, of course, but if you treat 0 as a special case, then it's still pretty easy to multiply them in a 32-bit word.
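For the 2^31 - 1 case, the reduction after a multiply is just a shift, a mask and an add. A sketch in Python (in C this would live in a uint64_t; the function name is mine):

    M31 = (1 << 31) - 1                # the Mersenne prime 2^31 - 1

    def mulmod_m31(a, b):
        # (a * b) mod (2^31 - 1), assuming 0 <= a, b < 2^31 - 1.
        t = a * b                      # at most 62 bits, so it fits in one 64-bit word
        t = (t & M31) + (t >> 31)      # fold the high chunk back in: 2^31 = 1 (mod M31)
        return t - M31 if t >= M31 else t   # folded value is below 2*M31, so one subtract suffices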

what is the time complexity to divide two numbers?

Assume that I have two numbers a and b (a > b). If I divide a by b (i.e. calculate a/b), how much time do I need?
Well, people are commenting about the instruction set as well as the architecture, so here is the assumption.
Assume a and b are two integers, each of them n bits long, and we have a standard x86_64 machine with a standard instruction set.
A request was made to provide an answer rather than just a link, so I will have a go at this. As pointed out by phs above, there is a good link at https://en.wikipedia.org/wiki/Division_algorithm#Newton.E2.80.93Raphson_division.
Division is one of a number of operations which, as far as computational complexity theory is concerned, are no more expensive than multiplication. One of the reasons for this is that computational complexity theory only really cares about how the cost of an algorithm grows as the amount of data given to it gets large, which in this case means multi-precision division. Another is that there is a faster algorithm for division than pen-and-paper long division - this algorithm is in fact good enough to influence the design of computer hardware - famous examples being the Cray-1 reciprocal iteration and the Pentium bug.
The fast way to do division is, instead of dividing a by b, to multiply a by 1/b, reducing the problem to computing a reciprocal. To compute 1/b, you first of all scale the problem by powers of two to get b in the range [1, 2), and make a first guess of the answer, typically from a lookup table - the Pentium bug had errors in the lookup table. Now you have an answer with lots of error - you have 1/b + x, where x is the error, which is unknown to you, but small if your lookup table was of a decent size.
The theory of Newton-Raphson iteration for solving equations tells you that if c = 1/b + x is a guess for 1/b, then c(2-bc) is a better guess. If c = 1/b + x then some algebra will tell you that the better guess works out as 1/b -bx^2. You have squared the error x, and since x was small (say 0.1 to start off with) you have roughly doubled the number of bits correct.
You are doubling the number of bits you have correct every time you do this, so it doesn't take many iterations to get a (good enough) answer. Now (here comes the neat part) because you know each iteration is only an approximation anyway, you need only calculate it to the accuracy that you reckon the approximation will give, not the full accuracy of the answer you want. Most of the underlying work is the multiplication in c(2 - bc), and this grows faster than linearly in the number of bits of accuracy you work to. When you sit down and work out the cost of all of this, you find that it grows rapidly enough with the number of digits that the total looks like a sum 1 + 1/2 + 1/4 + 1/8 + ... - lots of terms, but converging to an answer not too far off the very first one - and the cost of a multi-precision divide is not more than a constant factor more than the cost of a multi-precision multiply.
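A toy floating-point version of the iteration (just to show the quadratic convergence; real dividers and bignum libraries do this in fixed point, increasing the working precision each step, and the starting guess here is my own crude choice rather than a real lookup table):

    def reciprocal(b, iterations=5):
        # Approximate 1/b for b scaled into [1, 2), starting from a crude guess.
        c = 2.0 / 3.0                  # a lookup table would give a much better first guess
        for _ in range(iterations):
            c = c * (2.0 - b * c)      # error is squared each step: correct bits roughly double
        return c

    b = 1.375
    print(reciprocal(b), 1.0 / b)      # agrees to full double precision after a few steps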

n/2 bit multiplication on an n bit cpu

I was looking at an algorithm which multiplied two n-bit numbers with 3 multiplications of n/2-bit numbers. This algorithm is considered efficient. While I understand that space is obviously conserved, if I were working on an n-bit machine, how would n/2-bit multiplications be better? Those n/2-bit multiplications would be converted to n-bit multiplications, because the CPU can only understand n-bit numbers.
Thank you in advance.
Algorithms like Karatsuba multiplication or Toom-Cook are typically used in the implementation of "bignums" -- computation with numbers of unlimited size. Generally speaking, the more sophisticated the algorithm, the larger numbers need to be to make it worthwhile doing.
There are a variety of bignum packages; one of the more commonly used ones is the Gnu Multiprecision library, gmplib, which includes a large number of different multiplication algorithms, selecting the appropriate one based on the length of the multiplicands. (According to wikipedia, the Schönhage–Strassen algorithm, which is based on the fast Fourier transform algorithm, isn't used until the multiplicands reach 33,000 decimal digits. Such computations are relatively rare, but when you have to do such a computation, you probably care about it being done efficiently.)
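To connect this back to the question: the half-size multiplications only stay "half-size" until they reach one machine word, at which point the hardware multiplier takes over. A hedged sketch of Karatsuba in Python (Python integers are already bignums, so this is purely illustrative, and the cutoff is an arbitrary choice of mine):

    def karatsuba(x, y, cutoff_bits=64):
        # Multiply two non-negative integers with 3 half-size multiplications instead of 4.
        n = max(x.bit_length(), y.bit_length())
        if n <= cutoff_bits:                          # small enough for the native multiply
            return x * y
        half = n // 2
        xh, xl = x >> half, x & ((1 << half) - 1)
        yh, yl = y >> half, y & ((1 << half) - 1)
        a = karatsuba(xh, yh, cutoff_bits)            # high * high
        b = karatsuba(xl, yl, cutoff_bits)            # low * low
        c = karatsuba(xh + xl, yh + yl, cutoff_bits)  # (high + low) * (high + low)
        # c - a - b = xh*yl + xl*yh, the cross term
        return (a << (2 * half)) + ((c - a - b) << half) + b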

Why is Sieve of Eratosthenes more efficient than the simple "dumb" algorithm?

If you need to generate primes from 1 to N, the "dumb" way to do it would be to iterate through all the numbers from 2 to N and check whether each number is divisible by any prime found so far that is less than the square root of the number in question.
As I see it, the sieve of Eratosthenes does the same, except the other way round - when it finds a prime N, it marks off all the numbers that are multiples of N.
But whether you mark off X when you find N, or you check whether X is divisible by N, the fundamental complexity, the big-O, stays the same. You still do one constant-time operation per number-prime pair. In fact, the dumb algorithm breaks off as soon as it finds a prime divisor, but the sieve of Eratosthenes marks each number several times - once for every prime it is divisible by. That's a minimum of twice as many operations for every number except primes.
Am I misunderstanding something here?
In the trial division algorithm, the most work that may be needed to determine whether a number n is prime, is testing divisibility by the primes up to about sqrt(n).
That worst case is met when n is a prime or the product of two primes of nearly the same size (including squares of primes). If n has more than two prime factors, or two prime factors of very different size, at least one of them is much smaller than sqrt(n), so even the accumulated work needed for all these numbers (which form the vast majority of all numbers up to N, for sufficiently large N) is relatively insignificant. I shall ignore that and work with the fiction that composite numbers are determined without doing any work (the products of two approximately equal primes are few in number, so although individually they cost as much as a prime of similar size, altogether that's a negligible amount of work).
So, how much work does the testing of the primes up to N take?
By the prime number theorem, the number of primes <= n is (for n sufficiently large), about n/log n (it's n/log n + lower order terms). Conversely, that means the k-th prime is (for k not too small) about k*log k (+ lower order terms).
Hence, testing the k-th prime requires trial division by pi(sqrt(p_k)), approximately 2*sqrt(k/log k), primes. Summing that for k <= pi(N) ~ N/log N yields roughly 4/3*N^(3/2)/(log N)^2 divisions in total. So by ignoring the composites, we found that finding the primes up to N by trial division (using only primes), is Omega(N^1.5 / (log N)^2). Closer analysis of the composites reveals that it's Theta(N^1.5 / (log N)^2). Using a wheel reduces the constant factors, but doesn't change the complexity.
In the sieve, on the other hand, each composite is crossed off as a multiple of at least one prime. Depending on whether you start crossing off at 2*p or at p*p, a composite is crossed off as many times as it has distinct prime factors, or as many times as it has distinct prime factors <= sqrt(n). Since any number has at most one prime factor exceeding sqrt(n), the difference isn't large and has no influence on the complexity, but there are a lot of numbers with only two prime factors (or three, with one larger than sqrt(n)), so it makes a noticeable difference in running time. Anyhow, a number n > 0 has only a few distinct prime factors; a trivial estimate shows that the number of distinct prime factors is bounded by lg n (the base-2 logarithm), so an upper bound for the number of crossings-off the sieve does is N*lg N.
By counting not how often each composite gets crossed off, but how many multiples of each prime are crossed off, as IVlad already did, one easily finds that the number of crossings-off is in fact Theta(N*log log N). Again, using a wheel doesn't change the complexity but reduces the constant factors. However, here it has a larger influence than for the trial division, so at least skipping the evens should be done (apart from reducing the work, it also reduces storage size, so improves cache locality).
So, even disregarding that division is more expensive than addition and multiplication, we see that the number of operations the sieve requires is much smaller than the number of operations required by trial division (if the limit is not too small).
Summarising:
Trial division does futile work by dividing primes, the sieve does futile work by repeatedly crossing off composites. There are relatively few primes, but many composites, so one might be tempted to think trial division wastes less work.
But: composites have only a few distinct prime factors each, while there are many primes below sqrt(p).
In the naive method, you do O(sqrt(num)) operations for each number num you check for primality. This is O(n*sqrt(n)) total.
In the sieve method, for each unmarked number from 1 to n you do n / 2 operations when marking multiples of 2, n / 3 when marking those of 3, n / 5 when marking those of 5 etc. This is n*(1/2 + 1/3 + 1/5 + 1/7 + ...), which is O(n log log n). See here for that result.
So the asymptotic complexity is not the same, like you said. Even a naive sieve will beat the naive prime-generation method pretty fast. Optimized versions of the sieve can get much faster, but the big-oh remains unchanged.
The two are not equivalent in the way you describe. For each number, you will check divisibility by the same primes 2, 3, 5, 7, ... in the naive prime-generation algorithm. As you progress, you check divisibility by the same series of numbers (and you keep checking against more and more as you approach your n). For the sieve, you keep checking less and less as you approach n. First you check in increments of 2, then of 3, then 5 and so on. This will hit n and stop much faster.
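If you want to see the gap concretely, you can simply count the basic operations both methods perform (a quick empirical sketch of my own, not a proof):

    import math

    def count_trial_divisions(n):
        # Divisions done when testing 2..n against the primes found so far, up to sqrt.
        primes, divisions = [], 0
        for num in range(2, n + 1):
            limit = math.isqrt(num)
            is_prime = True
            for p in primes:
                if p > limit:
                    break
                divisions += 1
                if num % p == 0:
                    is_prime = False
                    break
            if is_prime:
                primes.append(num)
        return divisions

    def count_sieve_markings(n):
        # Crossings-off done by a plain Sieve of Eratosthenes up to n.
        marked, markings = [False] * (n + 1), 0
        for p in range(2, math.isqrt(n) + 1):
            if not marked[p]:
                for multiple in range(p * p, n + 1, p):
                    marked[multiple] = True
                    markings += 1
        return markings

    print(count_trial_divisions(100000))  # grows like n^1.5 / (log n)^2
    print(count_sieve_markings(100000))   # grows like n * log log n

Even at N = 100000 the sieve's count is noticeably smaller, and the gap keeps widening as N grows.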
Because with the sieve method, you stop marking multiples once the running prime reaches the square root of N.
Say, you want to find all primes less than a million.
First you set an array
    primetest = [True] * 1000001      # primetest[i] = True for i = 2 to 1000000
Then you iterate
    for j in range(2, 1001):              # 1000 is the square root of 1000000
        if primetest[j]:                  # if j is prime
            # mark all multiples of j (except j itself) as "not a prime"
            for k in range(j * j, 1000001, j):
                primetest[k] = False
You don't have to check j after 1000, because j*j will be more than a million.
And you start from j*j (you don't have to mark multiples of j less than j^2 because they are already marked as multiples of previously found, smaller primes)
So, in the end, you have run the outer loop about 1000 times, and the inner marking loop only for those j's that are primes.
Second reason is that with the sieve, you only do multiplication, not division. If you do it cleverly, you only do addition, not even multiplication.
And division has larger complexity than addition. The usual way to do division has O(n^2) complexity, while addition has O(n).
Explained in this paper: http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
I think it's quite readable even without Haskell knowledge.
The first difference is that division is much more expensive than addition. Even if each number is 'marked' several times, that is trivial compared with the huge number of divisions needed by the 'dumb' algorithm.
A "naive" Sieve of Eratosthenes will mark non-prime numbers multiple times.
But, if you have your numbers on a linked list and remove the numbers that are multiples (you will still need to walk the remainder of the list), the work left to do after finding a prime is always smaller than it was before finding that prime.
http://en.wikipedia.org/wiki/Prime_number#Number_of_prime_numbers_below_a_given_number
the "dumb" algorithm does i/log(i) ~= N/log(N) work for each prime number
the real algorithm does N/i ~= 1 work for each prime number
Multiply by roughly N/log(N) prime numbers.
There was a time when I was trying to find an efficient way of computing the sum of primes less than x.
I decided to use an N-by-N table and to check only the numbers whose unit digit is in [1, 3, 7, 9].
But Eratosthenes' method made it a little easier. Here is how:
Say you want to know whether N is prime or not.
You start looking for a factorization, and you notice that the larger the factor you divide N by, the smaller the quotient becomes.
So take K = int(sqrt(N)): if a factor near K divides N, the quotient is also close to K.
Now suppose you divide N by some u < K. If u is not prime and one of its prime factors is v (prime), then v < u and v will also divide N.
So then:
why not check whether N is prime by dividing N only by the primes less than K = int(sqrt(N))?
The number of times the loop executes is then π(√n).
This is how the brilliant idea behind Eratosthenes' sieve starts to take shape and gives you the intuition behind all of this.
By the way, using the Sieve of Eratosthenes one can find the sum of primes less than a multiple of 10,
because for a given column you only need to check the unit digits [1, 3, 7, 9] and count how many times each unit digit repeats.
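To make the point concrete, a primality check in this spirit only loops over the primes up to int(sqrt(N)), so the loop body runs at most π(√N) times (a minimal sketch of my own, assuming a precomputed list of small primes):

    import math

    def is_prime(n, small_primes):
        # Trial division by primes only, up to int(sqrt(n)); small_primes must cover that range.
        if n < 2:
            return False
        k = math.isqrt(n)
        for p in small_primes:
            if p > k:
                break               # the loop runs at most pi(sqrt(n)) times
            if n % p == 0:
                return n == p
        return True

    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
             53, 59, 61, 67, 71, 73, 79, 83, 89, 97]   # the primes up to 100
    print(is_prime(9973, small))   # True, using at most pi(sqrt(9973)) = 25 divisions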
I'm new to the Stack Overflow community, so I'd welcome suggestions if anything here is wrong.
