I am aware that there are a number of primality testing algorithms used in practice (Sieve of Eratosthenes, Fermat's test, Miller-Rabin, AKS, etc.). However, they are either slow (e.g. the sieve), probabilistic (Fermat and Miller-Rabin), or relatively difficult to implement (AKS).
What is the best deterministic solution to determine whether or not a number is prime?
Note that I am primarily (pun intended) interested in testing against numbers on the order of 32 (and maybe 64) bits. So a robust solution (applicable to larger numbers) is not necessary.
Up to ~2^30 you could brute force with trial-division.
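For that range, a minimal trial-division sketch (Python used here purely for illustration; the function name is mine) might look like this:

```python
def is_prime_trial(n: int) -> bool:
    """Deterministic primality test by trial division; fine for n up to ~2^30."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:      # only divisors up to sqrt(n) matter
        if n % d == 0:
            return False
        d += 2             # skip even candidates
    return True
```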
Up to about 3.4*10^14, Rabin-Miller with the first 7 primes as bases has been proven to be correct, i.e. deterministic.
Above that, you're on your own. There's no known sub-cubic deterministic algorithm.
EDIT: I remembered this, but I didn't find the reference until now:
http://reference.wolfram.com/legacy/v5_2/book/section-A.9.4
PrimeQ first tests for divisibility using small primes, then uses the
Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses
a Lucas test.
As of 1997, this procedure is known to be correct only for n < 10^16,
and it is conceivable that for larger n it could claim a composite
number to be prime.
So if you implement Rabin-Miller and Lucas, you're good up to 10^16.
If I didn't care about space, I would try precomputing all the primes below 2^32 (~4e9/ln(4e9)*4 bytes, which is less than 1GB), store them in memory and use a binary search. You can also play with memory-mapping the file containing these precomputed primes (pros: faster program start; cons: it will be slow until all the needed data is actually in memory).
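A rough sketch of that lookup idea (the sieve bound and names are mine; a real implementation would load or memory-map a precomputed file instead of sieving at startup):

```python
import bisect

def sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes returning all primes <= limit."""
    composite = bytearray(limit + 1)
    primes = []
    for p in range(2, limit + 1):
        if not composite[p]:
            primes.append(p)
            composite[p * p::p] = b"\x01" * len(range(p * p, limit + 1, p))
    return primes

PRIME_TABLE = sieve(1 << 20)   # demo bound; the idea above goes all the way to 2^32

def is_prime_lookup(n: int) -> bool:
    """Binary search in the sorted prime table."""
    i = bisect.bisect_left(PRIME_TABLE, n)
    return i < len(PRIME_TABLE) and PRIME_TABLE[i] == n
```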
If you can factor n-1 it is easy to prove that n is prime, using a method developed by Edouard Lucas in the 19th century. You can read about the algorithm at Wikipedia, or look at my implementation of the algorithm at my blog. There are variants of the algorithm that require only a partial factorization.
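A compact sketch of that n-1 (Lucas) test, assuming the complete factorization of n-1 is available; the trial-division factoring helper and the retry count are mine, purely for illustration:

```python
from random import randrange

def prime_factors(m: int) -> set[int]:
    """Distinct prime factors of m by naive trial division (fine only for small m)."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def lucas_proves_prime(n: int, tries: int = 100) -> bool:
    """Lucas test: n > 3 is prime iff some base a satisfies a^(n-1) = 1 (mod n)
    and a^((n-1)/q) != 1 (mod n) for every prime q dividing n-1."""
    qs = prime_factors(n - 1)
    for _ in range(tries):
        a = randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False                   # Fermat test fails: n is composite
        if all(pow(a, (n - 1) // q, n) != 1 for q in qs):
            return True                    # a is a witness proving n prime
    return False                           # inconclusive: no witness found
```

For a prime n any primitive root is a witness, so in practice the loop succeeds after only a few random bases.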
If the factorization of n-1 is difficult, the best method is the elliptic curve primality proving algorithm, but that requires more math, and more code, than you may be willing to write. That would be much faster than AKS, in any case.
Are you sure that you need an absolute proof of primality? The Baillie-Wagstaff (Baillie-PSW) algorithm is faster than any deterministic primality prover, and there are no known counterexamples.
If you know that n will never exceed 2^64 then strong pseudo-prime tests using the first twelve primes as bases are sufficient to prove n prime. For 32-bit integers, strong pseudo-prime tests to the three bases 2, 7 and 61 are sufficient to prove primality.
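A sketch of the 32-bit case with those three bases (the function name is mine; for 64-bit inputs you would run the same loop over the first twelve primes instead):

```python
def is_prime_u32(n: int) -> bool:
    """Miller-Rabin with bases 2, 7 and 61, deterministic for n < 2^32."""
    if n < 2:
        return False
    for p in (2, 7, 61):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in (2, 7, 61):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False       # a witnesses that n is composite
    return True
```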
Use the Sieve of Eratosthenes to pre-calculate as many primes as you have space for. You can fit in a lot at one bit per number and halve the space by only sieving odd numbers (treating 2 as a special case).
For numbers from Sieve.MAX_NUM up to the square of Sieve.MAX_NUM you can use trial division because you already have the required primes listed. Judicious use of Miller-Rabin on larger unfactored residues can speed up the process a lot.
For numbers larger than that I would use one of the probabilistic tests. Miller-Rabin is good, and if repeated a few times it can give results that are less likely to be wrong than a hardware failure in the computer you are running it on.
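A sketch of the first two tiers (LIMIT stands in for Sieve.MAX_NUM; bounds and names are mine):

```python
LIMIT = 1 << 16                              # stand-in for Sieve.MAX_NUM

def odd_sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes over odd numbers only, with 2 as a special case."""
    flags = bytearray([1]) * (limit // 2)    # index i represents the odd number 2*i + 1
    flags[0] = 0                             # 1 is not prime
    for i in range(1, (int(limit ** 0.5) + 1) // 2 + 1):
        if flags[i]:
            p = 2 * i + 1
            flags[i + p::p] = bytearray(len(flags[i + p::p]))   # clear odd multiples of p
    return [2] + [2 * i + 1 for i in range(len(flags)) if flags[i]]

SMALL_PRIMES = odd_sieve(LIMIT)
SMALL_SET = set(SMALL_PRIMES)

def is_prime_two_tier(n: int) -> bool:
    """Table lookup below LIMIT, trial division by the stored primes below LIMIT**2."""
    if n < LIMIT:
        return n in SMALL_SET
    for p in SMALL_PRIMES:
        if p * p > n:
            return True
        if n % p == 0:
            return False
    return True                              # only reached when n < LIMIT**2
```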
My program is supposed to loop forever and print every prime number it comes across. I'm doing this in x86 NASM, by the way.
My first attempt divided the candidate by EVERY previous number, until either the remainder is 0 (not a prime) or the result is 1.
My second attempt improved this by only testing every second number, i.e. only odd numbers.
The third thing I am currently implementing is to not divide by EVERY previous number, but only by numbers up to half of the candidate, since no divisor (other than the number itself) can be larger than its half.
Another thing that might help is to divide only by odd numbers, similar in spirit to the Sieve of Eratosthenes, but only excluding the even numbers.
Anyway, if there is another thing I can do, all help welcome.
edit:
If you need to test a handful of numbers, possibly only one, the AKS primality test is polynomial in the length of n.
If you want to find a very big prime, of cryptographic size, then select a random range of odd numbers and sieve out all the numbers whose factors are small primes (e.g. less than or equal to 64K-240K), then test the remaining numbers for primality.
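A sketch of that sieve-then-test idea for a cryptographic-size candidate (the bit length, window size, small-prime bound and round count are arbitrary placeholders):

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic Miller-Rabin test with random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def small_odd_primes(limit: int) -> list[int]:
    """Odd primes up to limit, used only to pre-filter candidates."""
    flags = bytearray([1]) * (limit + 1)
    for p in range(3, int(limit ** 0.5) + 1, 2):
        if flags[p]:
            flags[p * p::2 * p] = bytearray(len(flags[p * p::2 * p]))
    return [p for p in range(3, limit + 1, 2) if flags[p]]

def random_prime(bits: int = 1024, span: int = 1 << 16, bound: int = 65536) -> int:
    """Pick a random odd start, sieve out candidates with small factors,
    then Miller-Rabin test the survivors."""
    start = random.getrandbits(bits) | (1 << (bits - 1)) | 1    # odd, full bit length
    alive = bytearray([1]) * span                               # alive[i] is start + 2*i
    for p in small_odd_primes(bound):
        first = (-start * pow(2, -1, p)) % p    # least i with p | start + 2*i (Python 3.8+)
        alive[first::p] = bytearray(len(alive[first::p]))
    for i in range(span):
        if alive[i] and miller_rabin(start + 2 * i):
            return start + 2 * i
    raise RuntimeError("no prime found in this window; retry with a new start")
```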
If you want to find the primes in a range then use a sieve; the Sieve of Eratosthenes is very easy to implement but runs slower and requires more memory.
The sieve of Atkin is faster; the wheel sieve requires far less memory.
The problem grows exponentially if approached naively, so before micro-optimising it is mandatory to macro-optimise first, i.e. pick the right algorithm.
More or less all prime-number algorithms require some familiarity with number theory, so pay particular attention to the group/ring/field the algorithm is working over, because mathematicians write operations like the inverse or the multiplication with the same symbol for all algebraic structures.
Once you have a fast algorithm, you can start micro-optimising.
At this level it's really impossible to answer how to proceed with such optimisations.
So there are some number theory applications where we need to do modulo arithmetic with big numbers, and we can choose the modulus. There are two families of moduli that allow huge optimizations: Fermat and Mersenne numbers.
So let's call an N bit sequence a chunk. N is often not a multiple of the word size.
For Fermat, we have M=2^N+1, so 2^N=-1 mod M, so we take the chunks of the dividend and alternate adding and subtracting.
For Mersenne, we have M=2^N-1, so 2^N=1 mod M, so we sum the chunks of the dividend.
In either case, we will likely end up with a number that takes up 2 chunks. We can apply this algorithm again if needed and finally do a general modulo algorithm.
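As a concrete sketch, with Python integers standing in for a chunked bignum and N arbitrary:

```python
def reduce_mersenne(x: int, n_bits: int) -> int:
    """Reduce x mod M = 2^n_bits - 1 by summing n_bits-wide chunks,
    using 2^n_bits = 1 (mod M)."""
    m = (1 << n_bits) - 1
    while x > m:
        x = (x & m) + (x >> n_bits)   # add the high chunks onto the low chunk
    return 0 if x == m else x

def reduce_fermat(x: int, n_bits: int) -> int:
    """Reduce x mod M = 2^n_bits + 1 by alternately adding and subtracting
    n_bits-wide chunks, using 2^n_bits = -1 (mod M)."""
    m = (1 << n_bits) + 1
    mask = (1 << n_bits) - 1
    acc, sign = 0, 1
    while x:
        acc += sign * (x & mask)      # chunk k contributes with sign (-1)^k
        x >>= n_bits
        sign = -sign
    return acc % m                    # small final modulo fixes sign and range

# Quick check against Python's built-in modulo:
x = 0x123456789ABCDEF0123456789ABCDEF
assert reduce_mersenne(x, 31) == x % ((1 << 31) - 1)
assert reduce_fermat(x, 31) == x % ((1 << 31) + 1)
```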
Fermat will make the result smaller on average due to the alternating addition and subtraction. A negative result isn't that computationally expensive; you just keep track of the sign and fix it in the final modulo step. But I'd think bignum subtraction is a little slower than bignum addition.
Mersenne sums all chunks, so the result is a little larger, but that can be fixed with a second iteration of the algorithm at next to no extra cost.
So in the end, which is faster?
Schönhage–Strassen uses Fermat. There might be some other factors other than performance that make Fermat better than Mersenne - or maybe it's just straight up faster.
If you need a prime modulus, you're going to make the decision based on the convenience of the size.
For example, 2^31-1 is often convenient on 64-bit architectures, since it fits pretty snugly into 32 bits and the product of two of them fits into a 64-bit word, either signed or unsigned.
On 32-bit architectures, 2^16+1 has similar advantages. It doesn't quite fit into 16 bits, of course, but if you treat 0 as a special case, then it's still pretty easy to multiply residues in a 32-bit word.
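For instance, a product mod 2^31 - 1 can be reduced by folding the high half back instead of dividing (a small sketch; in C the intermediate product would live in a 64-bit word):

```python
M31 = (1 << 31) - 1

def mulmod_m31(a: int, b: int) -> int:
    """Multiply modulo M31 = 2^31 - 1 using 2^31 = 1 (mod M31):
    repeatedly add the high bits of the product back onto the low 31 bits."""
    t = a * b                          # at most 62 bits for a, b < M31
    while t > M31:
        t = (t & M31) + (t >> 31)      # fold, preserving the value mod M31
    return 0 if t == M31 else t

assert mulmod_m31(123456789, 987654321) == (123456789 * 987654321) % M31
```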
I was looking at an algorithm which multiplied two n-bit numbers with 3 multiplications of n/2 bits. This algorithm is considered efficient. While I understand that space is obviously conserved, if I were working on an n-bit machine, how would n/2-bit multiplications be better? Those n/2-bit multiplications would be converted to n-bit multiplications because the CPU can only understand n-bit numbers.
Thank you in advance.
Algorithms like Karatsuba multiplication or Toom-Cook are typically used in the implementation of "bignums" -- computation with numbers of unlimited size. Generally speaking, the more sophisticated the algorithm, the larger numbers need to be to make it worthwhile doing.
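A sketch of Karatsuba's three-multiplication recursion, on Python integers for readability (the 64-bit cutoff is arbitrary; real bignum code works on arrays of machine words):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative x and y using three half-size multiplications."""
    if x < (1 << 64) or y < (1 << 64):        # small enough: let the hardware do it
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)   # x = x_hi * 2^m + x_lo
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    c = karatsuba(x_hi + x_lo, y_hi + y_lo)
    # x*y = a*2^(2m) + (c - a - b)*2^m + b: three multiplications instead of four
    return (a << (2 * m)) + ((c - a - b) << m) + b
```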
There are a variety of bignum packages; one of the more commonly used ones is the Gnu Multiprecision library, gmplib, which includes a large number of different multiplication algorithms, selecting the appropriate one based on the length of the multiplicands. (According to wikipedia, the Schönhage–Strassen algorithm, which is based on the fast Fourier transform algorithm, isn't used until the multiplicands reach 33,000 decimal digits. Such computations are relatively rare, but when you have to do such a computation, you probably care about it being done efficiently.)
I am just a beginner of computer science. I learned something about running time but I can't be sure what I understood is right. So please help me.
So integer factorization is currently not a polynomial time problem but primality testing is. Assume the number to be checked is n. Suppose we run a program that just checks, for every number from 1 to sqrt(n), whether it divides n, and stores the number if it does. I think this program is polynomial time, isn't it?
One way I could be wrong is that a factorization program should find all prime factors, not just the first one discovered. Maybe that is the reason.
However, in public key cryptography, finding a prime factor of a large number is essential to attacking the cryptosystem. Since usually a large number (the public key) is only the product of two primes, finding one prime means finding the other. This should be polynomial time. So why is it difficult or impossible to attack?
Casual descriptions of complexity like "polynomial factoring algorithm" generally refer to the complexity with respect to the size of the input, not the interpretation of the input. So when people say "no known polynomial factoring algorithm", they mean there is no known algorithm for factoring N-bit natural numbers that runs in time polynomial with respect to N. Not polynomial with respect to the number itself, which can be up to 2^N.
The difficulty of factorization is one of those beautiful mathematical problems that's simple to understand and takes you immediately to the edge of human knowledge. To summarize (today's) knowledge on the subject: we don't know why it's hard, not with any degree of proof, and the best methods we have run in more than polynomial time (but also significantly less than exponential time). The result that primality testing is even in P is pretty recent; see the linked Wikipedia page.
The best heuristic explanation I know for the difficulty is that primes are randomly distributed. One of the easier-to-understand results is Dirichlet's theorem. This theorem says that every arithmetic progression contains infinitely many primes; in other words, you can think of primes as being dense with respect to progressions, meaning you can't avoid running into them. This is the simplest of a rather large collection of such results; in all of them, primes appear in ways very much analogous to random numbers.
The difficulty of factoring is thus analogous to the impossibility of reversing a one-time pad. In a one-time pad, there's a bit we don't know XORed with another one we don't know. We get zero information about an individual bit knowing the result of the XOR. Replace "bit" with "prime" and XOR with multiplication, and you have the factoring problem. It's as if you've multiplied two random numbers together, and you get very little information from the product (instead of zero information).
Suppose we run a program that just checks, for every number from 1 to sqrt(n), whether it divides n, and stores the number if it does.
Even ignoring that each divisibility test takes longer for bigger numbers, this approach takes about 1.4 times as long if you just add a single (binary) digit to n, and twice as long if you add two digits.
I think that is the hallmark of exponential runtime: make n a fixed number of bits longer and the algorithm takes twice as long. For an N-bit input the work grows like sqrt(2^N) = 2^(N/2).
But note that this observation applies only to the algorithm you proposed. It is still unknown if integer factorization is polynomial or not. The cryptographers sure hope that it is not, but there are also alternative algorithms that do not depend on prime factorization being hard (such as elliptic curve cryptography), just in case...
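To see that growth concretely, here is a toy count of the divisions a sqrt(n)-bounded factoring loop performs (the bit sizes and helpers are mine):

```python
import math

def trial_division_steps(n: int) -> int:
    """Number of candidate divisors 2..sqrt(n) a naive factoring loop inspects
    before it finds a factor or proves n prime."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            break
        d += 1
    return steps

def prev_prime(n: int) -> int:
    """Largest prime <= n, found by slow trial division; fine for this toy demo."""
    while any(n % d == 0 for d in range(2, math.isqrt(n) + 1)):
        n -= 1
    return n

for bits in (20, 22, 24, 26):
    p = prev_prime((1 << bits) - 1)
    print(f"{bits}-bit prime {p}: ~{trial_division_steps(p)} divisions")
# Each extra 2 bits roughly doubles the count: the work grows like 2^(bits/2).
```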
Problem
Given a number n, 2<=n<=2^63. n could be prime itself. Find the prime p that is closest to n.
Using the fact that every prime p > 2 is odd and is of the form 6k+1 or 6k+5, one could write a loop from n-1 down to 2 to check whether each number is prime. So instead of checking all numbers, I only need to check the odd numbers of the two forms above. However, I wonder if there is a faster algorithm to solve this problem, i.e. some constraints that can restrict the range of numbers that need to be checked? Any idea would be greatly appreciated.
In reality, the odds of finding a prime number are "high" so brute force checking while skipping "trivial" numbers (numbers divisible by small primes) is going to be your best approach given what we know about number theory to date.
[update] A mild optimization you might do is similar to the Sieve of Eratosthenes: define some small smoothness bound, mark all numbers in a range around N that are divisible by those small primes as composite, and only test the numbers relatively prime to your smooth base. You will need to make your range and smoothness bound small enough so as not to eclipse the runtime of the comparatively "expensive" primality test.
The biggest optimization that you can do is to use a fast primality check before doing a full test. For instance see http://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test for a commonly used test that will quickly eliminate most numbers as "probably not prime". Only after you have good reason to believe that a number is prime should you attempt to properly prove primality. (For many purposes people are happy to just accept that if it passes a fixed number of trials of the Rabin-Miller test, it is so likely to be prime that you can just accept that fact.)
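Putting those pieces together, here is a sketch of the closest-prime search; the twelve-base strong-pseudoprime test mentioned in an earlier answer stands in for the final proof, which is valid because n < 2^64 (names and the tie-breaking rule are mine):

```python
def is_prime64(n: int) -> bool:
    """Strong-pseudoprime test with the first twelve primes as bases,
    deterministic for n < 2^64."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def closest_prime(n: int) -> int:
    """Prime p minimizing |p - n|; on a tie the smaller prime wins here."""
    if is_prime64(n):
        return n
    for delta in range(1, n):
        for p in (n - delta, n + delta):
            # skip trivial candidates: every prime > 3 is of the form 6k +/- 1
            if p >= 2 and (p in (2, 3) or p % 6 in (1, 5)) and is_prime64(p):
                return p
    return 2
```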