Ruby's digits method performance - ruby

I'm solving some Project Euler problems using Ruby, and specifically here I'm talking about problem 25 (What is the index of the first term in the Fibonacci sequence to contain 1000 digits?).
At first, I was using Ruby 2.2.3 and I coded the problem as such:
number = 3
a = 1
b = 2
while b.to_s.length < 1000
  a, b = b, a + b
  number += 1
end
puts number
But then I found out that version 2.4.2 has a method called digits, which is exactly what I needed. I transformed the code to:
while b.digits.length < 1000
And when I compared the two methods, digits was much slower.
Time
./025/problem025.rb 0.13s user 0.02s system 80% cpu 0.190 total    (to_s version)
./025/problem025.rb 2.19s user 0.03s system 97% cpu 2.275 total    (digits version)
Does anyone have an idea why?

Ruby's digits
... is implemented in rb_int_digits.
Which for non-tiny numbers (i.e., most of your numbers) uses rb_int_digits_bigbase.
Which extracts digit after digit naively with division/modulo by base.
So it should take quadratic time (at least with a small base such as 10).
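For illustration, here is a rough Python sketch of that digit-by-digit extraction (not Ruby's actual C code, just the idea):

def digits_naive(num, base=10):
    # Peel off one digit at a time with divmod, as rb_int_digits_bigbase does.
    # Each divmod of a d-digit bignum by a small base costs O(d), and the loop
    # runs about d times, which is where the roughly quadratic cost comes from.
    result = []
    while num > 0:
        num, r = divmod(num, base)
        result.append(r)
    return result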
Ruby's to_s
... is implemented in int_to_s.
Which uses rb_int2str.
Which for non-tiny numbers uses rb_big2str.
Which uses rb_big2str1.
Which might use big2str_gmp if available (which sounds/looks like it uses the fast GMP library) or ...
... uses big2str_generic.
Which uses big2str_karatsuba (sweet, I recognize that name!).
Which looks like it has something to do with ...
... Karatsuba's algorithm, which is a fast multiplication algorithm. If you multiply two n-digit numbers the naive way you learned in school, you take n² single-digit products. Karatsuba, on the other hand, only needs about n^1.585, which is quite a lot better. And I didn't read into this further, but I suspect what Ruby does here is also this efficient. Eric Lippert's answer with a base conversion algorithm uses Karatsuba multiplication and says "this [base conversion] algorithm is utterly dominated by the cost of the multiplication".
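For a feel of the idea, here is a minimal Python sketch of Karatsuba's scheme (purely illustrative; it is not what Ruby's C code looks like):

def karatsuba(x, y):
    # Multiply two non-negative integers with three half-size recursive
    # multiplications instead of four, giving ~n^1.585 instead of n^2 digit products.
    if x < 10 or y < 10:
        return x * y
    m = min(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)   # split into high/low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    hi  = karatsuba(xh, yh)
    lo  = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hi - lo   # the Karatsuba trick
    return (hi << (2 * m)) + (mid << m) + lo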
Comparing quadratic to n^1.585 over the number lengths from 1 digit to 1000 digits gives a factor of about 15:
(1..1000).sum { |i| i**2 } / (1..1000).sum { |i| i**1.585 }
#=> 15.150583254950678
Which is roughly the factor you observed as well. Of course that's a rather naive comparison, but, well, why not.
GMP by the way apparently uses/used a "near O(n * log(n)) FFT-based multiplication algorithm".
Thanks to @Drenmi's answer for motivating me to dig into the source after all. I hope I did this right, no guarantees, I'm a Ruby beginner. But that's why I left all the links there for you to check for yourself :-P

Integer#digits doesn't just "split" the number. From the documentation:
Returns the array including the digits extracted by place-value
notation with radix base of int.
This extraction is done even if a base argument is omitted. The relevant source:
# ruby/numeric.c:4809
while (!FIXNUM_P(num) || FIX2LONG(num) > 0) {
    VALUE qr = rb_int_divmod(num, base);
    rb_ary_push(digits, RARRAY_AREF(qr, 1));
    num = RARRAY_AREF(qr, 0);
}
As you can see, this process involves repeated modulo arithmetic, which likely accounts for the additional runtime.

Many Ruby methods create objects (strings, arrays, etc.).
In Ruby, object creation is "expensive".
For instance, to_s creates a string and digits creates an array every time the while condition is evaluated.
If you want to optimize your example, you can do the following:
# create the smallest possible 1000-digit number
max = 10**999
number = 3
a = 1
b = 2
# do not create objects in the while condition
while b < max
  a, b = b, a + b
  number += 1
end
puts number

I have not answered your question, but wish to suggest an improved algorithm for the problem you have addressed. For a given number of decimal digits, n, I have implemented the following algorithm.
estimate the number f of Fibonacci numbers ("FNs") that have n or fewer decimal digits.
compute the fth and (f-1)st FNs, and the number of digits m in the fth FN.
if m >= n, back down from the (f-1)st FN until the (f-1)st FN has fewer than n decimal digits, at which point the fth FN is the smallest FN to have n decimal digits.
if m < n, step up from the fth FN until it has n decimal digits, at which point it is the smallest FN to have n decimal digits.
The key is to compute a close estimate f in the first step.
Code
AVG_FNs_PER_DIGIT = 4.784971966781667

def first_fibonacci_with_n_digits(n)
  return [1, 1] if n == 1
  idx = (n * AVG_FNs_PER_DIGIT).round
  fn, prev_fn = fib(idx)
  fn.to_s.size >= n ? fib_down(n, fn, prev_fn, idx) : fib_up(n, fn, prev_fn, idx)
end

def fib(idx)
  a = 1
  b = 2
  (idx - 2).times { a, b = b, a + b }
  [b, a]
end

def fib_up(n, b, a, idx)
  loop do
    a, b = b, a + b
    idx += 1
    break [idx, b] if b.to_s.size == n
  end
end

def fib_down(n, b, a, idx)
  loop do
    a, b = b - a, a
    break [idx, b] if a.to_s.size == n - 1
    idx -= 1
  end
end
Benchmarks
In computing each Fibonacci number two operations are typically performed:
compute the number of digits in the last-computed Fibonacci number and if that number is equal to the target number of digits, terminate (for reasons made clear in the Explanation section below, it cannot be larger than the target number); else
compute the next number in the Fibonacci sequence.
By contrast, the method I have proposed performs the first step a relatively small number of times.
How important is the first step relative to the second and how does the use of n.digits.size compare with that of n.to_s.size in the first step? Let's run some benchmarks to find out.
def use_to_s(ndigits)
  case ndigits
  when 1
    [1, 1]
  else
    a = 1
    b = 2
    idx = 3
    loop do
      break [idx, b] if b.to_s.length == ndigits
      a, b = b, a + b
      idx += 1
    end
  end
end

def use_digits(ndigits)
  case ndigits
  when 1
    [1, 1]
  else
    a = 1
    b = 2
    idx = 3
    loop do
      break [idx, b] if b.digits.size == ndigits
      a, b = b, a + b
      idx += 1
    end
  end
end
require 'fruity'

def test(ndigits)
  nfibs, last_fib = use_to_s(ndigits)
  puts "\nndigits = #{ndigits}, nfibs=#{nfibs}, last_fib=#{last_fib}"
  compare do
    try_use_to_s   { use_to_s(ndigits) }
    try_use_digits { use_digits(ndigits) }
    try_estimate   { first_fibonacci_with_n_digits(ndigits) }
  end
end
test 20
ndigits = 20, nfibs=93, last_fib=12200160415121876738
Running each test 128 times. Test will take about 1 second.
try_estimate is faster than try_use_to_s by 2x ± 0.1
try_use_to_s is faster than try_use_digits by 80.0% ± 10.0%
test 100
ndigits = 100, nfibs=476, last_fib=13447...37757 (90 digits omitted)
Running each test 16 times. Test will take about 4 seconds.
try_estimate is faster than try_use_to_s by 5x ± 0.1
try_use_to_s is faster than try_use_digits by 10x ± 1.0
test 500
ndigits = 500, nfibs=2390, last_fib=13519...63145 (490 digits omitted)
Running each test 2 times. Test will take about 27 seconds.
try_estimate is faster than try_use_to_s by 9x ± 0.1
try_use_to_s is faster than try_use_digits by 60x ± 1.0
test 1000
ndigits = 1000, nfibs=4782, last_fib=10700...27816 (990 digits omitted)
Running each test once. Test will take about 1 minute.
try_estimate is faster than try_use_to_s by 12x ± 10.0
try_use_to_s is faster than try_use_digits by 120x ± 100.0
There are two main take-aways from these results:
"try_estimate" is the fastest because it performs the first step relatively few times; and
the use of to_s is much faster than that of digits.
Further to the first of these observations note that the initial estimates of the index of the first FN having a given number of digits, compared to the actual index, are as follows:
for 20 digits: 96 est. vs 93 actual
for 100 digits: 479 est. vs 476 actual
for 500 digits: 2392 est. vs 2390 actual
for 1000 digits: 4785 est. vs 4782 actual
The deviation was at most 3, meaning the number of digits had to be calculated for at most 3 FNs to obtain the desired result.
Explanation
The only part of the methods given in the section Code above that requires explanation is the derivation of the constant AVG_FNs_PER_DIGIT, which is used to calculate an estimate of the index of the first FN having the specified number of digits.
This constant is derived from the question and selected answer given here. (The Wikipedia article on Fibonacci numbers provides a good overview of the mathematical properties of FNs.)
It is known that the first 7 FNs (including zero) have one digit; thereafter the FNs gain an additional digit every 4 or 5 FNs (i.e., sometimes 4, else 5). Therefore, as a very crude calculation, we see that to calculate the first FN with n digits, n >= 2, it will not be less than the 4*nth FN. For n = 1000, that would be 4,000. (In fact, the 4,782nd is the smallest to have 1,000 digits.) In other words, we don't need to calculate the number of digits in the first 4,000 FNs. We can improve on this estimate, however.
As n approaches infinity, the ratio of ranges 10**n...10**(n+1) (n-digit intervals) that contain 5 FNs to those that contain 4 FNs can be computed as follows.
LOG_10 = Math.log(10)
#=> 2.302585092994046
GR = (1 + Math.sqrt(5))/2
#=> 1.618033988749895
LOG_GR = Math.log(GR)
#=> 0.48121182505960347
RATIO_5to4 = (LOG_10 - 4*LOG_GR)/(5*LOG_GR - LOG_10)
#=> 3.6505564183095474
where GR is the Golden Ratio.
Over a large number of n-digit intervals let n4 be the number of those intervals containing 4 FNs and n5 be the number containing 5 FNs. The average number of FNs per interval is therefore (n4*4 + n5*5)/(n4 + n5). Since n5/n4 converges to RATIO_5to4, n5 approaches RATIO_5to4 * n4 in the limit (discarding roundoff error). If we substitute out n5, and let
b = 1/(1 + RATIO_5to4)
#=> 0.21502803321833364
we find the average number of FNs per n-digit interval converges to
avg = b * 4 + (1-b) *5
#=> 4.784971966781667
If fn is the first FN to have n decimal digits, the number of FNs in the sequence up to and including fn can therefore be approximated as
n * avg
For example, the estimate of the index of the first FN to have 1000 decimal digits would be (1000 * 4.784971966781667).round #=> 4785.

Related

A faster alternative to all(a(:,i)==a,1) in MATLAB

It is a straightforward question: Is there a faster alternative to all(a(:,i)==a,1) in MATLAB?
I'm thinking of an implementation that benefits from short-circuit evaluation in the whole process. I mean, all() definitely benefits from short-circuit evaluation, but a(:,i)==a doesn't.
I tried the following code,
% example for the input matrix
m = 3;    % m and n aren't necessarily equal to those values.
n = 5000; % It's only possible to know in advance that 'm' << 'n'.
a = randi([0,5],m,n); % the maximum value of 'a' isn't necessarily equal to
                      % 5 but it's possible to state that every element in
                      % 'a' is a positive integer.

% all, equal solution
tic
for i = 1:n % stepping up the elapsed time in orders of magnitude
    %%%%%%%%%% all and equal solution %%%%%%%%%
    ax_boo = all(a(:,i)==a,1);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc

% alternative solution
tic
for i = 1:n % stepping up the elapsed time in orders of magnitude
    %%%%%%%%%%% alternative solution %%%%%%%%%%%
    ax_boo = a(1,i) == a(1,:);
    for k = 2:m
        ax_boo(ax_boo) = a(k,i) == a(k,ax_boo);
    end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc
but it's intuitive that any "for-loop-solution" within the MATLAB environment will be naturally slower. I'm wondering if there is a MATLAB built-in function written in a faster language.
EDIT:
After running more tests I found out that the implicit expansion does have a performance impact in evaluating a(:,i)==a. If the matrix a has more than one row, all(repmat(a(:,i),[1,n])==a,1) may be faster than all(a(:,i)==a,1) depending on the number of columns (n). For n=5000 repmat explicit expansion has proved to be faster.
But I think that a generalization of Kenneth Boyd's answer is the "ultimate solution" if all elements of a are positive integers. Instead of dealing with a (m x n matrix) in its original form, I will store and deal with adec (1 x n matrix):
exps = ((0):(m-1)).';
base = max(a,[],[1,2]) + 1;
adec = sum( a .* base.^exps , 1 );
In other words, each column will be encoded to one integer. And of course adec(i)==adec is faster than all(a(:,i)==a,1).
EDIT 2:
I forgot to mention that the adec approach has a functional limitation. At best, storing adec as uint64, the following inequality must hold: base^m < 2^64 + 1.
Since your goal is to count the number of columns that match, my example converts the binary encoding to decimal integers; then you just loop over the possible values (with 3 rows, that is 8 possible values) and count the number of matches.
a_dec = 2.^(0:(m-1)) * a;
num_poss_values = 2 ^ m;
num_matches = zeros(num_poss_values, 1);
for i = 1:num_poss_values
    num_matches(i) = sum(a_dec == (i - 1));
end
On my computer, using R2020a, here are the execution times for your first 2 options and the code above:
Elapsed time is 0.246623 seconds.
Elapsed time is 0.553173 seconds.
Elapsed time is 0.000289 seconds.
So my code is 853 times faster!
I wrote my code so it will work with m being an arbitrary integer.
The num_matches variable contains the number of columns that add up to 0, 1, 2, ...7 when converted to a decimal.
As an alternative you can use the third output of unique:
[~, ~, iu] = unique(a.', 'rows');
for i = 1:n
    ax_boo = iu(i) == iu;
end
As indicated in a comment:
ax_boo isolates the indices of the columns I have to sum in a row vector b. So, basically the next line would be something like c = sum(b(ax_boo),2);
It is a typical usage of accumarray:
[~, ~, iu] = unique(a.', 'rows');
C = accumarray(iu,b);
for i = 1:n
    c = C(i);
end

math range error - is there a way to further limit this algorithm to avoid

Working on Project Euler problem 26, and wanting to use an algorithm looking for the prime p with the largest order of 10 modulo p. Essentially the problem is to look for the denominator which creates the longest repetend in a decimal. After a bunch of Wikipedia reading, it looks like the prime described above would fulfill that. But, unfortunately, it looks like taking the very large powers of 10 results in an error. My question then is: is there a way of getting around this error (making the numbers smaller), or should I abandon this strategy and just do long division (with the plan being to focus on the primes)?
[of note, in the order_ten method I can get it to run if I limit the powers of 10 to 300 and probably can go a bit long, which goes along with the length of a long]
import math

def prime_seive(limit):
    seive_list = [True]*limit
    seive_list[0] = seive_list[1] = False
    for i in range(2, limit):
        if seive_list[i] == True:
            n = 2
            while i*n < limit:
                seive_list[i*n] = False  # get rid of multiples
                n = n+1
    prime_numbers = [i for i, j in enumerate(seive_list) if j == True]
    return prime_numbers

def order_ten(n):
    for k in range(1, n):
        if (math.pow(10, k) - 1) % n == 0:
            return k

primes = prime_seive(1000)
max_order = 0
max_order_d = -1
for x in reversed(primes):
    order = order_ten(x)
    if order > max_order:
        max_order = order
        max_order_d = x
print max_order
print max_order_d
I suspect that the problem is that your numbers get too large when first taking a large power of ten and then computing the value mod n. (For instance, if I asked you to compute 10^11 mod 11, you could remark that 10 mod 11 is (-1) and thus 10^11 mod 11 is just (-1)^11 mod 11, i.e. -1.)
Maybe you could try programming your own exponentiation routine mod n, something like (in pseudocode):
myPow(int k, int n) {
    if (k == 0) return 1;
    else return ((myPow(k-1, n) * 10) % n);
}
This way you never deal with numbers larger than n.
The way it is written, you will get linear complexity in k for computing the power, and thus quadratic complexity in n for your function order_ten(n). If this is too slow, you could improve the function myPow to use some smart exponentiation.
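As a side note (not part of the original answer, just a minimal Python sketch of the same idea): the loop can be written iteratively so that no intermediate value ever exceeds n, and Python's built-in three-argument pow(10, k, n) already does fast modular exponentiation as well.

def order_ten(n):
    # Smallest k with 10**k mod n == 1 (equivalent to (10**k - 1) % n == 0),
    # keeping every intermediate value below n.
    x = 10 % n
    for k in range(1, n):
        if x == 1:
            return k
        x = (x * 10) % n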

Better Algorithm to find the maximum number whose square divides K

Given a number K which is a product of two different numbers (A, B), find the maximum number (<= A and <= B) whose square divides K.
E.g.: K = 54 (6*9). Both numbers are available, i.e., 6 and 9.
My approach is fairly simple, or trivial.
Take the smaller of the two (6 in this case); let's say A.
Square the number and divide K; if it divides evenly, that's the number.
Else A = A-1, till A = 1.
For the given example, 3*3 = 9 divides K, and hence 3 is the answer.
Looking for a better algorithm than the trivial solution.
Note: the test cases are in the 1000s, so the best possible approach is needed.
I am sure someone else will come up with a nice answer involving modulus arithmetic. Here is a naive approach...
Each of the factors can themselves be factored (though it might be an expensive operation).
Given the factors, you can then look for groups of repeated factors.
For instance, using your example:
Prime factors of 9: 3, 3
Prime factors of 6: 2, 3
All prime factors: 2, 3, 3, 3
There are two 3s, so you have your answer (the square of 3 divides 54).
Second example of 36 x 9 = 324
Prime factors of 36: 2, 2, 3, 3
Prime factors of 9: 3, 3
All prime factors: 2, 2, 3, 3, 3, 3
So you have two 2s and four 3s, which means 2x3x3 is repeated. 2x3x3 = 18, so the square of 18 divides 324.
Edit: python prototype
import math

def factors(num, dict):
    """ This finds the factors of a number recursively.
        It is not the most efficient algorithm, and I
        have not tested it a lot. You should probably
        use another one. dict is a dictionary which looks
        like {factor: occurrences, factor: occurrences, ...}
        It must contain at least {2: 0} but need not have
        any other pre-populated elements. Factors will be added
        to this dictionary as they are found.
    """
    while (num % 2 == 0):
        num /= 2
        dict[2] += 1
    i = 3
    found = False
    while (not found and (i <= int(math.sqrt(num)))):
        if (num % i == 0):
            found = True
            factors(i, dict)
            factors(num / i, dict)
        else:
            i += 2
    if (not found):
        if (num in dict.keys()):
            dict[num] += 1
        else:
            dict[num] = 1
    return 0

# MAIN ROUTINE IS HERE
n1 = 37  # first number (6 in your example)
n2 = 41  # second number (9 in your example)
dict = {2: 0}  # initialise factors (start with "no factors of 2")
factors(n1, dict)  # find the factors of f1 and add them to the list
factors(n2, dict)  # find the factors of f2 and add them to the list
sqfac = 1
# now find all factors repeated twice and multiply them together
for k in dict.keys():
    dict[k] /= 2
    sqfac *= k ** dict[k]
# here is the result
print(sqfac)
Answer in C++
#include <iostream>
#include <cmath>
using namespace std;

void func(int i, int j)
{
    int k = 54;
    float result = pow(i, 2) / k;
    if (static_cast<int>(result) == result)
    {
        if (i < j)
        {
            func(j, i);
        }
        else
        {
            cout << "Number is correct: " << i << endl;
        }
    }
    else
    {
        cout << "Number is wrong" << endl;
        func(j, i);
    }
}
Explanation:
First recurse, then test whether result is a positive integer; if it is, check whether the other multiple is less or greater; if greater, the recursive call tries the other multiple, and if not, then it is correct. Then, if result is not a positive integer, print "Number is wrong" and make another recursive call to test j.
If I got the problem correctly, you have a rectangle of length = A, width = B, and area = K,
and you want to convert it to a square and lose the minimum possible area.
If this is the case, the problem with your algorithm is not the cost of iterating through multiple iterations until you get the output.
Rather, the problem is that your algorithm depends heavily on the length A and width B of the input rectangle,
while it should depend only on the area K.
For example:
Assume A = 1, B = 25.
Then K = 25 (the rect area).
Your algorithm will take the minimum value, which is A, and accept it as the answer after a single iteration, which is very fast but leads to the wrong answer, as it will result in a square of area 1 and waste the remaining 24 (whatever cm or m).
While the correct answer here should be 5, which will never be reached by your algorithm.
So, in my solution I assume a single input K.
My idea is as follows:
x = sqrt(K)
if (x is int) .. x is the answer
else loop from x-1 till 1, x--
    if K/x^2 is int, x is the answer
This might take extra iterations but will guarantee an accurate answer.
Also, there might be some concern about the cost of sqrt(K), but it will be called just once, to avoid being misled by the length and width input.
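A minimal Python sketch of this idea (my own illustration; the function name is made up):

import math

def largest_square_divisor_root(k):
    # Largest x such that x*x divides k, found by walking down from floor(sqrt(k)).
    x = math.isqrt(k)   # integer square root (Python 3.8+), avoids float rounding
    while x > 1:
        if k % (x * x) == 0:
            return x
        x -= 1
    return 1

For example, largest_square_divisor_root(54) returns 3 and largest_square_divisor_root(324) returns 18.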

Project Euler number 35 efficiency

https://projecteuler.net/problem=35
All problems on Project Euler are supposed to be solvable by a program in under 1 minute. My solution, however, has a runtime of almost 3 minutes. Other solutions I've seen online are similar to mine conceptually, but have runtimes that are exponentially faster. Can anyone help make my code more efficient/run faster?
Thanks!
# genPrimes takes an argument n and returns a list of all prime numbers less than n
def genPrimes(n):
    primeList = [2]
    number = 3
    while(number < n):
        isPrime = True
        for element in primeList:
            if element > number**0.5:
                break
            if number%element == 0 and element <= number**0.5:
                isPrime = False
                break
        if isPrime == True:
            primeList.append(number)
        number += 2
    return primeList

# isCircular takes a number as input and returns True if all rotations of that number are prime
def isCircular(prime):
    original = prime
    isCircular = True
    prime = int(str(prime)[-1] + str(prime)[:len(str(prime)) - 1])
    while(prime != original):
        if prime not in primeList:
            isCircular = False
            break
        prime = int(str(prime)[-1] + str(prime)[:len(str(prime)) - 1])
    return isCircular

primeList = genPrimes(1000000)
circCount = 0
for prime in primeList:
    if isCircular(prime):
        circCount += 1
print circCount
Two modifications of your code yield a pretty fast solution (roughly 2 seconds on my machine):
Generating primes is a common problem with many solutions on the web. I replaced yours with rwh_primes1 from this article:
def genPrimes(n):
    sieve = [True] * (n/2)
    for i in xrange(3, int(n**0.5)+1, 2):
        if sieve[i/2]:
            sieve[i*i/2::i] = [False] * ((n-i*i-1)/(2*i)+1)
    return [2] + [2*i+1 for i in xrange(1, n/2) if sieve[i]]
It is about 65 times faster (0.04 seconds).
The most important step I'd suggest, however, is to filter the list of generated primes. Since each circularly shifted version of an integer has to be prime, the circular prime must not contain certain digits. The prime 23, e.g., can be easily spotted as an invalid candidate, because it contains a 2, which indicates divisibility by two when this is the last digit. Thus you might remove all such bad candidates by the following simple method:
def filterPrimes(primeList):
    for i in primeList[3:]:
        if '0' in str(i) or '2' in str(i) or '4' in str(i) \
           or '5' in str(i) or '6' in str(i) or '8' in str(i):
            primeList.remove(i)
    return primeList
Note that the loop starts at the fourth prime number to avoid removing the number 2 or 5.
The filtering step takes most of the computing time (about 1.9 seconds), but reduces the number of circular prime candidates dramatically from 78498 to 1113 (= 98.5 % reduction)!
The last step, the circulation of each remaining candidate, can be done as you suggested. If you wish, you can simplify the code as follows:
circCount = sum(map(isCircular, primeList))
Due to the reduced candidate set this step is completed in only 0.03 seconds.

Find next prime given all prior

I'm writing a recursive infinite prime number generator, and I'm almost sure I can optimize it better.
Right now, aside from a lookup table of the first dozen primes, each call to the recursive function receives a list of all previous primes.
Since it's a lazy generator, right now I'm just filtering out any number that is modulo 0 for any of the previous primes, and taking the first unfiltered result. (The check I'm using short-circuits, so the first time a previous prime divides the current number evenly it aborts with that information.)
Right now, my performance degrades around when searching for the 400th prime (37,813). I'm looking for ways to use the unique fact that I have a list of all prior primes, and am only searching for the next, to improve my filtering algorithm. (Most information I can find offers non-lazy sieves to find primes under a limit, or ways to find the p(n)th prime given p(n-1), not optimizations to find p(n) given the primes 2...p(n-1).)
For example, I know that the p(n)th prime must reside in the range (p(n-1) + 1)...(p(n-1) + p(n-2)). Right now I start my filtering of integers at p(n-1) + 2 (since p(n-1) + 1 can only be prime for p(n-1) = 2, which is precomputed). But since this is a lazy generator, knowing the terminal bounds of the range (p(n-1) + p(n-2)) doesn't help me filter anything.
What can I do to filter more effectively given all previous primes?
Code Sample
@doc """
Creates an infinite stream of prime numbers.

    iex> Enum.take(primes, 5)
    [2, 3, 5, 7, 11]
    iex> Enum.take_while(primes, fn(n) -> n < 25 end)
    [2, 3, 5, 7, 11, 13, 17, 19, 23]
"""
@spec primes :: Stream.t
def primes do
  Stream.unfold( [], fn primes ->
    next = next_prime(primes)
    { next, [next | primes] }
  end )
end

defp next_prime([]), do: 2
defp next_prime([2 | _]), do: 3
defp next_prime([3 | _]), do: 5
defp next_prime([5 | _]), do: 7
# ... etc

defp next_prime(primes) do
  start = Enum.first(primes) + 2
  Enum.first(
    Stream.drop_while(
      Integer.stream(from: start, step: 2),
      fn number ->
        Enum.any?(primes, fn prime ->
          rem(number, prime) == 0
        end )
      end
    )
  )
end
The primes function starts with an empty array, gets the next prime for it (2 initially), and then 1) emits it from the Stream and 2) adds it to the top of the primes stack used in the next call. (I'm sure this stack is the source of some slowdown.)
The next_prime function takes in that stack. Starting from the last known prime + 2, it creates an infinite stream of integers, drops each integer that is evenly divisible by any known prime in the list, and then returns the first occurrence.
This is, I suppose, something similar to a lazy incremental Eratosthenes's sieve.
You can see some basic attempts at optimization: I start checking at pn-1+2, and I step over even numbers.
I tried a more verbatim Eratosthenes's sieve by just passing the Integer.stream through each calculation, and after finding a prime, wrapping the Integer.stream in a new Stream.drop_while that filtered just multiples of that prime out. But since Streams are implemented as anonymous functions, that mutilated the call stack.
It's worth noting that I'm not assuming you need all prior primes to generate the next one. I just happen to have them around, thanks to my implementation.
For any number k you only need to try division with primes up to and including √k. This is because any prime larger than √k would need to be multiplied with a prime smaller than √k.
Proof:
√k * √k = k, so (a+√k) * √k > k (for all 0 < a < (k-√k)). From this it follows that (a+√k) can divide k only if there is another divisor smaller than √k.
This is commonly used to speed up finding primes tremendously.
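A minimal illustration of that bound in Python (my own sketch, not the poster's Elixir; it assumes primes holds all primes found so far in ascending order):

def next_prime(primes):
    # Next prime after primes[-1], trial-dividing only by primes <= sqrt(candidate).
    if not primes:
        return 2
    candidate = 3 if primes[-1] == 2 else primes[-1] + 2
    while True:
        is_prime = True
        for p in primes:
            if p * p > candidate:   # primes above sqrt(candidate) cannot matter
                break
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            return candidate
        candidate += 2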
You don't need all prior primes, just those below the square root of your current production point are enough, when generating composites from primes by the sieve of Eratosthenes algorithm.
This greatly reduces the memory requirements. The primes are then simply those odd numbers which are not among the composites.
Each prime p produces a chain of its multiples, starting from its square, enumerated with the step of 2p (because we work only with odd numbers). These multiples, each with its step value, are stored in a dictionary, thus forming a priority queue. Only the primes up to the square root of the current candidate are present in this priority queue (the same memory requirement as that of a segmented sieve of E.).
Symbolically, the sieve of Eratosthenes is
P = {3,5,7,9, ...} \ ⋃ {{p², p²+2p, p²+4p, p²+6p, ...} | p in P}
Each odd prime generates a stream of its multiples by repeated addition; all these streams merged together give us all the odd composites; and primes are all the odd numbers without the composites (and the one even prime number, 2).
In Python (can be read as an executable pseudocode, hopefully),
def postponed_sieve():                   # postponed sieve, by Will Ness,
    yield 2; yield 3;                    # https://stackoverflow.com/a/10733621/849891
    yield 5; yield 7;                    # original code David Eppstein / Alex Martelli
    D = {}                               # 2002, http://code.activestate.com/recipes/117119
    ps = (p for p in postponed_sieve())  # a separate Primes Supply:
    p = ps.next() and ps.next()          # (3) a Prime to add to dict
    q = p*p                              # (9) when its sQuare is
    c = 9                                # the next Candidate
    while True:
        if c not in D:                   # not a multiple of any prime seen so far:
            if c < q: yield c            # a prime, or
            else:     # (c==q):          # the next prime's square:
                add(D, c + 2*p, 2*p)     # (9+6, 6 : 15,21,27,33,...)
                p = ps.next()            # (5)
                q = p*p                  # (25)
        else:                            # 'c' is a composite:
            s = D.pop(c)                 # step of increment
            add(D, c + s, s)             # next multiple, same step
        c += 2                           # next odd candidate

def add(D, x, s):                        # make no multiple keys in Dict
    while x in D: x += s                 # increment by the given step
    D[x] = s
Once a prime is produced, it can be forgotten. A separate prime supply is taken from a separate invocation of the same generator, recursively, to maintain the dictionary. And the prime supply for that one is taken from another, recursively as well. Each needs to be supplied only up to the square root of its production point, so very few generators are needed overall (on the order of log log N generators), and their sizes are asymptotically insignificant (sqrt(N), sqrt( sqrt(N) ), etc).
I wrote a program that generates the prime numbers in order, without limit, and used it to sum the first billion primes at my blog. The algorithm uses a segmented Sieve of Eratosthenes; additional sieving primes are calculated at each segment, so the process can continue indefinitely, as long as you have space to store the sieving primes. Here's pseudocode:
function init(delta)                 # Sieve of Eratosthenes
  m, ps, qs := 0, [], []
  sieve := makeArray(2 * delta, True)
  for p from 2 to delta
    if sieve[p]
      m := m + 1; ps.insert(p)
      qs.insert(p + (p-1) / 2)
      for i from p+p to n step p
        sieve[i] := False
  return m, ps, qs, sieve

function advance(m, ps, qs, sieve, bottom, delta)
  for i from 0 to delta - 1
    sieve[i] := True
  for i from 0 to m - 1
    qs[i] := (qs[i] - delta) % ps[i]
  p := ps[0] + 2
  while p * p <= bottom + 2 * delta
    if isPrime(p)                    # trial division
      m := m + 1; ps.insert(p)
      qs.insert((p*p - bottom - 1) / 2)
    p := p + 2
  for i from 0 to m - 1
    for j from qs[i] to delta step ps[i]
      sieve[j] := False
  return m, ps, qs, sieve
Here ps is the list of sieving primes less than the current maximum and qs is the offset of the smallest multiple of the corresponding ps in the current segment. The advance function clears the bitarray, resets qs, extends ps and qs with new sieving primes, then sieves the next segment.
function genPrimes()
  bottom, i, delta := 0, 1, 50000
  m, ps, qs, sieve := init(delta)
  yield 2
  while True
    if i == delta                    # reset for next segment
      i, bottom := -1, bottom + 2 * delta
      m, ps, qs, sieve := \
        advance(m, ps, qs, sieve, bottom, delta)
    else if sieve[i]                 # found prime
      yield bottom + 2*i + 1
    i := i + 1
The segment size 2 * delta is arbitrarily set to 100000. This method requires O(sqrt(n)) space for the sieving primes plus constant space for the sieve.
It is slower but saves space to generate candidates with a wheel and test the candidates for primality.
function genPrimes()
  w, wheel := 0, [1,2,2,4,2,4,2,4,6,2,6,4,2,4, \
    6,6,2,6,4,2,6,4,6,8,4,2,4,2,4,8,6,4,6, \
    2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10]
  p := 2; yield p
  repeat
    p := p + wheel[w]
    if w == 51 then w := 4 else w := w + 1
    if isPrime(p) yield p
It may be useful to begin with a sieve and switch to a wheel when the sieve grows too large. Even better is to continue sieving with some fixed set of sieving primes, once the set grows too large, then report only those values bottom + 2*i + 1 that pass a primality test.
