Can I reduce the computational complexity of this? - algorithm

Well, I have this bit of code that is slowing down the program hugely: it has linear complexity but is called many times, making the whole program quadratic. If possible I would like to reduce its computational complexity, but otherwise I'll just optimize it where I can. So far I have reduced it down to:
def table(n):
    a = 1
    while 2*a <= n:
        if (-a*a) % n == 1: return a
        a += 1
Anyone see anything I've missed? Thanks!
EDIT: I forgot to mention: n is always a prime number.
EDIT 2: Here is my new improved program (thanks for all the contributions!):
def table(n):
    if n == 2: return 1
    if n%4 != 1: return
    a1 = n-1
    for a in range(1, n//2+1):
        if (a*a)%n == a1: return a
EDIT 3: And testing it out in its real context it is much faster! Well this question appears solved but there are many useful answers. I should also say that as well as those above optimizations, I have memoized the function using Python dictionaries...

Ignoring the algorithm for a moment (yes, I know, bad idea), the running time of this can be decreased hugely just by switching from while to for.
for a in range(1, n // 2 + 1):
(Hope this doesn't have an off-by-one error. I'm prone to make these.)
Another thing that I would try is to look if the step width can be incremented.

Take a look at http://modular.fas.harvard.edu/ent/ent_py .
The function sqrtmod does the job if you set a = -1 and p = n.
You missed a small point: the running time of your improved algorithm is still on the order of the square root of n. As long as you have only small primes n (say, less than 2^64), that's OK, and you should probably prefer your implementation to a more complex one.
If the prime n becomes bigger, you might have to switch to an algorithm using a little bit of number theory. To my knowledge, your problem can be solved only by a probabilistic algorithm, in time log(n)^3. If I remember correctly, assuming the Riemann hypothesis holds (which most people do), one can show that the running time of the following algorithm (in Ruby - sorry, I don't know Python) is log(log(n))*log(n)^3:
class Integer
  # calculate b to the power of e modulo self
  def power(b, e)
    raise 'power only defined for integer base' unless b.is_a? Integer
    raise 'power only defined for integer exponent' unless e.is_a? Integer
    raise 'power is implemented only for positive exponent' if e < 0
    return 1 if e.zero?
    x = power(b, e >> 1)
    x *= x
    (e & 1).zero? ? x % self : (x * b) % self
  end

  # Fermat test (probabilistic prime number test)
  def prime?(b = 2)
    raise "base must be at least 2 in prime?" if b < 2
    raise "base must be an integer in prime?" unless b.is_a? Integer
    power(b, self >> 1) == 1
  end

  # find square root of -1 modulo prime
  def sqrt_of_minus_one
    return 1 if self == 2
    return false if (self & 3) != 1
    raise 'sqrt_of_minus_one works only for primes' unless prime?
    # now just try all numbers (each succeeds with probability 1/2)
    2.upto(self) do |b|
      e = self >> 1
      e >>= 1 while (e & 1).zero?
      x = power(b, e)
      next if [1, self - 1].include? x
      loop do
        y = (x * x) % self
        return x if y == self - 1
        raise 'sqrt_of_minus_one works only for primes' if y == 1
        x = y
      end
    end
  end
end

# find a prime
p = loop do
  x = rand(1 << 512)
  next if (x & 3) != 1
  break x if x.prime?
end
puts "%x" % p
puts "%x" % p.sqrt_of_minus_one
The slow part is now finding the prime (which takes approx. log(n)^4 integer operations); finding the square root of -1 still takes less than a second for 512-bit primes.
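For readers who want the same idea in Python: for p ≡ 1 (mod 4), once you have a quadratic non-residue b, b^((p-1)/4) is a square root of -1, and Euler's criterion finds a non-residue in a couple of random tries. This is my own sketch (not a port of the Ruby code above) and assumes p is already known to be prime:

```python
import random

def sqrt_of_minus_one(p):
    """Square root of -1 modulo a prime p, or None if none exists.

    Assumes p is prime; returns the smaller of the two roots.
    """
    if p == 2:
        return 1
    if p % 4 != 1:
        return None  # -1 is a non-residue for p = 3 (mod 4)
    while True:
        b = random.randrange(2, p)
        # Euler's criterion: b is a non-residue iff b^((p-1)/2) = -1 (mod p)
        if pow(b, (p - 1) // 2, p) == p - 1:
            x = pow(b, (p - 1) // 4, p)  # then x^2 = -1 (mod p)
            return min(x, p - x)

print(sqrt_of_minus_one(13))  # 5
```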

Consider pre-computing the results and storing them in a file. Nowadays many platforms have a huge disk capacity. Then, obtaining the result will be an O(1) operation.
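A minimal sketch of that precompute-and-store idea (the filename and the limit are made up; this assumes the range of n you care about is known up front):

```python
import pickle

def build_table(limit, filename="sqrt_minus_one.pickle"):
    # Precompute table(n) for every n below limit, then store the dict once.
    results = {}
    for n in range(2, limit):
        for a in range(1, n // 2 + 1):
            if (a * a) % n == n - 1:
                results[n] = a
                break
    with open(filename, "wb") as f:
        pickle.dump(results, f)

def load_table(filename="sqrt_minus_one.pickle"):
    # After loading, each query is a single O(1) dictionary lookup.
    with open(filename, "rb") as f:
        return pickle.load(f)
```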

(Building on Adam's answer.)
Look at the Wikipedia page on quadratic reciprocity:
x^2 ≡ −1 (mod p) is solvable if and only if p ≡ 1 (mod 4).
Then you can avoid the search for a root precisely for those odd primes n that are not congruent to 1 modulo 4:
def table(n):
    if n == 2: return 1
    if n%4 != 1: return None # or raise exception
    ...

Based on the OP's second edit:
def table(n):
    if n == 2: return 1
    if n%4 != 1: return
    mod = 0
    a1 = n - 1
    for a in xrange(1, a1, 2):
        mod += a
        while mod >= n: mod -= n
        if mod == a1: return a//2 + 1

It looks like you're trying to find the square root of -1 modulo n. Unfortunately, this is not an easy problem, depending on what values of n are input into your function. Depending on n, there might not even be a solution. See Wikipedia for more information on this problem.

Edit 2: Surprisingly, strength-reducing the squaring reduces the time a lot, at least on my Python2.5 installation. (I'm surprised because I thought interpreter overhead was taking most of the time, and this doesn't reduce the count of operations in the inner loop.) Reduces the time from 0.572s to 0.146s for table(1234577).
def table(n):
    n1 = n - 1
    square = 0
    for delta in xrange(1, n, 2):
        square += delta
        if n <= square: square -= n
        if square == n1: return delta // 2 + 1
strager posted the same idea but I think less tightly coded. Again, jug's answer is best.
Original answer: Another trivial coding tweak on top of Konrad Rudolph's:
def table(n):
    n1 = n - 1
    for a in xrange(1, n // 2 + 1):
        if (a*a) % n == n1: return a
Speeds it up measurably on my laptop. (About 25% for table(1234577).)
Edit: I didn't notice the python3.0 tag; but the main change was hoisting part of the calculation out of the loop, not the use of xrange. (Academic since there's a better algorithm.)

Is it possible for you to cache the results?
When you calculate a large n you are given the results for the lower n's almost for free.

One thing that you are doing is repeating the calculation -a*a over and over again.
Create a table of the values once and then do a lookup in the main loop.
Also - although this probably doesn't apply to you, since your function is named table - if you call a function that takes time to calculate, you should cache the result in a table and just do a table lookup if it is called again with the same value. This doesn't save time the first time through, but it keeps you from repeating the calculation more than once.
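A minimal sketch of that table-lookup idea, wrapping the improved function from EDIT 2 in functools.lru_cache so repeat calls become dictionary lookups:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def table(n):
    # Same as EDIT 2, but repeat calls with the same n are cached lookups.
    if n == 2:
        return 1
    if n % 4 != 1:
        return None
    for a in range(1, n // 2 + 1):
        if (a * a) % n == n - 1:
            return a
```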

I went through and fixed the Harvard version to make it work with Python 3.
http://modular.fas.harvard.edu/ent/ent_py
I made some slight changes to make the results exactly the same as the OP's function. There are two possible answers, and I forced it to return the smaller one.
import timeit

def table(n):
    if n == 2: return 1
    if n%4 != 1: return
    a1 = n-1

    def inversemod(a, p):
        x, y = xgcd(a, p)
        return x%p

    def xgcd(a, b):
        x_sign = 1
        if a < 0: a = -a; x_sign = -1
        x = 1; y = 0; r = 0; s = 1
        while b != 0:
            (c, q) = (a%b, a//b)
            (a, b, r, s, x, y) = (b, c, x-q*r, y-q*s, r, s)
        return (x*x_sign, y)

    def mul(x, y):
        return ((x[0]*y[0]+a1*y[1]*x[1])%n, (x[0]*y[1]+x[1]*y[0])%n)

    def pow(x, nn):
        ans = (1, 0)
        xpow = x
        while nn != 0:
            if nn%2 != 0:
                ans = mul(ans, xpow)
            xpow = mul(xpow, xpow)
            nn >>= 1
        return ans

    for z in range(2, n):
        u, v = pow((1, z), a1//2)
        if v != 0:
            vinv = inversemod(v, n)
            if (vinv*vinv)%n == a1:
                vinv %= n
                if vinv <= n//2:
                    return vinv
                else:
                    return n-vinv

tt = 0
pri = [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 1234577, 5915587277, 3267000013, 3628273133, 2860486313, 5463458053, 3367900313]
for x in pri:
    t = timeit.Timer('q=table('+str(x)+')', 'from __main__ import table')
    tt += t.timeit(number=100)
    print("table(", x, ")=", table(x))
print('total time=', tt/100)
This version takes about 3ms to run through the test cases above.
For comparison, using the prime number 1234577:
OP Edit 2:           745ms
The accepted answer: 522ms
The above function:  0.2ms


Algorithm to solve for partitions of an Integer

Problem: x1+x2+...+xn = C, where x1, x2, ..., xn >= 0 and are integers. Find an algorithm that produces every point (x1, x2, ..., xn) satisfying this.
Why: I am trying to iterate over a multivariable polynomial's terms. The powers of each term can be described by the points above. (You do this operation for C = 0 up to C = degree of the polynomial.)
I am stuck trying to make an efficient algorithm that produces only the unique solutions (no duplicates) and wanted to see if there is an existing algorithm.
After some thought on this problem (and a lot of paper), here is my algorithm:
It finds every array of length n whose elements are greater than or equal to 0 and sum to k.
It does not use trial and error to get the solutions, though it does involve quite a lot of loops. Further optimization can be achieved by creating a generating function when k and n are known beforehand.
If anyone has a better algorithm or finds a problem with this one, please post below, but for now this solves my problem.
Thank you #kcsquared and #Kermit the Frog for leading me in the right direction
"""
Function that iterates an n-length vector such that the combination sum is
always equal to k and all elements are non-negative integers.
Returns whether it was stopped or not.
Invokes the lambda function on every iteration:
    iteration_lambda(index_vector::Vector{T}, total_iteration::T)::Bool
Return true when it should end.
"""
function partition(k::T, n::T, iteration_lambda::Function; max_vector = nothing, sum_vector = nothing, index_vector = nothing)::Bool where T
    if n > 0
        max_vector = max_vector == nothing ? zeros(T, n) : max_vector
        sum_vector = sum_vector == nothing ? zeros(T, n) : sum_vector
        index_vector = index_vector == nothing ? zeros(T, n) : index_vector

        current_index_index::T = 1
        total_iteration::T = 1
        max_vector[1] = k
        index_vector[1] = max(0, -(k * (n - 1)))

        @label reset
        if index_vector[current_index_index] <= max_vector[current_index_index]
            if current_index_index != n
                current_index_index += 1
                sum_vector[current_index_index] = sum_vector[current_index_index - 1] + index_vector[current_index_index - 1]
                index_vector[current_index_index] = max(0, -(k * (n - current_index_index - 1) + sum_vector[current_index_index]))
                max_vector[current_index_index] = k - sum_vector[current_index_index]
            else
                if iteration_lambda(index_vector, total_iteration)
                    return true
                end
                total_iteration += 1
                index_vector[end] += 1
            end
            @goto reset
        end
        if current_index_index != 1
            current_index_index -= 1
            index_vector[current_index_index] += 1
            @goto reset
        end
    end
    return false
end
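For comparison, the same enumeration (every length-n vector of non-negative integers summing to k) fits in a few lines as a recursive generator. This is my own Python sketch of the idea, not a port of the Julia code above:

```python
def compositions(k, n):
    """Yield every length-n tuple of non-negative integers summing to k."""
    if n == 1:
        yield (k,)
        return
    for first in range(k + 1):
        # Fix the first coordinate, then recurse on the remaining n-1.
        for rest in compositions(k - first, n - 1):
            yield (first,) + rest

print(list(compositions(2, 2)))  # [(0, 2), (1, 1), (2, 0)]
```

The number of solutions is C(k+n-1, n-1), so for k = 3, n = 3 the generator yields exactly 10 tuples.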

Python Codility Frog River One time complexity

So this is another approach to a probably well-known Codility task, about a frog crossing the river. And sorry if this question is asked in a bad manner; this is my first post here.
The goal is to find the earliest time when the frog can jump to the other side of the river.
For example, given X = 5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
the function should return 6.
Example test: (5, [1, 3, 1, 4, 2, 3, 5, 4])
Full task content:
https://app.codility.com/programmers/lessons/4-counting_elements/frog_river_one/
So that was my first obvious approach:
def solution(X, A):
    lista = list(range(1, X + 1))
    if X < 1 or len(A) < 1:
        return -1
    found = -1
    for element in lista:
        if element in A:
            if A.index(element) > found:
                found = A.index(element)
        else: return -1
    return found
X = 5
A = [1,2,4,5,3]
solution(X,A)
This solution is 100% correct and gets 0% in performance tests.
So I thought fewer lines + a list comprehension would get a better score:
def solution(X, A):
    if X < 1 or len(A) < 1:
        return -1
    try:
        found = max([ A.index(element) for element in range(1, X + 1) ])
    except ValueError:
        return -1
    return found
X = 5
A = [1,2,4,5,3]
solution(X,A)
This one also works and gets 0% in performance, but it's faster anyway.
I also found solution by deanalvero (https://github.com/deanalvero/codility/blob/master/python/lesson02/FrogRiverOne.py):
def solution(X, A):
    # write your code in Python 2.6
    frog, leaves = 0, [False] * (X)
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while leaves[frog]:
                frog += 1
                if frog == X: return minute
    return -1
This solution gets 100% in correctness and performance tests.
My question arises probably because I don't quite understand this time complexity thing. Please tell me how the last solution is better than my second solution? It has a while loop inside a for loop! It should be slow, but it's not.
Here is a solution in which you would get 100% in both correctness and performance.
def solution(X, A):
    i = 0
    dict_temp = {}
    while i < len(A):
        dict_temp[A[i]] = i
        if len(dict_temp) == X:
            return i
        i += 1
    return -1
The answer has already been given, but I'll add an optional solution that I think might help you understand:
def save_frog(x, arr):
    # creating the steps the frog should make
    steps = set([i for i in range(1, x + 1)])
    # creating the steps the frog already did
    froggy_steps = set()
    for index, leaf in enumerate(arr):
        froggy_steps.add(leaf)
        if froggy_steps == steps:
            return index
    return -1
I think I got the best performance using set().
Take a look at the runtimes in the performance tests and compare them with yours:
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
        if len(positions) == X:
            return seconds
    return -1
The number of nested loops doesn't directly tell you anything about the time complexity. Let n be the length of the input array. The inside of the while-loop needs on average O(1) time, although its worst-case time complexity is O(n). The fast solution uses a boolean array leaves where at every index it has the value true if there is a leaf and false otherwise. Across the entire algorithm, the inside of the while-loop is executed no more than n times. The outer for-loop is also executed only n times. This means the time complexity of the algorithm is O(n).
The key is that both of your initial solutions are quadratic. They involve O(n) inner scans for each of the parent elements (resulting in O(n**2)).
The fast solution initially appears to suffer the same fate as it's obvious it contains a loop within a loop. But the inner while loop does not get fully scanned for each 'leaf'. Take a look at where 'frog' gets initialized and you'll note that the while loop effectively picks up where it left off for each leaf.
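That amortized argument is easy to check empirically: count how many times the inner while-loop body actually runs over a whole call. Here is an instrumented sketch of the fast solution (the counter is mine); the count can never exceed X in total, no matter how the leaves arrive:

```python
def solution_counted(X, A):
    # The fast solution, instrumented: inner_steps counts every execution
    # of the while-loop body across the whole call.
    frog, leaves = 0, [False] * X
    inner_steps = 0
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while frog < X and leaves[frog]:
                frog += 1
                inner_steps += 1
            if frog == X:
                return minute, inner_steps
    return -1, inner_steps

print(solution_counted(5, [1, 3, 1, 4, 2, 3, 5, 4]))  # (6, 5)
```

On the example input the frog pointer advances 5 times in total over 7 minutes of outer loop, which is exactly the O(n) amortized behaviour described above.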
Here is my 100% solution that considers the sum of numeric progression.
def solution(X, A):
    covered = [False] * (X+1)
    n = len(A)
    Sx = ((1+X)*X) // 2  # sum of the arithmetic progression 1..X
    for i in range(n):
        if not covered[A[i]]:
            Sx -= A[i]
            covered[A[i]] = True
            if Sx == 0:
                return i
    return -1
Optimized solution from #sphoenix: there is no need to compare the two sets on every step.
def solution(X, A):
    found = set()
    for pos, i in enumerate(A, 0):
        if i <= X:
            found.add(i)
            if len(found) == X:
                return pos
    return -1
And one more optimized solution, using a boolean array:
def solution(X, A):
    steps, leaves = X, [False] * X
    for minute, leaf in enumerate(A, 0):
        if not leaves[leaf - 1]:
            leaves[leaf - 1] = True
            steps -= 1
            if 0 == steps:
                return minute
    return -1
The last one is better: it needs fewer resources, since a set consumes more memory and CPU than a boolean list.
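The memory half of that claim is easy to spot-check in CPython. Note that sys.getsizeof reports shallow container sizes only (the set additionally keeps int objects alive that the boolean list avoids), so the real gap is even larger than shown:

```python
import sys

X = 100_000
as_set = set(range(1, X + 1))    # positions stored as a set of ints
as_list = [False] * X            # positions stored as a boolean list

# Shallow container sizes in bytes (CPython-specific numbers).
print(sys.getsizeof(as_set) > sys.getsizeof(as_list))  # True
```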
def solution(X, A):
    # if there are not enough items in the list
    if X > len(A):
        return -1
    # else check all items
    else:
        d = {}
        for i, leaf in enumerate(A):
            d[leaf] = i
            if len(d) == X:
                return i
    # if all else fails
    return -1
I tried to use instructions that are as simple as possible.
def solution(X, A):
    if (X > len(A)): # check for no answer, simple case
        return -1
    elif (X == 1): # check for a single element
        return 0
    else:
        std_set = {i for i in range(1, X + 1)} # set of required positions
        this_set = set(A) # set of unique elements in the list
        if (sum(std_set) > sum(this_set)): # check for no answer, complex case
            return -1
        else:
            for i in range(0, len(A)):
                if (A[i] in std_set):
                    std_set.remove(A[i]) # remove each element found
                if not std_set: # if all removed, return last filled position
                    return i
I guess this code might not pass the runtime tests, but it's the simplest I could think of.
I am using OrderedDict from collections, plus the sum of the first n numbers, to check whether the frog will be able to cross at all.
def solution(X, A):
    from collections import OrderedDict as od
    if sum(set(A)) != (X*(X+1))//2:
        return -1
    k = list(od.fromkeys(A).keys())[-1]
    for x, y in enumerate(A):
        if y == k:
            return x
This code gives 100% for correctness and performance, runs in O(N)
def solution(x, a):
    # write your code in Python 3.6
    # initialize all positions to zero
    # i.e. if x = 2; x + 1 = 3
    # x_positions = [0,1,2]
    x_positions = [0] * (x + 1)
    min_time = -1
    for k in range(len(a)):
        # since we are looking for min time, ensure that you only
        # count the positions that matter
        if a[k] <= x and x_positions[a[k]] == 0:
            x_positions[a[k]] += 1
            min_time = k
        # ensure that all positions are available for the frog to jump
        if sum(x_positions) == x:
            return min_time
    return -1
100% performance using sets
def solution(X, A):
    positions = set()
    for i in range(len(A)):
        if A[i] not in positions:
            positions.add(A[i])
            if len(positions) == X:
                return i
    return -1

random-access machine (RAM) - Test square-free n

I'm writing a random-access machine (RAM) program, using a simulator, that tests whether a given natural number is square-free. My goal is then to analyze its complexity.
At high-level I would use the following test function
from math import sqrt

def isSquareFree(n):
    if n % 2 == 0:
        n = n // 2
        if n % 2 == 0:
            return False
    for i in range(3, int(sqrt(n) + 1)):
        if n % i == 0:
            n = n // i
            if n % i == 0:
                return False
    return True
My problem is, I am not sure how to calculate the square root of n using RAM-commands and can't find much resources online. So I am reconsidering if this is actually the right way to do it.
What are alternative ways to test if a natural number is square-free, that can be implemented using RAM?
Thanks.
If you just want to avoid the sqrt in your code, you can simply test for i*i <= n. This is probably a good idea anyway, as calculating the square root is quite a hard thing to do.
Thus I would change your code to:
def isSquareFree(n):
    i = 2
    while i*i <= n:
        if n % i == 0:
            n = n // i
            if n % i == 0:
                return False
        i = i + 1
    return True
The above only uses pretty atomic operations, so I hope this will help you. But I am not familiar with RAM programming so I am not sure whether this solves your problem.
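If the division n = n // i is also awkward to express in RAM commands, an alternative sketch (mine, and slower since n never shrinks) tests divisibility by i*i directly, using only multiplication and remainder:

```python
def is_square_free(n):
    # n is square-free iff no square i*i (for i >= 2) divides it.
    i = 2
    while i * i <= n:
        if n % (i * i) == 0:
            return False
        i += 1
    return True

print([m for m in range(1, 20) if not is_square_free(m)])  # [4, 8, 9, 12, 16, 18]
```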

faster n choose k for combination of array ruby

While trying to solve the "paths on a grid" problem, I have written the code
def paths(n, k)
  p = (1..n+k).to_a
  p.combination(n).to_a.size
end
The code works fine; for instance, if n == 8 and k == 2 the code returns 45, which is the correct number of paths.
However, the code is very slow when using larger numbers, and I'm struggling to figure out how to speed it up.
Rather than building the array of combinations just to count it, just write the function that defines the number of combinations. I'm sure there are also gems that include this and many other combinatorics functions.
Note that I am using the gem Distribution for the Math.factorial method, but that is another easy one to write. Given that, though, I'd suggest taking #stefan's answer, as it's less overhead.
def n_choose_k(n, k)
  Math.factorial(n) / (Math.factorial(k) * Math.factorial(n - k))
end

n_choose_k(10, 8)
# => 45
Note that the n and k here refer to slightly different things than in your method, but I am keeping them as it is highly standard nomenclature in combinatorics for this function.
def combinations(n, k)
  return 1 if k == 0 or k == n
  (k + 1 .. n).reduce(:*) / (1 .. n - k).reduce(:*)
end

combinations(8, 2) #=> 28
Explanation about the math part
The original equation is
combinations(n, k) = n! / k!(n - k)!
Since n! / k! = (1 * 2 * ... * n) / (1 * 2 * ... * k), for any k <= n there is a (1 * 2 * ... * k) factor both in the numerator and in the denominator, so we can cancel this factor. This makes the equation become
combinations(n, k) = (k + 1) * (k + 2) * ... * (n) / (n - k)!
which is exactly what I did in my Ruby code.
The answers that suggest computing full factorials will generate lots of unnecessary overhead when working with big numbers. You should use the method below for calculating the binomial coefficient: n!/(k!(n-k)!)
def n_choose_k(n, k)
  return 0 if k > n
  result = 1
  1.upto(k) do |d|
    result *= n
    result /= d
    n -= 1
  end
  result
end
This will perform the minimum operations needed. Note that incrementing d while decrementing n guarantees that there will be no rounding errors. For example, {n, n+1} is guaranteed to have at least one element divisible by two, {n, n+1, n+2} is guaranteed to have at least one element divisible by three and so on.
Your code can be rewritten as:
def paths(x, y)
  # Choice of x or y for the second parameter is arbitrary
  n_choose_k(x + y, x)
end

puts paths(8, 2) # 45
puts paths(2, 8) # 45
I assume that n and k in the original version were meant to be dimensions, so I labeled them x and y instead. There's no need to generate an array here.
Edit: Here is a benchmark script...
require 'distribution'

def puts_time
  $stderr.puts 'Completed in %f seconds' % (Time.now - $start_time)
  $start_time = Time.now
end

def n_choose_k(n, k)
  return 0 if k > n
  result = 1
  1.upto(k) do |d|
    result *= n
    result /= d
    n -= 1
  end
  result
end

def n_choose_k_distribution(n, k)
  Math.factorial(n) / (Math.factorial(k) * Math.factorial(n - k))
end

def n_choose_k_inject(n, k)
  (1..n).inject(:*) / ((1..k).inject(:*) * (1..n-k).inject(:*))
end

def benchmark(&callback)
  100.upto(300) do |n|
    25.upto(75) do |k|
      callback.call(n, k)
    end
  end
end

$start_time = Time.now

puts 'Distribution gem...'
benchmark { |n, k| n_choose_k_distribution(n, k) }
puts_time

puts 'Inject method...'
benchmark { |n, k| n_choose_k_inject(n, k) }
puts_time

puts 'Answer...'
benchmark { |n, k| n_choose_k(n, k) }
puts_time
Output on my system is:
Distribution gem...
Completed in 1.141804 seconds
Inject method...
Completed in 1.106018 seconds
Answer...
Completed in 0.150989 seconds
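For what it's worth, the same lockstep multiply-then-divide scheme carries over directly to Python, and folding in the C(n, k) == C(n, n-k) symmetry keeps the loop short for cases like 52-choose-51. A sketch:

```python
def n_choose_k(n, k):
    # Multiply and divide in lockstep: after step d the running value is
    # C(original_n, d), so every division by d is exact.
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)  # C(n, k) == C(n, n-k); iterate over the shorter side
    result = 1
    for d in range(1, k + 1):
        result = result * n // d
        n -= 1
    return result

print(n_choose_k(10, 2))  # 45
```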
Since you're interested in the count rather than the actual combination sets, you should do this with a choose function. The mathematical definition involves evaluating three different factorials, but there's a lot of cancellation going on so you can speed it up by using ranges to avoid the calculations that will be cancelled anyway.
class Integer
  def choose(k)
    fail 'k > n' if k > self
    fail 'args must be positive' if k < 0 or self < 1
    return 1 if k == self || k == 0
    mm = [self - k, k].minmax
    (mm[1]+1..self).reduce(:*) / (2..mm[0]).reduce(1, :*)
  end
end
p 8.choose 6 # => 28
To solve your paths problem, you could then define
def paths(n, k)
  (n + k).choose(k)
end

p paths(8, 2) # => 45
The reduce/inject versions are nice. But since speed seemed to be a bit of an issue, I'd suggest the n_choose_k versions from #google-fail.
It is quite insightful and suggests a ~10-fold speed increase.
I would suggest that the iteration use the Lesser of k and ( n - k ).
N-choose-K and N-choose-(N-K) produce the same result (the factors in the denominator are simply reversed). So something like a 52-choose-51 could be done in one iteration.
I usually do the following:
class Integer
  def !
    (2..self).reduce(1, :*)
  end

  def choose(k)
    self.! / (k.! * (self-k).!)
  end
end
Benchmarking:
require 'benchmark'

k = 5
Benchmark.bm do |x|
  [10, 100, 1000, 10000, 100000].each do |n|
    x.report("#{n}") { n.choose(k) }
  end
end
On my machine I get:
user system total real
10 0.000008 0.000001 0.000009 ( 0.000006)
100 0.000027 0.000003 0.000030 ( 0.000031)
1000 0.000798 0.000094 0.000892 ( 0.000893)
10000 0.045911 0.013201 0.059112 ( 0.059260)
100000 4.885310 0.229735 5.115045 ( 5.119902)
Not the fastest thing on the planet, but it's okay for my uses. If it ever becomes a problem, then I can think about optimizing

Large Exponents in Ruby?

I'm just doing some university-related Diffie-Hellman exercises and tried to use Ruby for them.
Sadly, Ruby doesn't seem to be able to deal with large exponents:
warning: in a**b, b may be too big
NaN
[...]
Is there any way around it? (e.g. a special math class or something along that line?)
p.s. here is the code in question:
generator = 7789
prime = 1017473
alice_secret = 415492
bob_secret = 725193
puts from_alice_to_bob = (generator**alice_secret) % prime
puts from_bob_to_alice = (generator**bob_secret) % prime
puts bobs_key_calculation = (from_alice_to_bob**bob_secret) % prime
puts alices_key_calculation = (from_bob_to_alice**alice_secret) % prime
You need to do what is called modular exponentiation.
If you can use the OpenSSL bindings then you can do rapid modular exponentiation in Ruby
puts some_large_int.to_bn.mod_exp(exp,mod)
There's a nice way to compute a^b mod n without getting these huge numbers.
You're going to walk through the exponentiation yourself, taking the modulus at each stage.
There's a trick where you can break it down into a series of powers of two.
Here's a link with an example using it to do RSA, from a course I took a while ago:
Specifically, on the second page, you can see an example:
http://www.math.uwaterloo.ca/~cd2rober/Math135/RSAExample.pdf
More explanation with some sample pseudocode from wikipedia: http://en.wikipedia.org/wiki/Modular_exponentiation#Right-to-left_binary_method
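The right-to-left binary method from that Wikipedia page is only a few lines. Here it is as a Python sketch (a Ruby version would be line-for-line analogous; Python's built-in three-argument pow does the same thing):

```python
def powmod(base, exponent, modulus):
    # Right-to-left binary exponentiation; reducing mod m at every step
    # keeps intermediates below modulus**2 instead of millions of digits.
    if modulus == 1:
        return 0
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent % 2 == 1:
            result = result * base % modulus
        exponent >>= 1
        base = base * base % modulus
    return result

print(powmod(2, 10, 1000))  # 24
```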
I don't know Ruby, but even a bignum-friendly math library is going to struggle to evaluate such an expression the naive way (7789 to the power 415492 has approximately 1.6 million digits).
The way to work out a^b mod p without blowing up is to take the mod at every step of the exponentiation - I would guess that the language isn't working this out on its own and therefore must be helped.
I've made some attempts of my own. Exponentiation by squaring works well so far, but I had the same problem with bignums. Such a recursive thing as:
def exponentiation(base, exp, y = 1)
  if exp == 0
    return y
  end
  case exp % 2
  when 0 then
    exp = exp / 2
    base = (base * base) % @@mod
    exponentiation(base, exp, y)
  when 1 then
    y = (base * y) % @@mod
    exp = exp - 1
    exponentiation(base, exp, y)
  end
end
However, it would be, as I'm realizing, a terrible idea to rely on Ruby's prime class for anything substantial. Ruby uses the Sieve of Eratosthenes for its prime generator, but even worse, it uses trial division for gcds and such...
Oh, and @@mod was a class variable, so if you plan on using this yourselves, you might want to add it as a param or something.
I've gotten it to work quite quickly for
puts a.exponentiation(100000000000000, 1222555345678)
numbers in that range.
(using @@mod = 80233)
OK, got the squaring method to work for
a = Mod.new(80233788)
puts a.exponentiation(298989898980988987789898789098767978698745859720452521, 12225553456987474747474744778)
output: 59357797
I think that should be sufficient for any problem you might have in your Crypto course
If you really want to go to BIG modular exponentiation, here is an implementation from the wiki page.
# expand a number in the selected base
def baseExpansion(number, base)
  q = number
  k = ""
  while q > 0 do
    a = q % base
    q = q / base
    k = a.to_s() + k
  end
  return k
end

# iterative modular exponentiation
def modular(n, b, m)
  x = 1
  power = baseExpansion(b, 2) # base two
  i = power.size - 1
  if power.split("")[i] == "1"
    x = x * n
    x = x % m
  end
  while i > 0 do
    n *= n
    n = n % m
    if power.split("")[i-1] == "1"
      x *= n
      x = x % m
    end
    i -= 1
  end
  return x
end
Results were tested against Wolfram Alpha.
This is inspired by right-to-left binary method example on Wikipedia:
def powmod(base, exponent, modulus)
  return modulus == 1 ? 0 : begin
    result = 1
    base = base % modulus
    while exponent > 0
      result = result * base % modulus if exponent % 2 == 1
      exponent = exponent >> 1
      base = base * base % modulus
    end
    result
  end
end
