Minimum Jumps required from given number to 1 - algorithm

I am trying to solve the problem given below. (I solved it using recursion, but I am having a hard time trying to use a cache to prevent a lot of the same steps being recalculated.)
"""
Given a positive integer N, find the smallest number of steps it will take to reach 1.
There are two kinds of permitted steps:
You may decrement N to N - 1.
If a * b = N, you may decrement N to the larger of a and b.
For example, given 100, you can reach 1 in five steps with the following route:
100 -> 10 -> 9 -> 3 -> 2 -> 1.
"""
def multiples(num):
    ret = []
    start = 2
    while start < num:
        if num % start == 0:
            ret.append(num // start)
        start += 1
        if ret and start >= ret[-1]:
            break
    return ret if ret else None
def min_jumps_no_cache(num, jumps=0):
    if num == 1:
        return jumps
    mults = multiples(num)
    res = []
    res.append(min_jumps_no_cache(num - 1, jumps + 1))
    if mults:
        for mult in mults:
            res.append(min_jumps_no_cache(mult, jumps + 1))
    return min(res)
Now, I am trying to add a cache in here because of the obvious high runtime of this solution. But I have run into a similar issue before where I am overwriting the cache and I was curious if there is a solution to this.
def min_jumps_cache(num, jumps=0, cache={}):
    if num == 1:
        return jumps
    if num in cache:
        return cache[num]
    mults = multiples(num)
    res = []
    temp = min_jumps_cache(num - 1, jumps + 1, cache)
    res.append(temp)
    if mults:
        for mult in mults:
            res.append(min_jumps_cache(mult, jumps + 1, cache))
    temp = min(res)
    cache[num] = min(temp, cache[num]) if num in cache else temp
    return temp
It seems to me the issue here is that you can't really cache an answer until you have calculated both its "left and right" solutions. Is there something else I am missing here?

Your solution is fine as far as it goes.
However, it will be more efficient to do a breadth-first search from the bottom up. (This can trivially be optimized further; how is left as an exercise for the reader.)
def path(n):
    path_from = {}
    queue = [(1, None)]
    while True:
        value, prev = queue.pop(0)
        if value not in path_from:
            path_from[value] = prev
            if value == n:
                break  # EXIT HERE
            queue.append((value + 1, value))
            for i in range(2, min(value + 1, n // value + 1)):
                queue.append((i * value, value))
    answer = []
    while n is not None:
        answer.append(n)
        n = path_from[n]
    return answer
print(path(100))
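As for the caching question itself: the trouble is that cache[num] stores jumps plus the remaining steps, so the cached value depends on the path that first reached num. One fix is to memoize only the number of steps from num down to 1, which is path-independent. A minimal sketch reusing the question's multiples helper (recursion depth grows linearly with N, so this only suits modest inputs):

def min_jumps_cached(num, cache={1: 0}):
    # cache steps-remaining, which does not depend on how we got here
    if num not in cache:
        best = min_jumps_cached(num - 1) + 1
        for mult in multiples(num) or []:
            best = min(best, min_jumps_cached(mult) + 1)
        cache[num] = best
    return cache[num]

print(min_jumps_cached(100))  # 5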

Related

fastest way to distribute a quantity to elements of an array such that the difference of pairs is minimal?

Given an array of numbers arr and an integer x, distribute x such that the difference between any pair is the minimum possible.
E.g. arr = [4, 2, 0] and x = 10;
the answer should be [6, 5, 5];
it is obligatory to use all of x.
Compute the final mean as (sum(arr) + x) / len(arr). That would be the ideal target for all numbers if we could also decrease.
The rounded-down quotient tells us the minimum every number shall become, and the remainder tells us how many numbers get an additional 1 added. Do that after eliminating numbers that are already too large.
Total time O(n log n).
Python implementation:
def distribute(arr, x):
    total = sum(arr) + x
    I = sorted(range(len(arr)), key=arr.__getitem__)
    while I:
        minimum, additional = divmod(total, len(I))
        if arr[I[-1]] <= minimum:
            break
        total -= arr[I.pop()]
    for i in sorted(I):
        arr[i] = minimum
        if additional > 0:
            arr[i] += 1
            additional -= 1
Results from testing some hardcoded inputs, larger random inputs, and exhaustive small inputs:
433103 tests passed
0 tests failed
Full code (Try it online!):
from random import choices
from itertools import product

def distribute(arr, x):
    total = sum(arr) + x
    I = sorted(range(len(arr)), key=arr.__getitem__)
    while I:
        minimum, additional = divmod(total, len(I))
        if arr[I[-1]] <= minimum:
            break
        total -= arr[I.pop()]
    for i in sorted(I):
        arr[i] = minimum
        if additional > 0:
            arr[i] += 1
            additional -= 1

def naive(arr, x):
    for _ in range(x):
        arr[arr.index(min(arr))] += 1

passed = failed = 0

def test(arr, x):
    expect = arr.copy()
    naive(expect, x)
    result = arr.copy()
    distribute(result, x)
    global passed, failed
    if result == expect:
        passed += 1
    else:
        failed += 1
        print('failed:')
        print(f'{arr = }')
        print(f'{expect = }')
        print(f'{result = }')
        print()

# Tests from OP, me, and David
test([4, 2, 0], 10)
test([4, 2, 99, 0], 10)
test([20, 15, 10, 5, 0], 10)

# Random larger tests
for x in range(1000):
    arr = choices(range(100), k=100)
    test(arr, x)

# Exhaustive smaller tests
for n in range(5):
    for arr in product(range(10), repeat=n):
        arr = list(arr)
        for x in range(n * 10):
            test(arr, x)

print(f'{passed} tests passed')
print(f'{failed} tests failed')
For large inputs with a smaller range, it can be more efficient to binary search for the target minimum. I didn't expect it, but apparently this solution can be up to seven times faster than don't talk just code's answer, even for medium-size ranges. Here's an example with range 20 (seven times faster), and one with 100,000,000 (two times faster): https://ideone.com/X6GxFD. As we increase the input length, this answer seems to be significantly faster even for the full 64-bit range.
Python code:
def f(A, x):
    smallest = min(A)
    lo = smallest
    hi = smallest + x
    while lo < hi:
        mid = lo + (hi - lo) // 2
        can_reach = True
        temp = x
        for a in A:
            if a <= mid:
                diff = mid - a
                if diff > temp:
                    can_reach = False
                    break
                else:
                    temp -= diff
        if can_reach:
            lo = mid + 1
        else:
            hi = mid
    target = lo - 1
    for i, a in enumerate(A):
        if a < target:
            x -= target - a
            A[i] = target
    if x:
        for i, a in enumerate(A):
            if a == target:
                A[i] += 1
                x -= 1
                if x == 0:
                    break
    return A
Here's a solution that can beat both my binary search answer and don't talk just code's answer at some larger input lengths. The idea is to sort the array and find the largest minimum by accumulation, traversing from smaller to larger, with O(1) extra space and avoiding pop operations.
Test link.
Python code:
def g(A, x):
    s = sorted(range(len(A)), key=lambda i: A[i])
    total = x
    count = 1
    curr = A[s[0]]
    to_add = 0
    extra = 0
    for i in range(1, len(A)):
        diff = A[s[i]] - curr
        needed = count * diff
        if needed >= total:
            break
        curr = A[s[i]]
        total -= needed
        count += 1
    if total:
        extra, to_add = divmod(total, count)
    for i in range(count):
        A[s[i]] = curr + extra
        if to_add:
            A[s[i]] += 1
            to_add -= 1
    return A
Assuming the position of unchanged values does not need to be preserved:
- convert into a min heap ("heapify", O(n))
- repeatedly pop & count minimal values from the heap until either
  - the heap is empty: distribute the rest: done, or
  - the top is greater: if there's not enough left to make all the popped minimums equal to the top, distribute the rest: done; else decrease the rest and keep popping.
Overall O(n + #increased_values * log(n)).
The final write-back of the increased values is left as an exercise (for now); a rough sketch follows below.
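A minimal sketch of that idea, assuming a non-empty input whose element order may change:

import heapq

def distribute_heap(arr, x):
    heap = arr[:]                       # work on a copy
    heapq.heapify(heap)                 # O(n)
    level = heapq.heappop(heap)         # common value of all popped minimums
    count = 1
    # pop while lifting every popped minimum to the next value still fits in x
    while heap and count * (heap[0] - level) <= x:
        x -= count * (heap[0] - level)
        level = heapq.heappop(heap)
        count += 1
    # write back: spread the remaining x over the count popped minimums
    q, r = divmod(x, count)
    return [level + q + (i < r) for i in range(count)] + heap

print(distribute_heap([4, 2, 0], 10))  # [6, 5, 5]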
Assuming that you are to minimize the maximum difference between any pair of numbers, this is the general approach (a direct sketch follows below):
1. Sort the numbers.
2. Find the lowest number(s).
3. If there are Y lowest numbers, then decrement X by Y and add 1 to each of the lowest numbers, until either X runs out or the lowest numbers become equal to the next lowest numbers.
4. If X is used up, then exit.
5. If not, then go to step #2 and repeat.
Obviously, you can improve step #3 with a little bit of math.
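A direct, unoptimized sketch of those steps, assuming a non-empty array whose element order need not be preserved (the inner raising loops are what step #3's "little bit of math" would replace):

def distribute_stepwise(arr, x):
    arr = sorted(arr)
    while x > 0:
        # step 2: the group of lowest numbers sits at the front (arr stays sorted)
        y = 1
        while y < len(arr) and arr[y] == arr[0]:
            y += 1
        if x < y:
            # step 4: not enough left to raise the whole group; spread what remains
            for i in range(x):
                arr[i] += 1
            break
        # step 3: raise the whole lowest group by 1
        for i in range(y):
            arr[i] += 1
        x -= y
    return arr

print(distribute_stepwise([4, 2, 0], 10))  # [6, 5, 5]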

Implementing iterative solution in a functionally recursive way with memoization

I am trying to solve the following problem on leetcode: Coin Change 2
Input: amount = 5, coins = [1, 2, 5]
Output: 4
Explanation: there are four ways to make up the amount:
5 = 5
5 = 2 + 2 + 1
5 = 2 + 1 + 1 + 1
5 = 1 + 1 + 1 + 1 + 1
I am trying to implement an iterative solution which essentially simulates/mimics recursion using a stack. I have managed to implement it and the solution works, but it exceeds the time limit.
I have noticed that the recursive solutions make use of memoization to optimize, and I would like to incorporate that in my iterative solution as well, but I am lost on how to proceed.
My solution so far:
def change(amount, coins):
    # stack to simulate recursion
    stack = []
    # add starting indexes and sums to the stack
    # tuple (x, y) where x is the sum, y is the index into the coins array
    for i in range(0, len(coins)):
        if coins[i] <= amount:
            stack.append((coins[i], i))
    result = 0
    while len(stack) != 0:
        c = stack.pop()
        currentsum = c[0]
        currentindex = c[1]
        # can't explore further
        if currentsum > amount:
            continue
        # condition met, increment result
        if currentsum == amount:
            result = result + 1
            continue
        # add coin at current index to sum if it doesn't exceed amount (append call to stack)
        if (currentsum + coins[currentindex]) <= amount:
            stack.append((currentsum + coins[currentindex], currentindex))
        # skip coin at current index (append call to stack)
        if (currentindex + 1) <= len(coins) - 1:
            stack.append((currentsum, currentindex + 1))
    return result
I have tried using a dictionary to record appends to the stack as follows:

# if the call has not already happened, add it to the dictionary
if dictionary.get((currentsum, currentindex + 1), None) == None:
    stack.append((currentsum, currentindex + 1))
    dictionary[(currentsum, currentindex + 1)] = 'visited'

For example, if the call (2, 1) with sum = 2 and coin-array index = 1 is made, I append it to the dictionary. If the same call is encountered again, I don't append it again. However, this does not work, as different combinations can have the same sum and index.
Is there any way I can incorporate memoization in my iterative solution above? I want to do it in a way such that it is functionally the same as the recursive solution.
I have managed to figure out the solution. Essentially, I used post-order traversal with a state variable that records which stage of the recursion the current call is in. Using the stage, I managed to go bottom-up after going top-down.
The solution I came up with is as follows:
def change(self, amount: int, coins: List[int]) -> int:
    if amount <= 0:
        return 1
    if len(coins) == 0:
        return 0
    d = dict()
    # currentsum, index, instruction
    coins.sort(reverse=True)
    stack = [(0, 0, 'ENTER')]
    calls = 0
    while len(stack) != 0:
        currentsum, index, instruction = stack.pop()
        if currentsum == amount:
            d[(currentsum, index)] = 1
            continue
        elif instruction == 'ENTER':
            stack.append((currentsum, index, 'EXIT'))
            if (index + 1) <= (len(coins) - 1):
                if d.get((currentsum, index + 1), None) == None:
                    stack.append((currentsum, index + 1, 'ENTER'))
            newsum = currentsum + coins[index]
            if newsum <= amount:
                if d.get((newsum, index), None) == None:
                    stack.append((newsum, index, 'ENTER'))
        elif instruction == 'EXIT':
            newsum = currentsum + coins[index]
            left = 0 if d.get((newsum, index), None) == None else d.get((newsum, index))
            right = 0 if d.get((currentsum, index + 1), None) == None else d.get((currentsum, index + 1))
            d[(currentsum, index)] = left + right
            calls = calls + 1
    print(calls)
    return d[(0, 0)]
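For comparison, here is a sketch of the direct recursive counterpart that the iterative post-order version mimics, memoized on the same (currentsum, index) state; it uses functools.lru_cache instead of a hand-rolled dictionary:

from functools import lru_cache
from typing import List

def change(amount: int, coins: List[int]) -> int:
    @lru_cache(maxsize=None)
    def ways(currentsum, index):
        # number of combinations that complete currentsum using coins[index:]
        if currentsum == amount:
            return 1
        if currentsum > amount or index == len(coins):
            return 0
        # either take the coin at index again, or move past it
        return ways(currentsum + coins[index], index) + ways(currentsum, index + 1)
    return ways(0, 0)

print(change(5, [1, 2, 5]))  # 4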

Difference between case 1 vs case 2?

I've worked on this question, Convert Sorted Array.
At first, I didn't know what had to be returned in the end, so I created another function within the given one to carry out the recursion:
**case 1**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base_case
        if len(nums) < 2:
            return TreeNode(nums[-1])
        # recursive_case
        mid = len(nums) // 2
        node = TreeNode(nums[mid])
        node.left = bst(nums[:mid])
        node.right = bst(nums[mid + 1:])
    ans = bst(nums)
    return ans
but it kept giving me 'time limit exceeded' or 'maximum recursion depth exceeded' as a result.
Then, as soon as I removed the inner bst function and did the same recursion in the given function (sortedArrayToBST) itself, the error was gone just like magic...
**case 2**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    if not nums:
        return None
    if len(nums) == 1:
        return TreeNode(nums[-1])
    # recursive_case
    mid = len(nums) // 2
    node = TreeNode(nums[mid])
    node.left = self.sortedArrayToBST(nums[:mid])
    node.right = self.sortedArrayToBST(nums[mid + 1:])
    return node
However, having said that, I can't see what's different between the two pieces of code. There must be a key difference between the two, but I can't work it out on my own.
Could you please enlighten me on what the difference is between case 1 and case 2, and what causes the error in one but not the other?
In case 1, the length of the processed list does not decrease across recursive calls because, while the parameter of bst is num_list, it is nums that is processed in its body. The error would disappear in case 1 if num_list were processed (instead of nums) in bst.
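For reference, a corrected case 1 under that fix; note that the recursive case also needs to return node, and the base cases need to handle an empty slice (this sketch assumes the LeetCode environment where TreeNode and List are provided):

def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base cases now shrink with the recursive argument
        if not num_list:
            return None
        if len(num_list) == 1:
            return TreeNode(num_list[-1])
        # recursive case
        mid = len(num_list) // 2
        node = TreeNode(num_list[mid])
        node.left = bst(num_list[:mid])
        node.right = bst(num_list[mid + 1:])
        return node
    return bst(nums)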

Python Codility Frog River One time complexity

So this is another approach to the probably well-known Codility task about the frog crossing the river. And sorry if this question is asked in a bad manner; this is my first post here.
The goal is to find the earliest time when the frog can jump to the other side of the river.
For example, given X = 5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
the function should return 6.
Example test: (5, [1, 3, 1, 4, 2, 3, 5, 4])
Full task content:
https://app.codility.com/programmers/lessons/4-counting_elements/frog_river_one/
So that was my first obvious approach:
def solution(X, A):
    lista = list(range(1, X + 1))
    if X < 1 or len(A) < 1:
        return -1
    found = -1
    for element in lista:
        if element in A:
            if A.index(element) > found:
                found = A.index(element)
        else:
            return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
This solution is 100% correct and gets 0% in the performance tests.
So I thought fewer lines + a list comprehension would get a better score:
def solution(X, A):
    if X < 1 or len(A) < 1:
        return -1
    try:
        found = max([A.index(element) for element in range(1, X + 1)])
    except ValueError:
        return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
This one also works and also gets 0% on performance, but it's faster anyway.
I also found solution by deanalvero (https://github.com/deanalvero/codility/blob/master/python/lesson02/FrogRiverOne.py):
def solution(X, A):
    # write your code in Python 2.6
    frog, leaves = 0, [False] * X
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while leaves[frog]:
                frog += 1
                if frog == X:
                    return minute
    return -1
This solution gets 100% in correctness and performance tests.
My question probably arises because I don't quite understand this time complexity thing. Please tell me how the last solution is better than my second solution? It has a while loop inside a for loop! It should be slow, but it's not.
Here is a solution in which you would get 100% in both correctness and performance.
def solution(X, A):
    i = 0
    dict_temp = {}
    while i < len(A):
        dict_temp[A[i]] = i
        if len(dict_temp) == X:
            return i
        i += 1
    return -1
The answer has already been given, but I'll add an alternative solution that I think might help you understand:
def save_frog(x, arr):
    # creating the steps the frog should make
    steps = set([i for i in range(1, x + 1)])
    # creating the steps the frog already did
    froggy_steps = set()
    for index, leaf in enumerate(arr):
        froggy_steps.add(leaf)
        if froggy_steps == steps:
            return index
    return -1
I think I got the best performance using set().
Take a look at the performance test runtimes and compare them with yours.
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
        if len(positions) == X:
            return seconds
    return -1
The number of nested loops doesn't directly tell you anything about the time complexity. Let n be the length of the input array. The inside of the while loop needs on average O(1) time, although its worst-case time complexity is O(n). The fast solution uses a boolean array leaves where at every index it has the value true if there is a leaf and false otherwise. The inside of the while loop is executed no more than n times during the entire algorithm. The outer for loop is also executed only n times. This means the time complexity of the algorithm is O(n).
The key is that both of your initial solutions are quadratic. They involve an O(n) inner scan for each of the parent elements (resulting in O(n**2)).
The fast solution initially appears to suffer the same fate, as it obviously contains a loop within a loop. But the inner while loop does not get fully scanned for each leaf. Take a look at where frog gets initialized and you'll note that the while loop effectively picks up where it left off for each leaf.
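To see the amortized bound concretely, here is a hypothetical instrumented variant that counts the total number of inner while-loop iterations; the total never exceeds X, no matter how long A is, because frog only ever moves forward:

def count_inner_steps(X, A):
    frog, leaves, inner_steps = 0, [False] * X, 0
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while frog < X and leaves[frog]:
                frog += 1
                inner_steps += 1
    return inner_steps

print(count_inner_steps(5, [1, 3, 1, 4, 2, 3, 5, 4]))  # 5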
Here is my 100% solution that uses the sum of the arithmetic progression.
def solution(X, A):
    covered = [False] * (X + 1)
    n = len(A)
    Sx = ((1 + X) * X) // 2  # sum of the arithmetic progression 1..X
    for i in range(n):
        if not covered[A[i]]:
            Sx -= A[i]
            covered[A[i]] = True
            if Sx == 0:
                return i
    return -1
An optimized version of sphoenix's solution; there is no need to compare two sets, which is not really efficient.
def solution(X, A):
    found = set()
    for pos, i in enumerate(A, 0):
        if i <= X:
            found.add(i)
            if len(found) == X:
                return pos
    return -1
And one more optimized solution, using a binary (boolean) array:
def solution(X, A):
    steps, leaves = X, [False] * X
    for minute, leaf in enumerate(A, 0):
        if not leaves[leaf - 1]:
            leaves[leaf - 1] = True
            steps -= 1
            if 0 == steps:
                return minute
    return -1
The last one is better: it uses fewer resources. A set consumes more resources (memory and CPU) than a binary list.
def solution(X, A):
    # if there are not enough items in the list
    if X > len(A):
        return -1
    # else check all items
    else:
        d = {}
        for i, leaf in enumerate(A):
            d[leaf] = i
            if len(d) == X:
                return i
        # if all else fails
        return -1
I tried to use instructions that are as simple as possible.
def solution(X, A):
    if X > len(A):  # check for no answer, simple
        return -1
    elif X == 1:  # check for a single element
        return 0
    else:
        std_set = {i for i in range(1, X + 1)}  # set of steps in standard order
        this_set = set(A)  # set of unique elements in the list
        if sum(std_set) > sum(this_set):  # check for no answer, complex
            return -1
        else:
            for i in range(0, len(A)):
                if A[i] in std_set:
                    std_set.remove(A[i])  # remove each seen element from the standard set
                    if not std_set:  # if all are removed, return the last filled position
                        return i
            return -1
I guess this code might not fulfill the runtime requirements, but it's the simplest I could think of.
I am using OrderedDict from collections and the sum of the first n numbers to check whether the frog will be able to cross or not.
def solution(X, A):
    from collections import OrderedDict as od
    if sum(set(A)) != (X * (X + 1)) // 2:
        return -1
    k = list(od.fromkeys(A).keys())[-1]
    for x, y in enumerate(A):
        if y == k:
            return x
This code gets 100% for correctness and performance and runs in O(N):
def solution(x, a):
    # write your code in Python 3.6
    # initialize all positions to zero
    # i.e. if x = 2, then x + 1 = 3 and
    # x_positions = [0, 0, 0]
    x_positions = [0] * (x + 1)
    min_time = -1
    for k in range(len(a)):
        # since we are looking for min time, ensure that you only
        # count the positions that matter
        if a[k] <= x and x_positions[a[k]] == 0:
            x_positions[a[k]] += 1
            min_time = k
        # ensure that all positions are available for the frog to jump
        if sum(x_positions) == x:
            return min_time
    return -1
100% performance using sets
def solution(X, A):
    positions = set()
    for i in range(len(A)):
        if A[i] not in positions:
            positions.add(A[i])
            if len(positions) == X:
                return i
    return -1

Can I reduce the computational complexity of this?

Well, I have this bit of code that is slowing down the program hugely, because it is linear complexity but called a lot of times, making the program quadratic complexity overall. If possible, I would like to reduce its computational complexity, but otherwise I'll just optimize it where I can. So far I have reduced it down to:
def table(n):
    a = 1
    while 2 * a <= n:
        if (-a * a) % n == 1:
            return a
        a += 1
Anyone see anything I've missed? Thanks!
EDIT: I forgot to mention: n is always a prime number.
EDIT 2: Here is my new improved program (thanks for all the contributions!):
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1
    for a in range(1, n // 2 + 1):
        if (a * a) % n == a1:
            return a
EDIT 3: And testing it out in its real context, it is much faster! Well, this question appears solved, but there are many useful answers. I should also say that, as well as the above optimizations, I have memoized the function using Python dictionaries...
Ignoring the algorithm for a moment (yes, I know, bad idea), the running time of this can be decreased hugely just by switching from while to for:
for a in range(1, n // 2 + 1):
(Hope this doesn't have an off-by-one error. I'm prone to making these.)
Another thing that I would try is to see whether the step width can be increased.
Take a look at http://modular.fas.harvard.edu/ent/ent_py .
The function sqrtmod does the job if you set a = -1 and p = n.
You missed a small point: the running time of your improved algorithm is still on the order of the square root of n. As long as you only have small primes n (say, less than 2^64), that's ok, and you should probably prefer your implementation to a more complex one.
If the prime n becomes bigger, you might have to switch to an algorithm using a little bit of number theory. To my knowledge, your problem can be solved only by a probabilistic algorithm in time log(n)^3. If I remember correctly, assuming the Riemann hypothesis holds (which most people do), one can show that the running time of the following algorithm (in Ruby - sorry, I don't know Python) is log(log(n))*log(n)^3:
class Integer
  # calculate b to the power of e modulo self
  def power(b, e)
    raise 'power only defined for integer base' unless b.is_a? Integer
    raise 'power only defined for integer exponent' unless e.is_a? Integer
    raise 'power is implemented only for positive exponent' if e < 0
    return 1 if e.zero?
    x = power(b, e >> 1)
    x *= x
    (e & 1).zero? ? x % self : (x * b) % self
  end

  # Fermat test (probabilistic prime number test)
  def prime?(b = 2)
    raise "base must be at least 2 in prime?" if b < 2
    raise "base must be an integer in prime?" unless b.is_a? Integer
    power(b, self >> 1) == 1
  end

  # find square root of -1 modulo prime
  def sqrt_of_minus_one
    return 1 if self == 2
    return false if (self & 3) != 1
    raise 'sqrt_of_minus_one works only for primes' unless prime?
    # now just try all numbers (each succeeds with probability 1/2)
    2.upto(self) do |b|
      e = self >> 1
      e >>= 1 while (e & 1).zero?
      x = power(b, e)
      next if [1, self - 1].include? x
      loop do
        y = (x * x) % self
        return x if y == self - 1
        raise 'sqrt_of_minus_one works only for primes' if y == 1
        x = y
      end
    end
  end
end

# find a prime
p = loop do
  x = rand(1 << 512)
  next if (x & 3) != 1
  break x if x.prime?
end

puts "%x" % p
puts "%x" % p.sqrt_of_minus_one
The slow part is now finding the prime (which takes approx. log(n)^4 integer operations); finding the square root of -1 still takes less than a second for 512-bit primes.
Consider pre-computing the results and storing them in a file. Nowadays many platforms have a huge disk capacity. Then, obtaining the result will be an O(1) operation.
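A minimal sketch of that idea; the file name, the JSON format, and the list of precomputed primes are all assumptions, and table is the function defined above:

import json

def precompute(primes, path='table_cache.json'):
    # compute table(p) once per prime and store the results on disk
    with open(path, 'w') as f:
        json.dump({str(p): table(p) for p in primes}, f)

def lookup(n, path='table_cache.json'):
    # O(1) dictionary lookup after the file is loaded
    with open(path) as f:
        return json.load(f)[str(n)]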
(Building on Adam's answer.)
Look at the Wikipedia page on quadratic reciprocity:
x^2 ≡ −1 (mod p) is solvable if and only if p ≡ 1 (mod 4).
Then you can avoid the search for a root precisely for those odd primes n that are not congruent to 1 modulo 4:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return None  # or raise exception
    ...
Based on OP's second edit:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    mod = 0
    a1 = n - 1
    for a in xrange(1, a1, 2):
        mod += a
        while mod >= n:
            mod -= n
        if mod == a1:
            return a // 2 + 1
It looks like you're trying to find the square root of -1 modulo n. Unfortunately, this is not an easy problem, depending on what values of n are input into your function. Depending on n, there might not even be a solution. See Wikipedia for more information on this problem.
Edit 2: Surprisingly, strength-reducing the squaring reduces the time a lot, at least on my Python 2.5 installation. (I'm surprised because I thought interpreter overhead was taking most of the time, and this doesn't reduce the number of operations in the inner loop.) It reduces the time from 0.572s to 0.146s for table(1234577).
def table(n):
    n1 = n - 1
    square = 0
    for delta in xrange(1, n, 2):
        square += delta
        if n <= square:
            square -= n
        if square == n1:
            return delta // 2 + 1
strager posted the same idea, but I think less tightly coded. Again, jug's answer is best.
Original answer: Another trivial coding tweak on top of Konrad Rudolph's:
def table(n):
    n1 = n - 1
    for a in xrange(1, n // 2 + 1):
        if (a * a) % n == n1:
            return a
Speeds it up measurably on my laptop. (About 25% for table(1234577).)
Edit: I didn't notice the python3.0 tag; but the main change was hoisting part of the calculation out of the loop, not the use of xrange. (Academic since there's a better algorithm.)
Is it possible for you to cache the results?
When you calculate a large n, you are given the results for the lower n's almost for free.
One thing that you are doing is repeating the calculation -a*a over and over again. Create a table of those values once and then do a lookup in the main loop.
Also, although this probably doesn't apply to you (since your function's name is table), if you call a function that takes time to calculate, you should cache the result in a table and just do a table lookup if you call it again with the same value. This doesn't save time on the first run, but you won't waste time repeating the calculation more than once.
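A minimal sketch of that kind of caching, as a plain dict wrapper around the table function defined earlier:

_cache = {}

def table_cached(n):
    # compute on the first call; later calls with the same n are dict lookups
    if n not in _cache:
        _cache[n] = table(n)
    return _cache[n]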
I went through and fixed the Harvard version to make it work with Python 3:
http://modular.fas.harvard.edu/ent/ent_py
I made some slight changes to make the results exactly the same as the OP's function. There are two possible answers, and I forced it to return the smaller one.
import timeit

def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1

    def inversemod(a, p):
        x, y = xgcd(a, p)
        return x % p

    def xgcd(a, b):
        x_sign = 1
        if a < 0: a = -a; x_sign = -1
        x = 1; y = 0; r = 0; s = 1
        while b != 0:
            (c, q) = (a % b, a // b)
            (a, b, r, s, x, y) = (b, c, x - q*r, y - q*s, r, s)
        return (x * x_sign, y)

    def mul(x, y):
        return ((x[0]*y[0] + a1*y[1]*x[1]) % n, (x[0]*y[1] + x[1]*y[0]) % n)

    def pow(x, nn):
        ans = (1, 0)
        xpow = x
        while nn != 0:
            if nn % 2 != 0:
                ans = mul(ans, xpow)
            xpow = mul(xpow, xpow)
            nn >>= 1
        return ans

    for z in range(2, n):
        u, v = pow((1, z), a1 // 2)
        if v != 0:
            vinv = inversemod(v, n)
            if (vinv * vinv) % n == a1:
                vinv %= n
                if vinv <= n // 2:
                    return vinv
                else:
                    return n - vinv

tt = 0
pri = [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 1234577, 5915587277, 3267000013, 3628273133, 2860486313, 5463458053, 3367900313]
for x in pri:
    t = timeit.Timer('q=table(' + str(x) + ')', 'from __main__ import table')
    tt += t.timeit(number=100)
    print("table(", x, ")=", table(x))
print('total time=', tt / 100)
This version takes about 3 ms to run through the test cases above.
For comparison, using the prime number 1234577:
OP's Edit 2: 745 ms
The accepted answer: 522 ms
The above function: 0.2 ms
