So this is another approach to the probably well-known Codility task about the frog crossing the river. Sorry if this question is asked in a bad manner; this is my first post here.
The goal is to find the earliest time when the frog can jump to the other side of the river.
For example, given X = 5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
the function should return 6.
Example test: (5, [1, 3, 1, 4, 2, 3, 5, 4])
Full task content:
https://app.codility.com/programmers/lessons/4-counting_elements/frog_river_one/
So that was my first obvious approach:
def solution(X, A):
    lista = list(range(1, X + 1))
    if X < 1 or len(A) < 1:
        return -1
    found = -1
    for element in lista:
        if element in A:
            if A.index(element) > found:
                found = A.index(element)
        else:
            return -1
    return found
X = 5
A = [1,2,4,5,3]
solution(X,A)
This solution is 100% correct and gets 0% in performance tests.
So I thought that fewer lines plus a list comprehension would get a better score:
def solution(X, A):
    if X < 1 or len(A) < 1:
        return -1
    try:
        found = max([A.index(element) for element in range(1, X + 1)])
    except ValueError:
        return -1
    return found
X = 5
A = [1,2,4,5,3]
solution(X,A)
This one also works but likewise gets 0% in performance; still, it is faster anyway.
I also found a solution by deanalvero (https://github.com/deanalvero/codility/blob/master/python/lesson02/FrogRiverOne.py):
def solution(X, A):
    # write your code in Python 2.6
    frog, leaves = 0, [False] * X
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while leaves[frog]:
                frog += 1
                if frog == X:
                    return minute
    return -1
This solution gets 100% in correctness and performance tests.
My question probably arises because I don't quite understand this time complexity thing. Please tell me how the last solution is better than my second solution. It has a while loop inside a for loop! It should be slow, but it's not.
Here is a solution in which you would get 100% in both correctness and performance.
def solution(X, A):
    i = 0
    dict_temp = {}
    while i < len(A):
        dict_temp[A[i]] = i
        if len(dict_temp) == X:
            return i
        i += 1
    return -1
The answer has already been given, but I'll add an optional solution that I think might help you understand:
def save_frog(x, arr):
    # creating the steps the frog should make
    steps = set(range(1, x + 1))
    # creating the steps the frog has already made
    froggy_steps = set()
    for index, leaf in enumerate(arr):
        froggy_steps.add(leaf)
        if froggy_steps == steps:
            return index
    return -1
I think I got the best performance using set(). Take a look at the performance test runtimes in seconds and compare them with yours:
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
            if len(positions) == X:
                return seconds
    return -1
The number of nested loops doesn't directly tell you anything about the time complexity. Let n be the length of the input array. The body of the while loop needs O(1) time on average, although its worst-case time complexity is O(n). The fast solution uses a boolean array leaves which at every index has the value true if there is a leaf and false otherwise. The body of the while loop is executed no more than n times during the entire algorithm, because frog only ever increases and never resets. The outer for loop is also executed only n times. This means the time complexity of the algorithm is O(n).
The key is that both of your initial solutions are quadratic. They perform an O(n) inner scan (the in and index calls) for each of the X candidate elements, resulting in O(n**2).

The fast solution initially appears to suffer the same fate, since it obviously contains a loop within a loop. But the inner while loop does not get fully scanned for each 'leaf'. Take a look at where 'frog' gets initialized and you'll note that the while loop effectively picks up where it left off for each leaf.
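To see the amortized bound concretely, here is an instrumented copy of the fast solution (the counter and the print are my additions) that tallies how often the inner while-loop body runs in total:

def solution_counting(X, A):
    frog, leaves = 0, [False] * X
    inner_steps = 0  # total executions of the while-loop body
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
            while leaves[frog]:
                inner_steps += 1
                frog += 1
                if frog == X:
                    print(f"inner loop body ran {inner_steps} times for len(A) = {len(A)}")
                    return minute
    return -1

solution_counting(5, [1, 3, 1, 4, 2, 3, 5, 4])
# prints: inner loop body ran 5 times for len(A) = 8

Because frog only ever increases and stops at X, the inner body runs at most X times over the entire call, no matter how many minutes the outer loop covers.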
Here is my 100% solution that uses the sum of an arithmetic progression.
def solution(X, A):
    covered = [False] * (X + 1)
    n = len(A)
    Sx = ((1 + X) * X) // 2  # sum of the arithmetic progression 1..X
    for i in range(n):
        if not covered[A[i]]:
            Sx -= A[i]
            covered[A[i]] = True
            if Sx == 0:
                return i
    return -1
An optimized version of the solution from @sphoenix: there is no need to compare two sets, which is relatively expensive.
def solution(X, A):
    found = set()
    for pos, i in enumerate(A, 0):
        if i <= X:
            found.add(i)
            if len(found) == X:
                return pos
    return -1
And one more optimized solution, using a boolean array:
def solution(X, A):
    steps, leaves = X, [False] * X
    for minute, leaf in enumerate(A, 0):
        if not leaves[leaf - 1]:
            leaves[leaf - 1] = True
            steps -= 1
            if steps == 0:
                return minute
    return -1
The last one is better: it uses fewer resources. A set consumes more memory and CPU than a boolean list; a rough check is below.
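A rough way to check the memory half of that claim (the exact sizes are an assumption; they vary by Python version and platform):

import sys

X = 100_000
print(sys.getsizeof([False] * X))           # the boolean list's own buffer
print(sys.getsizeof(set(range(1, X + 1))))  # the set's hash table, not counting the ints

On a typical CPython build, the list is several times smaller, and indexing it avoids hashing on every access.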
def solution(X, A):
    # if there are not enough items in the list
    if X > len(A):
        return -1
    # else check all items
    else:
        d = {}
        for i, leaf in enumerate(A):
            d[leaf] = i
            if len(d) == X:
                return i
    # if all else fails
    return -1
I tried to use instructions that are as simple as possible.
def solution(X, A):
    if X > len(A):  # check for no answer, simple case
        return -1
    elif X == 1:  # check for a single element
        return 0
    else:
        std_set = {i for i in range(1, X + 1)}  # the set of required positions
        this_set = set(A)  # set of unique elements in the list
        if sum(std_set) > sum(this_set):  # check for no answer, complex case
            return -1
        else:
            for i in range(0, len(A)):
                if A[i] in std_set:
                    std_set.remove(A[i])  # remove each found element from the required set
                    if not std_set:  # if all are removed, return the last filled position
                        return i
    return -1
I guess this code might not fulfill the runtime requirements, but it is the simplest I could think of.
I am using OrderedDict from collections and the sum of the first n numbers to check whether the frog will be able to cross or not.
def solution(X, A):
    from collections import OrderedDict as od
    if sum(set(A)) != (X * (X + 1)) // 2:
        return -1
    k = list(od.fromkeys(A).keys())[-1]
    for x, y in enumerate(A):
        if y == k:
            return x
This code gets 100% for correctness and performance and runs in O(N).
def solution(x, a):
    # write your code in Python 3.6
    # initialize all positions to zero
    # i.e. if x = 2, then x + 1 = 3, so
    # x_positions = [0, 0, 0] (indices 0..2)
    x_positions = [0] * (x + 1)
    min_time = -1
    for k in range(len(a)):
        # since we are looking for min time, ensure that you only
        # count the positions that matter
        if a[k] <= x and x_positions[a[k]] == 0:
            x_positions[a[k]] += 1
            min_time = k
        # ensure that all positions are available for the frog to jump
        if sum(x_positions) == x:
            return min_time
    return -1
100% performance using sets
def solution(X, A):
    positions = set()
    for i in range(len(A)):
        if A[i] not in positions:
            positions.add(A[i])
            if len(positions) == X:
                return i
    return -1
Given an array of numbers arr and an integer x, distribute x such that the difference between any pair is the minimum possible.
E.g. arr = [4, 2, 0] and x = 10;
the answer should be [6, 5, 5];
it is obligatory to use all of x.
Compute the final mean as (sum(arr) + x) / len(arr). That would be the ideal target for all numbers if we could also decrease them.
The rounded-down quotient tells us the minimum every number shall become, and the remainder tells us how many numbers shall get an additional 1 added. Do that after eliminating numbers that are already too large.
Total time O(n log n).
Python implementation:
def distribute(arr, x):
    total = sum(arr) + x
    I = sorted(range(len(arr)), key=arr.__getitem__)
    while I:
        minimum, additional = divmod(total, len(I))
        if arr[I[-1]] <= minimum:
            break
        total -= arr[I.pop()]
    for i in sorted(I):
        arr[i] = minimum
        if additional > 0:
            arr[i] += 1
            additional -= 1
Results from testing some hardcoded inputs, larger random inputs, and exhaustive small inputs:
433103 tests passed
0 tests failed
Full code (Try it online!):
from random import choices
from itertools import product

def distribute(arr, x):
    total = sum(arr) + x
    I = sorted(range(len(arr)), key=arr.__getitem__)
    while I:
        minimum, additional = divmod(total, len(I))
        if arr[I[-1]] <= minimum:
            break
        total -= arr[I.pop()]
    for i in sorted(I):
        arr[i] = minimum
        if additional > 0:
            arr[i] += 1
            additional -= 1

def naive(arr, x):
    for _ in range(x):
        arr[arr.index(min(arr))] += 1

passed = failed = 0

def test(arr, x):
    expect = arr.copy()
    naive(expect, x)
    result = arr.copy()
    distribute(result, x)
    global passed, failed
    if result == expect:
        passed += 1
    else:
        failed += 1
        print('failed:')
        print(f'{arr = }')
        print(f'{expect = }')
        print(f'{result = }')
        print()

# Tests from OP, me, and David
test([4, 2, 0], 10)
test([4, 2, 99, 0], 10)
test([20, 15, 10, 5, 0], 10)

# Random larger tests
for x in range(1000):
    arr = choices(range(100), k=100)
    test(arr, x)

# Exhaustive smaller tests
for n in range(5):
    for arr in product(range(10), repeat=n):
        arr = list(arr)
        for x in range(n * 10):
            test(arr, x)

print(f'{passed} tests passed')
print(f'{failed} tests failed')
For large inputs with smaller range, it can be more efficient to binary search the target minimum. I didn't expect it, but apparently this solution can be up to seven times faster than don't talk just code's answer even for medium size ranges. Here's an example with range 20 (seven times faster), and one with 100,000,000 (two times faster): https://ideone.com/X6GxFD. As we increase input length, this answer seems to be significantly faster even for the full 64 bit range.
Python code:
def f(A, x):
    smallest = min(A)
    lo = smallest
    hi = smallest + x
    while lo < hi:
        mid = lo + (hi - lo) // 2
        can_reach = True
        temp = x
        for a in A:
            if a <= mid:
                diff = mid - a
                if diff > temp:
                    can_reach = False
                    break
                else:
                    temp -= diff
        if can_reach:
            lo = mid + 1
        else:
            hi = mid
    target = lo - 1
    for i, a in enumerate(A):
        if a < target:
            x -= target - a
            A[i] = target
    if x:
        for i, a in enumerate(A):
            if a == target:
                A[i] += 1
                x -= 1
                if x == 0:
                    break
    return A
Here's a solution that can beat both my binary search answer, as well as don't talk just code's answer at some larger input lengths. The idea is to sort the array and find the largest minimum by accumulation, traversing from smaller to larger, with O(1) space for the latter, avoiding pop operations.
Test link.
Python code:
def g(A, x):
    s = sorted(range(len(A)), key=lambda i: A[i])
    total = x
    count = 1
    curr = A[s[0]]
    to_add = 0
    extra = 0
    for i in range(1, len(A)):
        diff = A[s[i]] - curr
        needed = count * diff
        if needed >= total:
            break
        curr = A[s[i]]
        total -= needed
        count += 1
    if total:
        extra, to_add = divmod(total, count)
    for i in range(count):
        A[s[i]] = curr + extra
        if to_add:
            A[s[i]] += 1
            to_add -= 1
    return A
Assuming the position of unchanged values does not need to be preserved:

- Convert the array into a min heap ("heapify", O(n)).
- Repeatedly pop and count the minimal values from the heap until either
  - the heap is empty: distribute the rest; done, or
  - the top is greater: if there is not enough budget left to make all minimums equal to the top, distribute the rest and stop; otherwise decrease the budget accordingly and continue popping.

Overall: O(n + #increased_values * log(n)).

The final write-back of the increased values is left as an exercise (for now); a possible sketch follows below.
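Here is a possible sketch of this approach, including the write-back (the function name and details are mine; it assumes a non-empty input):

import heapq

def distribute_heap(arr, x):
    h = arr[:]                 # work on a copy
    heapq.heapify(h)           # O(n)
    level = heapq.heappop(h)   # common value of all minimums popped so far
    count = 1                  # how many minimums have been popped so far
    while h:
        if h[0] > level:
            need = count * (h[0] - level)  # budget to lift all minimums to the next value
            if need > x:
                break                      # cannot equalize further: distribute the rest
            x -= need
            level = h[0]
        count += 1
        heapq.heappop(h)
    q, r = divmod(x, count)    # spread the remaining budget evenly
    return [level + q + 1] * r + [level + q] * (count - r) + list(h)

print(distribute_heap([4, 2, 0], 10))      # [6, 5, 5]
print(distribute_heap([4, 2, 99, 0], 10))  # [6, 5, 5, 99] (order not preserved)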
Assuming that you are to minimize the maximum difference between any pair of numbers, then this is the general approach:

1. Sort the numbers.
2. Find the lowest number(s).
3. If there are Y lowest numbers, then decrement X by Y and add 1 to each of the lowest numbers until either X runs out or the lowest numbers become equal to the next lowest numbers.
4. If X is used up, then exit.
5. If not, then go to step #2 and repeat.

Obviously, you can improve step #3 with a little bit of math; a sketch of that is below.
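A minimal sketch of these steps in Python, with the unit-by-unit increments of step #3 replaced by the divmod math hinted at above (the names are mine):

def distribute_sorted(arr, x):
    a = sorted(arr)                   # step 1: sort
    k = 1                             # size of the current group of lowest numbers
    while k < len(a):
        need = k * (a[k] - a[k - 1])  # cost to raise the group to the next value
        if need > x:
            break
        x -= need                     # step 3 in one arithmetic move
        k += 1
    # the k lowest values now all sit at a[k - 1]; spread what is left evenly
    q, r = divmod(x, k)
    a[:k] = [a[k - 1] + q + 1] * r + [a[k - 1] + q] * (k - r)
    return a

print(distribute_sorted([4, 2, 0], 10))  # [6, 5, 5]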
I am trying to solve the problem given below. (I solved it using recursion, but I am having a hard time trying to use a cache to prevent a lot of the same steps being recalculated.)
"""
Given a positive integer N, find the smallest number of steps it will take to reach 1.
There are two kinds of permitted steps:
You may decrement N to N - 1.
If a * b = N, you may decrement N to the larger of a and b.
For example, given 100, you can reach 1 in five steps with the following route:
100 -> 10 -> 9 -> 3 -> 2 -> 1.
"""
def multiples(num):
    ret = []
    start = 2
    while start < num:
        if num % start == 0:
            ret.append(num // start)
        start += 1
        if ret and start >= ret[-1]:
            break
    return ret if ret else None
def min_jumps_no_cache(num, jumps=0):
    if num == 1:
        return jumps
    mults = multiples(num)
    res = []
    res.append(min_jumps_no_cache(num - 1, jumps + 1))
    if mults:
        for mult in mults:
            res.append(min_jumps_no_cache(mult, jumps + 1))
    return min(res)
Now, I am trying to add a cache in here because of the obvious high runtime of this solution. But I have run into a similar issue before where I am overwriting the cache and I was curious if there is a solution to this.
def min_jumps_cache(num, jumps=0, cache={}):
    if num == 1:
        return jumps
    if num in cache:
        return cache[num]
    mults = multiples(num)
    res = []
    temp = min_jumps_cache(num - 1, jumps + 1, cache)
    res.append(temp)
    if mults:
        for mult in mults:
            res.append(min_jumps_cache(mult, jumps + 1, cache))
    temp = min(res)
    cache[num] = min(temp, cache[num]) if num in cache else temp
    return temp
It seems to me the issue here is you can't really cache an answer until you have calculated both its "left and right" solutions to find the answer. Is there something else I am missing here?
Your solution is fine so far as it goes.
However it will be more efficient to do a breadth-first solution from the bottom up. (This can trivially be optimized more. How is an exercise for the reader.)
def path(n):
    path_from = {}
    queue = [(1, None)]
    while True:
        value, prev = queue.pop(0)
        if value not in path_from:
            path_from[value] = prev
            if value == n:
                break  # EXIT HERE
            queue.append((value + 1, value))
            for i in range(2, min(value + 1, n // value + 1)):
                queue.append((i * value, value))
    answer = []
    while n is not None:
        answer.append(n)
        n = path_from[n]
    return answer

print(path(100))
Problem: I am struggling to understand/visualize the Dynamic Programming approach for "A type of balanced 0-1 matrix" in the Dynamic Programming Wikipedia article.
Wikipedia Link: https://en.wikipedia.org/wiki/Dynamic_programming#A_type_of_balanced_0.E2.80.931_matrix
I couldn't understand how the memoization works when dealing with a multidimensional array. For example, when solving the Fibonacci series with DP, using an array to store previous state results is easy, as the array index stores the solution for that state.
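For instance, a minimal version of that one-dimensional pattern, where the array index is the state:

def fib(n):
    memo = [0, 1] + [None] * (n - 1)         # memo[i] holds fib(i)
    for i in range(2, n + 1):
        memo[i] = memo[i - 1] + memo[i - 2]  # each state is filled from stored states
    return memo[n]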
Can someone explain DP approach for the "0-1 balanced matrix" in simpler manner?
Wikipedia offered both a crappy explanation and a less than ideal algorithm. But let's work with it as a starting place.

First let's take the backtracking algorithm. Rather than put the cells of the matrix "in some order", let's do everything in the first row, then everything in the second row, then everything in the third row, and so on. Clearly that will work.

Now let's modify the backtracking algorithm slightly. Instead of going cell by cell, we'll go row by row. So we make a list of the n choose n/2 possible rows that are half 0 and half 1. Then we have a recursive function that looks something like this:
def count_0_1_matrices(n, filled_rows=None):
    if filled_rows is None:
        filled_rows = []
    if len(filled_rows) == n:
        # Every row is placed and no column overran: one valid matrix.
        return 1
    if some_column_exceeds_threshold(n, filled_rows):
        # Cannot have more than n/2 0s or 1s in any column
        return 0
    else:
        answer = 0
        for row in possible_rows(n):
            answer = answer + count_0_1_matrices(n, filled_rows + [row])
        return answer
This is a backtracking algorithm like what we had before. We are just doing whole rows at a time, not cells.
But notice, we're passing around more information than we need. There is no need to pass in the exact arrangement of rows. All that we need to know is how many 1s are needed in each remaining column. So we can make the algorithm look more like this:
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    for i in still_needed:
        if i < 0:
            return 0
    # Did we reach the end of our matrix?
    if 0 == sum(still_needed):
        return 1
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, next_still_needed)
    return answer
This version is almost the recursive function in the Wikipedia version. The main difference is that our base case is that after every row is finished, we need nothing, while Wikipedia would have us code up the base case to check the last row after every other is done.
To get from this to a top-down DP, you only need to memoize the function. Which in Python you can do by defining and then adding a @memoize decorator. Like this:
from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap
But remember that I criticized the Wikipedia algorithm? Let's start improving it! The first big improvement is this. Do you notice that the order of the elements of still_needed can't matter, just their values? So just sorting the elements will stop you from doing the calculation separately for each permutation. (There can be a lot of permutations!)
@memoize
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    for i in still_needed:
        if i < 0:
            return 0
    # Did we reach the end of our matrix?
    if 0 == sum(still_needed):
        return 1
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer
That little innocuous sorted doesn't look important, but it saves a lot of work! And now that we know that still_needed is always sorted, we can simplify our checks for whether we are done, and whether anything went negative. Plus we can add an easy check to filter out the case where we have too many 0s in a column.
@memoize
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column? (still_needed is sorted ascending)
    if still_needed[0] < 0:
        return 0
    total = sum(still_needed)
    if 0 == total:
        # We reached the end of our matrix.
        return 1
    elif total*2/n < still_needed[-1]:
        # We have total*2/n rows left, but won't get enough 1s for a
        # column.
        return 0
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer
And, assuming you implement possible_rows, this should both work and be significantly more efficient than what Wikipedia offered.
=====
Here is a complete working implementation. On my machine it calculated the 6th term in under 4 seconds.
#! /usr/bin/env python

from sys import argv
from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@memoize
def count_0_1_matrices(n, still_needed=None):
    if 0 == n:
        return 1
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    if still_needed[0] < 0:
        return 0
    total = sum(still_needed)
    if 0 == total:
        # We reached the end of our matrix.
        return 1
    elif total*2/n < still_needed[-1]:
        # We have total*2/n rows left, but won't get enough 1s for a
        # column.
        return 0
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer

@memoize
def possible_rows(n):
    return [row for row in _possible_rows(n, n//2)]

def _possible_rows(n, k):
    if 0 == n:
        yield tuple()
    else:
        if k < n:
            for row in _possible_rows(n-1, k):
                yield row + (0,)
        if 0 < k:
            for row in _possible_rows(n-1, k-1):
                yield row + (1,)

n = 2
if 1 < len(argv):
    n = int(argv[1])
print(count_0_1_matrices(2*n))
You're memoizing states that are likely to be repeated. The state that needs to be remembered in this case is the vector (k is implicit). Let's look at one of the examples you linked to. Each pair in the vector argument (of length n) represents "the number of zeros and ones that have yet to be placed in that column."

Take the example on the left, where the vector is ((1, 1), (1, 1), (1, 1), (1, 1)) at k = 2, and the assignments leading to it were 1 0 1 0 at k = 3 and 0 1 0 1 at k = 4. But we could get to the same state, ((1, 1), (1, 1), (1, 1), (1, 1)) at k = 2, from a different set of assignments, for example 0 1 0 1 at k = 3 and 1 0 1 0 at k = 4. If we memoize the result for the state ((1, 1), (1, 1), (1, 1), (1, 1)), we can avoid recalculating the recursion for that branch again.
Please let me know if there's anything I could better clarify.
Further elaboration in response to your comment:
The Wikipedia example seems to be pretty much a brute force with memoization. The algorithm attempts to enumerate all the matrices but uses memoization to exit early from repeated states. How do we enumerate all possibilities? To take their example, n = 4, we start with the vector [(2,2),(2,2),(2,2),(2,2)] where zeros and ones are yet to be placed. (Since the sum of each tuple in the vector is k, we could have a simpler vector where k and the count of either ones or zeros is maintained.)
At every stage, k, in the recursion, we enumerate all possible configurations for the next vector. If the state exists in our hash, we simply return the value for that key. Otherwise, we assign the vector as a new key in the hash (in which case this recursion branch will continue).
For example:
Vector: [(2,2),(2,2),(2,2),(2,2)]
Possible assignments of 1s: [1 1 0 0], [1 0 1 0], [1 0 0 1] ... etc.
First branch: [(2,1),(2,1),(1,2),(1,2)]
Is this vector a key in the hash?
    If yes, return the looked-up value.
    Else, assign this vector as a key in the hash, where the value is the sum
    of the function calls with the next possible vectors as their arguments.
Building on the excellent answer by btilly (https://stackoverflow.com/users/585411/btilly), I've updated their algorithm to exclude "0" cases in the still_needed tuple. The code is about 50% faster, largely because of more cache hits using the collapsible tuple.
import time
from typing import List, Tuple
from sys import argv
from functools import cache

@cache
def possible_rows(n, k=None) -> List[Tuple[int, ...]]:
    if k is None:
        k = n // 2
    return [row for row in _possible_rows(n, k)]

def _possible_rows(n, k):
    if 0 == n:
        yield tuple()
    else:
        if k < n:
            for row in _possible_rows(n-1, k):
                yield row + (0,)
        if 0 < k:
            for row in _possible_rows(n-1, k-1):
                yield row + (1,)

def count(n: int, k: int) -> int:
    if n == 0:
        return 1
    still_needed = tuple([k] * n)
    return count_0_1_matrices(k, still_needed)

@cache
def count_0_1_matrices(k: int, still_needed: Tuple[int, ...]) -> int:
    """
    Assume still_needed contains only positive ints and is sorted ascending.
    """
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(len(still_needed), k):
        # Decrement the still_needed value tuple by the row tuple and only keep
        # positive results. Sorting is important for cache hits.
        next_still_needed = tuple(sorted(sn - r for sn, r in zip(still_needed, row) if sn > r))
        # Only continue if we still need values and there are enough rows left.
        if not next_still_needed:
            answer += 1
        elif len(next_still_needed) >= k and sum(next_still_needed) >= next_still_needed[-1] * k:
            # sum / k -> how many rows are left. We need enough rows left to continue down this path.
            answer += count_0_1_matrices(k, next_still_needed)
    return answer

if __name__ == "__main__":
    n = 7
    if 1 < len(argv):
        n = int(argv[1])
    start = time.time()
    result = count(2*n, n)
    print(f"{result} in {time.time() - start} seconds")
Suppose sum(x_i) = 10 and 0 <= x_i <= 2 for i = 1, 2, ..., 10. How can I find all integer solutions for x_i? Thank you. I have read about the Euclidean algorithm, but it seems to work only for two unknown variables. What algorithms can be used here?
If you really want to have all solutions: recursively enumerate all possible variable assignments, with some optimizations:
The value of the last variable can be calculated from the sum constraint.
The search can be pruned when you see that a partial assignment can no longer lead to a valid solution (e.g. if the sum is already larger than 10, or if there are too few variables left to reach a sum of 10).
Recursion is best. Here is the natural Python solution with generators:
def solutions(variables, sum_left, max_value):
    if 0 == variables:
        if 0 == sum_left:
            yield []
    else:
        for i in range(0, max_value + 1):
            if sum_left < i:
                break
            else:
                for partial_solution in solutions(variables - 1, sum_left - i,
                                                  max_value):
                    yield [i] + partial_solution

for x in solutions(10, 10, 2):
    print(x)
The benefit of generators being that you don't have to build a long list in memory first. Here is an alternate solution which does not use generators and also avoids building up the list.
def do_something_for_solutions(variables, sum_left, max_value, known=None):
    if known is None:
        known = []
    if 0 == variables:
        if 0 == sum_left:
            do_something(known)
    else:
        for i in range(0, max_value + 1):
            if sum_left < i:
                break
            else:
                do_something_for_solutions(variables - 1, sum_left - i,
                                           max_value, known + [i])

def do_something(solution):
    print(solution)

do_something_for_solutions(10, 10, 2)
If you choose to return solutions, that is possible as follows:
def solutions(variables, sum_left, max_value):
    if 0 == variables:
        if 0 == sum_left:
            return [[]]
        else:
            return []
    else:
        answer = []
        for i in range(0, max_value + 1):
            if sum_left < i:
                break
            else:
                for partial_solution in solutions(variables - 1, sum_left - i,
                                                  max_value):
                    answer.append([i] + partial_solution)
        return answer

for x in solutions(10, 10, 2):
    print(x)
(Be warned that if you change the parameters, that list can easily become huge...)
You are looking for the permutations of the integer partitions of the number 100, where each integer partition has
at most 10 parts; and
each part is at most 15.
There are certainly a lot of cases, but 10! of them are still manageable by computers.
Edit: OP has edited the question, so: the number 10 should be broken up into integer partitions with at most 10 parts, where each part is at most 2. A sketch of this viewpoint is below.
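A sketch of this viewpoint for the edited numbers (partitions of 10 into at most 10 parts, each part at most 2); the helper and the multinomial counting of distinct permutations are my additions:

from math import factorial
from collections import Counter

def partitions(total, limit, parts):
    # fixed-length, non-increasing tuples; zeros pad the tail
    if parts == 0:
        if total == 0:
            yield ()
        return
    for first in range(min(total, limit), -1, -1):
        for rest in partitions(total - first, min(limit, first), parts - 1):
            yield (first,) + rest

total_solutions = 0
for p in partitions(10, 2, 10):
    perms = factorial(len(p))
    for c in Counter(p).values():
        perms //= factorial(c)  # distinct permutations of the multiset
    total_solutions += perms
print(total_solutions)  # 8953, matching the recursive enumeration above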
Well, I have this bit of code that is slowing down the program hugely: it is linear complexity but called a lot of times, making the whole program quadratic. If possible I would like to reduce its computational complexity, but otherwise I'll just optimize it where I can. So far I have reduced it down to:
def table(n):
    a = 1
    while 2*a <= n:
        if (-a*a) % n == 1:
            return a
        a += 1
Anyone see anything I've missed? Thanks!
EDIT: I forgot to mention: n is always a prime number.
EDIT 2: Here is my new improved program (thanks for all the contributions!):
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1
    for a in range(1, n//2 + 1):
        if (a*a) % n == a1:
            return a
EDIT 3: And testing it out in its real context, it is much faster! Well, this question appears solved, but there are many useful answers. I should also say that, on top of the above optimizations, I have memoized the function using Python dictionaries...
Ignoring the algorithm for a moment (yes, I know, bad idea), the running time of this can be decreased hugely just by switching from while to for:

for a in range(1, n // 2 + 1):

(Hope this doesn't have an off-by-one error. I'm prone to making those.)
Another thing that I would try is to see whether the step width can be increased.
Take a look at http://modular.fas.harvard.edu/ent/ent_py .
The function sqrtmod does the job if you set a = -1 and p = n.
You missed a small point: the running time of your improved algorithm is still on the order of the square root of n. As long as you have only small primes n (say, less than 2^64), that's OK, and you should probably prefer your implementation to a more complex one.

If the prime n becomes bigger, you might have to switch to an algorithm using a little bit of number theory. To my knowledge, your problem can be solved only by a probabilistic algorithm in time log(n)^3. If I remember correctly, assuming the Riemann hypothesis holds (which most people do), one can show that the running time of the following algorithm (in Ruby - sorry, I don't know Python) is log(log(n))*log(n)^3:
class Integer
  # calculate b to the power of e modulo self
  def power(b, e)
    raise 'power only defined for integer base' unless b.is_a? Integer
    raise 'power only defined for integer exponent' unless e.is_a? Integer
    raise 'power is implemented only for positive exponent' if e < 0
    return 1 if e.zero?
    x = power(b, e>>1)
    x *= x
    (e & 1).zero? ? x % self : (x*b) % self
  end

  # Fermat test (probabilistic prime number test)
  def prime?(b = 2)
    raise "base must be at least 2 in prime?" if b < 2
    raise "base must be an integer in prime?" unless b.is_a? Integer
    power(b, self >> 1) == 1
  end

  # find square root of -1 modulo prime
  def sqrt_of_minus_one
    return 1 if self == 2
    return false if (self & 3) != 1
    raise 'sqrt_of_minus_one works only for primes' unless prime?
    # now just try all numbers (each succeeds with probability 1/2)
    2.upto(self) do |b|
      e = self >> 1
      e >>= 1 while (e & 1).zero?
      x = power(b, e)
      next if [1, self-1].include? x
      loop do
        y = (x*x) % self
        return x if y == self-1
        raise 'sqrt_of_minus_one works only for primes' if y == 1
        x = y
      end
    end
  end
end

# find a prime
p = loop do
  x = rand(1<<512)
  next if (x & 3) != 1
  break x if x.prime?
end

puts "%x" % p
puts "%x" % p.sqrt_of_minus_one
The slow part is now finding the prime (which takes approx. log(n)^4 integer operation); finding the square root of -1 takes for 512-bit primes still less than a second.
Consider pre-computing the results and storing them in a file. Nowadays many platforms have a huge disk capacity. Then obtaining a result will be an O(1) operation; a minimal sketch follows.
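A minimal sketch of that idea (the file name and the pickle format are my choices; table is the function being discussed):

import pickle

def build_cache(ns, path='table_cache.pickle'):
    # precompute table(n) for every n of interest and store the results on disk
    with open(path, 'wb') as f:
        pickle.dump({n: table(n) for n in ns}, f)

def load_cache(path='table_cache.pickle'):
    # afterwards every result is a single O(1) dictionary lookup
    with open(path, 'rb') as f:
        return pickle.load(f)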
(Building on Adam's answer.)
Look at the Wikipedia page on quadratic reciprocity:
x^2 ≡ −1 (mod p) is solvable if and only if p ≡ 1 (mod 4).
Then you can avoid the search of a root precisely for those odd prime n's that are not congruent with 1 modulo 4:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return None  # or raise an exception
    ...
Based off OP's second edit:
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    mod = 0
    a1 = n - 1
    for a in xrange(1, a1, 2):
        mod += a
        while mod >= n: mod -= n
        if mod == a1: return a//2 + 1
It looks like you're trying to find the square root of -1 modulo n. Unfortunately, this is not an easy problem, depending on what values of n are input into your function. Depending on n, there might not even be a solution. See Wikipedia for more information on this problem.
Edit 2: Surprisingly, strength-reducing the squaring reduces the time a lot, at least on my Python2.5 installation. (I'm surprised because I thought interpreter overhead was taking most of the time, and this doesn't reduce the count of operations in the inner loop.) Reduces the time from 0.572s to 0.146s for table(1234577).
def table(n):
    n1 = n - 1
    square = 0
    for delta in xrange(1, n, 2):
        square += delta
        if n <= square: square -= n
        if square == n1: return delta // 2 + 1
strager posted the same idea but I think less tightly coded. Again, jug's answer is best.
Original answer: Another trivial coding tweak on top of Konrad Rudolph's:
def table(n):
    n1 = n - 1
    for a in xrange(1, n // 2 + 1):
        if (a*a) % n == n1: return a
Speeds it up measurably on my laptop. (About 25% for table(1234577).)
Edit: I didn't notice the python3.0 tag; but the main change was hoisting part of the calculation out of the loop, not the use of xrange. (Academic since there's a better algorithm.)
Is it possible for you to cache the results?
When you calculate a large n you are given the results for the lower n's almost for free.
One thing that you are doing is repeating the calculation -a*a over and over again. Create a table of the values once and then do lookups in the main loop.

Also, although this probably doesn't apply to you because your function's name is table, if you call a function that takes time to calculate, you should cache the result in a table and just do a table lookup if you call it again with the same value. This doesn't save you the time of calculating the values on the first run, but you avoid repeating the calculation more than once; a minimal sketch follows.
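A minimal sketch of that table-lookup caching (the wrapper name is mine; the OP's Edit 3 mentions doing the same thing with dictionaries):

_table_cache = {}

def table_cached(n):
    # compute each value once; repeated calls become dictionary lookups
    if n not in _table_cache:
        _table_cache[n] = table(n)
    return _table_cache[n]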
I went through and fixed the Harvard version to make it work with python 3.
http://modular.fas.harvard.edu/ent/ent_py
I made some slight changes to make the results exactly the same as the OP's function. There are two possible answers and I forced it to return the smaller answer.
import timeit

def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    a1 = n - 1

    def inversemod(a, p):
        x, y = xgcd(a, p)
        return x % p

    def xgcd(a, b):
        x_sign = 1
        if a < 0: a = -a; x_sign = -1
        x = 1; y = 0; r = 0; s = 1
        while b != 0:
            (c, q) = (a % b, a // b)
            (a, b, r, s, x, y) = (b, c, x - q*r, y - q*s, r, s)
        return (x*x_sign, y)

    def mul(x, y):
        return ((x[0]*y[0] + a1*y[1]*x[1]) % n, (x[0]*y[1] + x[1]*y[0]) % n)

    def pow(x, nn):
        ans = (1, 0)
        xpow = x
        while nn != 0:
            if nn % 2 != 0:
                ans = mul(ans, xpow)
            xpow = mul(xpow, xpow)
            nn >>= 1
        return ans

    for z in range(2, n):
        u, v = pow((1, z), a1 // 2)
        if v != 0:
            vinv = inversemod(v, n)
            if (vinv*vinv) % n == a1:
                vinv %= n
                if vinv <= n // 2:
                    return vinv
                else:
                    return n - vinv

tt = 0
pri = [5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97, 1234577, 5915587277, 3267000013, 3628273133, 2860486313, 5463458053, 3367900313]
for x in pri:
    t = timeit.Timer('q=table(' + str(x) + ')', 'from __main__ import table')
    tt += t.timeit(number=100)
    print("table(", x, ")=", table(x))
print('total time=', tt/100)
This version takes about 3 ms to run through the test cases above. For comparison, using the prime number 1234577:

OP's Edit 2: 745 ms
The accepted answer: 522 ms
The above function: 0.2 ms