I am trying to solve the following problem on LeetCode: Coin Change 2.
Input: amount = 5, coins = [1, 2, 5]
Output: 4
Explanation: there are four ways to make up the amount:
5=5
5=2+2+1
5=2+1+1+1
5=1+1+1+1+1
I am trying to implement an iterative solution which essentially simulates/mimics recursion using a stack. I have managed to implement it, and the solution works, but it exceeds the time limit.
I have noticed that the recursive solutions make use of memoization to optimize. I would like to incorporate that into my iterative solution as well, but I am lost on how to proceed.
My solution so far:
def change(self, amount: int, coins: List[int]) -> int:
    # stack to simulate recursion
    stack = []
    # add starting indexes and sum to stack
    # tuple (x, y) where x is the sum, y is the index into the coins array
    for i in range(0, len(coins)):
        if coins[i] <= amount:
            stack.append((coins[i], i))
    result = 0
    while len(stack) != 0:
        c = stack.pop()
        currentsum = c[0]
        currentindex = c[1]
        # can't explore further
        if currentsum > amount:
            continue
        # condition met, increment result
        if currentsum == amount:
            result = result + 1
            continue
        # add coin at current index to sum if it doesn't exceed amount (append call to stack)
        if (currentsum + coins[currentindex]) <= amount:
            stack.append((currentsum + coins[currentindex], currentindex))
        # skip coin at current index (append call to stack)
        if (currentindex + 1) <= len(coins) - 1:
            stack.append((currentsum, currentindex + 1))
    return result
I have tried using a dictionary to record appends to the stack as follows:
# if the call has not already happened, add to dictionary
if dictionary.get((currentsum, currentindex + 1), None) == None:
    stack.append((currentsum, currentindex + 1))
    dictionary[(currentsum, currentindex + 1)] = 'visited'
For example, if the call (2, 1) with sum = 2 and coin-array index = 1 is made, I add it to the dictionary. If the same call is encountered again, I don't append it again. However, this does not work, because different combinations can reach the same sum and index, and each still needs to be counted.
Is there any way I can incorporate memoization into my iterative solution above? I want to do it in a way that is functionally the same as the recursive solution.
I have managed to figure out the solution. Essentially, I used post-order traversal with a state variable that records the stage of recursion the current call is in. Using that stage, I can go bottom-up after going top-down.
The solution I came up with is as follows:
def change(self, amount: int, coins: List[int]) -> int:
    if amount <= 0:
        return 1
    if len(coins) == 0:
        return 0
    d = dict()
    coins.sort(reverse=True)
    # (currentsum, index, instruction)
    stack = [(0, 0, 'ENTER')]
    calls = 0
    while len(stack) != 0:
        currentsum, index, instruction = stack.pop()
        if currentsum == amount:
            d[(currentsum, index)] = 1
            continue
        elif instruction == 'ENTER':
            stack.append((currentsum, index, 'EXIT'))
            if (index + 1) <= (len(coins) - 1):
                if d.get((currentsum, index + 1), None) == None:
                    stack.append((currentsum, index + 1, 'ENTER'))
            newsum = currentsum + coins[index]
            if newsum <= amount:
                if d.get((newsum, index), None) == None:
                    stack.append((newsum, index, 'ENTER'))
        elif instruction == 'EXIT':
            newsum = currentsum + coins[index]
            left = 0 if d.get((newsum, index), None) == None else d.get((newsum, index))
            right = 0 if d.get((currentsum, index + 1), None) == None else d.get((currentsum, index + 1))
            d[(currentsum, index)] = left + right
            calls = calls + 1
    print(calls)
    return d[(0, 0)]
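For reference, and separate from the post-order simulation above: the same counting recurrence is usually written as a bottom-up table for this problem. A minimal sketch (function name mine):
from typing import List

def change_bottom_up(amount: int, coins: List[int]) -> int:
    # ways[s] = number of coin combinations summing to s using the coins seen so far
    ways = [1] + [0] * amount
    for coin in coins:
        for s in range(coin, amount + 1):
            ways[s] += ways[s - coin]
    return ways[amount]

print(change_bottom_up(5, [1, 2, 5]))  # 4
Iterating over coins in the outer loop is what makes this count combinations rather than ordered sequences.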
I am trying to solve the problem given below. (I solved it using recursion, but I am having a hard time trying to use a cache to prevent a lot of the same steps being recalculated.)
"""
Given a positive integer N, find the smallest number of steps it will take to reach 1.
There are two kinds of permitted steps:
You may decrement N to N - 1.
If a * b = N, you may decrement N to the larger of a and b.
For example, given 100, you can reach 1 in five steps with the following route:
100 -> 10 -> 9 -> 3 -> 2 -> 1.
"""
def multiples(num):
    ret = []
    start = 2
    while start < num:
        if num % start == 0:
            ret.append(num // start)
        start += 1
        if ret and start >= ret[-1]:
            break
    return ret if ret else None
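(Tracing this helper: multiples(100) should return [50, 25, 20, 10], the larger factor from each proper factor pair of 100.)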
def min_jumps_no_cache(num, jumps=0):
    if num == 1:
        return jumps
    mults = multiples(num)
    res = []
    res.append(min_jumps_no_cache(num - 1, jumps + 1))
    if mults:
        for mult in mults:
            res.append(min_jumps_no_cache(mult, jumps + 1))
    return min(res)
Now, I am trying to add a cache here because of the obviously high runtime of this solution. But I have run into a similar issue before, where I was overwriting the cache, and I was curious whether there is a solution to this.
def min_jumps_cache(num, jumps=0, cache={}):
    if num == 1:
        return jumps
    if num in cache:
        return cache[num]
    mults = multiples(num)
    res = []
    temp = min_jumps_cache(num - 1, jumps + 1, cache)
    res.append(temp)
    if mults:
        for mult in mults:
            res.append(min_jumps_cache(mult, jumps + 1, cache))
    temp = min(res)
    cache[num] = min(temp, cache[num]) if num in cache else temp
    return temp
It seems to me the issue here is that you can't really cache an answer until you have calculated both its "left and right" solutions. Is there something else I am missing here?
Your solution is fine as far as it goes.
However, it will be more efficient to do a breadth-first search from the bottom up. (This can trivially be optimized further; how is an exercise for the reader.)
def path(n):
    path_from = {}
    queue = [(1, None)]
    while True:
        value, prev = queue.pop(0)
        if value not in path_from:
            path_from[value] = prev
            if value == n:
                break  # EXIT HERE
            queue.append((value + 1, value))
            for i in range(2, min(value + 1, n // value + 1)):
                queue.append((i * value, value))
    answer = []
    while n is not None:
        answer.append(n)
        n = path_from[n]
    return answer
print(path(100))
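For completeness, the top-down cache the question was circling can be repaired by keying the cache on the number of steps remaining from num (which is path-independent), instead of folding the jumps-so-far into the cached value. A minimal sketch, reusing the multiples helper from the question (function name mine):
def min_jumps_memo(num, cache=None):
    # cache[num] = fewest steps from num down to 1, which is path-independent
    if cache is None:
        cache = {}
    if num == 1:
        return 0
    if num in cache:
        return cache[num]
    best = 1 + min_jumps_memo(num - 1, cache)
    for mult in multiples(num) or []:  # multiples() returns None when there are none
        best = min(best, 1 + min_jumps_memo(mult, cache))
    cache[num] = best
    return best

print(min_jumps_memo(100))  # 5, matching the 100 -> 10 -> 9 -> 3 -> 2 -> 1 route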
Stack permutations of a number N are defined as the sequences you can print by doing the following.
Keep two stacks say A and B.
Push numbers from 1 to N in reverse order in B. (so the top of B is 1 and the last element in B is N)
Do the following operations
Choose the top element from A or B and print it and delete it (pop it). This can be done on a non-empty stack only.
Move the top element from B to A (if B is non-empty)
If both stacks are empty then stop
All possible sequences obtained by doing these operations in some order are called stack permutations.
eg: N = 2
stack permutations are (1, 2) and (2, 1)
eg: N = 3
stack permutations are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1) and (3, 2, 1)
The number of stack permutations for N numbers is C(N), where C(N) is the Nth Catalan number: C(N) = (2N)! / ((N+1)! N!); for example, C(3) = 5, matching the five sequences above.
Suppose we generate all stack permutations for a given N and then print them in lexicographical (dictionary) order. How can we determine the kth permutation without actually generating all the permutations and then sorting them?
I am looking for algorithmic approaches that can be implemented.
You didn't say whether k should be 0-based or 1-based. I chose 0-based; switching back is easy.
The approach is to first write a function that counts how many stack permutations there are from a given decision point, using memoization to make it fast. Then proceed down the decision tree, skipping over decisions that lead to lexicographically smaller permutations (subtracting their counts from k). That yields the list of decisions producing the permutation you want.
def count_stack_permutations(on_b, on_a=0, can_take_from_a=True, cache={}):
    key = (on_b, on_a, can_take_from_a)
    if on_a < 0:
        return 0  # can't go negative
    elif on_b == 0:
        if can_take_from_a:
            return 1  # just drain a
        else:
            return 0  # got nothing
    elif key not in cache:
        # Drain b
        answer = count_stack_permutations(on_b - 1, on_a, True)
        # Drain a?
        if can_take_from_a:
            answer = answer + count_stack_permutations(on_b, on_a - 1, True)
        # Move from b to a.
        answer = answer + count_stack_permutations(on_b - 1, on_a + 1, False)
        cache[key] = answer
    return cache[key]
def find_kth_permutation(n, k):
    # The end of the array is the top
    a = []
    b = list(range(n, 0, -1))
    can_take_from_a = True  # a is empty at the start, so this doesn't matter yet
    answer = []
    while 0 < max(len(a), len(b)):
        action = None
        on_a = len(a)
        on_b = len(b)
        # If I can take from a, that is always smallest.
        if can_take_from_a:
            if count_stack_permutations(on_b, on_a - 1, True) <= k:
                k = k - count_stack_permutations(on_b, on_a - 1, True)
            else:
                action = 'a'
        # Taking from b is smaller than digging into b so I can take deeper.
        if action is None:
            if count_stack_permutations(on_b - 1, on_a, True) <= k:
                k = k - count_stack_permutations(on_b - 1, on_a, True)
            else:
                action = 'b'
        # Otherwise I will move.
        if action is None:
            if count_stack_permutations(on_b - 1, on_a + 1, False) <= k:
                return None  # k is out of range; should never happen for valid k
            else:
                action = 'm'
        if action == 'a':
            answer.append(a.pop())
            can_take_from_a = True
        elif action == 'b':
            answer.append(b.pop())
            can_take_from_a = True
        else:
            a.append(b.pop())
            can_take_from_a = False
    return answer
# And demonstrate it in action.
for k in range(0, 6):
print((k, find_kth_permutation(3, k)))
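If the bookkeeping above is right, this should print the five N = 3 stack permutations from the question in lexicographic order for k = 0 through 4, and None for k = 5, since C(3) = 5.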
This is possible using factoradics (https://en.wikipedia.org/wiki/Factorial_number_system).
If you need a quick solution in Java, use JNumberTools:
JNumberTools.permutationsOf("A", "B", "C")
    .uniqueNth(4) // next 4th permutation
    .forEach(System.out::println);
This API will generate the next nth permutation directly in lexicographic order, so you can even generate the next billionth permutation of 100 items.
For generating the next nth permutation of a given size, use:
JNumberTools.permutationsOf("A", "B", "C")
    .kNth(2, 4) // next 4th permutation of size 2
    .forEach(System.out::println);
The Maven dependency for JNumberTools is:
<dependency>
    <groupId>io.github.deepeshpatel</groupId>
    <artifactId>jnumbertools</artifactId>
    <version>1.0.0</version>
</dependency>
I've worked on this question, Convert Sorted Array to Binary Search Tree.
At first, I didn't know what should be returned in the end, so I created another function within the given one to carry out the recursion.
**case 1**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base_case
        if len(nums) < 2:
            return TreeNode(nums[-1])
        # recursive_case
        mid = len(nums) // 2
        node = TreeNode(nums[mid])
        node.left = bst(nums[:mid])
        node.right = bst(nums[mid + 1:])
    ans = bst(nums)
    return ans
but it kept giving me a 'time limit exceeded' or 'maximum recursion depth exceeded' error.
Then, as soon as I removed the inner bst function and did the same recursion in the given function (sortedArrayToBST) itself, the error was gone, just like magic...
**case 2**
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    if not nums:
        return None
    if len(nums) == 1:
        return TreeNode(nums[-1])
    # recursive_case
    mid = len(nums) // 2
    node = TreeNode(nums[mid])
    node.left = self.sortedArrayToBST(nums[:mid])
    node.right = self.sortedArrayToBST(nums[mid + 1:])
    return node
However, having said that, I can't see what's different between the two pieces of code. There must be a key difference, but I can't work it out on my own.
Could you please enlighten me on what the difference is between case 1 and case 2, and what causes the error in one but not the other?
In case 1, the length of the processed list does not decrease across recursive calls because, while the parameter of bst is num_list, it is nums that is processed in its body. The error would disappear in case 1 if num_list were processed (instead of nums) in bst.
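Concretely, a corrected case 1 might look like the sketch below. Note that, as posted, the inner function also never returns node and would crash on an empty slice, so both issues are fixed here as well:
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base case: an empty slice contributes no node
        if not num_list:
            return None
        # recursive case: the middle element becomes the root
        mid = len(num_list) // 2
        node = TreeNode(num_list[mid])
        node.left = bst(num_list[:mid])
        node.right = bst(num_list[mid + 1:])
        return node
    return bst(nums)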
This is another approach to the well-known Codility task about a frog crossing the river. (Sorry if this question is asked badly; this is my first post here.)
The goal is to find the earliest time at which the frog can jump to the other side of the river.
For example, given X = 5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
the function should return 6.
Example test: (5, [1, 3, 1, 4, 2, 3, 5, 4])
Full task content:
https://app.codility.com/programmers/lessons/4-counting_elements/frog_river_one/
So that was my first obvious approach:
def solution(X, A):
    lista = list(range(1, X + 1))
    if X < 1 or len(A) < 1:
        return -1
    found = -1
    for element in lista:
        if element in A:
            if A.index(element) > found:
                found = A.index(element)
        else:
            return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
This solution is 100% correct but gets 0% in the performance tests.
So I thought fewer lines and a list comprehension would get a better score:
def solution(X, A):
    if X < 1 or len(A) < 1:
        return -1
    try:
        found = max([A.index(element) for element in range(1, X + 1)])
    except ValueError:
        return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
This one also works and also gets 0% on performance, although it is faster anyway.
I also found a solution by deanalvero (https://github.com/deanalvero/codility/blob/master/python/lesson02/FrogRiverOne.py):
def solution(X, A):
    # write your code in Python 2.6
    frog, leaves = 0, [False] * X
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
        while leaves[frog]:
            frog += 1
            if frog == X:
                return minute
    return -1
This solution gets 100% in both the correctness and performance tests.
My question probably arises because I don't quite understand time complexity. Please tell me how the last solution is better than my second solution. It has a while loop inside a for loop! It should be slow, but it isn't.
Here is a solution with which you would get 100% in both correctness and performance.
def solution(X, A):
    i = 0
    dict_temp = {}
    while i < len(A):
        dict_temp[A[i]] = i
        if len(dict_temp) == X:
            return i
        i += 1
    return -1
The answer has already been given, but I'll add an alternative solution that I think might help you understand:
def save_frog(x, arr):
    # the steps the frog should make
    steps = set(range(1, x + 1))
    # the steps the frog can already make
    froggy_steps = set()
    for index, leaf in enumerate(arr):
        froggy_steps.add(leaf)
        if froggy_steps == steps:
            return index
    return -1
I think I got the best performance using set(). Take a look at the performance-test runtimes and compare them with yours.
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
            if len(positions) == X:
                return seconds
    return -1
The number of nested loops doesn't directly tell you anything about the time complexity. Let n be the length of the input array. The body of the while loop takes O(1) time on average, although its worst-case time complexity is O(n). The fast solution uses a boolean array leaves which, at every index, holds true if there is a leaf there and false otherwise. Over the entire run of the algorithm, the body of the while loop is executed no more than n times, and the outer for loop is also executed only n times. This means the time complexity of the algorithm is O(n).
The key is that both of your initial solutions are quadratic: they perform an O(n) inner scan (element in A, or A.index) for each of the values being sought, resulting in O(n**2).
The fast solution initially appears to suffer the same fate, as it obviously contains a loop within a loop. But the inner while loop does not get fully scanned for each leaf. Take a look at where frog gets initialized and you'll note that the while loop effectively picks up where it left off for each leaf.
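To put numbers on it: frog only ever increases and never exceeds X, so across the entire run the body of the while loop executes at most X times in total, giving O(n + X) work overall. By contrast, element in A and A.index(element) each scan up to n elements, and doing that for each of the X values sought gives O(n * X).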
Here is my 100% solution; it uses the sum of the arithmetic series.
def solution(X, A):
    covered = [False] * (X + 1)
    n = len(A)
    Sx = ((1 + X) * X) // 2  # sum of the arithmetic series 1 + 2 + ... + X
    for i in range(n):
        if not covered[A[i]]:
            Sx -= A[i]
            covered[A[i]] = True
            if Sx == 0:
                return i
    return -1
An optimized version of the solution from @sphoenix: there is no need to compare two sets, which is not really efficient.
def solution(X, A):
    found = set()
    for pos, i in enumerate(A, 0):
        if i <= X:
            found.add(i)
            if len(found) == X:
                return pos
    return -1
And one more optimized solution, using a boolean array:
def solution(X, A):
    steps, leaves = X, [False] * X
    for minute, leaf in enumerate(A, 0):
        if not leaves[leaf - 1]:
            leaves[leaf - 1] = True
            steps -= 1
            if 0 == steps:
                return minute
    return -1
The last one is better: it uses fewer resources. A set consumes more memory and CPU than a boolean list.
def solution(X, A):
    # if there are not enough items in the list
    if X > len(A):
        return -1
    # else check all items
    else:
        d = {}
        for i, leaf in enumerate(A):
            d[leaf] = i
            if len(d) == X:
                return i
        # if all else fails
        return -1
I tried to use instructions that are as simple as possible.
def solution(X, A):
    if X > len(A):  # simple check for no answer
        return -1
    elif X == 1:  # check for a single required position
        return 0
    else:
        std_set = {i for i in range(1, X + 1)}  # set of required positions
        this_set = set(A)  # set of unique elements in the list
        if sum(std_set) > sum(this_set):  # more involved check for no answer
            return -1
        else:
            for i in range(0, len(A)):
                if std_set:
                    if A[i] in std_set:
                        std_set.remove(A[i])  # remove each element found
                        if not std_set:  # if all removed, return the last filled position
                            return i
I guess this code might not pass the runtime tests, but it's the simplest I could think of.
I am using OrderedDict from collections, and the sum of the first X numbers, to check whether the frog will be able to cross at all.
def solution(X, A):
    from collections import OrderedDict as od
    if sum(set(A)) != (X * (X + 1)) // 2:
        return -1
    k = list(od.fromkeys(A).keys())[-1]
    for x, y in enumerate(A):
        if y == k:
            return x
This code gets 100% for correctness and performance and runs in O(N).
def solution(x, a):
    # write your code in Python 3.6
    # initialize all positions to zero;
    # x_positions has indices 0..x, i.e. for x = 2 it is [0, 0, 0]
    x_positions = [0] * (x + 1)
    min_time = -1
    for k in range(len(a)):
        # since we are looking for min time, ensure that you only
        # count the positions that matter
        if a[k] <= x and x_positions[a[k]] == 0:
            x_positions[a[k]] += 1
            min_time = k
        # ensure that all positions are available for the frog to jump
        if sum(x_positions) == x:
            return min_time
    return -1
100% performance using sets
def solution(X, A):
    positions = set()
    for i in range(len(A)):
        if A[i] not in positions:
            positions.add(A[i])
            if len(positions) == X:
                return i
    return -1
Problem: I am struggling to understand/visualize the dynamic-programming approach for "A type of balanced 0-1 matrix" in the Wikipedia article on dynamic programming.
Wikipedia link: https://en.wikipedia.org/wiki/Dynamic_programming#A_type_of_balanced_0.E2.80.931_matrix
I couldn't understand how the memoization works when dealing with a multidimensional array. For example, when solving the Fibonacci series with DP, using an array to store previous results is easy, as the array index itself identifies the state whose solution is stored there.
Can someone explain the DP approach for the "0-1 balanced matrix" in a simpler manner?
Wikipedia offered both a crappy explanation and a not-ideal algorithm. But let's work with it as a starting place.
First let's take the backtracking algorithm. Rather than filling the cells of the matrix "in some order", let's fill everything in the first row, then everything in the second row, then everything in the third row, and so on. Clearly that will work.
Now let's modify the backtracking algorithm slightly. Instead of going cell by cell, we'll go row by row. So we make a list of the n choose n/2 possible rows that are half 0s and half 1s. Then we have a recursive function that looks something like this:
def count_0_1_matrices(n, filled_rows=None):
    # (some_column_exceeds_threshold and possible_rows are assumed helpers here)
    if filled_rows is None:
        filled_rows = []
    if some_column_exceeds_threshold(n, filled_rows):
        # Cannot have more than n/2 0s or 1s in any column
        return 0
    elif len(filled_rows) == n:
        # All rows placed without overrunning a column: one valid matrix.
        return 1
    else:
        answer = 0
        for row in possible_rows(n):
            answer = answer + count_0_1_matrices(n, filled_rows + [row])
        return answer
This is a backtracking algorithm like what we had before; we are just doing whole rows at a time, not individual cells.
But notice that we're passing around more information than we need. There is no need to pass in the exact arrangement of rows. All we need to know is how many 1s are still needed in each column. So we can make the algorithm look more like this:
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    for i in still_needed:
        if i < 0:
            return 0
    # Did we reach the end of our matrix?
    if 0 == sum(still_needed):
        return 1
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, next_still_needed)
    return answer
This version is almost the recursive function in the Wikipedia version. The main difference is that our base case is simply that nothing is still needed once every row is finished, while Wikipedia would have us code up the base case to check the last row after every other row is done.
To get from this to a top-down DP, you only need to memoize the function, which in Python you can do by defining and then applying a @memoize decorator, like this:
from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap
But remember that I criticized the Wikipedia algorithm? Let's start improving it! The first big improvement is this: notice that the order of the elements of still_needed can't matter, only their values. So sorting the elements (and passing them as a tuple, so the memoized arguments stay hashable) will stop you from doing the calculation separately for each permutation. (There can be a lot of permutations!)
@memoize
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    for i in still_needed:
        if i < 0:
            return 0
    # Did we reach the end of our matrix?
    if 0 == sum(still_needed):
        return 1
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer
That little innocuous sorted doesn't look important, but it saves a lot of work: for example, the states (2, 1, 2, 1) and (1, 2, 1, 2) collapse into the single key (1, 1, 2, 2). And now that we know that still_needed is always sorted, we can simplify our checks for whether we are done and whether anything went negative, plus add an easy check to filter out the case where some column can no longer get enough 1s.
@memoize
def count_0_1_matrices(n, still_needed=None):
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column? (sorted ascending, so check the first element)
    if still_needed[0] < 0:
        return 0
    total = sum(still_needed)
    if 0 == total:
        # We reached the end of our matrix.
        return 1
    elif total*2/n < still_needed[-1]:
        # We have total*2/n rows left, but won't get enough 1s for a
        # column.
        return 0
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer
And, assuming you implement possible_rows, this should both work and be significantly more efficient than what Wikipedia offered.
=====
Here is a complete working implementation. On my machine it calculated the 6th term in under 4 seconds.
#! /usr/bin/env python

from sys import argv
from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@memoize
def count_0_1_matrices(n, still_needed=None):
    if 0 == n:
        return 1
    if still_needed is None:
        still_needed = [int(n/2) for _ in range(n)]
    # Did we overrun any column?
    if still_needed[0] < 0:
        return 0
    total = sum(still_needed)
    if 0 == total:
        # We reached the end of our matrix.
        return 1
    elif total*2/n < still_needed[-1]:
        # We have total*2/n rows left, but won't get enough 1s for a
        # column.
        return 0
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(n):
        next_still_needed = [still_needed[i] - row[i] for i in range(n)]
        answer = answer + count_0_1_matrices(n, tuple(sorted(next_still_needed)))
    return answer

@memoize
def possible_rows(n):
    return [row for row in _possible_rows(n, n // 2)]

def _possible_rows(n, k):
    if 0 == n:
        yield tuple()
    else:
        if k < n:
            for row in _possible_rows(n - 1, k):
                yield tuple(row + (0,))
        if 0 < k:
            for row in _possible_rows(n - 1, k - 1):
                yield tuple(row + (1,))

n = 2
if 1 < len(argv):
    n = int(argv[1])
print(count_0_1_matrices(2 * n))
You're memoizing states that are likely to be repeated. The state that needs to be remembered in this case is the vector (k is implicit). Let's look at one of the examples you linked to. Each pair in the vector argument (of length n) represents "the number of zeros and ones that have yet to be placed in that column."
Take the example on the left, where the vector is ((1, 1), (1, 1), (1, 1), (1, 1)) at k = 2, and the assignments leading to it were 1 0 1 0 at k = 3 and 0 1 0 1 at k = 4. But we could get to the same state, ((1, 1), (1, 1), (1, 1), (1, 1)) at k = 2, from a different set of assignments, for example 0 1 0 1 at k = 3 and 1 0 1 0 at k = 4. If we memoize the result for the state ((1, 1), (1, 1), (1, 1), (1, 1)), we can avoid recalculating the recursion for that branch again.
Please let me know if there's anything I could better clarify.
Further elaboration in response to your comment:
The Wikipedia example is pretty much a brute force with memoization. The algorithm attempts to enumerate all the matrices but uses memoization to exit early from repeated states. How do we enumerate all possibilities? To take their example, n = 4, we start with the vector [(2,2), (2,2), (2,2), (2,2)] where all zeros and ones are yet to be placed. (Since the sum of each tuple in the vector is k, we could use a simpler vector where only k and the count of either the ones or the zeros is maintained.)
At every stage k of the recursion, we enumerate all possible configurations for the next vector. If the state exists in our hash, we simply return the value for that key. Otherwise, we assign the vector as a new key in the hash (in which case this recursion branch will continue).
For example:
Vector: [(2,2), (2,2), (2,2), (2,2)]
Possible assignments of 1s: [1 1 0 0], [1 0 1 0], [1 0 0 1] ... etc.
First branch: [(2,1), (2,1), (1,2), (1,2)]
    Is this vector a key in the hash?
    If yes, return the value for that key.
    Else, assign this vector as a new key in the hash, where the value is the sum
    of the function calls with the next possible vectors as their arguments.
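If it helps to see that idea runnable, here is a minimal sketch of the state-keyed memoization described above (names are mine; it tracks only the remaining 1s per column, since with exactly n/2 ones per row the zeros take care of themselves):
from functools import lru_cache
from itertools import product

def count_balanced(n):
    # all rows containing exactly n/2 ones
    rows = [r for r in product((0, 1), repeat=n) if sum(r) == n // 2]

    @lru_cache(maxsize=None)
    def go(ones_left):
        # ones_left[i] = number of 1s column i still needs; this tuple is the memo key
        if any(c < 0 for c in ones_left):
            return 0  # some column overran
        if sum(ones_left) == 0:
            return 1  # all rows placed, every column satisfied
        return sum(go(tuple(c - r for c, r in zip(ones_left, row)))
                   for row in rows)

    return go(tuple([n // 2] * n))

print(count_balanced(4))  # 90 for the n = 4 example discussed above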
Building on the excellent answer by btilly (https://stackoverflow.com/users/585411/btilly), I've updated their algorithm to exclude "0" cases in the still_needed tuple. The code is about 50% faster, largely because of more cache hits on the collapsible tuple.
import time
from typing import Tuple
from sys import argv
from functools import cache

@cache
def possible_rows(n, k=None) -> Tuple[int]:
    if k is None:
        k = n / 2
    return [row for row in _possible_rows(n, k)]

def _possible_rows(n, k) -> Tuple[int]:
    if 0 == n:
        yield tuple()
    else:
        if k < n:
            for row in _possible_rows(n - 1, k):
                yield tuple(row + (0,))
        if 0 < k:
            for row in _possible_rows(n - 1, k - 1):
                yield tuple(row + (1,))

def count(n: int, k: int) -> int:
    if n == 0:
        return 1
    still_needed = tuple([k] * n)
    return count_0_1_matrices(k, still_needed)

@cache
def count_0_1_matrices(k: int, still_needed: Tuple[int]):
    """
    Assume still_needed contains only positive ints, and is sorted ascending.
    """
    # Calculate the answer by recursion.
    answer = 0
    for row in possible_rows(len(still_needed), k):
        # Decrement the still_needed tuple by the row tuple and keep only positive results.
        # Sorting is important for cache hits.
        next_still_needed = tuple(sorted([sn - r for sn, r in zip(still_needed, row) if sn > r]))
        # Only continue if we still need values and there are enough rows left.
        if not next_still_needed:
            answer += 1
        elif len(next_still_needed) >= k and sum(next_still_needed) >= next_still_needed[-1] * k:
            # sum / k -> how many rows are left; we need enough rows to continue down this path.
            answer += count_0_1_matrices(k, next_still_needed)
    return answer

if __name__ == "__main__":
    n = 7
    if 1 < len(argv):
        n = int(argv[1])
    start = time.time()
    result = count(2 * n, n)
    print(f"{result} in {time.time() - start} seconds")