Difference between case 1 vs case 2? - algorithm

I've been working on this question, Convert Sorted Array.
At first I didn't know what should be returned in the end, so I created another function inside the given one to carry out the recursion.
**case 1**
```python
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    def bst(num_list):
        # base_case
        if len(nums) < 2:
            return TreeNode(nums[-1])
        # recursive_case
        mid = len(nums) // 2
        node = TreeNode(nums[mid])
        node.left = bst(nums[:mid])
        node.right = bst(nums[mid + 1:])
    ans = bst(nums)
    return ans
```
But it kept giving me 'Time Limit Exceeded' or 'maximum recursion depth exceeded' as a result.
Then, as soon as I removed the inner bst function and did the same recursion in the given function (sortedArrayToBST) itself, the error was gone just like magic...
**case 2**
```python
def sortedArrayToBST(self, nums: List[int]) -> TreeNode:
    if not nums:
        return None
    if len(nums) == 1:
        return TreeNode(nums[-1])
    # recursive_case
    mid = len(nums) // 2
    node = TreeNode(nums[mid])
    node.left = self.sortedArrayToBST(nums[:mid])
    node.right = self.sortedArrayToBST(nums[mid + 1:])
    return node
```
However, I can't see what's different between the two pieces of code. There must be a key difference, but I can't work it out on my own.
Could you please enlighten me on what the difference is between case 1 and case 2, and what causes the error in one but not in the other?

In case 1, the length of the processed list never decreases across recursive calls: although the parameter of bst is num_list, its body slices and recurses on the outer nums, so every call works on the full array and the recursion never reaches the base case. The error would disappear in case 1 if num_list were processed (instead of nums) in bst.
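For reference, here is a sketch of case 1 with that fix applied (a standalone version: `self` and the type annotations are dropped, a minimal `TreeNode` stand-in is included, and the helper also returns the node it builds and handles the empty slice):

```python
class TreeNode:
    # minimal stand-in for LeetCode's TreeNode
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def sortedArrayToBST(nums):
    def bst(num_list):
        # base case: an empty slice produces no node
        if not num_list:
            return None
        # recursive case: the middle element becomes the root of this subtree
        mid = len(num_list) // 2
        node = TreeNode(num_list[mid])
        node.left = bst(num_list[:mid])
        node.right = bst(num_list[mid + 1:])
        return node
    return bst(nums)
```

Because every call now recurses on a strictly shorter slice, the recursion terminates.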

Related

Implementing iterative solution in a functionally recursive way with memoization

I am trying to solve the following problem on leetcode: Coin Change 2
Input: amount = 5, coins = [1, 2, 5]
Output: 4
Explanation: there are four ways to make up the amount:

```
5 = 5
5 = 2 + 2 + 1
5 = 2 + 1 + 1 + 1
5 = 1 + 1 + 1 + 1 + 1
```
I am trying to implement an iterative solution which essentially simulates/mimics recursion using a stack. I have managed to implement it, and the solution works, but it exceeds the time limit.
I have noticed that the recursive solutions make use of memoization to optimize. I would like to incorporate that in my iterative solution as well, but I am lost on how to proceed.
My solution so far:
```python
# stack to simulate recursion
stack = []
# add starting indexes and sum to stack
# tuple (x, y) where x is the sum, y is an index into the coins array
for i in range(0, len(coins)):
    if coins[i] <= amount:
        stack.append((coins[i], i))
result = 0
while len(stack) != 0:
    c = stack.pop()
    currentsum = c[0]
    currentindex = c[1]
    # can't explore further
    if currentsum > amount:
        continue
    # condition met, increment result
    if currentsum == amount:
        result = result + 1
        continue
    # add coin at current index to sum if it doesn't exceed amount (push "call" onto stack)
    if (currentsum + coins[currentindex]) <= amount:
        stack.append((currentsum + coins[currentindex], currentindex))
    # skip coin at current index (push "call" onto stack)
    if (currentindex + 1) <= len(coins) - 1:
        stack.append((currentsum, currentindex + 1))
return result
```
I have tried using a dictionary to record appends to the stack, as follows:

```python
# if the call has not already happened, add it to the dictionary
if dictionary.get((currentsum, currentindex + 1), None) == None:
    stack.append((currentsum, currentindex + 1))
    dictionary[(currentsum, currentindex + 1)] = 'visited'
```

For example, if the call (2, 1) with sum = 2 and coin-array index = 1 is made, I add it to the dictionary. If the same call is encountered again, I don't append it again. However, this does not work, since different combinations can have the same sum and index.
Is there any way I can incorporate memoization into my iterative solution above? I want to do it in a way that is functionally the same as the recursive solution.
I have managed to figure out the solution. Essentially, I used post-order traversal and a state variable to record the stage of recursion the current call is in. Using that stage, I can go bottom-up after going top-down.
The solution I came up with is as follows:
```python
def change(self, amount: int, coins: List[int]) -> int:
    if amount <= 0:
        return 1
    if len(coins) == 0:
        return 0
    d = dict()
    # currentsum, index, instruction
    coins.sort(reverse=True)
    stack = [(0, 0, 'ENTER')]
    calls = 0
    while len(stack) != 0:
        currentsum, index, instruction = stack.pop()
        if currentsum == amount:
            d[(currentsum, index)] = 1
            continue
        elif instruction == 'ENTER':
            stack.append((currentsum, index, 'EXIT'))
            if (index + 1) <= (len(coins) - 1):
                if d.get((currentsum, index + 1), None) == None:
                    stack.append((currentsum, index + 1, 'ENTER'))
            newsum = currentsum + coins[index]
            if newsum <= amount:
                if d.get((newsum, index), None) == None:
                    stack.append((newsum, index, 'ENTER'))
        elif instruction == 'EXIT':
            newsum = currentsum + coins[index]
            left = 0 if d.get((newsum, index), None) == None else d.get((newsum, index))
            right = 0 if d.get((currentsum, index + 1), None) == None else d.get((currentsum, index + 1))
            d[(currentsum, index)] = left + right
        calls = calls + 1
    print(calls)
    return d[(0, 0)]
```
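For comparison, the recursive solution that the stack machinery above mimics can be sketched with `functools.lru_cache` (my own sketch, not from the original post; `count` and its parameters are illustrative names, and the class wrapper is dropped):

```python
from functools import lru_cache

def change(amount, coins):
    """Number of combinations of coins that sum to amount."""
    @lru_cache(maxsize=None)
    def count(rem, i):
        # rem: amount still to make; i: index of the coin being considered
        if rem == 0:
            return 1
        if rem < 0 or i == len(coins):
            return 0
        # either use coin i again, or skip to coin i + 1
        return count(rem - coins[i], i) + count(rem, i + 1)
    return count(amount, 0)
```

The `(rem, i)` cache key here plays the same role as the `(currentsum, index)` key in the dictionary `d` of the iterative version.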

Minimum Jumps required from given number to 1

I am trying to solve the problem given below. I solved it using recursion, but I am having a hard time trying to use a cache to prevent a lot of the same steps being recalculated.
"""
Given a positive integer N, find the smallest number of steps it will take to reach 1.
There are two kinds of permitted steps:
You may decrement N to N - 1.
If a * b = N, you may decrement N to the larger of a and b.
For example, given 100, you can reach 1 in five steps with the following route:
100 -> 10 -> 9 -> 3 -> 2 -> 1.
"""
```python
def multiples(num):
    ret = []
    start = 2
    while start < num:
        if num % start == 0:
            ret.append(num // start)
        start += 1
        if ret and start >= ret[-1]:
            break
    return ret if ret else None

def min_jumps_no_cache(num, jumps=0):
    if num == 1:
        return jumps
    mults = multiples(num)
    res = []
    res.append(min_jumps_no_cache(num - 1, jumps + 1))
    if mults:
        for mult in mults:
            res.append(min_jumps_no_cache(mult, jumps + 1))
    return min(res)
```
Now I am trying to add a cache here because of the obvious high runtime of this solution. But I have run into a similar issue before, where I end up overwriting the cache, and I was curious whether there is a solution to this.
```python
def min_jumps_cache(num, jumps=0, cache={}):
    if num == 1:
        return jumps
    if num in cache:
        return cache[num]
    mults = multiples(num)
    res = []
    temp = min_jumps_cache(num - 1, jumps + 1, cache)
    res.append(temp)
    if mults:
        for mult in mults:
            res.append(min_jumps_cache(mult, jumps + 1, cache))
    temp = min(res)
    cache[num] = min(temp, cache[num]) if num in cache else temp
    return temp
```
It seems to me the issue here is you can't really cache an answer until you have calculated both its "left and right" solutions to find the answer. Is there something else I am missing here?
Your solution is fine as far as it goes.
However, it will be more efficient to do a breadth-first search from the bottom up. (This can trivially be optimized further; how is left as an exercise for the reader.)
```python
def path(n):
    path_from = {}
    queue = [(1, None)]
    while True:
        value, prev = queue.pop(0)
        if value not in path_from:
            path_from[value] = prev
            if value == n:
                break  # EXIT HERE
            queue.append((value + 1, value))
            for i in range(2, min(value + 1, n // value + 1)):
                queue.append((i * value, value))
    answer = []
    while n is not None:
        answer.append(n)
        n = path_from[n]
    return answer

print(path(100))
```
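A direct way to fix the caching problem identified in the question (a sketch, not the answer's code) is to memoize the distance from `num` down to 1, which does not depend on the path taken so far, instead of carrying `jumps` from the start:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_jumps(num):
    """Smallest number of steps from num down to 1."""
    if num == 1:
        return 0
    # option 1: decrement by one
    best = min_jumps(num - 1)
    # option 2: for every factorization num = d * (num // d) with d >= 2,
    # jump to the larger cofactor num // d
    d = 2
    while d * d <= num:
        if num % d == 0:
            best = min(best, min_jumps(num // d))
        d += 1
    return best + 1
```

Because a cached value no longer depends on how the number was reached, it can never be overwritten with a worse result; `min_jumps(100)` gives 5, matching the example route 100 -> 10 -> 9 -> 3 -> 2 -> 1.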

Implementing a PriorityQueue Algorithm

Here is my implementation of a priority queue. I have a feeling that my pop function is wrong, but I am not sure where exactly. I have checked multiple times where my logic might have gone wrong, but it seems to be perfectly correct (checked against the CLRS pseudocode).
```python
class PriorityQueue:
    """Array-based priority queue implementation."""

    def __init__(self):
        """Initially empty priority queue."""
        self.queue = []
        self.min_index = None

    def parent(self, i):
        return int(i / 2)

    def left(self, i):
        return 2 * i + 1

    def right(self, i):
        return 2 * i + 2

    def min_heapify(self, heap_size, i):
        # Min-heapify as written in CLRS
        smallest = i
        l = self.left(i)
        r = self.right(i)
        # print([l, r, len(self.queue), heap_size])
        try:
            if l <= heap_size and self.queue[l] < self.queue[i]:
                smallest = l
            else:
                smallest = i
        except IndexError:
            pass
        try:
            if r <= heap_size and self.queue[r] < self.queue[smallest]:
                smallest = r
        except IndexError:
            pass
        if smallest != i:
            self.queue[i], self.queue[smallest] = self.queue[smallest], self.queue[i]
            self.min_heapify(heap_size, smallest)

    def heap_decrease_key(self, i, key):
        # Implemented as specified in CLRS
        if key > self.queue[i]:
            raise ValueError("new key is larger than current key")
        # self.queue[i] = key
        while i > 0 and self.queue[self.parent(i)] > self.queue[i]:
            self.queue[i], self.queue[self.parent(i)] = self.queue[self.parent(i)], self.queue[i]
            i = self.parent(i)

    def __len__(self):
        # Number of elements in the queue.
        return len(self.queue)

    def append(self, key):
        """Inserts an element in the priority queue."""
        if key is None:
            raise ValueError('Cannot insert None in the queue')
        self.queue.append(key)
        heap_size = len(self.queue)
        self.heap_decrease_key(heap_size - 1, key)

    def min(self):
        """The smallest element in the queue."""
        if len(self.queue) == 0:
            return None
        return self.queue[0]

    def pop(self):
        """Removes the minimum element in the queue.

        Returns:
            The value of the removed element.
        """
        if len(self.queue) == 0:
            return None
        self._find_min()
        popped_key = self.queue[self.min_index]
        self.queue[0] = self.queue[len(self.queue) - 1]
        del self.queue[-1]
        self.min_index = None
        self.min_heapify(len(self.queue), 0)
        return popped_key

    def _find_min(self):
        # Computes the index of the minimum element in the queue.
        #
        # This method may crash if called when the queue is empty.
        if self.min_index is not None:
            return
        min = self.queue[0]
        self.min_index = 0
```
Any hint or input would be highly appreciated.
The main issue is that the parent function is wrong. Since it should do the opposite of the left and right methods, you need to subtract 1 from i before halving the value:

```python
def parent(self, i):
    return int((i - 1) / 2)
```
Other things to note:

- You don't really have a good use for the member self.min_index. It is either 0 or None, and the difference is not really used in your code, as it follows directly from whether the heap is empty or not. This also means you don't need the method _find_min (which in itself is strange: you assign to min, but never use that value). So drop that method and the line where you call it; also drop the line where you assign None to self.min_index, and in the only other place where you read the value, just use 0.
- You have two ways to protect against index errors in the min_heapify method: the <= heap_size tests and the try blocks. The first protection should really use < instead of <=, but you should use only one mechanism, not two. So either test less-than, or trap the exception.
- The else block with smallest = i is unnecessary, because at that point smallest == i already.
- min_heapify has a first parameter that always receives the full size of the heap, so it is an unnecessary parameter; it would also not make sense to ever call this method with another value for it. Drop that argument from the method definition and all calls, and define heap_size = len(self.queue) as a local name in that function.
- In heap_decrease_key you commented out the assignment self.queue[i] = key, which is fine as long as you never call this method to really decrease a key. But although you never do that from "inside" the class, a user of the class may well want to use it that way (that is what the method's name suggests), so better uncomment that assignment.
- With the above changes, your instance would only have queue as its data attribute. This is fine, but you could consider letting PriorityQueue inherit from list, so that you don't need this attribute either and can work directly with the list you inherit. Consequently, you would replace self.queue with self throughout the code, and you can drop the __init__ and __len__ methods, since the list implementations are just what you need. A bit of care is needed where you override a list method but still want to call the original, like append: in that case use super().append.
With all of the above changes applied:
```python
class PriorityQueue(list):
    """Array-based priority queue implementation."""

    def parent(self, i):
        return int((i - 1) / 2)

    def left(self, i):
        return 2 * i + 1

    def right(self, i):
        return 2 * i + 2

    def min_heapify(self, i):
        # Min-heapify as written in CLRS
        heap_size = len(self)
        smallest = i
        l = self.left(i)
        r = self.right(i)
        if l < heap_size and self[l] < self[i]:
            smallest = l
        if r < heap_size and self[r] < self[smallest]:
            smallest = r
        if smallest != i:
            self[i], self[smallest] = self[smallest], self[i]
            self.min_heapify(smallest)

    def heap_decrease_key(self, i, key):
        # Implemented as specified in CLRS
        if key > self[i]:
            raise ValueError("new key is larger than current key")
        self[i] = key
        while i > 0 and self[self.parent(i)] > self[i]:
            self[i], self[self.parent(i)] = self[self.parent(i)], self[i]
            i = self.parent(i)

    def append(self, key):
        """Inserts an element in the priority queue."""
        if key is None:
            raise ValueError('Cannot insert None in the queue')
        super().append(key)
        heap_size = len(self)
        self.heap_decrease_key(heap_size - 1, key)

    def min(self):
        """The smallest element in the queue."""
        if len(self) == 0:
            return None
        return self[0]

    def pop(self):
        """Removes the minimum element in the queue.

        Returns:
            The value of the removed element.
        """
        if len(self) == 0:
            return None
        popped_key = self[0]
        self[0] = self[-1]
        del self[-1]
        self.min_heapify(0)
        return popped_key
```
Your parent function is already wrong.
The root element of your heap is stored in array index 0, the children are in 1 and 2. The parent of 1 is 0, that is correct, but the parent of 2 should also be 0, whereas your function returns 1.
Usually the underlying array of a heap does not use index 0; instead the root element is at index 1. This way you can compute parent and children like this:

```
parent(i):      i // 2
left_child(i):  2 * i
right_child(i): 2 * i + 1
```
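As a sanity check, any correct min-priority queue should reproduce the behavior of Python's standard heapq module: repeated pops come out in ascending order. A minimal reference sketch (`heap_sort` is an illustrative name of mine):

```python
import heapq

def heap_sort(data):
    """Push everything, then pop repeatedly: a correct min-priority
    queue must return the elements in ascending order."""
    heap = []
    for x in data:
        heapq.heappush(heap, x)  # sift-up on insert, like heap_decrease_key in append
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

Draining the corrected PriorityQueue with pop() should match `heap_sort` on the same input.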

Python Codility Frog River One time complexity

So this is another approach to the probably well-known Codility task about the frog crossing the river. Sorry if this question is asked in a bad manner; this is my first post here.
The goal is to find the earliest time when the frog can jump to the other side of the river.
For example, given X = 5 and array A such that:

```
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
```

the function should return 6.
Example test: (5, [1, 3, 1, 4, 2, 3, 5, 4])
Full task content:
https://app.codility.com/programmers/lessons/4-counting_elements/frog_river_one/
So that was my first obvious approach:
```python
def solution(X, A):
    lista = list(range(1, X + 1))
    if X < 1 or len(A) < 1:
        return -1
    found = -1
    for element in lista:
        if element in A:
            if A.index(element) > found:
                found = A.index(element)
        else:
            return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
```
This solution is 100% correct but gets 0% in the performance tests.
So I thought fewer lines plus a list comprehension would get a better score:
```python
def solution(X, A):
    if X < 1 or len(A) < 1:
        return -1
    try:
        found = max([A.index(element) for element in range(1, X + 1)])
    except ValueError:
        return -1
    return found

X = 5
A = [1, 2, 4, 5, 3]
solution(X, A)
```
This one also works and also gets 0% in performance, but it's faster anyway.
I also found solution by deanalvero (https://github.com/deanalvero/codility/blob/master/python/lesson02/FrogRiverOne.py):
```python
def solution(X, A):
    # write your code in Python 2.6
    frog, leaves = 0, [False] * X
    for minute, leaf in enumerate(A):
        if leaf <= X:
            leaves[leaf - 1] = True
        while leaves[frog]:
            frog += 1
            if frog == X: return minute
    return -1
```
This solution gets 100% in correctness and performance tests.
My question probably arises because I don't quite understand this time-complexity thing. Please tell me how the last solution is better than my second one? It has a while loop inside a for loop! It should be slow, but it isn't.
Here is a solution in which you would get 100% in both correctness and performance.
```python
def solution(X, A):
    i = 0
    dict_temp = {}
    while i < len(A):
        dict_temp[A[i]] = i
        if len(dict_temp) == X:
            return i
        i += 1
    return -1
```
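The dictionary idea above can be checked against the example from the task. A self-contained variant (my own sketch; it adds a `leaf <= X` guard so out-of-range leaves are ignored, and `earliest_crossing` is an illustrative name):

```python
def earliest_crossing(X, A):
    # A[i] is the position a leaf falls on in minute i;
    # record the first minute each position 1..X is covered
    seen = {}
    for i, leaf in enumerate(A):
        if leaf <= X:
            seen[leaf] = i
        if len(seen) == X:
            return i  # every position 1..X now has a leaf
    return -1
```

For the example input (5, [1, 3, 1, 4, 2, 3, 5, 4]) this returns 6, as required.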
The answer has already been given, but I'll add an optional solution that I think might help you understand:
```python
def save_frog(x, arr):
    # creating the steps the frog should make
    steps = set([i for i in range(1, x + 1)])
    # creating the steps the frog already did
    froggy_steps = set()
    for index, leaf in enumerate(arr):
        froggy_steps.add(leaf)
        if froggy_steps == steps:
            return index
    return -1
```
I think I got the best performance using set().
Take a look at the performance test runtimes and compare them with yours:
```python
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
        if len(positions) == X:
            return seconds
    return -1
```
The number of nested loops doesn't directly tell you anything about the time complexity. Let n be the length of the input array. The inside of the while loop needs O(1) time on average, although its worst-case time complexity is O(n). The fast solution uses a boolean array leaves, which at every index holds True if there is a leaf there and False otherwise. Over the entire algorithm, the inside of the while loop is executed no more than n times in total. The outer for loop is also executed only n times. This means the time complexity of the algorithm is O(n).
The key is that both of your initial solutions are quadratic: they perform an O(n) inner scan for each element, resulting in O(n**2) overall.
The fast solution initially appears to suffer the same fate, as it obviously contains a loop within a loop. But the inner while loop does not get fully scanned for each leaf. Take a look at where frog gets initialized and you'll note that the while loop effectively picks up where it left off for each leaf.
Here is my 100% solution that uses the sum of the arithmetic progression.

```python
def solution(X, A):
    covered = [False] * (X + 1)
    n = len(A)
    Sx = ((1 + X) * X) // 2  # sum of the arithmetic progression 1..X
    for i in range(n):
        if not covered[A[i]]:
            Sx -= A[i]
            covered[A[i]] = True
            if Sx == 0:
                return i
    return -1
```
An optimized version of #sphoenix's solution: comparing two whole sets on every iteration is expensive and not really needed.
```python
def solution(X, A):
    found = set()
    for pos, i in enumerate(A, 0):
        if i <= X:
            found.add(i)
        if len(found) == X:
            return pos
    return -1
```
And one more optimized solution, using a boolean array:
```python
def solution(X, A):
    steps, leaves = X, [False] * X
    for minute, leaf in enumerate(A, 0):
        if not leaves[leaf - 1]:
            leaves[leaf - 1] = True
            steps -= 1
            if 0 == steps:
                return minute
    return -1
```
The last one is better: it uses fewer resources, since a set consumes more memory and CPU than a boolean list.
```python
def solution(X, A):
    # if there are not enough items in the list
    if X > len(A):
        return -1
    # else check all items
    else:
        d = {}
        for i, leaf in enumerate(A):
            d[leaf] = i
            if len(d) == X:
                return i
        # if all else fails
        return -1
```
I tried to use instructions that are as simple as possible.
```python
def solution(X, A):
    if (X > len(A)):  # check for no answer, simple
        return -1
    elif (X == 1):  # check for a single element
        return 0
    else:
        std_set = {i for i in range(1, X + 1)}  # set of standard order
        this_set = set(A)  # set of unique elements in the list
        if (sum(std_set) > sum(this_set)):  # check for no answer, complex
            return -1
        else:
            for i in range(0, len(A) - 1):
                if std_set:
                    if (A[i] in std_set):
                        std_set.remove(A[i])  # remove each element in standard set
                    if not std_set:  # if all removed, return last filled position
                        return i
```
I guess this code might not meet the runtime requirements, but it's the simplest I could think of.
I am using OrderedDict from collections, and the sum of the first n numbers, to check whether the frog will be able to cross at all.
```python
def solution(X, A):
    from collections import OrderedDict as od
    if sum(set(A)) != (X * (X + 1)) // 2:
        return -1
    k = list(od.fromkeys(A).keys())[-1]
    for x, y in enumerate(A):
        if y == k:
            return x
```
This code gets 100% for correctness and performance, and runs in O(N):
```python
def solution(x, a):
    # write your code in Python 3.6
    # initialize all positions to zero
    # i.e. if x = 2; x + 1 = 3
    # x_positions = [0, 1, 2]
    x_positions = [0] * (x + 1)
    min_time = -1
    for k in range(len(a)):
        # since we are looking for min time, ensure that you only
        # count the positions that matter
        if a[k] <= x and x_positions[a[k]] == 0:
            x_positions[a[k]] += 1
            min_time = k
        # ensure that all positions are available for the frog to jump
        if sum(x_positions) == x:
            return min_time
    return -1
```
100% performance using sets
```python
def solution(X, A):
    positions = set()
    for i in range(len(A)):
        if A[i] not in positions:
            positions.add(A[i])
            if len(positions) == X:
                return i
    return -1
```

Is it possible to convert this recursive solution (to print brackets) to an iterative version?

I need to print the different variations of valid sequences of the tags "<" and ">", given the number of times each tag should appear. Below is a solution in Python using recursion.
```python
def genBrackets(c):
    def genBracketsHelper(r, l, currentString):
        if l > r or r == -1 or l == -1:
            return
        if r == l and r == 0:
            print(currentString)
        genBracketsHelper(r, l - 1, currentString + '<')
        genBracketsHelper(r - 1, l, currentString + '>')
        return
    genBracketsHelper(c, c, '')

# display options with 4 tags
genBrackets(4)
```
I am having a hard time really understanding this and want to try to convert it into an iterative version, but I haven't had any success.
As per this thread, Can every recursion be converted into iteration?, it looks like it should be possible, and the only exception appears to be the Ackermann function.
If anyone has any tips on how to see the "stack" maintained in Eclipse, that would also be appreciated.
P.S. This is not a homework question; I am just trying to understand recursion-to-iteration conversion better.
Edit by Matthieu M.: an example of output for better visualization:

```
>>> genBrackets(3)
<<<>>>
<<><>>
<<>><>
<><<>>
<><><>
```
I tried to keep basically the same structure as your code, but using an explicit stack rather than function calls to genBracketsHelper:
```python
def genBrackets(c=1):
    # genBracketsStack is a list of tuples, each of which
    # represents the arguments to a call of genBracketsHelper.
    # Push the initial call onto the stack:
    genBracketsStack = [(c, c, '')]
    # This loop replaces genBracketsHelper itself
    while genBracketsStack != []:
        # Get the current arguments (now from the stack)
        (r, l, currentString) = genBracketsStack.pop()
        # Basically same logic as before
        if l > r or r == -1 or l == -1:
            continue  # Acts like return
        if r == l and r == 0:
            print(currentString)
        # Recursive calls are now pushes onto the stack
        genBracketsStack.append((r - 1, l, currentString + '>'))
        genBracketsStack.append((r, l - 1, currentString + '<'))
        # This is kept explicit since you had an explicit return before
        continue

genBrackets(4)
```
Note that the conversion I am using relies on all of the recursive calls being at the end of the function; the code would be more complicated if that wasn't the case.
You asked about doing this without a stack.
This algorithm walks the entire solution space, so it does a bit more work than the original versions, but it's basically the same concept:

- each string has a tree of possible suffixes in your grammar
- since there are only two tokens, it's a binary tree
- the depth of the tree will always be c*2, so...
- there must be 2**(c*2) paths through the tree

Since each path is a sequence of binary decisions, the paths correspond to the binary representations of the integers between 0 and 2**(c*2) - 1.
So: just loop through those numbers and see if the binary representation corresponds to a balanced string. :)
```python
def isValid(string):
    """
    True if and only if the string is balanced.
    """
    count = {'<': 0, '>': 0}
    for char in string:
        count[char] += 1
        if count['>'] > count['<']:
            return False  # premature closure
    if count['<'] != count['>']:
        return False  # unbalanced
    else:
        return True

def genBrackets(c):
    """
    Generate every possible combination and test each one.
    """
    for i in range(0, 2 ** (c * 2)):
        candidate = bin(i)[2:].zfill(c * 2).replace('0', '<').replace('1', '>')
        if isValid(candidate):
            print(candidate)
```
In general, a recursion creates a tree of calls, the root being the original call and the leaves being the calls that do not recurse.
A degenerate case is when each call performs only one other call; in this case the tree degenerates into a simple list. The transformation into an iteration is then simply achieved by using a stack, as demonstrated by #Jeremiah.
In the more general case, as here, each call performs (strictly) more than one call. You obtain a real tree, and there are therefore several ways to traverse it.
If you use a queue instead of a stack, you perform a breadth-first traversal. #Jeremiah presented a traversal for which I know no name. The typical "recursion" order is a depth-first traversal.
The main advantage of the typical recursion is that the length of the stack does not grow as much, so you should aim for depth-first in general... if the complexity does not overwhelm you :)
I suggest beginning by writing a depth-first traversal of a tree; once this is done, adapting it to your algorithm should be fairly simple.
EDIT: Since I had some time, I wrote the Python tree traversal; it's the canonical example:

```python
class Node:
    def __init__(self, el, children):
        self.element = el
        self.children = children

    def __repr__(self):
        return 'Node(' + str(self.element) + ', ' + str(self.children) + ')'

def depthFirstRec(node):
    print(node.element)
    for c in node.children:
        depthFirstRec(c)

def depthFirstIter(node):
    stack = [([node, ], 0), ]
    while stack != []:
        children, index = stack.pop()
        if index >= len(children):
            continue
        node = children[index]
        print(node.element)
        stack.append((children, index + 1))
        stack.append((node.children, 0))
```
Note that the stack management is slightly complicated by the need to remember the index of the child we were currently visiting.
And the adaptation of the algorithm following the depth-first order:

```python
def generateBrackets(c):
    # stack is a list of pairs children/index
    stack = [([(c, c, ''), ], 0), ]
    while stack != []:
        children, index = stack.pop()
        if index >= len(children):
            continue  # no more child to visit at this level
        stack.append((children, index + 1))  # register next child at this level
        l, r, current = children[index]
        if r == 0 and l == 0:
            print(current)
        # create the list of children of this node
        # (bypass if we are already unbalanced)
        if l > r:
            continue
        newChildren = []
        if l != 0:
            newChildren.append((l - 1, r, current + '<'))
        if r != 0:
            newChildren.append((l, r - 1, current + '>'))
        stack.append((newChildren, 0))
```
I just realized that storing the index is a bit "too" complicated, since I never visit back. The simple solution thus consists of removing the list elements I no longer need, treating the list as a queue (in fact, a stack would be sufficient)!
This applies with minimal transformation:

```python
def generateBrackets2(c):
    # stack is a list of queues of children
    stack = [[(c, c, ''), ], ]
    while stack != []:
        children = stack.pop()
        if children == []:
            continue  # no more child to visit at this level
        stack.append(children[1:])  # register next child at this level
        l, r, current = children[0]
        if r == 0 and l == 0:
            print(current)
        # create the list of children of this node
        # (bypass if we are already unbalanced)
        if l > r:
            continue
        newChildren = []
        if l != 0:
            newChildren.append((l - 1, r, current + '<'))
        if r != 0:
            newChildren.append((l, r - 1, current + '>'))
        stack.append(newChildren)
```
Yes.
```python
def genBrackets(c):
    stack = [(c, c, '')]
    while stack:
        right, left, currentString = stack.pop()
        if left > right or right == -1 or left == -1:
            pass
        elif right == left and right == 0:
            print(currentString)
        else:
            stack.append((right, left - 1, currentString + '<'))
            stack.append((right - 1, left, currentString + '>'))
```

The output order is different, but the results should be the same.
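To check that such a stack-based conversion really preserves the recursive output, the two versions can be compared side by side (a sketch of mine: the functions collect strings into a list instead of printing, and the names are illustrative):

```python
def gen_recursive(c):
    """Recursive version, collecting results instead of printing."""
    out = []
    def helper(r, l, s):
        if l > r or r < 0 or l < 0:
            return
        if r == 0 and l == 0:
            out.append(s)
            return
        helper(r, l - 1, s + '<')
        helper(r - 1, l, s + '>')
    helper(c, c, '')
    return out

def gen_iterative(c):
    """Stack-based version of the same traversal."""
    out = []
    stack = [(c, c, '')]
    while stack:
        r, l, s = stack.pop()
        if l > r or r < 0 or l < 0:
            continue
        if r == 0 and l == 0:
            out.append(s)
            continue
        # pushes replace the two recursive calls
        stack.append((r, l - 1, s + '<'))
        stack.append((r - 1, l, s + '>'))
    return out
```

For c = 3, both produce the same five strings shown in the edit above (the Catalan number C_3), just in a different order.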
