In place min-max tree invalidation problems - algorithm

I’m trying to build a parallel implementation of a min-max search. My current approach is to materialize the tree to a small depth and then do the normal thing from each of these nodes.
The simple way to do this is to compute the heuristic value for each leaf and then sweep up and compute the min/max. The problem is that this omits alpha/beta pruning at the upper levels, which makes for a major performance hit.
My first “solution” to that was to push the min/max up after each leaf is computed. This gives a continuously updating value, so I can scan up the tree and check whether a leaf should be pruned.
The problem is that it's totally broken. (Two days of debugging to notice that; darn, I feel stupid.)
Now for the question:
Is there a way to build a min-max tree that allows the leaves to be evaluated in random order and still allows alpha/beta pruning?

Check out parallel game tree search, e.g. this paper.

I think I have found a solution but I don't like it in a few regards:
Annotate the tree with the number of unfinished children.
After a leaf is evaluated, update its parent and decrement the parent's count
If that count just reached zero, update its parent and decrement that count
Lather, rinse, repeat
Alpha/beta pruning works as expected.
The problem with this is that with random-order evaluation, a lot more nodes will get evaluated before anything starts getting pruned. On the other hand, that might be mitigated by better ordering of the leaves.
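Here is a minimal Python sketch of that bookkeeping, assuming worker threads finish leaves in arbitrary order (illustrative names; the alpha/beta pruning test itself is left as a hook, since it depends on the surrounding search):
import threading
class Node:
    def __init__(self, parent=None, is_max=True):
        self.parent = parent
        self.is_max = is_max                # max node or min node
        self.value = None                   # running min/max of finished children
        self.unfinished = 0                 # annotated count of unfinished children
        self.lock = threading.Lock()        # leaves may finish on any thread
    def add_child(self):
        self.unfinished += 1
        return Node(parent=self, is_max=not self.is_max)
    def child_finished(self, value):
        """Called with a finished child's value; a leaf calls this on its parent."""
        with self.lock:
            best = max if self.is_max else min
            self.value = value if self.value is None else best(self.value, value)
            self.unfinished -= 1
            done = self.unfinished == 0
        # pruning hook: scan the ancestors' running values here (alpha/beta test)
        if done and self.parent is not None:
            self.parent.child_finished(self.value)   # lather, rinse, repeat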

Related

What are some true Iterative Depth First Search implementation suggestions?

So from what I've seen, I have been taught, like most people, that the iterative version of DFS is just like iterative BFS apart from two differences: replace the queue with a stack, and mark a node as discovered after POP rather than after PUSH.
Two questions have really been puzzling me recently:
For certain cases it will result in different output, but is it really necessary to mark a node as visited after we POP? Why is it wrong to do it after we PUSH? As far as I can see, this will serve the same purpose as it does for BFS: not having duplicates in our queue/stack.
Now the big one: my impression is that this kind of implementation of iterative DFS is not true DFS at all. If we think about the recursive version, it is quite space-efficient, since it doesn't store all the possible neighbours at one level (as we would do in the iterative version); it only selects one and goes with it, and then it backtracks and goes for a second one. As an extreme example, think of a graph with one node in the center connected to 100 leaf nodes. In the recursive implementation, if we start from the middle node, the underlying stack will grow to at most 2: one for the middle node and one for whichever leaf it is visiting. If we do it as we have been taught with the iterative version, the stack will grow to 100 elements. That doesn't seem right.
So with all those previous details in mind, my question is what would the approach be to have a true iterative DFS implementation?
Question 1: If you check+mark before the push, it uses less space but changes the order in which nodes are visited. You will visit everything, though.
Question 2: You are correct that an iterative DFS will usually put all the children of a node onto the stack at the same time. This increases the space used for some graphs, but it doesn't change the worst case space usage, and it's the easiest way so there's usually no reason to change that.
Occasionally you know that it will save a lot of space if you don't do this, and then you can write an iterative DFS that works more like the recursive one. Instead of pushing the next nodes to visit on the stack, you push a parent and a position in its list of children, or equivalent, which is pretty much what the recursive version has to remember when it recurses. In pseudo-code, it looks like this:
func DFS(start):
    let visited = EMPTY_SET
    let stack = EMPTY_STACK
    visited.add(start)
    visit(start)
    stack.push( (start, 0) )
    while (!stack.isEmpty()):
        let (parent, pos) = stack.pop()
        if (pos < parent.numChildren()):
            let child = parent.child[pos]
            stack.push( (parent, pos + 1) )
            if (!visited.contains(child)):
                visited.add(child)
                visit(child)
                stack.push( (child, 0) )
You can see that it's a little more complicated, and the records you push on the stack are tuples, which is annoying in some cases. Often we'll use two stacks in parallel instead of creating tuples to push, or we'll push/pop two records at a time, depending on how nodes and child list positions have to be represented.
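If you want it as runnable code, here is a direct Python translation of the pseudocode above (a sketch, assuming each node exposes a children list and nodes are hashable):
def dfs(start, visit):
    visited = {start}
    visit(start)
    stack = [(start, 0)]                     # (parent, index of next child to try)
    while stack:
        parent, pos = stack.pop()
        if pos < len(parent.children):
            child = parent.children[pos]
            stack.append((parent, pos + 1))  # come back later for the next child
            if child not in visited:
                visited.add(child)
                visit(child)
                stack.append((child, 0))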

Fair deletion of nodes in Binary Search Tree

The idea of deleting a node in a BST is:
If the node has no children, delete it and set its parent's pointer to it to null
If the node has one child, replace the node with its child by updating the node's parent's pointer to point to that child
If the node has two children, find the node's predecessor and replace the node with it, then update the predecessor's parent's pointer to point to the predecessor's only possible child (which can only be a left child)
The last case can also be done with a successor instead of a predecessor!
It's said that if we use the predecessor in some cases and the successor in others (giving them equal priority), we can get better empirical performance.
Now the question is: how is it done? Based on what strategy? And how does it affect the performance? (I guess by performance they mean time complexity.)
What I think is that we have to choose the predecessor or successor so as to keep the tree more balanced, but I don't know how to choose which one to use!
One solution is to randomly choose one of them (fair randomness), but isn't it better to base the strategy on the tree structure? The question is WHEN to choose WHICH.
The thing is that this is a fundamental problem: finding a correct removal algorithm for BSTs. People have been trying to solve it for 50 years (just like in-place merge) and haven't found anything better than the usual algorithm (removal via predecessor/successor). So, what is wrong with the classic algorithm? Actually, this kind of removal unbalances the tree. After a long sequence of random add/remove operations you'll get an unbalanced tree with expected height around sqrt(n). And it doesn't matter what you choose - removing the successor or the predecessor (or choosing randomly between the two) - the result is the same.
So, what to choose? I'm guessing randomly choosing between successor and predecessor will postpone the unbalancing of your tree. But if you want a perfectly balanced tree, you have to use red-black trees or something like that.
As you said, it's a question of balance, so in general the method that disturbs the balance the least is preferable. You can maintain some metrics to measure the level of balance (e.g., the difference between the maximal and minimal leaf height, the average height, etc.), but I'm not sure whether the overhead is worth it. Also, there are self-balancing data structures (red-black trees, AVL trees, etc.) that mitigate this problem by rebalancing after each deletion. If you want to use a basic BST, I suppose the best strategy without a priori knowledge of the tree structure and the deletion sequence would be to toggle between the two methods on each deletion.
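A minimal sketch of that toggling strategy in Python, assuming a plain unbalanced BST (illustrative class names, not from any particular library):
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
class BST:
    def __init__(self):
        self.root = None
        self.use_pred = True            # toggled on every two-child deletion
    def insert(self, key):
        def ins(node, key):
            if node is None:
                return TreeNode(key)
            if key < node.key:
                node.left = ins(node.left, key)
            elif key > node.key:
                node.right = ins(node.right, key)
            return node
        self.root = ins(self.root, key)
    def delete(self, key):
        self.root = self._delete(self.root, key)
    def _delete(self, node, key):
        if node is None:
            return None
        if key < node.key:
            node.left = self._delete(node.left, key)
        elif key > node.key:
            node.right = self._delete(node.right, key)
        elif node.left is None:         # zero or one child: splice it out
            return node.right
        elif node.right is None:
            return node.left
        else:                           # two children: alternate the two methods
            if self.use_pred:
                repl = node.left        # predecessor = max of left subtree
                while repl.right:
                    repl = repl.right
                node.key = repl.key
                node.left = self._delete(node.left, repl.key)
            else:
                repl = node.right       # successor = min of right subtree
                while repl.left:
                    repl = repl.left
                node.key = repl.key
                node.right = self._delete(node.right, repl.key)
            self.use_pred = not self.use_pred
        return node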

What invariant do RRB-trees maintain?

Relaxed Radix Balanced Trees (RRB-trees) are a generalization of immutable vectors (used in Clojure and Scala) that have 'effectively constant' indexing and update times. RRB-trees maintain efficient indexing and update but also allow efficient concatenation (log n).
The authors present the data structure in a way that I find hard to follow. I am not quite sure what the invariant is that each node maintains.
In section 2.5, they describe their algorithm. I think they are ensuring that indexing into the node will only ever require e extra steps of linear search after radix searching. I do not understand how they derived their formula for the extra steps, perhaps because I'm not sure what each of the variables means (in particular "a total of p sub-tree branches").
Also: how does the RRB-tree concatenation algorithm work?
They do describe an invariant in section 2.4: "However, as mentioned earlier B-Trees nodes do not facilitate radix searching. Instead we chose the initial invariant of allowing the node sizes to range between m and m - 1. This defines a family of balanced trees starting with well known 2-3 trees, 3-4 trees and (for m=32) 31-32 trees. This invariant ensures balancing and achieves radix branch search in the majority of cases. Occasionally a few step linear search is needed after the radix search to find the correct branch. The extra steps required increase at the higher levels."
Looking at their formula, it looks like they have worked out the maximum and minimum possible number of values stored in a subtree. The difference between the two is the maximum possible difference between the maximum and minimum number of values underneath a point. If you divide this by the number of values underneath a slot, you have the maximum number of slots you could be off by when you work out which slot to look at to see if it contains the index you are searching for.
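To make that concrete, here is a back-of-the-envelope check in Python, assuming the quoted invariant that node sizes range between m-1 and m: a subtree of height h then holds between (m-1)^h and m^h values, and dividing the difference by the fewest values under one slot bounds how many slots the radix guess can be off by.
m = 32
for h in (1, 2, 3):
    max_vals = m ** h                  # every node full
    min_vals = (m - 1) ** h            # every node at the minimum size
    per_slot = (m - 1) ** (h - 1)      # fewest values under a single slot
    print(h, (max_vals - min_vals) / per_slot)
# prints roughly 1.0, 2.03, 3.10 - the bound grows with the level, matching
# "The extra steps required increase at the higher levels."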
@mcdowella is correct - that's what they say about relaxed nodes. But if you're splitting and joining nodes, a range from m to m-1 means you will sometimes have to adjust up to m-1 (m-2?) nodes in order to add or remove a single element from a node. This seems horribly inefficient. I think they meant between m and 2m - 1, because this allows nodes to be split in two when they get too big, or two nodes to be joined into one when they are too small, without ever needing to change a third node. So the "2" missing from "2m" in the paper looks like a typo. Jean Niklas L'orange's master's thesis backs me up on this.
Furthermore, all strict nodes have the same length which must be a power of 2. The reason for this is an optimization in Rich Hickey's Clojure PersistentVector. Well, I think the important thing is to pack all strict nodes left (more on this later) so you don't have to guess which branch of the tree to descend. But being able to bit-shift and bit-mask instead of divide is a nice bonus. I didn't time the get() operation on a relaxed Scala Vector, but the relaxed Paguro vector is about 10x slower than the strict one. So it makes every effort to be as strict as possible, even producing 2 strict levels if you repeatedly insert at 0.
Their tree also has an even height - all leaf nodes are an equal distance from the root. I think it would still work if relaxed trees only had to be within, say, one level of one another, though I'm not sure what that would buy you.
Relaxed nodes can have strict children, but not vice-versa.
Strict nodes must be filled from the left (low index) without gaps. Any non-full strict nodes must be on the right-hand (high-index) edge of the tree. All strict leaf nodes can be kept full if you do appends in a focus or tail (more on that below).
You can see most of the invariants by searching for the debugValidate() methods in the Paguro implementation. That's not their paper, but it's mostly based on it. Actually, the "display" variables in the Scala implementation aren't mentioned in the paper either. If you're going to study this stuff, you probably want to start by taking a good look at the Clojure PersistentVector, because the RRB tree has one inside it. The two differences between that and the RRB tree are: 1. the RRB tree allows "relaxed" nodes, and 2. the RRB tree may have a "focus" instead of a "tail." Both focus and tail are small buffers (maybe the same size as a strict leaf node); the difference is that the focus will probably be localized to whatever area of the vector was last inserted/appended to, while the tail is always at the end (PersistentVector can only be appended to, never inserted into). These two differences are what allow O(log n) arbitrary inserts and removals, plus O(log n) split() and join() operations.

Binary Search Tree for specific intent

We all know there are plenty of self-balancing binary search trees (BSTs), the most famous being the red-black tree and the AVL tree. It might be useful to take a look at AA trees and scapegoat trees too.
I want to do deletions, insertions, and searches, like any other BST. However, it will be common to delete all values in a given range, or to delete whole subtrees. So:
I want to insert, search, remove values in O(log n) (balanced tree).
I would like to delete a subtree, keeping the whole tree balanced, in O(log n) (worst-case or amortized)
It might be useful to delete several values in a row, before balancing the tree
I will most often insert 2 values at once, however this is not a rule (just a tip in case there is a tree data structure that takes this into account)
Is there a variant of AVL or RB trees that helps me with this? Scapegoat trees look closer to what I need, but would also need some changes; can anyone with experience with them share some thoughts?
More precisely, which balancing procedure and/or removal procedure would help me keep these actions time-efficient?
It is possible to delete a range of values from a BST in O(log n + number of objects in the range).
The easiest way I know is to work with the Deterministic Skip List data structure (you might want to read a bit about this data structure before you go on).
In a deterministic skip list all of the real values are stored in the bottom level, and there are pointers on the upper levels to them. Insert, search, and remove are done in O(log n).
The range deletion operation can be done according to the following algorithm:
Find the first element in the range - O(log n)
Go forward in the linked list and remove all elements that are still in the range. If there are elements with pointers to the upper levels, remove them too, up to the topmost level (removal from a linked list) - O(number of deleted objects)
Fix the pointers to fit the deterministic skip list invariant (2-3 elements between every pointer upward)
The total complexity of the range deletion is O(log n + number of objects in the range).
Notice that if you choose to work with a randomized skip list, you get the same complexity, but on average rather than worst case. The plus is that you don't have to fix the upper-level pointers to meet the 2-3 demand.
A deterministic skip list has a 1-1 mapping to a 2-3 tree, so with some more work, the procedure described above could work for a 2-3 tree as well.
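Since the deterministic version's pointer fix-up is the fiddly part, here is a minimal Python sketch of the randomized variant mentioned above (illustrative names; bounds are expected, not worst case):
import random
class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)   # forward[i] = next node at level i
class SkipList:
    MAX_LEVEL = 16
    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)
        self.level = 0
    def _random_level(self):
        lvl = 0
        while lvl < self.MAX_LEVEL and random.random() < 0.5:
            lvl += 1
        return lvl
    def insert(self, key):
        update, node = [self.head] * (self.MAX_LEVEL + 1), self.head
        for i in range(self.level, -1, -1):   # predecessors of key at each level
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new
    def delete_range(self, lo, hi):
        """Remove all keys in [lo, hi]: expected O(log n) to find the range
        start, then O(k) to unlink the k deleted nodes."""
        update, node = [self.head] * (self.MAX_LEVEL + 1), self.head
        for i in range(self.level, -1, -1):   # predecessors of lo at each level
            while node.forward[i] and node.forward[i].key < lo:
                node = node.forward[i]
            update[i] = node
        for i in range(self.level + 1):       # splice each level past hi
            nxt = update[i].forward[i]
            while nxt and nxt.key <= hi:
                nxt = nxt.forward[i]
            update[i].forward[i] = nxt
    def keys(self):
        out, node = [], self.head.forward[0]
        while node:
            out.append(node.key)
            node = node.forward[0]
        return out
# sl = SkipList(); [sl.insert(k) for k in range(20)]; sl.delete_range(5, 14)
# sl.keys() -> [0, 1, 2, 3, 4, 15, 16, 17, 18, 19]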
Long ago, in the pre-STL days, I wrote my own B-Tree (BST) algorithm because I had a rather large data set at the time (roughly 700K items in 2 trees that were interdependent). I found that rebalancing after every 100-200 insertions/deletions was the peak performance I could get at the time, based on experimentation on 486 and SGI hardware. This number may be different now, or maybe not, since it does appear to be an algorithmic optimization limit unless you convert to a parallel model.
In short, you could apply a modification trigger for the rebalancing, and allow for forced rebalancing when you've completed all your modifications.
The improvement was remarkable. The initial straight load had not completed after 25 minutes (I killed the process). Rebalancing as we went was also killed, after 15 minutes. The restricted-modification load with a rebalance every 100 mods loaded and ran in less than 3 minutes. Note that during the "run" portion, there were 0-8 modifications to the tree per initial entry. You really need to consider whether you always need to be in balance when the tree will be modified again in the near term.
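In today's terms the trigger might look like this sketch (a hypothetical wrapper, assuming the underlying tree exposes insert/delete/rebalance; the threshold of 100 is the value found empirically above):
class BatchRebalancingTree:
    REBALANCE_EVERY = 100               # empirically-found sweet spot above
    def __init__(self, tree):
        self.tree = tree                # assumed: insert/delete/rebalance
        self.mods = 0
    def _modified(self):
        self.mods += 1
        if self.mods >= self.REBALANCE_EVERY:
            self.flush()
    def insert(self, key):
        self.tree.insert(key)
        self._modified()
    def delete(self, key):
        self.tree.delete(key)
        self._modified()
    def flush(self):
        """Forced rebalance once a batch of modifications is complete."""
        self.tree.rebalance()
        self.mods = 0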
Hmm, what about B-trees? They are also balanced, and if you choose a big-order one --- it depends on how many items you have --- you will save a bunch of object creation/destruction time.
To 2: If you have a B-tree of order 100, you can remove up to 100 items with one function call.
To 3: This feature can be applied to almost any tree; just implement a RemoveSome() function that removes N items and does a rebalance. For B-trees it's a bit trickier, but it can be done.
Note: I assume you're a programmer. If you need a complete, tested, off-the-shelf solution, you need another answer.
It should be easy to implement deleting a node and its subtree in an AVL tree if every node stores its height instead of a balance factor. After deleting the node, keep rotating until the heights of the two child subtrees differ by no more than one. Then move up the tree and repeat. The only real difference from a normal deletion is a while instead of an if when testing the heights.
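A hedged sketch of that idea in Python, with heights stored on the nodes (illustrative, not production-tested; the while loop where a normal delete would have an if is the point):
def _h(n):
    return n.height if n else 0
class AVLNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(_h(left), _h(right))
def _fix(n):
    n.height = 1 + max(_h(n.left), _h(n.right))
    return n
def _rot_left(n):
    r = n.right
    n.right, r.left = r.left, n
    _fix(n)
    return _fix(r)
def _rot_right(n):
    l = n.left
    n.left, l.right = l.right, n
    _fix(n)
    return _fix(l)
def _rebalance(n):
    # a subtree deletion can leave an imbalance much bigger than one, so keep
    # rotating (while, not if) until the children's heights differ by <= 1;
    # leftover imbalance is pushed down a level and fixed recursively
    _fix(n)
    while abs(_h(n.left) - _h(n.right)) > 1:
        if _h(n.left) > _h(n.right):
            if _h(n.left.left) < _h(n.left.right):
                n.left = _rot_left(n.left)       # left-right case
            n = _rot_right(n)
            n.right = _rebalance(n.right)
        else:
            if _h(n.right.right) < _h(n.right.left):
                n.right = _rot_right(n.right)    # right-left case
            n = _rot_left(n)
            n.left = _rebalance(n.left)
        _fix(n)
    return n
def insert(root, key):
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return _rebalance(root)
def delete_subtree(root, key):
    """Remove the node with `key` and everything below it, rebalancing upward."""
    if root is None:
        return None
    if key == root.key:
        return None                              # drop the whole subtree
    if key < root.key:
        root.left = delete_subtree(root.left, key)
    else:
        root.right = delete_subtree(root.right, key)
    return _rebalance(root)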
The Set implementation in the OCaml standard library is a purely functional AVL tree that satisfies all of your requirements and, in particular, has very efficient implementations of set theoretic operations (union, intersection, difference). Insertion and deletion are O(log n). You can remove subtrees and runs of elements by representing them as a set and using set difference. You can insert two elements simultaneously by creating a 2-element set and applying set union.

What's the difference between backtracking and depth first search?

What's the difference between backtracking and depth first search?
Backtracking is a more general purpose algorithm.
Depth-First search is a specific form of backtracking related to searching tree structures. From Wikipedia:
One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
It uses backtracking as part of its means of working with a tree, but is limited to a tree structure.
Backtracking, though, can be used on any type of structure where portions of the domain can be eliminated - whether or not it is a logical tree. The Wiki example uses a chessboard and a specific problem - you can look at a specific move, and eliminate it, then backtrack to the next possible move, eliminate it, etc.
I think this answer to another related question offers more insights.
For me, the difference between backtracking and DFS is that backtracking handles an implicit tree while DFS deals with an explicit one. This seems trivial, but it means a lot. When the search space of a problem is visited by backtracking, the implicit tree gets traversed and pruned in the middle of the traversal. For DFS, by contrast, the tree/graph it deals with is explicitly constructed, and unacceptable cases have already been thrown away, i.e. pruned, before any search is done.
So, backtracking is DFS for an implicit tree, while DFS is backtracking without pruning.
IMHO, most of the answers are either largely imprecise or without any reference to verify them. So let me share a very clear explanation with a reference.
First, DFS is a general graph traversal (and search) algorithm, so it can be applied to any graph (or even a forest). A tree is a special kind of graph, so DFS works for trees as well. In essence, let's stop saying it works only for trees, or the like.
Based on [1], backtracking is a special kind of DFS used mainly to save space (memory). The distinction I'm about to mention may seem confusing, since in graph algorithms of this kind we're so used to having adjacency-list representations and using an iterative pattern for visiting all immediate neighbours of a node (for a tree, its immediate children) that we often ignore that a bad implementation of get_all_immediate_neighbors may change the memory use of the underlying algorithm.
Further, if a graph node has branching factor b and diameter h (for a tree this is the tree height), then if we store all immediate neighbours at each step of visiting a node, memory requirements are big-O(bh). However, if we take only a single (immediate) neighbour at a time and expand it, the memory complexity reduces to big-O(h). While the former kind of implementation is termed DFS, the latter kind is called backtracking.
Now you see that if you're working with high-level languages, most likely you're actually using backtracking in the guise of DFS. Moreover, keeping track of visited nodes for a very large problem set can be really memory-intensive, calling for a careful design of get_all_immediate_neighbors (or algorithms that can handle revisiting a node without getting into infinite loops).
[1] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd Ed
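A hedged Python illustration of [1]'s distinction, for a tree (a graph would additionally need the visited-set handling discussed above); children() is a stand-in for get_all_immediate_neighbors:
def dfs_all_children(root, children, visit):
    """Materializes every child of a node at once: frontier can hold O(b*h) nodes."""
    stack = [root]
    while stack:
        node = stack.pop()
        visit(node)
        stack.extend(reversed(children(node)))   # all b children pushed together
def backtracking_one_child(root, children, visit):
    """Expands one child at a time: stack holds at most O(h) (node, index) pairs."""
    visit(root)
    stack = [(root, 0)]
    while stack:
        node, i = stack.pop()
        kids = children(node)
        if i < len(kids):
            stack.append((node, i + 1))          # remember where to resume
            visit(kids[i])
            stack.append((kids[i], 0))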
According to Donald Knuth, it's the same.
Here is the link to his paper about the Dancing Links algorithm, which is used to solve such "non-tree" problems as N-queens and Sudoku.
Backtracking, also called depth-first search
Backtracking is usually implemented as DFS plus search pruning. You traverse the search-space tree depth-first, constructing partial solutions along the way. Brute-force DFS can construct all search outcomes, even ones that make no practical sense; it can also be very inefficient to construct all solutions (n! or 2^n of them). So in reality, as you do DFS, you also need to prune partial solutions that make no sense in the context of the actual task, and focus on the partial solutions that can lead to valid optimal solutions. This is the actual backtracking technique: you discard partial solutions as early as possible, take a step back, and try to find a local optimum again.
Nothing stops you from traversing the search-space tree using BFS and executing the backtracking strategy along the way, but it doesn't make sense in practice, because you would need to store the search state layer by layer in the queue, and tree width grows exponentially with height, so we would waste a lot of space very quickly. That's why trees are usually traversed using DFS. In that case the search state is stored in the stack (the call stack or an explicit structure), and it can't exceed the tree height.
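As a tiny illustration (a hypothetical example, assuming non-negative item weights so the prune is safe): enumerate only the subsets whose total stays within a budget, discarding a partial solution the moment it overshoots.
def subsets_within_budget(items, budget):
    """All subsets of non-negative `items` whose sum stays within `budget`."""
    out = []
    def dfs(i, chosen, total):
        if total > budget:          # prune: this partial solution is hopeless
            return
        if i == len(items):
            out.append(list(chosen))
            return
        chosen.append(items[i])     # decision: include items[i]
        dfs(i + 1, chosen, total + items[i])
        chosen.pop()                # backtrack: undo the decision
        dfs(i + 1, chosen, total)   # decision: exclude items[i]
    dfs(0, [], 0)
    return out
# subsets_within_budget([3, 5, 7], 8) -> [[3, 5], [3], [5], [7], []]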
Usually, a depth-first search is a way of iterating through an actual graph/tree structure looking for a value, whereas backtracking is iterating through a problem space looking for a solution. Backtracking is a more general algorithm that doesn't necessarily even relate to trees.
I would say, DFS is the special form of backtracking; backtracking is the general form of DFS.
If we extend DFS to general problems, we can call it backtracking.
If we use backtracking to tree/graph related problems, we can call it DFS.
They carry the same idea in algorithmic aspect.
DFS describes the way in which you want to explore or traverse a graph. It focuses on the concept of going as deep as possible given the choice.
Backtracking, while usually implemented via DFS, focuses more on the concept of pruning unpromising search subspaces as early as possible.
IMO, at any specific node during backtracking, you branch depth-first into each of its children, but before you branch into any child node, you need to "wipe out" the previous child's state (this step is equivalent to walking back to the parent node). In other words, siblings' states should not impact each other. On the contrary, during a normal DFS algorithm, you don't usually have this constraint; you don't need to wipe out (backtrack) the previous sibling's state in order to construct the next sibling node.
Depth-first search is an algorithm for traversing or searching a tree. See here. Backtracking is a much broader term that is used wherever a solution candidate is formed and later discarded by backtracking to a former state. See here. Depth-first search uses backtracking to search a branch first (a solution candidate) and, if not successful, to search the other branch(es).
Depth-first search (DFS) and backtracking are different searching and traversal algorithms. DFS is broader and is used on both graph and tree data structures, while backtracking is limited to trees. That being said, it does not mean DFS can't be used on graphs. It is used on graphs as well, but it only produces a spanning tree, a tree without loops (multiple edges starting and ending at the same vertex); that is why, in effect, it still operates on a tree.
Now coming back to backtracking: DFS uses the backtracking algorithm on tree data structures, so on trees, DFS and backtracking are similar.
Thus we can say they are the same on tree data structures, whereas on graph data structures they are not.
Idea: start from any point and check if it's the desired endpoint. If yes, we have found a solution; otherwise, go to all next possible positions, and if we can't go any further, return to the previous position and look for other alternatives, marking that the current path won't lead us to the solution.
Now, backtracking and DFS are 2 different names given to the same idea applied to 2 different abstract data types.
If the idea is applied to a matrix data structure, we call it backtracking.
If the same idea is applied to a tree or graph, we call it DFS.
The catch here is that a matrix can be converted to a graph and a graph can be transformed into a matrix, so we are really applying the same idea: if on a graph, we call it DFS, and if on a matrix, we call it backtracking.
The idea behind both algorithms is the same.
The way I look at DFS vs. backtracking is that backtracking is much more powerful. DFS can help me answer whether a node exists in a tree, while backtracking can help me answer all paths between 2 nodes.
Note the difference: DFS visits a node and marks it as visited, since we are primarily searching, so seeing things once is sufficient. Backtracking visits a node multiple times as it course-corrects, hence the name backtracking.
Most backtracking problems involve:
def dfs(node, visited):
    visited.add(node)
    for child in node.children:
        dfs(child, visited)
    visited.remove(node)  # this is the key difference that enables course correction and makes your dfs a backtracking recursion
In a depth-first search, you start at the root of the tree and then explore as far as possible along each branch; then you backtrack to each subsequent parent node and traverse its children.
Backtracking is a generalised term for starting at the end of a goal, and incrementally moving backwards, gradually building a solution.
Backtracking is just depth first search with specific termination conditions.
Consider walking through a maze, where at each step you make a decision; that decision is a call on the call stack (which conducts your depth-first search). If you reach the end, you can return your path. However, if you reach a dead end, you want to back out of a certain decision, in essence returning out of a function on your call stack.
So when I think of backtracking I care about
State
Decisions
Base Cases (Termination Conditions)
I explain it in my video on backtracking here.
An analysis of backtracking code is below. In this backtracking code, I want all of the combinations that sum to a certain target. Therefore, I have 3 decisions that make calls on my call stack: at each decision, I can either pick a number as part of my path to reach the target, skip that number, or pick it and pick it again. Then, if I reach a termination condition, my backtracking step is just to return. Returning is the backtracking step, because it gets out of that call on the call stack.
from typing import List

class Solution:
    """
    Approach: Backtracking
    State
        - candidates
        - index
        - target
    Decisions
        - pick one       --> call func changing state: index + 1, target - candidates[index], path + [candidates[index]]
        - pick one again --> call func changing state: index,     target - candidates[index], path + [candidates[index]]
        - skip one       --> call func changing state: index + 1, target, path
    Base Cases (Termination Conditions)
        - if target == 0 and path not in ret: append path to ret
        - if target < 0: return  # backtrack
    """
    def combinationSum(self, candidates: List[int], target: int) -> List[List[int]]:
        """
        #desc find all unique combos summing to target
        #args
        #arg1 candidates, list of ints
        #arg2 target, an int
        #ret ret, list of lists
        """
        if not candidates or min(candidates) > target:
            return []
        ret = []
        self.dfs(candidates, 0, target, [], ret)
        return ret

    def dfs(self, nums, index, target, path, ret):
        if target == 0 and path not in ret:
            ret.append(path)
            return  # backtracking
        elif target < 0 or index >= len(nums):
            return  # backtracking
        # equivalent loop form:
        # for i in range(index, len(nums)):
        #     self.dfs(nums, i, target - nums[i], path + [nums[i]], ret)
        # pick one: take nums[index] and move on
        self.dfs(nums, index + 1, target - nums[index], path + [nums[index]], ret)
        # pick one again: take nums[index] and stay, allowing repeats
        self.dfs(nums, index, target - nums[index], path + [nums[index]], ret)
        # skip one: leave nums[index] out entirely
        self.dfs(nums, index + 1, target, path, ret)
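For example, Solution().combinationSum([2, 3, 6, 7], 7) should return [[2, 2, 3], [7]]; the duplicate routes created by "pick one" vs. "pick one again" are filtered out by the path not in ret check.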
In my opinion, the difference is the pruning of the tree. Backtracking can stop (finish) searching a certain branch in the middle by checking the given conditions (if a condition is not met). In DFS, however, you have to reach a leaf node of the branch to figure out whether the condition is met, so you cannot stop searching a branch until you reach its leaf nodes.
The difference is: backtracking is a concept of how an algorithm works; DFS (depth-first search) is an actual algorithm based on backtracking. DFS essentially is backtracking (it is searching a tree using backtracking), but not every algorithm based on backtracking is DFS.
To add a comparison: backtracking is a concept like divide and conquer. QuickSort would be an algorithm based on the concept of divide and conquer.
