Minimax Algorithm queue possible? - data-structures

Is it possible to represent a minimax algorithm within a queue data structure or is it only possible within a tree?

If you implement minimax as a breadth-first game tree search, the FIFO nature of a queue is a natural fit. You would enqueue each position, and then all the positions that could result from that position, repeating until you reach your terminating search depth. But the drawback, and it is a big one, is that the number of terminal nodes is exponential in the depth of the tree, and a breadth-first search would have to store all of them in the queue at once.
Minimax is better implemented as a depth-first search, which requires only a linear amount of memory in relation to tree depth. The data structure underlying this search is a stack, provided either implicitly through recursive function calls or explicitly through a stack-based implementation that avoids the function-call overhead.
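To make the depth-first version concrete, here is a minimal sketch in Python. The nested-list game tree representation is an assumption for illustration only: integers stand for leaf evaluations and lists for internal nodes.

```python
# Minimax as a depth-first search via recursion (toy sketch).
# Assumption: the game tree is nested lists; ints are leaf scores.
def minimax(node, maximizing=True):
    if isinstance(node, int):   # leaf: static evaluation
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)
```

The recursion stack holds at most one root-to-leaf path at a time, which is why the memory use is linear in the depth rather than exponential.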

Related

Expected size for queue in a breadth-first search algorithm

One of the most common and simplest algorithms executed on unweighted graphs is breadth-first search.
One aspect of the algorithm that is left to the practical implementation is how to implement the queue and, especially, what capacity it should have. For a given graph with N nodes, allocating a queue with capacity N guarantees it will never need to be re-allocated for reaching capacity. But if N is large enough, this may lead to an excessive RAM requirement when the graph's structure leaves the queue mostly unused (for instance, filiform graphs full of tendrils), especially if the BFS only returns the maximum length (as needed to compute a graph's diameter) or the total length (as needed to compute closeness centrality).
Is there any good paper on the expected optimal size for the queue in the BFS algorithm, based on some property of the given graph (even with pre-processing)?
If there is any, does it generalize for other shortest-path tree algorithms in contexts such as Dijkstra?
Making my specific implementation details a bit more concrete, I am working on a Rust implementation that uses VecDeque as the queue. I am not aware of any queue better suited to BFS, as there is no need for sorting, etc.
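For intuition about how much capacity a BFS actually uses on different graph shapes, here is a small sketch (in Python with collections.deque rather than Rust's VecDeque) that records the peak queue length during the search:

```python
from collections import deque

def bfs_peak_queue(adj, start):
    """BFS from `start`; returns (visit order, peak queue length)."""
    seen = {start}
    queue = deque([start])
    peak = 1
    order = []
    while queue:
        peak = max(peak, len(queue))
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order, peak

# A path (filiform) graph keeps the queue tiny; a star graph fills it.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

On the path graph the queue never holds more than one node, while on the star graph it momentarily holds every leaf, which is exactly the structural variation the question is asking about.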

Can binomial heap be used to find connected components in a graph?

How can a binomial heap be useful in finding connected components of a graph? If it cannot be used for this, why not?
I've never seen binomial heaps used this way, since graph connected components are usually found using a depth-first search or breadth-first search, and neither algorithm requires you to use any sort of priority queue. You could, of course, do a sort of "priority-first search" to find connected components by replacing the stack or queue of DFS or BFS with a priority queue, but there's little reason to do so. That would slow the cost of finding connected components down to O(m + n log n) rather than the O(m + n) you'd get from a vanilla BFS or DFS.
There is one way in which you can tenuously say that binomial heaps might be useful, and that's in a different strategy for finding connected components. You can, alternatively, use a disjoint-set forest to identify connected components. You begin with each node in its own partition, then call the union operation for each edge to link nodes together. When you've finished, you will end up with a collection of trees, each of which represents one connected component.
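The disjoint-set approach can be sketched as follows (in Python; the function name and path-halving choice are mine, not prescribed by the answer):

```python
def count_components(n, edges):
    """Connected components via a disjoint-set forest with union-by-size."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:             # union-by-size: point the
            ra, rb = rb, ra                 # smaller tree at the larger
        parent[rb] = ra
        size[ra] += size[rb]

    for a, b in edges:
        union(a, b)
    return len({find(x) for x in range(n)})
```

Each distinct root that survives the unions is the representative of one connected component.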
There are many strategies for determining how to link trees in a disjoint-set forest. One of them is union-by-size, in which whenever you need to pick which representative to change, you pick the tree of smaller size and point it at the tree of larger size. You can prove that the smallest tree of height k that can be formed this way is a binomial tree of rank k. That's formed by pairing off all the nodes, then taking the representatives and pairing them off, etc. (Try it for yourself - isn't that cool?)
But that, to me, feels more like a coincidence than anything else. This is less about binomial heaps and more about binomial trees, and this particular shape only arises if you're looking for a pathological case rather than as a matter of course in the execution of the algorithm.
So the best answer I have is "technically you could do this, but you shouldn't, and technically binomial trees arise in this other context that's related to connected components, but that's not the same as using binomial heaps."
Hope this helps!

Best 'order' traversal to copy a balanced binary tree into an AVL tree with minimum rotations

I have two binary trees: A, whose nodes and pointers (left, right, parent) I can access, and B, whose internals I cannot access at all. The idea is to copy A into B by iterating over the nodes of A and inserting each one into B. Given that B is an AVL tree, is there a traversal of A (preorder, inorder, postorder) that minimizes the number of rotations when inserting its elements into B?
Edit:
The tree A is balanced, I just don't know the exact implementation;
Iteration on tree A needs to be done using only pointers (the programming language is C and there is no queue or stack data structure that I can make use of).
Rebalancing in AVL happens when the depth of one part of the tree exceeds the depth of some other part of the tree by more than one. So to avoid triggering a rebalance you want to feed nodes into the AVL tree one level at a time; that is, feed it all of the nodes from level N of the original tree before you feed it any of the nodes from level N+1.
That ordering would be achieved by a breadth-first traversal of the original tree.
Edit
OP added:
Iteration on tree A needs to be done using only pointers (the programming language is C and there is no queue or stack data structure that I can make use of).
That does not affect the answer to the question as posed, which is still that a breadth-first traversal requires the fewest rebalances.
It does affect the way you will implement the breadth-first traversal. If you can't use a predefined queue then there are several ways that you could implement your own queue in C: an array, if permitted, or some variety of linked list are the obvious choices.
If you aren't allowed to use dynamic memory allocation, and the size of the original tree is not bounded such that you can build a queue using a fixed buffer that is sized for the worst case, then you can abandon the queue-based approach and instead use recursion to visit successively deeper levels of the tree. (Imagine a recursive traversal that stops when it reaches a specified depth in the tree, and only emits a result for nodes at that specified depth. Wrap that recursion in a while or for loop that runs from a depth of zero to the maximum depth of the tree.)
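The depth-limited-recursion idea can be sketched like this (in Python for readability, though the OP's setting is C; the Node class is a stand-in for the OP's node structs):

```python
# Emit a tree's nodes level by level using only recursion and pointers,
# with no queue: one pass per depth, stopping at the first empty level.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def visit_depth(node, depth, out):
    """Append values of nodes at exactly `depth`; report if any exist."""
    if node is None:
        return False
    if depth == 0:
        out.append(node.val)
        return True
    found_left = visit_depth(node.left, depth - 1, out)
    found_right = visit_depth(node.right, depth - 1, out)
    return found_left or found_right

def level_order(root):
    out, depth = [], 0
    while visit_depth(root, depth, out):
        depth += 1
    return out
```

Each pass costs O(N), and there are O(log N) levels in a balanced tree, so this trades the queue's memory for an O(N log N) traversal.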
If the original tree is not necessarily AVL-balanced, then you can't just copy it.
To ensure that there is no rebalancing in the new tree, you should create a complete binary tree, and you should insert the nodes in BFS/level order so that every intermediate tree is also complete.
A "complete" tree is one in which every level is full, except possibly the last. Since every complete tree is AVL-balanced, and every intermediate tree is complete, there will be no rebalancing required.
If you can't copy your original tree out into an array or other data structure, then you'll need to do log(N) in-order traversals of the original tree to copy all the nodes. During the first traversal, you select and copy the root. During the second, you select and copy level 2. During the third, you copy level 3, etc.
Whether or not a source node is selected for each level depends only on its index within the source tree, so the actual structure of the source tree is irrelevant.
Since each traversal takes O(N) time, the total time spent traversing is O(N log N). The N insertions into the AVL tree take O(log N) time each, which is O(N log N) overall, so doing log N traversals does not increase the asymptotic complexity of the overall process.
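As a sketch of the select-by-index idea, here is one way (in Python, for illustration) to compute the rotation-free insertion order for a sorted sequence, using the implicit heap indexing of a complete tree:

```python
def level_insert_order(sorted_vals):
    """Reorder sorted_vals so that inserting them one by one builds a
    complete BST level by level (no AVL rotations should be needed)."""
    n = len(sorted_vals)
    rank_of = {}      # heap index -> in-order rank in the complete tree
    counter = 0

    def inorder(i):   # in-order walk over implicit heap indices 1..n
        nonlocal counter
        if i > n:
            return
        inorder(2 * i)
        rank_of[i] = counter
        counter += 1
        inorder(2 * i + 1)

    inorder(1)
    # Heap-index order 1..n is exactly the BFS/level order of the
    # complete tree, so emit the values in that order.
    return [sorted_vals[rank_of[i]] for i in range(1, n + 1)]
```

This mirrors the answer's point that selection depends only on each node's in-order index, never on the source tree's actual shape.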

Do I have to implement Adjacency matrix with BFS?

I am trying to implement the BFS algorithm using a queue, and for learning purposes I do not want to look at any online code. All I am doing is following the algorithm and trying to implement it myself. I have a question regarding the adjacency matrix (a data structure for graphs).
I know one common graph data structure is the adjacency matrix. So, my question here: do I have to implement an adjacency matrix along with the BFS algorithm, or does it not matter?
I really got confused.
One of the things that confused me: where should the data for the graph be stored if there is no data structure for it?
Sincerely
Breadth-first search assumes you have some kind of way of representing the graph structure that you're working with, and its efficiency depends on the choice of representation, but you aren't constrained to use an adjacency matrix. Many implementations of BFS have the graph represented implicitly somehow (for example, as a 2D array storing a maze or as some sort of game) and work just fine. You can also use an adjacency list, which is particularly efficient for use in BFS.
The particular code you'll be writing will depend on how the graph is represented, but don't feel constrained to do it one way. Choose whatever's easiest for your application.
The best way to choose data structures is in terms of the operations. With a complete list of operations in hand, evaluate implementations wrt criteria important to the problem: space, speed, code size, etc.
For BFS, the operations are pretty simple:
Set<Node> getSources(Graph graph) // all in graph with no in-edges
Set<Node> getNeighbors(Node node) // all reachable from node by out-edges
Now we can evaluate graph data structure options in terms of n=number of nodes:
Adjacency matrix:
getSources is O(n^2) time
getNeighbors is O(n) time
Vector of adjacency lists (alone):
getSources is O(n) time
getNeighbors is O(1) time
"Clever" vector of adjacency lists:
getSources is O(1) time
getNeighbors is O(1) time
The cleverness is just maintaining the sources set as the graph is constructed, so the cost is amortized over edge insertion. I.e., as you create a node, add it to the sources set because it has no in-edges yet. As you add an edge, remove the to-node from the sources set.
Now you can make an informed choice based on run time. Do the same for space, simplicity, or whatever other considerations are in play. Then choose and implement.
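The "clever" bookkeeping might look like this (a minimal Python sketch; the method names are mine, chosen to mirror the operations listed above):

```python
class Graph:
    """Adjacency lists plus a sources set maintained during construction."""
    def __init__(self):
        self.adj = {}          # node -> list of out-neighbors
        self.sources = set()   # nodes with no in-edges so far

    def add_node(self, u):
        if u not in self.adj:
            self.adj[u] = []
            self.sources.add(u)       # no in-edges yet

    def add_edge(self, u, v):
        self.add_node(u)
        self.add_node(v)
        self.adj[u].append(v)
        self.sources.discard(v)       # v just gained an in-edge

    def get_sources(self):            # O(1): just hand back the set
        return self.sources

    def get_neighbors(self, u):       # O(1): hand back the list
        return self.adj[u]
```

Because the set is updated on every add_edge, get_sources never has to scan the graph.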

When is backward search better than forward?

I'm studying graph search algorithms (for this question's sake, let's limit the algorithms to DFS, breadth-first search, and iterative deepening (ID)).
All these algorithms can be implemented as either forward search (from start node to end node) or backward search (from end node to start node).
My question is, when will backward search perform better than forward? Is there a general rule for that?
With a breadth-first search or iterative deepening, I think the mathematical answer to your question involves the notion of a "ball" around a vertex. Define Ball(v, n) to be the set of nodes at distance at most n from node v, and let the distance from the start node s to the destination node t be d. Then in the worst case a forward search will perform better than a backward search if |Ball(s, d)| < |Ball(t, d)|. This is true because breadth-first search always (and ID in the worst case) expands all nodes at some distance k from the start node before ever visiting any nodes at distance k + 1. Consequently, if there are fewer nodes around the start than around the target, a forward search should be faster, whereas if there are fewer nodes around the target than around the start, a backward search should be faster. Unfortunately, it's hard to know this number a priori; you usually have to run the search to determine which is the case. You could potentially use the branching factor around the two nodes as a heuristic for this value, but it wouldn't necessarily guarantee that one search would be faster.
One interesting algorithm you might want to consider exploring is bidirectional breadth-first search, which does a search simultaneously from the source and target nodes. It tends to be much faster than the standard breadth-first search (in particular, with a branching factor b and distance d between the nodes, BFS takes roughly O(b^d) time while bidirectional BFS takes O(b^(d/2))). It's also not that hard to code up once you have a good BFS implementation.
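A bidirectional BFS might be sketched like this (in Python; a simplified version that alternates whole levels and returns only the path length, not the path itself):

```python
from collections import deque

def bidirectional_bfs(adj, s, t):
    """Shortest path length between s and t in an unweighted graph,
    expanding the smaller frontier one complete level at a time."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    front_s, front_t = deque([s]), deque([t])
    while front_s and front_t:
        # Expand whichever frontier is currently smaller.
        if len(front_s) <= len(front_t):
            dist, front, other = dist_s, front_s, dist_t
        else:
            dist, front, other = dist_t, front_t, dist_s
        for _ in range(len(front)):        # one complete level
            u = front.popleft()
            for v in adj[u]:
                if v in other:             # the two searches have met
                    return dist[u] + 1 + other[v]
                if v not in dist:
                    dist[v] = dist[u] + 1
                    front.append(v)
    return -1                              # no path exists

# Example: an undirected path graph 0-1-2-3-4 as adjacency lists.
path_graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Each side only ever explores out to roughly half the source-target distance, which is where the O(b^(d/2)) bound comes from.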
As for depth-first search, I actually don't know of a good way to determine which will be faster because in the worst-case both searches could explore the entire graph before finding a path. If someone has a good explanation about how to determine which will be better, it would be great if they could post it.
