Trouble applying the Alpha-Beta Pruning algorithm to this tree

I am trying to apply the alpha beta pruning algorithm to this given tree.
I am stuck when I hit node C: after expanding all the children of B, I give A >= -4. I then expand C to get I = -3, which IS greater than -4 (-3 >= -4). Do I therefore update A to -3? If so, do I then prune J and K because -3 >= -3? When I worked through the example, I pruned J, K, M and N. I am really uncertain about this =(
EDIT:
Another question: after exploring B and passing the value of B to A, do we pass this value down to C and thus to I? I saw an example where this was the case. Here it is: http://web.cecs.pdx.edu/~mm/AIFall2011/alphabeta-example.pdf
However, in this example, http://web.cecs.pdx.edu/~mm/AIFall2011/alphabeta-example.pdf, it doesn't seem to pass values down; instead it seems to only propagate values upwards. I am not sure which one is correct, or if it makes a difference at all.

After expanding all the children of B, A has α = -4, β = ∞.
When you get to I, α = -4 and β = -3. Since α < β, J and K are not pruned; they still need to be evaluated to make sure they're not less than -3, which would lower the evaluation of C. The value at A is updated to α = -3, β = ∞ only after C has been fully expanded, so you can't use the updated alpha value of A while evaluating J: it hasn't been updated yet.
J and K would be pruned if I were -5 instead. In that case it wouldn't matter what J and K are, because we already know the evaluation of C is worse than B's (-5 < -4), and J and K can only make it worse.
Each node passes the alpha and beta values to its children. The children will then update their own copies of the alpha or beta value depending on whose turn it is and return the final evaluation of that node. That is then used to update the alpha or beta value of the parent.
See Alpha-Beta pruning for example:
function alphabeta(node, depth, α, β, Player)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if Player = MaxPlayer
        for each child of node
            α := max(α, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break // Beta cut-off
        return α
    else
        for each child of node
            β := min(β, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break // Alpha cut-off
        return β

// Initial call
alphabeta(origin, depth, -infinity, +infinity, MaxPlayer)
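For reference, the pseudocode above translates almost line for line into runnable Python. In this sketch the tree encoding is mine, not from the question: internal nodes are lists of children, leaves are plain numbers.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves are plain numbers; internal nodes are lists of children.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, alphabeta(child, depth - 1, alpha, beta, False))
            if beta <= alpha:
                break  # beta cut-off: the MIN above will never allow this branch
        return alpha
    else:
        for child in node:
            beta = min(beta, alphabeta(child, depth - 1, alpha, beta, True))
            if beta <= alpha:
                break  # alpha cut-off
        return beta

# MAX root over three MIN nodes; plain minimax gives max(3, 2, 2) = 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = alphabeta(tree, 2, float('-inf'), float('inf'), True)  # value == 3
```

Note that each recursive call receives the parent's current α and β, which answers the question in the EDIT: the bounds are passed down, while the evaluations propagate upwards.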

Whenever I need to refresh my understanding of the algorithm I use this:
http://homepage.ufp.pt/jtorres/ensino/ia/alfabeta.html
You can enter your tree there and step through the algorithm. The values you would want are:
3 3 3 3
-2 -4 3 etc.
I find that deducing the algorithm from an example provides a deeper understanding.

Algorithm to find the minimum among the distances between pairs of vertices of a tree

A tree is given whose every vertex is initially coloured white.
Now, the vertices are coloured black one by one, and the aim is to find, after colouring every new vertex, the minimum among the distances between all possible pairs of black vertices.
It should be noted that the distance between a pair of vertices is the number of edges on the path between them.
To solve this problem, I've used an array min_dist[] (0-based indexing) such that min_dist[u] is the distance of vertex u from the nearest black vertex. Then, I've used a depth-first search on the graph after colouring every new vertex.
Let G represent the tree and c[] be the array representing the vertices to be coloured black in order.
SOLVE(G, c)
1.  ans = +infinity
2.  for i = 0 to |G.V| - 1
3.      min_dist[i] = +infinity
4.  for i = 0 to |G.V| - 1
5.      min_dist[c[i]] = 0
6.      DFS(G, ans, min_dist, c[i], -1)
7.      print ans

DFS(G, ans, min_dist, v, parent)
1.  for each vertex child in G.Adj[v]
2.      if child == parent
3.          continue
4.      if min_dist[child] > min_dist[v] + 1
5.          min_dist[child] = min_dist[v] + 1
6.          DFS(G, ans, min_dist, child, v)
7.      else if ans > min_dist[child] + min_dist[v] + 1
8.          ans = min_dist[child] + min_dist[v] + 1
Now, I think that my algorithm is correct, but the official solution to this problem is a slightly modified version of my algorithm where they've added an extra check for termination in the DFS.
DFS(G, ans, min_dist, v, parent)
1.  if min_dist[v] >= ans
2.      return
3.  for each vertex child in G.Adj[v]
    ...
I need help in verifying the correctness of this modified version.
I've taken many examples and in all of those, this modified version produces the correct answers. (https://kushagrj.github.io/Codeforces-Round-847-Div-3-Problem-F/)
So you have the optimization where you don't DFS into a child unless the child's min_dist was updated. Using the same reasoning, we can derive the extra test.
Suppose that the problem had an extra parameter θ, and you were to determine whether ans < θ. By tedious induction, if DFS is called when min_dist[v] ≥ θ−1, then it only writes values ≥ θ to min_dist. The only other reads would be to evaluate < θ, so you could restrict the distance labels to 0...θ, making θ effectively ∞. Then the no-update optimization kicks in.
The other idea is that you can repeatedly lower θ to the current best value of ans, effectively using ans as θ.
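Putting the pieces together, here is a direct Python transcription of the pseudocode with the official solution's extra check at the top of DFS (function and variable names are mine, not from the problem statement):

```python
def solve(adj, order):
    # adj: adjacency lists of the tree; order: vertices coloured black in turn.
    # Returns the minimum black-black distance after each colouring step.
    INF = float('inf')
    min_dist = [INF] * len(adj)
    ans = INF
    results = []

    def dfs(v, parent):
        nonlocal ans
        if min_dist[v] >= ans:              # the extra termination check
            return
        for child in adj[v]:
            if child == parent:
                continue
            if min_dist[child] > min_dist[v] + 1:
                min_dist[child] = min_dist[v] + 1
                dfs(child, v)
            elif ans > min_dist[child] + min_dist[v] + 1:
                ans = min_dist[child] + min_dist[v] + 1

    for v in order:
        min_dist[v] = 0
        dfs(v, -1)
        results.append(ans)
    return results

# Path 0-1-2-3, colouring 0, then 3, then 1:
# after 0 there is no pair yet; after 3 the closest pair is (0, 3) at
# distance 3; after 1 the closest pair is (0, 1) at distance 1.
```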

Algorithm to find best combination or path through nodes

As I am not very proficient in various optimization/tree algorithms, I am seeking help.
Problem Description:
Assume a large sequence of sorted nodes is given, each node representing an integer value L. L increases with each node, and no two nodes have the same L.
The goal now is to find the best combination of nodes, where the difference between the L-values of subsequent nodes is closest to a given integer value M(L) that changes over L.
Example:
So, in the beginning I would have L = 50 and M = 100. The next nodes have L = 70,140,159,240,310.
First, the value of 159 seems to be closest to L+M = 150, so it is chosen as the right value.
However, in the next step, M=100 is still given and we notice that L+M = 259, which is far away from 240.
If we now go back and choose the node with L=140 instead, which then is followed by 240, the overall match between the M values and the L-differences is stronger. The algorithm should be able to find back to the optimal path, even if a mistake was made along the way.
Some additional information:
1) the start node is not necessarily part of the best combination/path, but if required, one could first develop an algorithm, which chooses the best starter candidate.
2) the optimal combination of nodes is following the sorted sequence and not "jumping back" -> so 1,3,5,7 is possible but not 1,3,5,2,7.
3) in the end, the differences between the L values of chosen nodes should in the mean squared sense be closest to the M values
Every help is much appreciated!
If I understand your question correctly, you could use Dijkstra's algorithm:
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
http://www.mathworks.com/matlabcentral/fileexchange/20025-dijkstra-s-minimum-cost-path-algorithm
For that you have to know the neighbours of every node and create an adjacency matrix. With the implementation of Dijkstra's algorithm posted above you can specify edge weights. You could specify your edge weight so that it is the L of the accessed node plus M; then for every node combination you have the L of the new node plus M. That way the algorithm should find the optimal path through your nodes.
To get all edge combinations you can use MATLAB's graph functions:
http://se.mathworks.com/help/matlab/ref/graph.html
If I understand your problem correctly, you need an undirected graph.
You can access all edges with the command G.Edges after you have created the graph.
I know it's not the perfect answer, but I hope it helps!
P.S. Just watch out: Dijkstra's algorithm requires non-negative edge weights.
Suppose we are given a number M and a list of n numbers, L[1], ..., L[n], and we want to find a subsequence of at least q of the latter numbers that minimises the sum of squared errors (SSE) with respect to M, where the SSE of a list of k positions x[1], ..., x[k] with respect to M is given by
SSE(M, x[1], ..., x[k]) = sum((L[x[i]]-L[x[i-1]]-M)^2) over all 2 <= i <= k,
with the SSE of a list of 0 or 1 positions defined to be 0.
(I'm introducing the parameter q and associated constraint on the subsequence length here because without it, there always exists a subsequence of length exactly 2 that achieves the minimum possible SSE -- and I'm guessing that such a short sequence isn't helpful to you.)
This problem can be solved in O(qn^2) time and O(qn) space using dynamic programming.
Define f(i, j) to be the minimum sum of squared errors achievable under the following constraints:
The number at position i is selected, and is the rightmost selected position. (Here, i = 0 implies that no positions are selected.)
We require that at least j (instead of q) of these first i numbers are selected.
Also define g(i, j) to be the minimum of f(k, j) over all 0 <= k <= i. Thus g(n, q) will be the minimum sum of squared errors achievable on the entire original problem. For efficient (O(1)) calculation of g(i, j), note that
g(i>0, j>0) = min(g(i-1, j), f(i, j))
g(0, 0) = 0
g(0, j>0) = infinity
To calculate f(i, j), note that if i > 0 then any solution must be formed by appending the ith position to some solution Y that selects at least j-1 positions and whose rightmost selected position is to the left of i -- i.e. whose rightmost selected position is k, for some k < i. The total SSE of this solution to the (i, j) subproblem will be whatever the SSE of Y was, plus a fixed term of (L[i]-L[k]-M)^2 -- so to minimise this total SSE, it suffices to minimise the SSE of Y. But we can compute that minimum: it is g(k, j-1).
Since this holds for any k with 1 <= k < i (k = 0 would mean Y selects no positions, which is impossible when j-1 >= 1), it suffices to try all such values of k, and take the one that gives the lowest total SSE:
f(i>=j, j>=2) = min of (g(k, j-1) + (L[i]-L[k]-M)^2) over all 1 <= k < i
f(i>=j, j<2) = 0       # If we only need 0 or 1 positions, the SSE is 0
f(i, j>i) = infinity   # Can't select more than i positions if the rightmost selected position is i
With the above recurrences and base cases, we can compute g(n, q), the minimum possible sum of squared errors for the entire problem. By memoising values of f(i, j) and g(i, j), the time to compute all needed values of f(i, j) is O(qn^2), since there are at most (n+1)*(q+1) possible distinct combinations of input parameters (i, j), and computing a particular value of f(i, j) requires at most (n+1) iterations of the loop that chooses values of k, each iteration of which takes O(1) time outside of recursive subcalls. Storing solution values of f(i, j) requires at most (n+1)*(q+1), or O(qn), space, and likewise for g(i, j). As established above, g(i, j) can be computed in O(1) time when all needed values of f(x, y) have been computed, so g(n, q) can be computed in the same time complexity.
To actually reconstruct a solution corresponding to this minimum SSE, you can trace back through the computed values of f(i, j) in reverse order, each time looking for a value of k that achieves a minimum value in the recurrence (there may in general be many such values of k), setting i to this value of k, and continuing on until i=0. This is a standard dynamic programming technique.
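These recurrences can be sketched bottom-up in Python (names are mine; positions are 1-based in the maths and mapped onto 0-based lists here):

```python
def min_sse(L, M, q):
    # f[i][j]: min SSE with position i selected as the rightmost, >= j selected
    # g[i][j]: min of f[k][j] over all 0 <= k <= i
    n = len(L)
    INF = float('inf')
    f = [[INF] * (q + 1) for _ in range(n + 1)]
    g = [[INF] * (q + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(q + 1):
            if j > i:
                f[i][j] = INF          # can't select j > i positions
            elif j < 2:
                f[i][j] = 0            # 0 or 1 selections: SSE is 0
            else:
                f[i][j] = min(g[k][j - 1] + (L[i - 1] - L[k - 1] - M) ** 2
                              for k in range(1, i))
            if i == 0:
                g[i][j] = 0 if j == 0 else INF
            else:
                g[i][j] = min(g[i - 1][j], f[i][j])
    return g[n][q]
```

On the question's example L = 50, 70, 140, 159, 240, 310 with M = 100 and q = 3 this yields 100, achieved by the subsequence 50, 140, 240.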
I now answer my own post with my current implementation, in order to structure my post and load images. Unfortunately, the code does not do what it should. Imagine L, M and q given as in the images below. With the calcf and calcg functions I computed the F and G matrices, where F(i+1, j+1) stores the calculated f(i, j) and G(i+1, j+1) stores g(i, j). The SSE of the optimal combination should be G(N+1, q+1), but the result is wrong. If anyone can find the mistake, that would be much appreciated.
G and F Matrix of given problem in the workspace. G and F are created by calculating g(N,q) via calcg(L,N,q,M).
calcf and calcg functions

When to terminate iterative deepening with alpha beta pruning and transposition tables?

How do I know when I can stop increasing the depth for an iterative deepening algorithm with negamax alpha beta pruning and transposition tables? The following pseudo code taken from a wiki page:
function negamax(node, depth, α, β, color)
    alphaOrig := α

    // Transposition table lookup; node is the lookup key for ttEntry
    ttEntry := TranspositionTableLookup(node)
    if ttEntry is valid and ttEntry.depth ≥ depth
        if ttEntry.Flag = EXACT
            return ttEntry.Value
        else if ttEntry.Flag = LOWERBOUND
            α := max(α, ttEntry.Value)
        else if ttEntry.Flag = UPPERBOUND
            β := min(β, ttEntry.Value)
        endif
        if α ≥ β
            return ttEntry.Value
    endif

    if depth = 0 or node is a terminal node
        return color * the heuristic value of node

    bestValue := -∞
    childNodes := GenerateMoves(node)
    childNodes := OrderMoves(childNodes)
    foreach child in childNodes
        val := -negamax(child, depth - 1, -β, -α, -color)
        bestValue := max(bestValue, val)
        α := max(α, val)
        if α ≥ β
            break

    // Transposition table store; node is the lookup key for ttEntry
    ttEntry.Value := bestValue
    if bestValue ≤ alphaOrig
        ttEntry.Flag := UPPERBOUND
    else if bestValue ≥ β
        ttEntry.Flag := LOWERBOUND
    else
        ttEntry.Flag := EXACT
    endif
    ttEntry.depth := depth
    TranspositionTableStore(node, ttEntry)
    return bestValue
And this is the iterative deepening call:
while (depth < ?)
{
    depth++;
    rootNegamaxValue := negamax(rootNode, depth, -∞, +∞, 1);
}
Of course, when I know the total number of moves in a game I could use depth < numberOfMovesLeft as an upper bound. But if this information is not given, when do I know that another call of negamax doesn't give any better result than the previous run? What do I need to change in the algorithm?
The short answer is: when you run out of time (and the transposition tables are irrelevant to the question).
Here I assume that your evaluation function is reasonable (gives good approximation of the position).
The main idea of combining iterative deepening with alpha-beta is the following: let's assume that you have 15 seconds to come up with the best move. How far can you search? I do not know, and no one else knows. You can try to search to depth = 8, only to find that the search finished in 1 second (so you wasted the 14 seconds that were still available). With trial and error you find that depth = 10 gives you a result in 13 seconds, so you decide to use it all the time. But now something goes terribly wrong (your alpha-beta was not pruning well enough, some of the positions took too much time to evaluate) and your result was not ready in 15 seconds. So you either made a random move or lost the game.
To make sure this never happens, it is nice to have a good result ready. So you do the following: get the best result for depth=1 and store it. Find the best result for depth=2, and overwrite it. And so on. From time to time check how much time is left, and if it is really close to the time limit, return your best move.
Now you do not need to worry about the time: your method gives the best result you have found so far. With all these recalculations of different subtrees you waste at most about half of your resources (that is for a full tree, and with alpha-beta you most probably are not searching one). The additional advantage is that you can reorder the moves from best to worst on each depth iteration, which makes pruning more aggressive.
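The driver loop might look like this in Python. This is only a sketch: `negamax` is assumed to have the signature from the question, and a real engine would also check the clock inside the search so that one deep iteration cannot overrun the limit.

```python
import time

def best_move_within(root, negamax, time_limit):
    # Deepen one ply at a time; always keep the last completed result.
    deadline = time.monotonic() + time_limit
    best, depth = None, 0
    while time.monotonic() < deadline:
        depth += 1
        best = negamax(root, depth, float('-inf'), float('inf'), 1)
    return best, depth
```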

Complete graph with only two possible costs. What's the shortest path's cost from 0 to N - 1

You are given a complete undirected graph with N vertices. All but K edges have a cost of A. Those K edges have a cost of B and you know them (as a list of pairs). What's the minimum cost from node 0 to node N - 1.
2 <= N <= 500k
0 <= K <= 500k
1 <= A, B <= 500k
The problem, obviously, arises when those K edges cost more than the other ones and nodes 0 and N - 1 are connected by a K-edge.
Dijkstra doesn't work. I've even tried something very similar with a BFS.
Step 1: Let G(0) be the set of "good" nodes adjacent to node 0.
Step 2: For each node in G(0):
    compute G(node)
    if G(node) contains N - 1
        return step
    else
        add node to some queue
    repeat Step 2 and increment step
The problem is that this uses up a lot of time due to the fact that for every node you have to make a loop from 0 to N - 1 in order to find the "good" adjacent nodes.
Does anyone have any better ideas? Thank you.
Edit: Here is a link from the ACM contest: http://acm.ro/prob/probleme/B.pdf
This is laborious case work:
A < B, and 0 and N-1 are joined by an A-edge -> trivial: the answer is A.
B < A, and 0 and N-1 are joined by a B-edge -> trivial: the answer is B.
B < A, and 0 and N-1 are joined by an A-edge ->
    do a BFS on the graph containing only the K edges.
A < B, and 0 and N-1 are joined by a B-edge ->
    you can check in O(N) time whether there is a path of length 2*A (try every vertex in the middle).
To check other path lengths, the following algorithm should do the trick:
Let X(d) be the set of nodes reachable from 0 using d shorter edges. You can find X(d) as follows: take each vertex v with unknown distance and iteratively check the edges between v and the vertices of X(d-1). If you find a short edge, then v is in X(d); otherwise you stepped on a long edge. Since there are at most K long edges, you can step on them at most K times, so you find the distance of every vertex in O(N + K) total time.
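The X(d) search can be sketched as a BFS that scans the still-unvisited vertices instead of adjacency lists (a Python sketch; `long_edges` holds the K expensive pairs, and all names are mine):

```python
from collections import deque

def complement_bfs(n, long_edges, src):
    # BFS over the cheap edges of a complete graph: the only expensive
    # edges are the K pairs in long_edges. Returns the cheap-edge hop
    # distance from src to every vertex.
    expensive = set()
    for u, v in long_edges:
        expensive.add((u, v))
        expensive.add((v, u))
    dist = [float('inf')] * n
    dist[src] = 0
    unvisited = set(range(n)) - {src}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        found = []
        for u in unvisited:
            # Each probe either discovers u (removed for good) or crosses one
            # of the at most K expensive edges, so total work is O(N + K).
            if (v, u) not in expensive:
                dist[u] = dist[v] + 1
                found.append(u)
                queue.append(u)
        unvisited.difference_update(found)
    return dist
```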
I propose a solution to a somewhat more general problem where you might have more than two types of edges and the edge weights are not bounded. For your scenario the idea is probably a bit overkill, but the implementation is quite simple, so it might be a good way to go about the problem.
You can use a segment tree to make Dijkstra more efficient. You will need two operations:
set an upper bound in a range: given u, L, R, set x[i] = min(x[i], u) for all L <= i <= R
find the global minimum
The upper bounds can be pushed down the tree lazily, so both can be implemented in O(log n).
When relaxing outgoing edges, look for the edges with cost B, sort them and update the ranges in between all at once.
The runtime should be O(n log n + m log m) if you sort all the edges upfront (by outgoing vertex).
EDIT: Got accepted with this approach. The good thing about it is that it avoids any kind of special casing. It's still ~80 lines of code.
In the case when A < B, I would go with kind of a BFS, where you would check where you can't reach instead of where you can. Here's the pseudocode:
G(k) is the set of nodes reachable by k cheap edges and no fewer. We start with G(0) = {v0}.

while G(k) isn't empty and G(k) doesn't contain vN-1 and k*A < B
    cnt = array[N] of zeroes
    for every node n in G(k)
        for every expensive edge (n, m)
            cnt[m]++
    # now cnt[m] == |G(k)| iff m can't be reached by a cheap edge from any node of G(k)
    set G(k+1) to {m : cnt[m] < |G(k)|} minus {n : n is in G(0), ..., G(k)}
    k++
This way you avoid iterating through the (many) cheap edges and only iterate through the relatively few expensive edges.
As you have correctly noted, the problem arises when A > B and the edge from 0 to n-1 has a cost of A.
In this case you can simply delete all edges in the graph that have a cost of A: any path containing an A-edge costs at least A, which the direct edge already achieves, so an optimal route is either the single direct A-edge or uses only edges of cost B.
Then you can perform a simple BFS, since the costs of all remaining edges are the same. It gives you optimal performance, as pointed out by this link: Finding shortest path for equal weighted graph
Moreover, you can stop your BFS as soon as the total cost exceeds A and fall back to the direct A-edge.
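A Python sketch of this case (A > B, with the direct 0 to N-1 edge costing A; `b_edges` holds the K cheap pairs, and all names are mine):

```python
from collections import deque

def cheapest_via_b_edges(n, b_edges, A, B):
    # Delete the A-edges and BFS over the K cheap B-edges, stopping once
    # any B-path would already cost at least the direct A-edge.
    adj = [[] for _ in range(n)]
    for u, v in b_edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        v = q.popleft()
        if (dist[v] + 1) * B >= A:
            break  # deeper levels can no longer beat the direct A-edge
        for u in adj[v]:
            if dist[u] == -1:
                dist[u] = dist[v] + 1
                q.append(u)
    if dist[n - 1] != -1:
        return min(A, dist[n - 1] * B)
    return A
```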

Understanding the cut-off condition in the alpha-beta pruning algorithm

I'm having trouble understanding this pseudocode I found for alpha beta pruning on wikipedia:
function alphabeta(node, depth, α, β, Player)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if Player = MaxPlayer
        for each child of node
            α := max(α, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break (* Beta cut-off *)
        return α
    else
        for each child of node
            β := min(β, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break (* Alpha cut-off *)
        return β
What is confusing me is the if Player = MaxPlayer condition. I understand the whole recursive call of the function with not(Player) to get the minimum value, which will then recursively call the function with Player, repeating until the depth limit is reached or a goal state has been found. However, I don't understand the

if β ≤ α
    break

statement. My understanding is that, as soon as a value higher than the minimum value found in the previous call (β) is found, that value is used. But since this is the MAX part of the function, don't we want the HIGHEST value, not just ANY value that is greater than beta?
This is the trimming phase of the algorithm, in the MaxPlayer clause (when computing the max value for the player at this node):
β is the parameter of the function that acts as the "trimming factor". It represents the best (lowest) score found so far by the parent of the current node, which is a minimizing node: the parent has already found a solution worth β.
Now, if we continued iterating over the children, we would get something at least as good as the current α. Since β ≤ α, the parent, a minimizing node, will NEVER choose this α (or any value greater than it); it will choose a value that is β or lower, and the current node has no chance of producing such a value, so we can trim the calculation.
Example:
      MIN
     /   \
    5    MAX
        / | \
       6  8  4
When evaluating the MAX node with the normal minimax algorithm, we would return 8. However, we know that the MIN node above is going to compute min(5, MAX(6, 8, 4)). Since after reading 6 we already know max(6, 8, 4) >= 6, the MIN above will pick 5 no matter what the remaining children are: min(5, max(6, 8, 4)) = 5. So we can return 6 without continuing the computation.
This is the intuition for one level, it is of course done recursively to "flow" to all levels with the same idea.
The same idea holds for the trimming condition in the MIN vertex.
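The example can be checked mechanically. This small Python sketch records which leaves actually get evaluated (the tree encoding is mine: a leaf is a number, an internal node is a list of children, and the root here is the MIN node):

```python
def alphabeta(node, alpha, beta, maximizing, visited):
    if not isinstance(node, list):
        visited.append(node)   # record each leaf that is actually evaluated
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if beta <= alpha:
                break          # beta cut-off: MIN above already has <= beta
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, visited))
            beta = min(beta, value)
            if beta <= alpha:
                break          # alpha cut-off
        return value

visited = []
result = alphabeta([5, [6, 8, 4]], float('-inf'), float('inf'), False, visited)
```

After the run, result is 5 and visited is [5, 6]: the leaves 8 and 4 were cut off, exactly as described above.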
