I'm looking at this pseudocode for the Minimax algorithm:
Function Minimax-Decision(state) returns an action
;inputs: state (current game state)
;'E' means element of, 'a' is the action
return a E Actions(state) maximizing Min-Value(Result(a, state))
Function Max-Value(state) returns a utility value
if Terminal-Test(state) then return Utility(state)
v <-- -infinity
for a, s in Successors(state) do v <-- Max(v, Min-Value(s))
return v
Function Min-Value(state) returns a utility value
if Terminal-Test(state) then return Utility(state)
v <-- infinity
for a, s in Successors(state) do v <-- Min(v, Max-Value(s))
return v
I think I know how the Minimax algorithm works. You fill in "score" values from the bottom up, starting from the utility scores. Max's nodes take the largest of their children's values; Min's take the smallest. Max predicts that Min will always try to put Max in the worst position possible for the next turn and, using that knowledge, Max tries to make the best move possible.
So, for a tree with order Max, Min, Max from the top:
1) Max wants to make the best move for himself (max utility) so C1=5,C2=11,C3=8,etc
2) Max predicts that Min will want to put Max in the worst position possible (restrict Max to smallest utility) so B1=5,B2=2,B3=3
3) Max wants to make best possible move, so A=B1=5
What's confusing me about the pseudocode is the double recursion, and the purpose of v. Can anyone break this one down for me?
Thanks for reading everyone!
I think you can understand this code by walking through an informal proof by induction that it works for trees of depth d.
For depth 1 you just have a single node and both Min-Value and Max-Value return the utility of this node.
For depth d > 1, consider Min-Value first. v starts off as infinity, so at the first call to Min(v, Max-Value(s)), v gets set to the utility of a child node as computed by Max (because it is Max's turn after Min), and we know this value is correct by induction because the child subtree is of depth d-1. (This first call amounts to an assignment, because v = infinity.) Later calls to Min(v, Max-Value(s)) in this routine end up computing the Min of Max-Value(s) over all children of the node passed in. So Min-Value ends up computing the minimum utility over all children of the node passed in, which is the value of that node to Min when it is Min's turn to move.
The argument for Max-Value is much the same as for Min-Value, so the induction tells you that, for trees of any depth, both Min-Value and Max-Value return to you the value of the node passed to them, when it is Max or Min's turn to move and make the choices associated with that node.
You could also show by induction that what this code does is equivalent to what you have described.
So the double recursion arises because it allows Max and Min to take alternate turns as you walk up the tree from the leaves, and v is a temporary which is used to work out the min or max value of all the child nodes of a tree.
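To make the double recursion concrete, here is a minimal runnable sketch in Python. The game object with actions, result, is_terminal and utility methods is an assumed interface standing in for Actions, Result, Terminal-Test and Utility from the pseudocode:

import math

def max_value(state, game):
    # the value of state when it is Max's turn to move
    if game.is_terminal(state):
        return game.utility(state)
    v = -math.inf                 # v: best value found among children so far
    for a in game.actions(state):
        v = max(v, min_value(game.result(state, a), game))
    return v

def min_value(state, game):
    # the value of state when it is Min's turn to move
    if game.is_terminal(state):
        return game.utility(state)
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value(game.result(state, a), game))
    return v

def minimax_decision(state, game):
    # Max picks the action whose resulting state has the best Min-value
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))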
We're given a directed tree to work with. We define the concepts of p-ancestor and p-cousin as follows:
p-ancestor: A node is a 1-ancestor of another if it is its parent. It is a p-ancestor of a node if it is the parent of that node's (p-1)-ancestor.
p-cousin: A node is a p-cousin of another if they share the same p-ancestor.
For example, consider the tree below.
4 has three 1-cousins, i.e., 3, 4 and 5, since they all share the common 1-ancestor, which is 1.
For a particular tree, the problem is as follows. You are given multiple pairs of (node,p) and are supposed to count (and output) the number of p-cousins of the corresponding nodes.
A slow algorithm would be to crawl up to the p-ancestor and run a BFS for each node.
What is the (asymptotically) fastest way to solve the problem?
If an off-line solution is acceptable, two depth-first searches can do the job.
Assume that we can index all of those n queries (node, p) from 0 to n - 1.
We can convert each query (node, p) into another type of query (ancestor, p) as follows:
The answer for query (node, p), where node is at level a (the distance from the root to this node is a), is the number of level-a descendants of its ancestor at level a - p. So, for each query, we can find that ancestor:
Pseudo code
dfs(int node, int level, int[] path, int[] ancestorForQuery, List<Query>[] data){
    path[level] = node;
    // recursively visit all child nodes (with level + 1)
    for(Query query : data[node])
        if(query.p <= level)
            ancestorForQuery[query.index] = path[level - query.p];
}
Now, after the first DFS, instead of the original queries, we have a new type of query (ancestor, p).
Assume that we have an array count which at index i stores the number of nodes at level i. For a node a at level x whose p-descendants we need to count, the result of the query is:
query result = count[x + p] after we visit a - count[x + p] before we visit a
Pseudo code
dfs2(int node, int level, int[] result, int[] count, List<TransformedQuery>[] data){
    count[level]++;
    for(TransformedQuery query : data[node]){
        result[query.index] -= count[level + query.p];
    }
    // recursively visit all child nodes (with level + 1)
    for(TransformedQuery query : data[node]){
        result[query.index] += count[level + query.p];
    }
}
Result of each query is stored in result array.
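For reference, here is a runnable Python sketch of the whole off-line scheme under the same assumptions (children is an adjacency list of the rooted tree; a query whose p-ancestor does not exist keeps the answer 0; the counts include the query node itself, so subtract 1 if a node should not be its own cousin):

from collections import defaultdict

def count_p_cousins(children, root, queries):
    # queries: list of (node, p) pairs, indexed 0..n-1 as above
    q_at = defaultdict(list)               # node -> [(query index, p)]
    for i, (v, p) in enumerate(queries):
        q_at[v].append((i, p))

    # First DFS: resolve each (node, p) to (ancestor, p) via a path stack.
    anc_q = defaultdict(list)              # ancestor -> [(query index, p)]
    path, stack = [], [(root, False)]
    while stack:
        node, done = stack.pop()
        if done:
            path.pop()
            continue
        path.append(node)
        level = len(path) - 1
        for i, p in q_at[node]:
            if p <= level:
                anc_q[path[level - p]].append((i, p))
        stack.append((node, True))
        for c in children[node]:
            stack.append((c, False))

    # Second DFS: count[level] is the number of nodes of that level seen
    # so far; the before/after difference counts the ancestor's level + p
    # descendants, exactly as in dfs2 above.
    result = [0] * len(queries)
    count = defaultdict(int)
    stack = [(root, 0, False)]
    while stack:
        node, level, done = stack.pop()
        if done:
            for i, p in anc_q[node]:
                result[i] += count[level + p]
            continue
        count[level] += 1
        for i, p in anc_q[node]:
            result[i] -= count[level + p]
        stack.append((node, level, True))
        for c in children[node]:
            stack.append((c, level + 1, False))
    return result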
If p is fixed, I suggest the following algorithm:
Let's say that count[v] is the number of p-children of v. Initially all count[v] are set to 0. And pparent[v] is the p-parent of v.
Let's now run a dfs on the tree and keep the stack of visited nodes, i.e. when we visit some v, we put it into the stack. Once we leave v, we pop.
Suppose we've come to some node v in our dfs. Let's do count[stack[size - p]]++, indicating that v is a p-child of stack[size - p]. Also set pparent[v] = stack[size - p].
Once your dfs is finished, you can calculate the desired number of p-cousins of v like this:
count[pparent[v]]
The complexity of this is O(n + m) for the dfs and O(1) for each query.
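A Python sketch of that preprocessing, with children an assumed adjacency list of the rooted tree (the recursion would need an increased limit for very deep trees):

def preprocess(children, root, p):
    # one dfs with an explicit stack of the current root-to-node path;
    # count[u] = number of p-children of u, pparent[v] = p-parent of v
    n = len(children)
    count, pparent = [0] * n, [-1] * n
    path = []
    def dfs(v):
        path.append(v)
        if len(path) > p:              # stack[size - p] exists
            u = path[-p - 1]
            count[u] += 1              # v is a p-child of u
            pparent[v] = u
        for c in children[v]:
            dfs(c)
        path.pop()
    dfs(root)
    return count, pparent

# each query is then count[pparent[v]] (0 if pparent[v] == -1);
# subtract 1 if a node should not count as its own p-cousin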
First I'll describe a fairly simple way to answer each query in O(p) time that uses O(n) preprocessing time and space, and then mention a way that query times can be sped up to O(log p) time for a factor of just O(log n) extra preprocessing time and space.
O(p)-time query algorithm
The basic idea is that if we write out the sequence of nodes visited during a DFS traversal of the tree in such a way that every node is written out at a vertical position corresponding to its level in the tree, then the set of p-cousins of a node form a horizontal interval in this diagram. Note that this "writing out" looks very much like a typical tree diagram, except without lines connecting nodes, and (if a postorder traversal is used; preorder would be just as good) parent nodes always appearing to the right of their children. So given a query (v, p), what we will do is essentially:
Find the p-th ancestor u of the given node v. Naively this takes O(p) time.
Find the p-th left-descendant l of u -- that is, the node you reach after repeating the process of visiting the leftmost child of the current node, p times. Naively this takes O(p) time.
Find the p-th right-descendant r of u (defined similarly). Naively this takes O(p) time.
Return the value x[r] - x[l] + 1, where x[i] is a precalculated value that records the number of nodes in the sequence described above that are at the same level as, and at or to the left of, node i. This takes constant time.
The preprocessing step is where we calculate x[i], for each 1 <= i <= n. This is accomplished by performing a DFS that builds up a second array y[] that records the number y[d] of nodes visited so far at depth d. Specifically, y[d] is initially 0 for each d; during the DFS, when we visit a node v at depth d, we simply increment y[d] and then set x[v] = y[d].
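As a sketch, the preprocessing and the four query steps could look like this in Python (parent and children are assumed lookups; as in the description above, the leftmost/rightmost descents are assumed to reach v's level):

from collections import defaultdict

def build_x(children, root):
    # x[v] = nodes at v's level written out at or before v (postorder)
    x, y = {}, defaultdict(int)       # y[d] = nodes seen so far at depth d
    def dfs(v, d):
        for c in children[v]:         # children before their parent
            dfs(c, d + 1)
        y[d] += 1
        x[v] = y[d]
    dfs(root, 0)
    return x

def count_cousins(v, p, parent, children, x):
    u = v
    for _ in range(p):                # step 1: p-th ancestor, O(p)
        u = parent[u]
    l = r = u
    for _ in range(p):                # steps 2 and 3: extreme descendants
        l = children[l][0]
        r = children[r][-1]
    return x[r] - x[l] + 1            # step 4: O(1)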
O(log p)-time query algorithm
The above algorithm should already be fast enough if the tree is fairly balanced -- but in the worst case, when each node has just a single child, O(p) = O(n). Notice that it is the navigation up and down the tree in the first 3 of the above 4 steps that forces O(p) time -- the last step takes constant time.
To fix this, we can add some extra pointers to make navigating up and down the tree faster. A simple and flexible way uses "pointer doubling": For each node v, we will store log2(depth(v)) pointers to successively higher ancestors. To populate these pointers, we perform log2(maxDepth) DFS iterations, where on the i-th iteration we set each node v's i-th ancestor pointer to its (i-1)-th ancestor's (i-1)-th ancestor: this takes just two pointer lookups per node per DFS. With these pointers, moving any distance p up the tree always takes at most log(p) jumps, because the distance can be reduced by at least half on each jump. The exact same procedure can be used to populate corresponding lists of pointers for "left-descendants" and "right-descendants" to speed up steps 2 and 3, respectively, to O(log p) time.
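A sketch of the pointer-doubling table for step 1 in Python; parent[root] is assumed to be -1, and the same construction would be repeated for leftmost and rightmost children to speed up steps 2 and 3:

def build_up_table(parent, max_depth):
    # up[j][v] = the 2^j-th ancestor of v, or -1 above the root
    n = len(parent)
    up = [list(parent)]
    for j in range(1, max(1, max_depth.bit_length())):
        prev = up[-1]
        up.append([prev[prev[v]] if prev[v] >= 0 else -1 for v in range(n)])
    return up

def kth_ancestor(v, p, up):
    j = 0
    while p > 0 and v >= 0 and j < len(up):
        if p & 1:
            v = up[j][v]              # one jump halves the remaining distance
        p >>= 1
        j += 1
    return v if p == 0 else -1        # -1: walked out above the root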
A slightly more theoretical question, but here it is nonetheless.
Setting
Let:
UCYCLE = { ⟨G⟩ : G is an undirected graph that contains a simple cycle }.
My Solution
We show UCYCLE is in L by constructing an algorithm M that decides UCYCLE using logarithmic space.
M = "On input ⟨G⟩, where G = (V,E):
For each v_i in V, for each v_j in Neighbor(v_i), store the current v_i and v_j
Traverse the edge (v_i,v_j) and then follow all possible paths through G using DFS.
If we encounter v_k in Neighbor(v_i) \ {v_j} such that there is an edge (v_i,v_k) in E, then ACCEPT. Else REJECT."
First we claim M decides UCYCLE. If there exists a cycle in $G$, then it must start and end on some vertex $v_i$; step one of $M$ tries all such $v_i$ and therefore must find the desired vertex. Next, suppose the cycle starts at $v_i$; then there must exist a starting edge $(v_i,v_j)$ so that if we follow the cycle, we come back to $v_i$ through a different edge $(v_k,v_i)$, and we accept in step three. Since the graph is undirected, we can always come back to $v_i$ through $(v_i,v_j)$, but $M$ does not accept this case. By construction, neither does $M$ accept if we come upon some $v_k \in Neighbor(v_i) \setminus \{v_j\}$ but there is no edge from $v_k$ to $v_i$.
Now we show M is in L. First, if the vertices are labeled $1,\ldots,n$ where $|V| = n$, then it requires $\log(n)$ bits to specify each $v_i$. Next, note that in $M$ we only need to keep track of the current $v_i$ and $v_j$, so M uses $2\log(n) = O(\log n)$ bits of space, which is in L.
My Problem
My problem is how do you perform DFS on the graph in $log(n)$ space. For example, in the worst case where each vertex has degree $n$, you'd have to keep a counter of which vertex you took on a particular path, which would require $n log(n)$ space.
The state you maintain as you search is four vertices: (v_i, v_j, prev, current).
The next state is: (v_i, v_j, current, v) where v is the next neighbour of current after prev (wrapping back to the first if prev is the numerically last neighbour of current).
You stop when current is a neighbour of v_i, and you have found a cycle if it is not v_j.
In pseudo-code, something like this:
for v_i in vertices
for v_j in neighbours(v_i)
current, prev = v_j, v_i
repeat
idx = neighbours(current).index(prev)
idx = (idx + 1) % len(neighbours(current))
current, prev = neighbours(current)[idx], current
until current adjacent to v_i
if current != v_j
return FOUND_A_CYCLE
return NO_CYCLES_EXIST
Intuitively, this is saying: for each point in a maze, and for each corridor from that point, follow the left-hand wall; if, when you can see the start point again, it is not through the original corridor, then you've found a cycle.
While it's easy to see that this algorithm uses O(log n) space, there's some proof necessary to show that this algorithm terminates.
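Here is a direct Python translation of that pseudo-code. Since termination is the delicate part, the sketch adds an explicit step bound: the walk is a deterministic map on at most m directed-edge states (m being twice the number of edges), so if it finds nothing within m steps for a given starting edge, it never will:

def ucycle(neighbours):
    # neighbours: dict mapping each vertex to an ordered list of neighbours;
    # per attempt we keep only (v_i, v_j, prev, current) and a counter
    m = sum(len(ns) for ns in neighbours.values())
    for v_i in neighbours:
        for v_j in neighbours[v_i]:
            prev, current = v_i, v_j
            for _ in range(m):
                idx = neighbours[current].index(prev)
                idx = (idx + 1) % len(neighbours[current])
                prev, current = current, neighbours[current][idx]
                if v_i in neighbours[current]:
                    break
            if v_i in neighbours[current] and current != v_j:
                return True           # FOUND_A_CYCLE
    return False                      # NO_CYCLES_EXIST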
I'm looking for an algorithm to solve this problem. I have to implement it (so I need a non-NP solution XD).
I have a complete graph with a cost on each arc and a reward on each vertex. I have only a start point, and the end point doesn't matter, because the goal is to visit as many vertices as possible in order to collect the maximum possible reward, subject to a maximum cost limit. (For this reason the end position doesn't matter.)
I think finding the optimal solution is NP-hard, but an approximate solution would be appreciated too :D
Thanks
I'm trying to study how to solve the problem with branch & bound...
Update: complete problem description
I have a region in which there are several areas, each identified by an id and an x,y,z position. Each vertex identifies one of these areas. The maximum number of areas is 200.
From a start point S, I know the cost, specified in seconds and stored on the arc (so only integer values), to reach each vertex from each other vertex (a complete graph).
When I visit a vertex I get a reward (float values).
My objective is to find a path in the graph that maximizes the reward, subject to a cost constraint on the path. Indeed, I only have a limited time to complete the path (for example, 600 seconds).
The graph is represented as adjacency matrices for the cost and the reward (but I can change the representation if useful).
I can visit a vertex more than once, but I collect its reward only once!
Since you're interested in branch and bound, let's formulate a linear program. Use Floyd–Warshall to adjust the costs minimally downward so that cost(uw) ≤ cost(uv) + cost(vw) for all vertices u, v, w.
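For instance, the metric adjustment can be a plain Floyd–Warshall pass; a minimal sketch, assuming cost is an n × n matrix with cost[v][v] = 0:

def metric_closure(cost):
    # shrink costs to shortest-path costs, so that afterwards
    # cost[u][w] <= cost[u][v] + cost[v][w] for all u, v, w
    n = len(cost)
    d = [row[:] for row in cost]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d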
Let s be the starting vertex. We have 0-1 variables x(v) that indicate whether vertex v is part of the path and 0-1 variables y(uv) that indicate whether the arc uv is part of the path. We seek to maximize
sum over all vertices v of reward(v) x(v).
The constraints unfortunately are rather complicated. We first relate the x and y variables.
for all vertices v ≠ s, x(v) - sum over all vertices u of y(uv) = 0
Then we bound the cost.
sum over all arcs uv of cost(uv) y(uv) ≤ budget
We have (pre)flow constraints to ensure that the arcs chosen look like a path possibly accompanied by cycles (we'll handle the cycles shortly).
for all vertices v:
    sum over all vertices u of y(uv) - sum over all vertices w of y(vw)
        ≥ -1 if v = s
        ≥  0 if v ≠ s
To handle the cycles, we add cut covering constraints.
for all subsets of vertices T such that s is not in T,
for all vertices t in T,
x(t) - sum over all vertices u not in T and v in T of y(uv) ≥ 0
Because of the preflow constraints, a cycle necessarily is disconnected from the path structure.
There are exponentially many cut covering constraints, so when solving the LP, we have to generate them on demand. This means finding the minimum cut between s and each other vertex t, then verifying that the capacity of the cut is at least x(t). If we find a violation, then we add the constraint and use the dual simplex method to find the new optimum (repeat as necessary).
I'm going to pass on describing the branching machinery – this should be taken care of by your LP solver anyway.
Finding the optimal solution
Here is a recursive approach to solving your problem.
Let's begin with some definitions :
Let A = (A_i), 1 ≤ i ≤ N, be the areas.
Let w_i,j = w_j,i be the time cost for traveling from A_i to A_j and vice versa.
Let r_i be the reward for visiting area A_i.
Here is the recursive procedure that outputs the exact requested solution (pseudo-code):
List<Area> GetBestPath(int time_limit, Area S, int *rwd) {
    int best_reward(0), possible_reward(0), best_fit(0);
    List<Area> possible_path[N] = {[]};
    if (time_limit < 0) {
        return [];                          // out of time: empty path
    }
    if (!S.visited) {                       // collect each reward only once
        *rwd += S.reward;
        S.visit();
    }
    for (int i = 0; i < N; ++i) {
        if (S.index != i) {
            possible_reward = 0;            // reset before trying neighbor i
            possible_path[i] = GetBestPath(time_limit - W[S.index][i], A[i], &possible_reward);
            if (possible_reward > best_reward) {
                best_reward = possible_reward;
                best_fit = i;
            }
        }
    }
    *rwd += best_reward;
    possible_path[best_fit].push_front(S);
    return possible_path[best_fit];
}
For clarity, I assumed the A_i to be globally accessible, as well as the w_i,j.
Explanations
You start at S. First thing you do? Collect the reward and mark the node as visited. Then you have to check which way to go is best among S's N-1 neighbors (let's call them N_S,i for 1 ≤ i ≤ N-1).
This is the exact same thing as solving the problem for NS,i with a time limit of :
time_limit - W(S ↔ N_S,i)
And since you mark the visited nodes, when arriving at an area you first check whether it is marked. If so, you get no reward... Else you collect the reward and mark the area as visited...
And so forth !
The ending condition is when time_limit (C) becomes negative. This tells us we reached the limit and cannot proceed to further moves : the recursion ends. The final path may contain useless journeys if all the rewards have been collected before the time limit C is reached. You'll have to "prune" the output list.
Complexity ?
Oh this solution is soooooooo awful in terms of complexity !
Each call leads to N-1 calls... until the time limit is reached. The longest possible call sequence is obtained by going back and forth each time along the shortest edge. Let w_min be the weight of this edge.
Then, obviously, the overall complexity is bounded by N^(C/w_min).
This is huuuuuge.
Another approach
Maintain a hash table of all the visited nodes.
On the other side, maintain a max-priority queue (e.g. using a MaxHeap) of the nodes that have not been collected yet (the top of the heap is the node with the highest reward). The priority value for each node A_i in the queue is set as the couple (r_i, E[w_i,j]).
Pop the heap : Target <- heap.pop().
Compute the shortest path to this node using Dijkstra's algorithm.
Check out the path : If the cost of the path is too high, then the node is not reachable, add it to the unreachable nodes list.
Else collect all the uncollected nodes that you find in it and ...
Remove each collected node from the heap.
Set Target as the new starting point.
In either case, proceed to step 1. until the heap is empty.
Note: a hash table is best suited to keep track of the collected nodes. This way, we can check a node in a path computed using Dijkstra in O(1).
Likewise, maintaining a hashtable leading to the position of each node in the heap might be useful to optimise the "pruning" of the heap, when collecting the nodes along a path.
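A compact Python sketch of this heuristic, under two simplifying assumptions: the cost matrix has already been closed under shortest paths (as in the other answer), so the cheapest route to the target is the direct arc and the "collect along the path" refinement disappears; and the priority is the reward alone rather than the couple (r_i, E[w_i,j]):

import heapq

def greedy_collect(cost, reward, start, budget):
    n = len(cost)
    total, pos, left = reward[start], start, budget
    path = [start]
    heap = [(-reward[v], v) for v in range(n) if v != start]
    heapq.heapify(heap)               # max-priority queue on the reward
    while heap:
        _, target = heapq.heappop(heap)
        if cost[pos][target] > left:
            continue                  # too expensive: treat as unreachable
        left -= cost[pos][target]
        total += reward[target]
        path.append(target)
        pos = target                  # the target becomes the new start point
    return total, path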
A little analysis
This approach is slightly better than the first one in terms of complexity, but it may not lead to the optimal result. In fact, it can even perform quite poorly on some graph configurations. For example, if all nodes have a reward r except one node T that has r+1, and W(N ↔ T) = C for every node N while the other edges are all affordable, then this heuristic will only collect T and miss every other node. In this particular case, the best solution would have been to ignore T and collect everyone else, leading to a reward of (N-1)·r instead of only r+1.
I was trying to split a binary-tree into k similar-sized parts (by removing k-1 edges). Is there any efficient algorithm for this problem? Or is it NP-hard? Any pointers to papers, problem definitions, etc?
-- One reasonable metric for evaluating the quality of partitioning could be the size gap between the largest and smallest partition; another metric could be making the smallest partition having as many vertices as possible.
I can suggest a pretty fast solution for the metric of making the smallest part have as many vertices as possible.
Let's suppose we guess the size S of the smallest part and want to check whether it is achievable.
First I want to make a few statements:
If the total size of the tree is bigger than S, there is at least one subtree which is bigger than S and all of whose own subtrees are smaller. (To find one, it's enough to keep descending into the bigger child.)
If there is some way to split the tree where the size of the smallest part is >= S, and we have a subtree T all of whose subtrees are smaller than S, then we can guarantee that no edges inside T are deleted (because any such deletion would create a part smaller than S).
If there is some way to split the tree where the size of the smallest part is >= S, and we have some subtree T whose size is >= S, which has no deleted edges inside but is not one of the parts, then we can split the tree another way, where subtree T is one of the parts itself and all parts are still no smaller than S. (Just move some extra vertices from the original part to any other part; that other part will not become smaller.)
So here is an algorithm to check whether we can split the tree into k parts, each no smaller than S:
Find all suitable vertices (roots of subtrees of size >= S whose children's subtrees both have size < S) and add them to a list. You can start from the root and descend through vertices while their subtrees are bigger than S.
While the list is not empty and the number of parts is less than k, take a vertex from the list and cut it off the tree. Then update the subtree sizes of its parent vertices, and add any that become suitable to the list.
You don't even need to update all the parent vertices, only up to the first one whose new subtree size is still bigger than S; vertices above it can't become suitable for the list yet and can be updated later.
You may need to reconstruct the tree afterwards to restore the original subtree sizes assigned to the vertices.
Now we can use the bisection method. The upper bound is S_max = n/k, and the lower bound can be derived from the equation (2*S_min - 1)*(k - 1) + S_min = n: it guarantees that if we cut off k-1 subtrees, each with two child subtrees of size S_min - 1, we still have a part of size S_min left. This gives S_min = (n + k - 1)/(2*k - 1).
And now we can check S = (S_max + S_min)/2.
If we manage to construct a partition using the method above, then S is less than or equal to its largest possible value; moreover, the smallest part in the constructed partition may be bigger than S, and we can set the new lower bound to that instead of S. If we fail, S is bigger than possible.
The time complexity of one check is k multiplied by the number of parent nodes updated each time. For a well-balanced tree the number of updated nodes is constant (using the trick explained earlier, we do not update all parent nodes); even in the worst case of an extremely unbalanced tree it is not bigger than n/k. Searching for suitable vertices has very similar behavior (all vertices passed while searching will be updated later).
The difference between n/k and (n + k - 1)/(2*k - 1) is proportional to n/k.
So we have time complexity O(k * log(n/k)) in the best case if we have precalculated subtree sizes, O(n) if subtree sizes are not precalculated, and O(n * log(n/k)) in the worst case.
This method may lead to a situation where the last part is comparatively big, but I suppose that once you've got the suggested method you can figure out some improvements to minimize that.
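For illustration, here is a compact Python sketch of the check plus the bisection. It uses the same idea in a simpler form: a vertex is "suitable" exactly when its subtree first reaches size S, so we can cut greedily as soon as that happens, recomputing subtree sizes on every check instead of maintaining them (children is an assumed adjacency list of the rooted tree; the recursion would need a higher limit for very deep trees):

def max_parts(children, root, S):
    # cut a subtree off as soon as it reaches size S; leftover vertices
    # near the root are merged into an adjacent part, only enlarging it
    parts = 0
    def size(v):
        nonlocal parts
        s = 1 + sum(size(c) for c in children[v])
        if s >= S:
            parts += 1
            return 0                  # cut the edge above v
        return s
    size(root)
    return parts

def largest_min_part(children, root, n, k):
    # bisection on S: the largest S with at least k parts of size >= S
    lo, hi = 1, n // k                # S_max = n/k, as derived above
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if max_parts(children, root, mid) >= k:
            lo = mid
        else:
            hi = mid - 1
    return lo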
Here is a polynomial deterministic solution:
Let's assume that the tree is rooted and there are two fixed values: MIN and MAX - minimum and maximum allowed size of one component.
Then one can use dynamic programming to check if there is a partition such that each component size is between MIN and MAX:
Let's define f(node, cuts_count, current_count) to be true if and only if there is a way to make exactly cuts_count cuts in node's subtree so that current_count vertices stay connected to node and every component that has been cut off has size between MIN and MAX.
The base cases for the leaves are: f(leaf, 1, 0) (cut the edge from the parent to the leaf) is true if and only if MIN <= 1 and MAX >= 1; f(leaf, 0, 1) (do not cut it) is always true. (f is false for all other values of cuts_count and current_count.)
To compute f for a node(not a leaf), one can use the following algorithm:
//Combine all possible children states.
for cuts_left in 0..k
    for cuts_right in 0..k
        for cnt_left in 0..left_subtree_size
            for cnt_right in 0..right_subtree_size
                if f(left_child, cuts_left, cnt_left) is true and
                   f(right_child, cuts_right, cnt_right) is true then
                    f(node, cuts_left + cuts_right, cnt_left + cnt_right + 1) = true
//Cut the edge from this node to its parent.
for cuts in 0..k-1
    for cnt in 0..node's_subtree_size
        if f(node, cuts, cnt) is true and MIN <= cnt <= MAX:
            f(node, cuts + 1, 0) = true
What this pseudo-code does is combine all possible states of the node's children to compute all reachable states for this node (the first bunch of for loops), and then produce the rest of the reachable states by cutting the edge between this node and its parent (the second bunch of for loops). (A state means a (node, cuts_count, current_count) tuple; I call it reachable if f(state) is true.)
That is the case for a node with two children; the case with one child can be processed in a similar manner.
Finally, if f(root, k, 0) is true then it is possible to find a partition that satisfies the size condition, and it is not possible otherwise. We need to "pretend" that we make k cuts here because we also cut an imaginary edge from the root to its parent (this edge and this parent don't actually exist) when we compute f for the root (to avoid a corner case).
The space complexity of this algorithm (for fixed MIN and MAX) is O(n^2 * k) (n is the number of nodes), and the time complexity is O(k^2 * n^2). It might seem that the complexity is actually O(k^2 * n^3), but it is not so, because the product of the numbers of vertices in the left and right subtrees of a node is exactly the number of pairs of nodes whose least common ancestor is that node. The total number of pairs of nodes is O(n^2) (and each pair has only one least common ancestor), so the sum of the products of left and right subtree sizes over all nodes is O(n^2).
One can simply try all possible MIN and MAX values and choose the best, but it can be done faster. The key observation is that if there is a solution for MIN and MAX, there is always a solution for MIN and MAX + 1. Thus, one can iterate over all possible values of MIN(n / k different values) and apply binary search to find the smallest MAX which gives a valid solution(log n iterations). So the overall time complexity is O(n^2 * k^2 * n / k * log n) = O(n^3 * k * log n). However, if you want to maximize MIN(not to minimize the difference between MAX and MIN), you can simply use this algorithm and ignore MAX value everywhere(by setting its value to n). Then no binary search over MAX would be required, but one would be able to binary search over MIN instead and obtain an O(n^2 * k^2 * log n) solution.
To reconstruct the partition itself, one can start from f(root, k, 0) and apply the steps we used to compute f, but this time in opposite direction(from root to leaves). It is also possible to save the information about how to get the value of each state(what children's states were combined or what was the state before the edge was cut)(and update it appropriately during the initial computation of f) and then reconstruct the partition using this data(if my explanation of this step seems not very clear, reading an article on dynamic programming and reconstructing the answer might help).
So, there is a polynomial solution for this problem on a binary tree(even though it is NP-hard for an arbitrary graph).
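As an illustration of the dynamic program, here is a direct Python rendering that stores, for every node, the set of reachable (cuts_count, current_count) pairs; left and right are assumed maps from each node to its child or None, and this version favours clarity over the stated complexity bounds:

def can_partition(left, right, root, k, MIN, MAX):
    # answers whether f(root, k, 0) is true, the k-th cut being the
    # imaginary edge above the root
    def dp(v):
        if v is None:
            return {(0, 0)}           # an absent child contributes nothing
        states = set()
        for cl, nl in dp(left[v]):    # combine the children's states
            for cr, nr in dp(right[v]):
                if cl + cr <= k:
                    states.add((cl + cr, nl + nr + 1))
        for c, cnt in list(states):   # optionally cut the edge to the parent
            if c < k and MIN <= cnt <= MAX:
                states.add((c + 1, 0))
        return states
    return (k, 0) in dp(root)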
The alpha-beta pruning algorithm is as follows:
function ALPHA-BETA-SEARCH(state) returns an action
v <- MAX-VALUE(state, -∞, +∞)
return the action in ACTIONS(state) with value v
function MAX-VALUE(state, α, β) returns a utility value
if TERMINAL-TEST(state) then
return UTILITY(state)
v <- -∞
for each a in ACTIONS(state) do
v <- MAX(v, MIN-VALUE(RESULT(state, a), α, β))
if v ≥ β then
return v
α <- MAX(α, v)
return v
function MIN-VALUE(state, α, β) returns a utility value
if TERMINAL-TEST(state) then
return UTILITY(state)
v <- +∞
for each a in ACTIONS(state) do
v <- MIN(v, MAX-VALUE(RESULT(state, a), α, β))
if v ≤ α then
return v
β <- MIN(β, v)
return v
The algorithm states that the ACTIONS function will give a list of all the available actions in a given state.
Let's take, e.g., the game of checkers. Suppose that one checker, say A, is diagonally adjacent to another checker, say B. If A can take B, then that is an action (and an unavoidable one, since we must take the other checker if we can). Or, if there are multiple takes, these are all actions. This situation can actually be drawn using pencil and paper. More specifically, the situation could be represented using a tree, where each node represents a state and the edges to its child nodes represent the possible actions from that state.
I think you don't need to explicitly store a tree data structure. However, the algorithm above contains the following statement: return the action in ACTIONS(state) with value v. Now, the ACTIONS(state) will return the possible actions from a certain state (e.g. where A needs to play).
If we work through the whole algorithm, we will get a value v, which was passed up from a terminal node, and we should follow the action that leads towards it.
Now, suppose I do not store the full tree of all possible moves or actions from every state. When "return the action in ACTIONS(state) with value v" is executed, I only have the actions that lead me to the next state, and I know that one of those actions leads me down the best possible path. But if I don't have extra bookkeeping, e.g. the full tree, will I be able to match the actions with the value v?
You shouldn't build an explicit tree structure in a minimax algorithm, and in practical situations, you can't. A minimax algorithm with a depth bound d and a branching factor b traverses a tree of O(bᵈ) nodes, which very soon gets too large to store. (In the version you posted, there isn't even a depth bound, meaning that you would generate the entire game tree.)
The way to keep track of the state is to rewrite the top-level ALPHA-BETA-SEARCH as
function ALPHA-BETA-SEARCH(state) returns an action
best_a <- nil
max_v <- -∞
for each a in actions(state)
v <- MIN-VALUE(RESULT(state, a), max_v, +∞)
if v > max_v then
max_v <- v
best_a <- a
return best_a
That is, you "unroll" the top call to MAX-VALUE in the main function.
Note that, in the function above, you're looping over each action a and computing its v. When a certain v is the maximum you've found so far, you know that the action you computed it from is currently the best_a.
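In runnable Python, the unrolled top level together with the two standard routines might look like this; the game interface (actions, result, is_terminal, utility) is an assumed stand-in for the pseudocode's primitives:

import math

def max_value(state, alpha, beta, game):
    if game.is_terminal(state):
        return game.utility(state)
    v = -math.inf
    for a in game.actions(state):
        v = max(v, min_value(game.result(state, a), alpha, beta, game))
        if v >= beta:
            return v                  # beta cutoff
        alpha = max(alpha, v)
    return v

def min_value(state, alpha, beta, game):
    if game.is_terminal(state):
        return game.utility(state)
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value(game.result(state, a), alpha, beta, game))
        if v <= alpha:
            return v                  # alpha cutoff
        beta = min(beta, v)
    return v

def alpha_beta_search(state, game):
    best_a, max_v = None, -math.inf
    for a in game.actions(state):     # the unrolled top-level MAX-VALUE
        v = min_value(game.result(state, a), max_v, math.inf, game)
        if v > max_v:
            max_v, best_a = v, a
    return best_a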
Two issues I think are important to answer you:
The tree is built anyway - implicitly: each ACTIONS(vertex) op simply connects vertex to each of its sons - so there is no real need for additional tree building. And, of course, you can add properties such as v to each node of that tree.
Nevertheless, if you don't need and don't care about the actual tree, one possible solution is returning (v, state) [a tuple] instead of just v. All ops on the return value will be done on v, the same as they are now; the only place that will actually use state is the top-level ALPHA-BETA-SEARCH(). Of course, this requires less elegant MIN and MAX functions, since you will need to find not only the value v but also the vertex giving that value.
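A sketch of that tuple-returning variant, here returning (value, action) rather than (value, state), which serves the same purpose at the top level (same assumed game interface as above):

import math

def minimax(state, game, maximizing):
    # all comparisons use only the value; the action tags along so the
    # caller can read the best move off the returned tuple
    if game.is_terminal(state):
        return game.utility(state), None
    best = (-math.inf, None) if maximizing else (math.inf, None)
    for a in game.actions(state):
        v, _ = minimax(game.result(state, a), game, not maximizing)
        if (maximizing and v > best[0]) or (not maximizing and v < best[0]):
            best = (v, a)
    return best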