I'm building a database of chess evaluations (essentially a map from a chess position to an evaluation), and I want to use this to come up with a good move for given positions. The idea is to do a kind of "static" minimax, i.e.: for each position, use the stored evaluation if evaluations for child nodes (positions after next ply) are not available, otherwise use max (white to move)/min (black to move) evaluations of child nodes (which are determined in the same way).
The problem is, of course, loops in the graph, i.e. repeated positions. I can't fathom how to deal with this without making the whole thing vastly less efficient.
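Ignoring loops for a moment, the scheme from the question can be sketched as follows (a one-ply illustration only; `evals` and `children` are assumed placeholders, not real APIs):

```python
# Hedged sketch of the "static" minimax described above, ignoring loops.
# evals is an assumed dict from position keys to stored evaluations;
# children(pos) is an assumed helper yielding positions one ply deeper.
def static_eval(pos, evals, children, white_to_move):
    kids = [c for c in children(pos) if c in evals]
    if not kids:                      # no child evaluations available
        return evals.get(pos)         # fall back to the stored evaluation
    vals = [evals[c] for c in kids]
    return max(vals) if white_to_move else min(vals)
```

A recursive variant would call `static_eval` on each child instead of reading `evals` directly, which is exactly where the repetition loops bite.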
The ideas I have explored so far are:
assume an evaluation of 0 for any position that can be reached in a game with fewer moves than are currently evaluated. This is an invalid assumption, because - for example - if White plays A, it might not be desirable for Black to follow up with x, but if White plays B, then y -> A -> x -> -B -> -y might be the best line, resulting in the same position as A -> x, without any repetitions (-m denoting the inverse move to m here, lower case: Black moves, upper case: White moves).
having one instance for each possible way a position can be reached solves the loop problem, but this yields a bazillion instances in some positions and is therefore not practical
the fact that there is a loop from a position back to that position doesn't mean that it's a draw by repetition, because playing the repeating line may not be the best choice
I've tried iterating through the loops a few times to see if the overall evaluation would become stable. It doesn't, because in some cases, assuming the repeat is the best line means it isn't any longer - and then it goes back to the draw being the best line, etc.
I know that chess engines use transposition tables to detect positions already reached before, but I believe this doesn't address my problem, and I actually wonder if there isn't an issue with them: a position may be reachable through two paths in the search tree - one of them going through the same position before, so it's a repeat, and the other path not doing that. Then the evaluation for path 1 would have to be 0, but the one for path 2 wouldn't necessarily be (path 1 may not be the best line), so whichever evaluation the transposition table holds may be wrong, right?
I feel sure this problem must have a "standard / best practice" solution, but google failed me. Any pointers / ideas would be very welcome!
I don't understand what the problem is. A minimax evaluation, unless we've added randomness to it, will have the exact same result for any given board position combined with whose turn it is and other key info. If we have the space available to store common board_position+whose_turn+castling+en_passant+draw_related tuples (or hashes thereof), go right ahead. When reaching that tuple in any other evaluation, just return the stored value or rely on its more detailed record for more complex evaluations (if the search yielding that record was not exhaustive, we can interpret it differently in any one evaluation). If the program also plays chess with time limits on the game, an additional time dimension (maybe a few broad blocks) would probably be needed in the memoisation as well.
(I assume you've read common public info about transposition tables.)
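The memoisation described above might be sketched as follows (a minimal illustration; the key fields and the `search` callback are assumptions, not a real engine API):

```python
# Minimal memoisation sketch: key a position by everything that affects its
# evaluation (a real engine would hash these fields, e.g. with Zobrist keys).
def make_key(board, white_to_move, castling, ep_square):
    return (board, white_to_move, castling, ep_square)

memo = {}

def evaluate(key, search):
    if key not in memo:               # compute once...
        memo[key] = search(key)
    return memo[key]                  # ...reuse on every transposition
```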
I'm having a very difficult time determining how to translate a game state for a specific turn in a game I'm developing into a limited sequence of moves that represents the moves taken for that turn. I'd appreciate advice on how I can do this.
The rules for the game are relatively simple. There's a hex board, with hexes belonging to 2 players. On any given turn, pieces can already exist on the board, having been purchased on a previous turn, or they can be purchased onto the board (a yellow piece represents its being purchased onto the board this turn).
These pieces are "active", and can still be moved. Pieces can also be combined, and will still remain "active". They can be combined either by moving an existing piece onto another piece, or by purchasing a new piece onto an existing piece. When combined, an upgraded piece will exist on the target hex. Pieces can be of 3 strengths; X, Y, and Z. X combining with X gives Y, and X with Y gives Z.
Pieces can continue to be merged like this and remain "active". A piece can be moved to another hex in its own territory and remain "active". A piece stops being "active" when it is moved to capture the other player's hex. It cannot move after that, although it can still be combined with. Green below indicates an inactive piece.
A piece can also be summoned directly on top of another piece, resulting in an upgraded piece (if it was already active, it stays active; if it was inactive, it stays inactive):
Now, this is pretty easy to represent in game state; just update the state of the pieces and the board to reflect whatever's currently true. And it's quite easy to convert it into a sequence of moves as long as you theoretically allow for that sequence of moves to be unbounded; pieces could remain active and move to and fro ad infinitum. Of course, I want to keep the sequence of moves limited. This is where I'm having trouble. I have the following 2 moves:
Move piece to location
Summon piece to location
How can I convert the moves a player makes into a limited sequence of moves to represent what the player actually did, leading to the final state? I don't know if I'm missing something, but this seems to get almost impossibly complex to figure out. If you have pieces moving around within their own territory and remaining active, you might think you could just update the move in-place to the new coordinates instead of adding a new move to the new coordinates, but what if there is another move where a piece combines with that piece to form an upgraded piece, which relied upon the first piece moving to its first set of coordinates? Updating the move coordinates in-place now means that that second combination move becomes a regular move because it is now moving onto an empty hex, yet it should remain a combination move (the board state will in fact be the combined piece having moved to the new coordinates).
Conceptually, there should always be a limited sequence of moves that can represent any operation. However, I am finding it extremely hard to figure out how to write an algorithm to do this automatically. I think an algorithm that would at least prevent the unbounded nature of the moves would be to say "a piece's most recent move is updated instead of adding the new move to the list if that most recent move is not a combine or capture operation". That should always result in the game state being correctly created by the move set, and prevent unlimited cycles. However, that could still result in quite a lot of moves. For instance, if you had 10 pieces in a territory, you could move all 10, capture with 1, move the remaining 9, combine one with another, move the remaining 8, etc., potentially resulting in over 60 moves from 10 pieces. It would be nice if there were an algorithm to get this down a bit, and I'm still not 100% sure that even this algorithm doesn't have some edge cases where it wouldn't work.
Am I missing a relatively straightforward way to solve this problem? The rules must stay the same but I'm open to suggestions about perhaps introducing new move types if that would help solve the problem, too.
From what I understand from your question and the comments, the problem seems to be the large number of possible moves to be made in a single turn, including combinations. I couldn't think of a complete algorithm, but I think I can give you a pretty good starting point. At the end of the answer you will find the edge cases not covered here and possible solutions to them.
My suggestion is not to think in terms of movements, but in terms of events, and to check that each event is possible, according to the rules.
Previous assumptions and definitions.
The size of the board, the number of pieces and the number of purchasable pieces are finite.
Each piece is labelled with an ID of the form T_s, where T is the piece type (X, Y, Z) and s is a counter (1, 2, 3, 4...).
You have access to the information from the game state before the game turn and after it.
You can only purchase pieces of type X.
Possible events:
Piece moved
Piece purchased
There is no combination event because a combination can always be translated into one or more movement events. In other words: we can assume that if a piece is moved after being combined with another, there is an equivalent sequence of movements where both pieces just move to the final destination and get combined there: X_1 -> X_2 = Y_1; Y_1 -> NEWCELL is equivalent to X_1 -> NEWCELL; X_2 -> NEWCELL
The algorithm
My algorithm is based on the fact that you can represent the game state as disjoint sets of labelled pieces. So the first thing to do is to use a flood-fill algorithm to compute the disjoint areas (area = set of cells) of the board in the initial state that belong to the player. Let's call these areas A_1, A_2, etc. It is not necessary to have this representation for the final state.
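The flood fill mentioned above could look roughly like this (a sketch; `cells` is the set of the player's cells and `neighbours` an assumed adjacency helper for the hex grid):

```python
# Compute the disjoint areas A_1, A_2, ... of a player's cells by flood fill.
def flood_areas(cells, neighbours):
    areas, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        area, stack = set(), [start]
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            area.add(c)
            # only spread into cells that belong to the player
            stack.extend(n for n in neighbours(c) if n in cells)
        areas.append(area)
    return areas
```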
You also need to compute the difference between the initial piece set and the final one, to determine the number of combinations and purchases made.
A. Let N_T_s be the number of pieces of type T in the starting state and N_T_f the number of pieces of type T in the final state. You can calculate the number of purchased pieces like this: (N_X_f + 2*N_Y_f + 3*N_Z_f) - (N_X_s + 2*N_Y_s + 3*N_Z_s).
B. As the pieces are labelled, you can build a set NEW of pieces that were not present in the initial state. We will assume these are either the result of a purchase or a combination.
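The purchase count from step 2A is a weighted difference (each Y counts as two X, each Z as three); a tiny worked example:

```python
# Number of purchased pieces per step 2A: weight X=1, Y=2, Z=3 and subtract
# the starting total from the final total.
def purchased(n_x_s, n_y_s, n_z_s, n_x_f, n_y_f, n_z_f):
    start = n_x_s + 2 * n_y_s + 3 * n_z_s
    final = n_x_f + 2 * n_y_f + 3 * n_z_f
    return final - start
```

For example, starting with two X and ending with one X and one Y means exactly one X was purchased (and combined).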
Prepare some variables:
Set a counter PURCHASED to the initial value computed in step 2A.
For each area A_i
Create a set COMBINED_i with the pieces from A_i that are not present in the final state (as the only way for a piece to disappear is to be combined).
Define an area M_i as the union of A_i and all its adjacent cells.
For each area A_i and for each piece inside it:
A. If the piece is not inside NEW: its final position must be inside M_i. Given that a deactivated piece cannot move, it shouldn't have moved any further.
For each piece inside NEW:
The piece must be inside one of the M_i areas (otherwise the turn is invalid). Let's call it M_p.
A. If the piece is of type X: decrease the value of the counter PURCHASED. If the value is negative, the turn is not valid.
B. If the piece is of type Y:
If there exist two X pieces inside COMBINED_p, remove them from the set.
Else, if there exists a single X inside COMBINED_p, remove it from the set and decrease the counter PURCHASED by one.
Else, decrease the counter PURCHASED by two.
C. If the piece is of type Z:
If there exist an X and a Y inside COMBINED_p, remove them from the set.
Else, if there exists a Y but no X inside COMBINED_p, remove it from the set and decrease the counter PURCHASED by one.
Else, if there exist two X pieces inside COMBINED_p, remove them from the set and decrease the counter PURCHASED by one.
Else, if there exists a single X inside COMBINED_p, remove it from the set and decrease the counter PURCHASED by two.
Else, decrease the counter PURCHASED by three.
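As a hedged sketch, the greedy resolution of step 5B might look like this, with `combined` a `Counter` of vanished piece types and `purchased` the remaining purchase budget (these names are mine, not from the question):

```python
from collections import Counter

# Greedy resolution for a new Y piece (step 5B): consume vanished combined
# pieces first, only then assume purchases. A negative result marks an
# invalid turn, as in the algorithm above.
def resolve_Y(combined, purchased):
    if combined["X"] >= 2:
        combined["X"] -= 2            # two vanished X made this Y
    elif combined["X"] == 1:
        combined["X"] -= 1            # one vanished X plus one purchase
        purchased -= 1
    else:
        purchased -= 2                # two purchases combined on the spot
    return purchased
```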
How to store events
In step 4A you can store the movement of the piece.
In step 5A you can store the purchase of the piece.
In steps 5B and 5C you can store the movements of the pieces you remove from the COMBINED_p sets (movement from their initial position to the position of the final combined piece) and the purchase of pieces each time you decrease the counter.
Observations
I didn't mention it but I guess you should remove from the areas M_i all the cells occupied by enemy pieces.
The order of the if-elses in step 5 is very important. It is the greedy approach to the change-making problem. Keep it in mind in case you add more combinations to the game.
You haven't told us where the currency for purchasing pieces comes from, but you can quickly check the validity of the purchases made in step 2A.
In case that the board was not finite (maybe it is automatically generated), then you could apply this algorithm to the finite surface defined by the presence of the pieces in both states.
Edge cases
The disjoint areas might not represent the true space where a piece can move, given that pieces are solid and can't be trespassed by other pieces without combining.
For example: I have an area that is just a row like this: #XY##. My algorithm would approve a final state ##YX#. A possible approach would be to check the existence of disjoint sub-areas and treat them separately.
In the final state, two initially disjoint areas could become joined thanks to a piece (let's call it a bridge) that moves into an intermediate cell that is adjacent to both areas at the same time. Precisely thanks to the first edge case, we don't need to worry about other existing pieces moving from one area to another through the bridge... but if we find that the bridge is a recently combined piece (in other words, it belongs to NEW), then we have to make extra checks and even backtrack to determine where the combined pieces came from, because we can no longer assume they came from a single area plus possible purchases.
As you see, this is not a complete algorithm, but it may settle some working ground. The main contribution I attempt to make is the approach of the disjoint areas and the event focused analysis. If you want, I could try to elaborate further on the edge cases and try to find solutions and incorporate them in the whole algorithm.
In the end, given the flexibility of the ruleset, I had to come up with a different algorithm from the ones suggested. I came at the problem from the other end; instead of considering how to absolutely minimize the number of moves, what is it that allows the unbounded potential for number of moves? It's the fact that active pieces can move an unbounded number of times between empty hexes. Everything else is bounded quite nicely; a piece can move, then combine, but that's it; the combination removes it from the board and upgrades the underlying piece. It can move, then capture, but that ends its turn. So, to eliminate the unbounded nature of the moves list, I ended up with the following algorithm for validating a moves list:
Only one summon operation for a piece is allowed
For a move operation, any moves for a piece after it has captured are forbidden
Multiple plain (non-combine/non-capture) moves in a row for a piece are forbidden unless a combine/capture move occurs between that piece's two moves
When a piece is combined, it is removed from the board and the underlying piece is upgraded, meaning that that piece then can't move after the combine operation as it no longer exists on the board
This does mean that you can get a piece moving before a combine or capture operation, and a piece can even have multiple non-combine/non-capture moves in a turn if they are separated by other pieces being combined - which I think is somewhat necessary in case another piece is then moved on top of it for combination - but multiple such moves in a row are illegal, nicely limiting the bounds of how many moves may be submitted for a turn.
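A rough sketch of a validator for these rules (my own encoding of moves as `(piece_id, kind)` tuples; rule 4's board bookkeeping is left out):

```python
# Validate a moves list against the rules above: one summon per piece, no
# moves after a capture, and no two plain moves in a row for a piece unless
# a combine/capture move happened in between.
def validate(moves):
    summoned, captured, pending_plain = set(), set(), set()
    for piece, kind in moves:
        if piece in captured:
            return False              # pieces cannot move after capturing
        if kind == "summon":
            if piece in summoned:
                return False          # only one summon per piece
            summoned.add(piece)
        elif kind == "move":
            if piece in pending_plain:
                return False          # two plain moves in a row
            pending_plain.add(piece)
        elif kind in ("combine", "capture"):
            pending_plain.clear()     # a combine/capture resets plain runs
            if kind == "capture":
                captured.add(piece)
    return True
```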
On the client side, it detects if a piece move is the piece moving twice in a row within its own territory. If it isn't, it just adds the move. If it is, it checks whether there is a move since the existing one with the same target coords. If there isn't, it just updates the existing move in-place. If there is, it splices that move out as well as the existing move for this piece, and inserts it at the end of the moves list, followed by this piece's move; this is needed to reflect the fact that this piece is now being combined with that piece, in the correct order.
This seems to work, but it's rather complicated and confusing. Seems to be the nature of the problem, though!
This is actually a similar task to, for example, finding what moves you need to make in chess to change from one state to another. It can even happen that no solution exists.
By default it is an O(x^n) problem, where x is the number of possible moves and n the number of moves. Yes, it grows quite rapidly. In general you have to check all possible moves in BFS fashion each turn until you find a solution.
Actually, if you are successful with it, you are very close to implementing an AI; you just add some scores to your moves and that's it.
To optimize it, you can use some heuristics (e.g. prefer movement in the correct direction, or don't expand branches that make no sense - e.g. when a piece gets combined and you know that is not possible). However, there can be situations where this will not be sufficient (e.g. a piece needs to travel really far around some enemy position to avoid being caught).
To get you further - google how to create an AI for five-in-a-row or chess; your task will be very similar. Unless you want to create an AI, I would strongly recommend getting all movements from the client - even if that means a few more movements than necessary.
I am experimenting on chess AI and currently trying to implement detection of possibility to claim a draw.
A player can claim a draw if:
There has been no capture or pawn move in the last fifty moves by each player.
The same board position has occurred three times.
So, the program must store the history of the previous positions to be able to verify these conditions. This is OK in the situation when a human player claims a draw. But the AI is evaluating millions of positions. So, on one hand it should be able to detect the possibility of such a claim by itself or by the adversary x moves ahead to be able to prevent it in a winning position or to try to enforce it in a losing position. On the other hand this verification can result in a huge loss in performance because of all the copies of history tables created during the deep search.
Is there a standard implementation/optimization of such a feature?
Note: If the answer is implementation-specific, my AI is based on a variation of minimax with alpha-beta pruning.
For item 1, the standard solution is to store a counter in the current state; the counter is reset to 0 each time there is a pawn move or capture and incremented by 1 each time there isn't, and at each move you check whether the limit has been reached (fifty moves by each player, i.e. 100 half-moves).
For item 2, some programs keep track and check at each move, but that costs time and space, as you noted. Thus, many programs only keep track of past moves and ignore the possibility of draw by three-fold repetition as they look ahead in the game tree. Another possibility is to keep track of any duplicate positions that have already occurred, and only check if a look-ahead position is the same as one of those.
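The counter from item 1 is usually kept in half-moves (plies); a minimal sketch:

```python
# Fifty-move rule counter: reset on a capture or pawn move, else increment.
# Counted in half-moves, so the claim threshold is 100 (fifty full moves
# by each player).
def update_clock(clock, is_capture, is_pawn_move):
    return 0 if (is_capture or is_pawn_move) else clock + 1

def can_claim_fifty_move_draw(clock):
    return clock >= 100
```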
For item 1, I suggest a counter that is reset whenever a capture/pawn move occurs. (As mentioned before)
For item 2, I suggest an array of Zobrist hashes in your Board class, where you keep track of the positions that have already occurred. If such a position is found (the current Zobrist key equals one of the stored ones), you increase the corresponding counter by one.
If a counter reaches 3, it's a draw => return 0 as the position's value.
This method will work pretty fast at runtime because you compare only ~50 integers at most. If you already have TTs/hashtables, it will be easy to implement, since you already have the code for the Zobrist keys.
[Edit]:
Btw, these aren't the only draw possibilities, see "stalemate".
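The Zobrist-based repetition check above could be sketched like this (`history` is an assumed list of keys for positions since the last irreversible move):

```python
# Count occurrences of the current position among past Zobrist keys; the
# current position itself counts as one occurrence.
def repetition_count(history, current_key):
    return sum(1 for k in history if k == current_key) + 1

def is_threefold(history, current_key):
    return repetition_count(history, current_key) >= 3
```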
Chess programs keep track of identical positions anyway to avoid evaluation cost - 1. e4 e5 2. g4 reaches the same position as 1. g4 e5 2. e4 and therefore has the same value, completely avoiding the cost of evaluating the latter. All you do is keep track of after how many moves the position arose, and if it differs by at least two moves you do some special-case handling.
I am working on a connect 4 AI, and saw many people were using this data set, containing all the legal positions at 8 ply, and their eventual outcome.
I am using a standard minimax with alpha/beta pruning as my search algorithm. It seems like this data set could be really useful for my AI. However, I'm trying to find the best way to implement it. I thought the best approach might be to process the list and use the board state as a hash key for the eventual result (win, loss, draw).
What is the best way to design an AI to use a data set like this? Is my idea of hashing the board state and using it in a traditional search algorithm (e.g. minimax) on the right track? Or is there a better way?
Update: I ended up converting the large move database to a plain text format, where 1 represented X and -1 O. Then I used a string of the board state, and an integer representing the eventual outcome, and put it in a std::unordered_map (see Stack Overflow With Unordered Map for a problem I ran into). The performance of the map was excellent. It built quickly, and the lookups were fast. However, I never quite got the search right. Is the right way to approach the problem to just search the database when the number of turns in the game is less than 8, then switch over to a regular alpha-beta?
Your approach seems correct.
For the first 8 moves, use the alpha-beta algorithm, and use the look-up table to evaluate the value of each node at depth 8.
Once you have "exhausted" the table (exceeded 8 moves in the game), you should switch to the regular alpha-beta algorithm that ends with terminal states (leaves in the game tree).
This is extremely helpful because:
Remember that the complexity of searching the tree is O(B^d), where B is the branching factor (number of possible moves per state) and d is the needed depth until the end.
By using this approach you effectively decrease both B and d for the maximal waiting times (the longest moves that need to be calculated), because:
Your maximal depth shrinks significantly to d-8 (only for the last moves), effectively decreasing d!
The branching factor itself tends to shrink in this game after a few moves (many moves become impossible or lead to defeat and should not be explored); this decreases B.
For the first move, you shrink the number of developed nodes as well, to B^8 instead of B^d.
So, because of these - the maximal waiting time decreases significantly by using this approach.
Also note: if you find the optimization insufficient, you can always expand your look-up table (to the first 9, 10, ... moves). Of course this increases the needed space exponentially - a tradeoff you need to examine to choose what best serves your needs (maybe even storing the entire game database in the file system, if main memory is not enough, should be considered).
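The switch-over described above might be sketched as follows (the `table` dict and the `alpha_beta` fallback are placeholders, not a real API):

```python
# Use the 8-ply database while it applies, else fall back to plain search.
def evaluate_node(board, ply, table, alpha_beta):
    if ply <= 8 and board in table:
        return table[board]           # exact outcome from the database
    return alpha_beta(board)          # beyond the table: regular search
```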
TrialPay posted a programming question about a snake cube puzzle on their blog.
Recently, one of our engineers introduced us to the snake cube. A snake cube is a puzzle composed of a chain of cubelets, connected by an elastic band running through each cubelet. Each cubelet can rotate 360° about the elastic band, allowing various structures to be built depending upon the way in which the chain is initially constructed, with the ultimate goal of arranging the cubes in such a way to create a cube.
Example:
This particular arrangement contains 17 groups of cubelets, composed of 8 groups of two cubelets and 9 groups of three cubelets. This arrangement can be expressed in a variety of ways, but for the purposes of this exercise, let '0' denote pieces whose rotation does not change the orientation of the puzzle, or may be considered a "straight" piece, while '1' will denote pieces whose rotation changes the puzzle configuration, or "bend" the snake. Using that schema, the snake puzzle above could be described as 001110110111010111101010100.
Challenge:
Your challenge is to write a program, in any language of your choosing, that takes the cube dimensions (X, Y, Z) and a binary string as input, and outputs '1' (without quotes) if it is possible to solve the puzzle, i.e. construct a proper XYZ cube given the cubelet orientation, and '0' if the current arrangement cannot be solved.
I posted a semi-detailed explanation of the solution, but how do I determine if the program solves the problem? I thought about getting more test cases, but I ran into some problems:
The snake cube example from TrialPay's blog has the same combination as the picture on Wikipedia's Snake Cube page and www.mathematische-basteleien.de.
It's very tedious to manually convert an image into a string.
I tried to make a program that would churn out a lot of combinations:
#We can start at 2**24 = 16777216 ('001' followed by 24 zeros once padded
#to 27 bits), because smaller numbers have more than 2 consecutive leading
#0s (000111...)
solved = []
for i in range(2**24, 2**27):
    s = bin(i)[2:].zfill(27)  # zero-pad to 27 bits
    # Skip arrangements with more than 2 consecutive 0s
    if "000" not in s:
        if snake_cube_solution(3, 3, 3, s) == 1:
            solved.append(s)
But it just takes forever to finish executing. Is there a better way to verify the program?
Thanks in advance!
TL;DR: This isn't a programming problem, but a mathematical one. You may be better served at math.stackexchange.com.
Since the cube size and snake length are passed as input, the space of inputs a checker program would need to verify is essentially infinite. Even though checking the solution's answer for a single input is reasonable, brute-forcing this check across the entire input space is clearly not.
If your solution fails on certain cases, your checker program can help you find these. However it can't establish your program's correctness: if your solution is actually correct the checker will simply run forever and leave you wondering.
Unfortunately (or not, depending on your tastes), what you are looking for is not a program but a mathematical proof.
(Proving) algorithm correctness is itself an entire field of study, and you can spend a long time in it. That said, proof by induction is often applicable (especially for recursive algorithms).
Other times, navigating between state configurations can be restated as optimizing a utility function. Proving things about the space being optimized (such as that it has only one extremum) can then translate to a proof of program correctness.
Your state configurations in this second approach could be snake orientations, or they might be some deeper structure. For example, the general strategy underneath solving a Rubik's cube isn't usually stated on literal cube states, but on expressions of a group of relevant symmetries. This is what I personally expect your solution will eventually play out as.
EDIT: Years later, I feel I should point out that for a given, fixed cube size and snake length, of course the search space is actually finite. You can write a program to brute-force check all combinations. If you were clever, you could even argue that the times to check a set of cases can be treated as a set of independent random variables. From this you could build a reasonable progress bar to estimate how (very) long your wait would be.
I think your assertion that there can not be three consecutive 0's is false. Consider this arrangement:
000
100
101
100
100
101
100
100
100
One of the problems I'm having with this puzzle is the notation. A 1 indicates that the cubelet can change the puzzle's orientation, but about which axis? In my example above, assume that the Y axis is vertical and the X axis is horizontal. A 1 on the left indicates the ability to rotate about the cubelet's Y axis, and a 1 on the right indicates the ability to rotate about the cubelet's X axis.
I think it's possible to construct an arrangement similar to that above, but with three 000 groups. But I don't have the notation for it. Clearly, the example above could be modified so that the first three lines are:
001
000
101
With the first segment's 1 indicating rotation about the Y axis.
I wrote a Java application for the same problem not long ago.
I used the backtracking algorithm for this.
You just have to do a recursive search through the whole cube, checking which directions are possible. If you have found one, you can stop and print the solution (I chose to print out all solutions).
For the 3x3x3 cubes my program solved them in under a second; for the bigger ones it takes from about five seconds up to 15 minutes.
I'm sorry I couldn't find any code right now.
I have spent a whole day trying to implement minimax without really understanding it. Now I think I understand how minimax works, but not alpha-beta pruning.
This is my understanding of minimax:
Generate a list of all possible moves, up until the depth limit.
Evaluate how favorable a game field is for every node on the bottom.
For every node (starting from the bottom), the score of that node is the highest score of its children if the layer is max. If the layer is min, the score of that node is the lowest score of its children.
Perform the move that has the highest score if you are trying to max it, or the lowest if you want the min score.
My understanding of alpha-beta pruning is that, if the parent layer is min and your node has a higher score than the minimum score, then you can prune it since it will not affect the result.
However, what I don't understand is: if you can work out the score of a node, you need to know the score of all nodes on the layer below it (in my understanding of minimax), which means you'd still be using the same amount of CPU power.
Could anyone please point out what I am getting wrong? This answer ( Minimax explained for an idiot ) helped me understand minimax, but I don't get how alpha beta pruning would help.
Thank you.
To understand alpha-beta, consider the following situation. It's White's turn; White is trying to maximize the score, Black is trying to minimize the score.
White evaluates moves A, B, and C and finds the best score is 20, with C. Now consider what happens when evaluating move D:
If White selects move D, we need to consider counter-moves by Black. Early on, we find Black can capture the white queen, and that subtree gets a MIN score of 5 due to the lost queen. However, we have not considered all of Black's counter-moves. Is it worth checking the rest? No.
We don't care if Black can get a score lower than 5, because White's move C could keep the score at 20. Black will not choose a counter-move with a score higher than 5, because he is trying to MINimize the score and has already found a move with a score of 5. For White, move C is preferred over move D as soon as the MIN for D (5 so far) goes below that of C (20 for sure). So we "prune" the rest of the tree there, pop back up a level and evaluate White's moves E, F, G, H... to the end.
Hope that helps.
You don't need to evaluate the entire subtree of a node to decide its value. Alpha-beta pruning uses two dynamically computed bounds, alpha and beta, to bound the values that nodes can take.
Alpha is the minimum value that the max player is guaranteed (regardless of what the min player does) through another path through the game tree. This value is used to perform cutoffs (pruning) at the minimizing levels. When the min player has discovered that the score of a min node would necessarily be less than alpha, it need not evaluate any more choices from that node because the max player already has a better move (the one which has value alpha).
Beta is the maximum value that the min player is guaranteed and is used to perform cutoffs at the maximizing levels. When the max player has discovered that the score of a max node would necessarily be greater than beta, it can stop evaluating any more choices from that node because the min player would not allow it to take this path since the min player already has a path that guarantees a value of beta.
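Putting the two bounds together, a minimal alpha-beta sketch (with assumed `children` and `leaf_value` helpers for whatever game is being searched) looks like:

```python
# Plain alpha-beta: alpha is the max player's guaranteed value so far, beta
# the min player's; once alpha >= beta the remaining siblings cannot matter.
def alpha_beta(state, depth, alpha, beta, maximizing, children, leaf_value):
    kids = children(state)
    if depth == 0 or not kids:
        return leaf_value(state)
    if maximizing:
        best = float("-inf")
        for c in kids:
            best = max(best, alpha_beta(c, depth - 1, alpha, beta, False,
                                        children, leaf_value))
            alpha = max(alpha, best)
            if alpha >= beta:         # beta cutoff: min player avoids this
                break
        return best
    best = float("inf")
    for c in kids:
        best = min(best, alpha_beta(c, depth - 1, alpha, beta, True,
                                    children, leaf_value))
        beta = min(beta, best)
        if alpha >= beta:             # alpha cutoff: max player has better
            break
    return best
```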
I've written a detailed explanation of Alpha Beta Pruning, its pseudocode and several improvements: http://kartikkukreja.wordpress.com/2014/06/29/alphabetasearch/
(Very) short explanation of minimax:
You (the evaluator of a board position) have the choice of playing n moves. You try all of them and give the board positions to the (opponent) evaluator.
The opponent evaluates the new board positions (for him, the opponent side) - by doing essentially the same thing, recursively calling (his opponent) evaluator, unless the maximum depth or some other condition has been reached and a static evaluator is called - and then selects the maximum evaluation and sends the evaluations back to you.
You select the move that has the minimum of those evaluations. And that evaluation is the evaluation of the board you had to evaluate at the beginning.
(Very) short explanation of α-β pruning:
You (the evaluator of a board position) have the choice of playing n moves. You try all of them one by one and give the board positions to the (opponent) evaluator - but you also pass along your current evaluation (of your board).
The opponent evaluates the new board position (for him, the opponent side) and sends the evaluation back to you. But how does he do that? He has the choice of playing m moves. He tries all of them and gives the new board positions (one by one) to (his opponent) evaluator and then chooses the maximum one.
Crucial step: if any of those evaluations that he gets back is bigger than the minimum you gave him, it is certain that he will eventually return an evaluation at least that large (because he wants to maximize). And you are sure to ignore that value (because you want to minimize), so he stops any further work for boards he hasn't yet evaluated.
You select the move that has the minimum of those evaluations. And that evaluation is the evaluation of the board you had to evaluate at the beginning.
Here's a short answer -- you can know the value of a node without computing the precise value of all its children.
As soon as we know that a child node cannot be better, from the perspective of the parent-node player, than the previously evaluated sibling nodes, we can stop evaluating the child subtree. It's at least this bad.
I think your question hints at a misunderstanding of the evaluation function:
if you can work out the score of a node, you will need to know the score of all nodes on a layer lower than the node (in my understanding of minimax)
I'm not completely sure what you meant there, but it sounds wrong. The evaluation function (EF) is usually a very fast, static position evaluation. This means that it only needs to look at a single position and reach a 'verdict' from that. (In other words, you don't always evaluate a branch to n plies.)
Now many times, the evaluation truly is static, which means that the position evaluation function is completely deterministic. This is also the reason why the evaluation results are easily cacheable (since they will be the same each time a position is evaluated).
Now, for e.g. chess, there is usually quite a bit of overt/covert deviation from the above:
a position might be evaluated differently depending on game context (e.g. whether the exact position occurred earlier during the game; how many moves without pawn moves/captures have occurred; en passant and castling opportunities). The most common 'trick' to tackle this is to actually incorporate that state into the 'position'1
a different EF is usually selected for the different phases of the game (opening, middle, ending); this has some design impact (how to deal with cached evaluations when changing the EF? How to do alpha/beta pruning when the EF is different for different plies?)
To be honest, I'm not aware how common chess engines solve the latter (I simply avoided it for my toy engine)
I'd refer to an online resources like:
Computer Chess Programming Theory
Alpha/Beta Pruning
Iterative Deepening
Transposition Table
1 just like the 'check'/'stalemate' conditions, if they are not special-cased outside the evaluation function anyway