Two players take turns choosing one of the outer coins. At the end we calculate the difference
between the scores the two players get, given that they play optimally.
The greedy strategy of always taking the larger outer coin often does not lead to the best result in my case.
So I developed an algorithm:
Sample: {9, 1, 15, 22, 4, 8}
1. Calculate the sum of the coins at even indexes and the sum of the coins at odd indexes.
2. Compare the two sums: (9+15+4) < (1+22+8), so the odd sum is greater. We then pick the outer coin with an odd index; in our sample that would be 8.
3. The opponent, who plays optimally, will try to pick the greater outer coin, e.g. 9.
4. There is always an outer coin at an odd index after the opponent's move, so we keep picking the coins at odd indexes; next that would be 1.
5. Looping the above steps, we end up with a difference of (8+1+22) - (9+15+4) = 3.
6. Vice versa if the even sum is greater in step 2.
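A minimal sketch of this strategy in Python (the names are mine, just to make the steps concrete):

def parity_margin(coins):
    # Sum of coins at even indexes vs. sum at odd indexes; the strategy
    # claims the better class, so the margin is the difference of the sums.
    even = sum(coins[0::2])
    odd = sum(coins[1::2])
    return abs(even - odd)

print(parity_margin([9, 1, 15, 22, 4, 8]))  # 3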
I have compared the results generated by my algorithm with a second algorithm similar to this one: https://www.geeksforgeeks.org/optimal-strategy-for-a-game-set-2/?ref=rp
The results agreed, until my test generated a random long array:
[6, 14, 6, 8, 6, 3, 14, 5, 18, 6, 19, 17, 10, 11, 14, 16, 15, 18, 7, 8, 6, 9, 0, 15, 7, 4, 19, 9, 5, 2, 0, 18, 2, 8, 19, 14, 4, 8, 11, 2, 6, 16, 16, 13, 10, 19, 6, 17, 13, 13, 15, 3, 18, 2, 14, 13, 3, 4, 2, 13, 17, 14, 3, 4, 14, 1, 15, 10, 2, 19, 2, 6, 16, 7, 16, 14, 7, 0, 9, 4, 9, 6, 15, 9, 3, 15, 11, 19, 7, 3, 18, 14, 11, 10, 2, 3, 7, 3, 18, 7, 7, 14, 6, 4, 6, 12, 4, 19, 15, 19, 17, 3, 3, 1, 9, 19, 12, 6, 7, 1, 6, 6, 19, 7, 15, 1, 1, 6]
My algorithm produced 26 as the result, while the second algorithm produced 36.
Mine involves no dynamic programming and requires less memory, whereas I implemented the second one with memoization.
This is confusing, since mine agrees with the second algorithm on most arrays, just not this one.
Any help would be appreciated!
If the array is of even length, your algorithm tries to produce a guaranteed win. You can prove that quite easily. But it doesn't necessarily produce the optimal win. In particular it won't find strategies where you want some coins that are on even indexes and others on odd indexes.
The following short example illustrates the point.
[10, 1, 1, 20, 1, 1]
Your algorithm will look at evens vs. odds, realize that 10+1+1 < 1+20+1, and take the last element first, guaranteeing a win by 10.
But you want both the 10 and the 20. The optimal strategy is therefore to take the 10, leaving [1, 1, 20, 1, 1]. Whichever side the other person takes, you take the opposite side to get to [1, 20, 1]; then whichever side the other person takes, you take the middle. You end up with 10, 1, 20 and the other person gets 1, 1, 1, guaranteeing a win by 28.
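For comparison, here is a minimal sketch of the standard interval DP (the same idea as the linked GeeksforGeeks solution), which finds the true optimal margin rather than just a guaranteed win:

from functools import lru_cache

def optimal_margin(coins):
    @lru_cache(maxsize=None)
    def best(i, j):
        # Best margin the player to move can force on coins[i..j].
        if i > j:
            return 0
        # Take one end; the opponent then plays optimally on the rest.
        return max(coins[i] - best(i + 1, j),
                   coins[j] - best(i, j - 1))
    return best(0, len(coins) - 1)

print(optimal_margin((10, 1, 1, 20, 1, 1)))  # 28, versus 10 for the parity strategy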
Suppose I have the following lists:
[1, 2, 3, 20, 23, 24, 25, 32, 31, 30, 29]
[1, 2, 3, 20, 23, 28, 29]
[1, 2, 3, 20, 21, 22]
[1, 2, 3, 14, 15, 16]
[16, 17, 18]
[16, 17, 18, 19, 20]
Order matters here. These are the nodes resulting from a depth-first search in a weighted graph. What I want to do is break down the lists into unique paths (where a path has at least 2 elements). So, the above lists would return the following:
[1, 2, 3]
[20, 23]
[24, 25, 32, 31, 30, 29]
[28, 29]
[20, 21, 22]
[14, 15, 16]
[16, 17, 18]
[19, 20]
The general idea I have right now is this:
Step 1: Look through all pairs of lists to create a set of the overlapping segments at the beginnings of the lists. For the example above, this would be the output:
[1, 2, 3, 20, 23]
[1, 2, 3, 20]
[1, 2, 3]
[16, 17, 18]
The next output, from step 2, would be this:
[1, 2, 3]
[16, 17, 18]
Step 3: Once I have the lists from step 2, I look through each input list and chop off the front if it matches one of the lists from step 2. The new lists look like this:
[20, 23, 24, 25, 32, 31, 30, 29]
[20, 23, 28, 29]
[20, 21, 22]
[14, 15, 16]
[19, 20]
Step 4: I then go back and apply step 1 to the truncated lists from step 3. When step 1 doesn't output any overlapping lists, I'm done.
Step 2 is the tricky part here. What's silly is it's actually equivalent to solving the original problem, although on smaller lists.
What's the most efficient way to solve this problem? Looking at all pairs obviously requires O(N^2) time, and step 2 seems wasteful since I need to run the same procedure to solve these smaller lists. I'm trying to figure out if there's a smarter way to do this, and I'm stuck.
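For reference, the core of step 1 is just a shared-prefix computation; a minimal sketch:

def common_prefix(a, b):
    # Longest common prefix of two lists.
    i = 0
    while i < len(a) and i < len(b) and a[i] == b[i]:
        i += 1
    return a[:i]

print(common_prefix([1, 2, 3, 20, 23], [1, 2, 3, 20, 21, 22]))  # [1, 2, 3, 20]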
Seems like the solution is to modify a Trie to serve the purpose. Trie compression gives clues, but the kind of compression that is needed here won't yield any performance benefits.
The first list you add becomes its own node (rather than k nodes). If there is any overlap, nodes split, but they never get smaller than holding two elements of the array.
A simple example of the graph structure looks like this:
insert (1,2,3,4,5)
graph: (1,2,3,4,5)->None
insert (1,2,3)
graph: (1,2,3)->(4,5), (4,5)->None
insert (3,2,3)
graph: (1,2,3)->(4,5), (4,5)->None, (3,2,3)->None
segments output: (1,2,3), (4,5), (3,2,3)
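A minimal sketch of such a node in Python (hypothetical names; the at-least-two-elements rule is omitted for brevity):

class SegmentNode:
    def __init__(self, segment):
        self.segment = list(segment)  # the run of elements stored in this node
        self.children = {}            # first element of a child's segment -> child

    def insert(self, seq):
        # Find how far seq matches this node's segment.
        i = 0
        while i < len(self.segment) and i < len(seq) and self.segment[i] == seq[i]:
            i += 1
        if i < len(self.segment):
            # Partial match: split this node at position i.
            tail = SegmentNode(self.segment[i:])
            tail.children = self.children
            self.segment = self.segment[:i]
            self.children = {tail.segment[0]: tail}
        rest = seq[i:]
        if rest:
            if rest[0] in self.children:
                self.children[rest[0]].insert(rest)
            else:
                self.children[rest[0]] = SegmentNode(rest)

    def segments(self, out):
        # Collect every stored segment (the empty root segment is skipped).
        if self.segment:
            out.append(self.segment)
        for child in self.children.values():
            child.segments(out)
        return out

root = SegmentNode([])
for lst in [(1, 2, 3, 4, 5), (1, 2, 3), (3, 2, 3)]:
    root.insert(list(lst))
print(root.segments([]))  # [[1, 2, 3], [4, 5], [3, 2, 3]]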
The child nodes should also be stored as an actual trie, at least when there are enough of them, to avoid a linear search when adding to or removing from the data structure, which could otherwise increase the runtime by a factor of N. If that is implemented, the data structure has the same big-O performance as a trie, with somewhat higher hidden constants. That means building it takes O(L*N), where L is the average size of the lists and N is the number of lists. Obtaining the segments is linear in the number of segments.
The final data structure, basically a directed graph, for your example would look like the one below, with the start node at the bottom.
Note that this data structure can be built as you run the DFS rather than afterwards.
I ended up solving this by thinking about the problem slightly differently. Instead of thinking about sequences of nodes (where an edge is implicit between each successive pair of nodes), I'm thinking about sequences of edges. I basically use the algorithm I posted originally. Step 2 is simply an iterative step where I repeatedly identify prefixes until there are no more prefixes left to identify. This is pretty quick, and dealing with edges instead of nodes really simplified everything.
Thanks for everyone's help!
I'm putting together a simple chess position evaluation function. This is the first time I've built a chess engine, so I'm hesitant to put in just any evaluation function. The one shown on this Chess Programming Wiki page looks like a good candidate, but it has an ellipsis at the end, which makes me unsure whether it will be a good one to use.
Once the whole engine is in place and functional, I intend to come back to the evaluation function and make a real attempt at sorting it out properly. But for now I need some sort of function which is good enough to play against an average amateur.
The most basic component of an evaluation function is material, obviously. This should be perfectly straightforward, but on its own does not lead to interesting play. The engine has no sense of position at all, and simply reacts to tactical lines. But we will start here:
value = white_material - black_material // calculate delta material
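In code, with the standard centipawn piece values, this might look like the following (illustrative names, not from any particular engine):

PIECE_VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900}

def material(pieces):
    # Sum centipawn values for one side's non-king pieces.
    return sum(PIECE_VALUES[p] for p in pieces)

white = ['P'] * 8 + ['N', 'N', 'B', 'B', 'R', 'R', 'Q']
black = ['P'] * 7 + ['N', 'N', 'B', 'R', 'R', 'Q']
print(material(white) - material(black))  # 430: white is up a pawn and a bishop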
Next we introduce some positional awareness through piece-square tables. For example, this is such a predefined table for pawns:
pawn_table = {
0, 0, 0, 0, 0, 0, 0, 0,
75, 75, 75, 75, 75, 75, 75, 75,
25, 25, 29, 29, 29, 29, 25, 25,
4, 8, 12, 21, 21, 12, 8, 4,
0, 4, 8, 17, 17, 8, 4, 0,
4, -4, -8, 4, 4, -8, -4, 4,
4, 8, 8,-17,-17, 8, 8, 4,
0, 0, 0, 0, 0, 0, 0, 0
}
Note that this assumes the common centipawn value system (a pawn is worth ~100). For each white pawn we encounter, we index into the table with the pawn's square and add the corresponding value.
for each p in white pawns
value += pawn_table[square(p)]
Note that we can use a simple calculation to reflect the table when indexing for black pieces. Alternatively you can define separate tables.
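For example, assuming squares are numbered 0 to 63 with eight squares per rank, XOR-ing the square index with 56 flips the rank, which mirrors the white-oriented table vertically for black (a sketch under that numbering assumption):

def table_value(table, square, is_white):
    # square ^ 56 flips the rank (vertical mirror) on a 0..63 board.
    return table[square] if is_white else table[square ^ 56]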
For simple evaluation this will work very well and your engine will probably already be playing common openings. However, it's not too hard to make some simple improvements. For example, you can create tables for the opening and the endgame, and interpolate between them using some sort of phase calculation. This is especially effective for kings, whose preferred place shifts from the corners to the middle of the board as the game progresses.
Thus our evaluation function may look something like:
evaluate(position, colour) {
phase = total_pieces / 32 // this is just an example
opening_value += ... // sum of evaluation terms
endgame_value += ...
final_value = phase * opening_value + (1 - phase) * endgame_value
return final_value * sign(colour) // adjust for caller's perspective
}
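The interpolation itself is just a weighted average; a minimal standalone sketch with made-up term values:

def tapered(opening_value, endgame_value, total_pieces):
    # Game phase: 1.0 at the start of the game, shrinking toward 0.
    phase = total_pieces / 32.0
    return phase * opening_value + (1 - phase) * endgame_value

# e.g. a square that is bad for the king early but fine late:
print(tapered(-40, 25, 32))  # -40.0 (pure opening)
print(tapered(-40, 25, 8))   # 8.75  (mostly endgame)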
This type of evaluation, along with quiescence search, should be enough to annihilate most amateurs.
We are trying to create autocorrelated random values to be used as a time series.
We have no existing data to refer to and just want to create the vector from scratch.
On the one hand, we of course need a random process with a given distribution and its SD (standard deviation).
On the other hand, the autocorrelation influencing the random process has to be described: the values of the vector are autocorrelated with decreasing strength over several time lags,
e.g. lag 1 has 0.5, lag 2 has 0.3, lag 3 has 0.1, etc.
So in the end the vector should look something like this:
2, 4, 7, 11, 10 , 8 , 5, 4, 2, -1, 2, 5, 9, 12, 13, 10, 8, 4, 3, 1, -2, -5
and so on.
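To make the idea concrete, something like this rough autoregressive sketch is the shape we have in mind (the coefficients are illustrative and would still need tuning to actually hit autocorrelations of 0.5, 0.3, 0.1 at lags 1 to 3):

import random

def autocorrelated_series(n, coeffs=(0.5, 0.3, 0.1), sd=1.0):
    # Seed with independent draws, then recurse on the most recent values.
    values = [random.gauss(0, sd) for _ in range(len(coeffs))]
    while len(values) < n:
        past = values[-len(coeffs):][::-1]  # most recent value first
        values.append(sum(c * v for c, v in zip(coeffs, past))
                      + random.gauss(0, sd))
    return values

print(autocorrelated_series(20))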
Possible Duplicate:
Find the min number in all contiguous subarrays of size l of a array of size n
I have a (large) array of numeric data (size N) and would like to compute an array of running maximums with a fixed window size w.
More directly, I can define a new array out[k-w+1] = max{data[k-w+1], ..., data[k]} for k >= w-1 (this assumes 0-based arrays, as in C++).
Is there a better way to do this than O(N log w)?
[I'm hoping there is a linear one in N without dependence on w, like for the moving average, but I cannot find it. For O(N log w), I think there is a way to manage with a sorted data structure which does insert(), delete() and extract_max() all in O(log w) or less on a structure of size w, such as a sorted binary tree.]
Thank you very much.
There is indeed an algorithm that can do this in O(N) time with no dependence on the window size w. The idea is to use a clever data structure that supports the following operations:
Enqueue, which adds a new element to the structure,
Dequeue, which removes the oldest element from the structure, and
Find-max, which returns (but does not remove) the maximum element from the structure.
This is essentially a queue data structure that supports access (but not removal) of the maximum element. Amazingly, as seen in this earlier question, it is possible to implement this data structure so that each of these operations runs in amortized O(1) time. As a result, if you use this structure to enqueue w elements, then continuously dequeue and enqueue another element into the structure while calling find-max as needed, it will take only O(n + Q) time, where Q is the number of queries you make. If you only care about the maximum of each window once, this ends up being O(n), with no dependence on the window size.
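One common way to realize this is a single monotonic deque of indices; a minimal sketch:

from collections import deque

def sliding_window_max(data, w):
    out = []
    dq = deque()  # indices with decreasing values; data[dq[0]] is the window max
    for k, x in enumerate(data):
        while dq and data[dq[-1]] <= x:
            dq.pop()          # drop elements dominated by the new value
        dq.append(k)
        if dq[0] <= k - w:
            dq.popleft()      # drop the index that slid out of the window
        if k >= w - 1:
            out.append(data[dq[0]])
    return out

print(sliding_window_max([21, 17, 16, 7, 3, 9, 11, 18], 4))  # [21, 17, 16, 11, 18]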
Hope this helps!
I'll demonstrate how to do it with the list:
L = [21, 17, 16, 7, 3, 9, 11, 18, 19, 5, 10, 23, 20, 15, 4, 14, 1, 2, 22, 13, 8, 12, 6]
with length N = 23 and W = 4.
Make two new copies of your list:
L1 = [21, 17, 16, 7, 3, 9, 11, 18, 19, 5, 10, 23, 20, 15, 4, 14, 1, 2, 22, 13, 8, 12, 6]
L2 = [21, 17, 16, 7, 3, 9, 11, 18, 19, 5, 10, 23, 20, 15, 4, 14, 1, 2, 22, 13, 8, 12, 6]
Loop from i=0 to N-1. If i is not divisible by W, then replace L1[i] with max(L1[i],L1[i-1]).
L1 = [21, 21, 21, 21, | 3, 9, 11, 18, | 19, 19, 19, 23 | 20, 20, 20, 20 | 1, 2, 22, 22 | 8, 12, 12]
Loop from i=N-2 to 0. If i+1 is not divisible by W, then replace L2[i] with max(L2[i], L2[i+1]).
L2 = [21, 17, 16, 7 | 18, 18, 18, 18 | 23, 23, 23, 23 | 20, 15, 14, 14 | 22, 22, 22, 13 | 12, 12, 6]
Make a list L3 of length N + 1 - W, so that L3[i] = max(L2[i], L1[i + W - 1])
L3 = [21, 17, 16, 11 | 18, 19, 19, 19 | 23, 23, 23, 23 | 20, 15, 14, 22 | 22, 22, 22, 13]
Then this list L3 is the moving maxima you seek: L2[i] is the maximum of the range between i and the next vertical line, while L1[i + W - 1] is the maximum of the range between the vertical line and i + W - 1.
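In code, the whole procedure is only a few lines (a direct transcription of the steps above):

def moving_max(L, W):
    N = len(L)
    L1 = L[:]  # prefix maxima within each block of W elements
    for i in range(1, N):
        if i % W != 0:
            L1[i] = max(L1[i], L1[i - 1])
    L2 = L[:]  # suffix maxima within each block
    for i in range(N - 2, -1, -1):
        if (i + 1) % W != 0:
            L2[i] = max(L2[i], L2[i + 1])
    return [max(L2[i], L1[i + W - 1]) for i in range(N + 1 - W)]

L = [21, 17, 16, 7, 3, 9, 11, 18, 19, 5, 10, 23, 20, 15, 4, 14,
     1, 2, 22, 13, 8, 12, 6]
print(moving_max(L, 4))  # matches L3 above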