How to improve this Dynamic Programming solution?

Original problem statement: Pile It Up
Summary: Two players, A and B, play a game with two piles of X and Y coins. On each turn, the current player can do one of the following:
Remove any number of coins (at least 1) from a single pile
Remove the same number of coins from both piles
Pass the turn; this still counts as one turn.
The game ends when no move is possible, and the player who cannot make a move loses. Both players play optimally, and both calculate the outcome of the game before it begins. The player who loses tries to maximize the number of turns in the game, while the player who wins tries to minimize it. No player can pass more than P times. A starts the game. Output the name of the winner and the number of moves in the game, separated by a single space. 0 <= X, Y, P <= 1000
I came up with an O(n^3) DP solution, but surely it is not good enough for this problem considering the bounds. Let d[i, j] be the minimum number of turns to win if it is your turn to play and there are i and j coins left in piles X and Y respectively. Then we have:
d[i, j] = 0 if i = j = 0
1 if (i = 0 and j > 0), or (i > 0 and j = 0), or (i = j and i > 0)
1 + the minimum of d[i-p, j], d[i, j-q], and d[i-r, j-r] over all reachable substates that are losing positions, if at least one such substate exists
1 + the maximum over all (winning) substates if no losing substate is found
Finally, d[i, j] = d[i, j] + 2*P if [i, j] is a winning state and you will not win immediately from the start.
As you can see from the formulation above, this is an O(n^3) solution. I want to improve my solution to O(n^2); how can I reformulate the problem?

First, the immediately winning positions are:
one empty pile
two piles with the same number of coins
When will a player pass his turn?
Suppose the position is (3,2); then he has several options.
He can move to (2,2), but that makes it a winning position for his opponent.
He can move to (1,0), but again that is a winning position for his opponent.
If he chooses to skip the turn, then the opponent can skip the turn too. Since each player can skip at most P times, this skipping sequence can last up to 2*P turns.
Depending on the parity of P (whether P is even or odd) and on who starts the skipping sequence, we can find out who wins. Finding the number of turns isn't much harder from there.
Why is the skipping sequence optimal?
Well, if you are losing, you want to stay in the game longer (as given in the rules of the game). So even though I know, based on the parity of P, that I am going to lose, I can delay the loss by passing. Make sense?
I encourage you to use this insight to make your algorithm faster, but please do ask questions if you have trouble implementing it!
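Building on that insight, here is a sketch (Python, with my own names) of the O(n^2) part: classifying every position as winning or losing while ignoring passes. Instead of scanning all moves from each state (which costs O(n) per state), it keeps one running flag per row, per column, and per diagonal recording whether a losing state was already seen there, so each of the O(n^2) states is evaluated in O(1). The pass/turn-count layer from the parity argument above is deliberately left out.

```python
def losing_table(n):
    """lose[i][j] is True when the player to move from piles (i, j) loses
    (ignoring passes). A state is losing iff no move from it reaches a
    losing state; all moves stay in the same row, column, or diagonal."""
    lose = [[False] * (n + 1) for _ in range(n + 1)]
    row = [False] * (n + 1)       # row[i]: a losing (i, j') with j' < j was seen
    col = [False] * (n + 1)       # col[j]: a losing (i', j) with i' < i was seen
    diag = [False] * (2 * n + 1)  # diag[i - j + n]: a losing state on this diagonal
    for i in range(n + 1):
        for j in range(n + 1):
            d = i - j + n
            # losing iff no single-pile or equal-pile move reaches a loser
            if not (row[i] or col[j] or diag[d]):
                lose[i][j] = True
                row[i] = col[j] = diag[d] = True
    return lose
```

As a sanity check, the losing positions this produces without passes are exactly the cold positions of Wythoff's game: (0,0), (1,2), (3,5), (4,7), and so on.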


Game of choosing maximum amount after removing K coins optimally

I am given the following task to solve:
Two players play a game. In this game there are coins, and each coin has a value. The players take turns, each choosing one coin per turn. The goal is to have the highest total value at the end. Each player is forced to play optimally (that is, to always choose the highest remaining value from the pile). I must find the two players' totals, or equivalently the difference between their best possible sums.
Constraints: all values are positive integers.
The task above is a classic greedy problem. From what I've tried, it can be solved by sorting (e.g. with quicksort) and then letting the two players pick the elements in order. If you need a better time, radix sort performs better on my tests. So this task is pretty easy.
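In code, that greedy version amounts to something like this (a rough Python sketch, my own naming):

```python
def greedy_diff(coins):
    """Score difference (first player minus second) when both players
    always grab the highest remaining coin."""
    ordered = sorted(coins, reverse=True)
    # first player takes positions 0, 2, 4, ...; second takes 1, 3, 5, ...
    return sum(ordered[0::2]) - sum(ordered[1::2])
```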
Now I have the same task as above, BUT the first player must first remove K coins, chosen OPTIMALLY, so that the difference between the scores is maximal. This sounds like DP, but I can't come up with the solution. I must again find the maximal difference between their points (with both players playing optimally), or the two players' scores such that the difference between them is maximal.
Is there such an algorithm already implemented? Or can someone give me some tips on this issue?
Here is a DP approach solution. We consider n coins, sorted by descending order to simplify the notation (meaning coins[0] is the highest value coin, while coins[n-1] has the lowest value), and we want to remove k coins in order to win the game with a margin as big as possible.
We will consider a matrix M with n-k+1 rows and k+1 columns (indices starting at 0).
M stores the following: M(i, j) is the best possible score after playing i turns, when j coins have been removed out of the i+j best coins. It may sound a bit counter-intuitive at first, but it actually is what we are looking for.
Indeed, we have already a value to initialize our matrix: M(0, 0) = 0.
We also can see that M(n-k, k) is actually the solution to the problem we want to solve.
We now need recurrence equations to fill up our matrix. We consider that we want to maximize the score difference for the first player. To maximize the score difference for the second player, the approach is the same, just modify some signs.
if i = 0 then:
M(i, j) = 0 // the score difference is always 0 after playing 0 turns
else if j = 0 and i % 2 = 1: // player 1 plays turn i (turns are 1-indexed)
M(i, j) = M(i-1, j) + coins[i+j-1]
else if j = 0 and i % 2 = 0: // player 2 plays turn i
M(i, j) = M(i-1, j) - coins[i+j-1]
else if i % 2 = 1:
M(i, j) = max(M(i, j-1), M(i-1, j) + coins[i+j-1])
else:
M(i, j) = max(M(i, j-1), M(i-1, j) - coins[i+j-1])
This recurrence simply means that the best choice, at any point, is between removing the next coin (in which case the best value is M(i, j-1)) and playing it (in which case the best value is M(i-1, j) plus or minus its value, depending on whose turn it is).
That will give you the final score difference, but not the set of coins to remove. To find it, you must keep the 'optimal path' that your program used to calculate the matrix values (was the best value coming from M(i-1,j) or from M(i,j-1) ?).
This path gives you the set you are looking for. You can see this makes sense: there are C(n, k) possible ways to remove k coins out of n, and there are likewise C(n, k) paths from the top-left to the bottom-right corner of the matrix if you are only allowed to move right or down.
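A sketch of this DP in Python (my own code, with 0-based coin indices; the score difference is taken from the first player's point of view, and the first player makes every removal decision, so both branches take a max):

```python
def max_diff(coins, k):
    """Best final score difference (first player minus second) when the
    first player may delete k coins before greedy alternating play."""
    coins = sorted(coins, reverse=True)   # coins[0] is the highest value
    turns = len(coins) - k                # number of coins actually played
    # M[i][j]: best difference after i turns, with j coins removed
    # out of the i+j best coins; after 0 turns the difference is 0
    M = [[0] * (k + 1) for _ in range(turns + 1)]
    for i in range(1, turns + 1):
        sign = 1 if i % 2 == 1 else -1    # player 1 plays the odd turns
        for j in range(k + 1):
            # play coin i+j-1 on turn i ...
            M[i][j] = M[i - 1][j] + sign * coins[i + j - 1]
            if j > 0:                     # ... or remove it instead
                M[i][j] = max(M[i][j], M[i][j - 1])
    return M[turns][k]
```

For example, with coins [5, 4, 3, 2, 1] and k = 1, removing the 4 (or the 2) leaves a greedy difference of 3, which is the best possible.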
This explanation might still be unclear, do not hesitate to ask precisions in the comment, I'll edit the answer for more clarity.

How do I find the maximum probability using dynamic programming?

For a better understanding of this question, you can check out:
https://math.stackexchange.com/questions/3100336/how-to-calculate-the-probability-in-this-case
John is playing a game against a magician. In this game, there are initially 'N' identical boxes in front of him, one of which contains a magic pill ― after eating this pill, he becomes immortal.
He has to determine which box contains the pill. He is allowed to perform at most 'M' moves. In each move, he may do one of the following:
1)
Choose one of the boxes that are in front of him uniformly randomly and guess that this box contains the pill. If the guess is correct, the game ends and he gets the pill. Otherwise, after this guess, the magician adds K empty boxes in front of him in such a way that John cannot determine which boxes were added; the box he guessed also remains in front of him and he cannot distinguish this box from the other boxes in subsequent moves either.
2) Choose a number X such that X is a positive multiple of K, but strictly less than the current number of boxes in front of John. The magician then removes X empty boxes. Of course, John must not perform this move if the current number of boxes is ≤K.
If John plays optimally, what will be the maximum probability of him getting the pill ? 'N' is always less than 'K'.
Example: let M=3, so 3 moves are allowed, with K=20 and N=3.
In his first move, John guesses a box; the success probability is x = 1/3. If he fails, 20 boxes are added (3+20 = 23). In the second move he guesses again, this time with success probability y = (2/3)*(1/23), where 2/3 is the probability of failure in the first move.
In the third move, he does the same thing, with probability z = (2/3)*(22/23)*(1/43).
So the total probability is x+y+z = l1.
Suppose instead that, in the second move, he chooses to remove 20 boxes and do nothing else. Then the final probability is l2 = 1/3 + 0 (nothing is guessed in the second move) + (2/3)*(1/3). Since l2 > l1, 'l2' is the answer to our question.
Basically, we have to determine which sequence of moves will yield the maximum probability. Also,
P(winning) = P(game ends in 1st move) + P(game ends in 2nd move) + P(game ends in 3rd move) = (1/3) + 0 + (2/3)*(1/3) = 5/9
Given, N,K,M how can we find out the maximum probability?
Do we have to apply dynamic programming?
Let p(N, K, M) be John's probability if he plays optimally. We have the following recurrence relations:
p(N, K, 0) = 0
If there are no remaining moves, then he loses.
if M > 0 and N ≤ K (so option #2 is not allowed), then p(N, K, M) = 1/N + (N−1)/N · p(N+K, K, M−1)
If there's at least one remaining move, and option #2 is not allowed, then his probability of winning is the probability that he guesses correctly in this round, plus the probability that he guesses wrongly in this round but he wins in a later turn.
if M > 0 and N > K, then p(N, K, M) is the greater of these two:
1/N + (N−1)/N · p(N+K, K, M−1)
If he takes option #1, then this is the same as the case where he was forced to take option #1.
p(N % K, K, M−1), where '%' is the "remainder" or "modulus" operator
If he takes option #2, then he certainly won't win in this step, so his probability of winning is equal to the probability that he wins in a later turn.
Note that we only need to consider N % K, because he should certainly choose the largest value of X that he's allowed to. There's never any benefit to letting the pool of boxes remain unnecessarily large.
Dynamic programming, or recursion plus memoization, is well-suited to this; you can apply the above recurrence relations directly.
Note that K never changes, so you don't need an array dimension for it; and N only changes by adding or subtracting integer multiples of K, so you're best off using array indices n such that N = (N0 % K) + nK.
Additionally, note that M decreases by exactly 1 in each turn, so if you're using a dynamic-programming approach and you only want the final probability, then you don't need to retain probabilities for all values of M; rather, when building the array for a given value of M, you only need to keep the array for M−1.
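The recurrence above translates almost directly into memoized recursion. Here is a sketch in Python (my own code; `lru_cache` plays the role of the memo table, and K is fixed, so it is not part of the cache key):

```python
from functools import lru_cache

def best_probability(N, K, M):
    """John's best winning probability with N boxes, step K, and M moves."""
    @lru_cache(maxsize=None)
    def p(n, m):
        if m == 0:
            return 0.0                 # out of moves: John loses
        # option 1: guess now; on failure, K boxes are added
        best = 1.0 / n + (n - 1) / n * p(n + K, m - 1)
        if n > K:
            # option 2: remove the largest allowed multiple of K,
            # shrinking the pool to n % K boxes
            best = max(best, p(n % K, m - 1))
        return best
    return p(N, M)
```

On the worked example above (N=3, K=20, M=3) this returns 5/9, matching the hand computation.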

Dynamic Programming - Largest arrangement of bookcases

I'm trying to solve a problem, so I'm not looking for code, but for similar algorithms so I can solve it myself.
I am given n bookcases, each holding some number of books. I am to move SOME of these bookcases to a new room, as follows:
The first bookcase will always be moved;
I will keep the order of the bookcases in the new room (I can't change positions in the new room; once I have selected bookcase 6, I can't select any of the bookcases 0 to 5);
Bookcase i cannot be placed next to either of the bookcases i-1 or i+1 (e.g. I can't pick ...-4-5-..., ...-5-6-..., or ...-4-5-6-...);
Which configuration of bookcases will offer me the largest amount of books?
I understand that this is solved using a dynamic programming algorithm, but I am not sure which one. I initially thought it would be similar to the knapsack problem, but I don't have a limit of books so it's clearly different (at least I think it is).
Any suggestions are greatly appreciated!
Make an array int M[n], and set M[0] = b[0] because the first bookcase is always moved. Then proceed as follows:
For each element b[i], where i > 0, set M[i] = b[i]
walk back through elements of M at indexes j ranging from 0 to i-2, inclusive; start at i-2 because you cannot take the bookcase that precedes b[i]
Set M[i] to the max of current M[i] and M[j] + b[i]. The meaning of this expression is "I take b[i] and attach it to the series of bookcases ending at j"
Once the loop is over, walk through M[], and find the highest element. This is your answer.
To print the sequence of bookcase indexes start at the position of max element of M[] (say, p) and print p
Now look back through M for a position k < p such that M[k] = M[p] - b[p]. There will be at least one such element because of the way the array M[] is constructed.
Print k, set p=k, and continue until you get to the beginning of the array.
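A Python sketch of this procedure (my own code), including the backward walk that reconstructs the chosen indexes; it assumes positive book counts, as in the problem:

```python
def best_books(b):
    """Max books and chosen bookcase indexes under the no-neighbours rule."""
    n = len(b)
    M = [0] * n
    M[0] = b[0]                      # the first bookcase is always moved
    for i in range(1, n):
        M[i] = b[i]                  # start a new series at i
        for j in range(i - 1):       # j runs 0 .. i-2: skip the neighbour
            M[i] = max(M[i], M[j] + b[i])
    best = max(range(n), key=lambda i: M[i])
    # walk the values backwards to recover the sequence of indexes
    seq, p = [], best
    while True:
        seq.append(p)
        if M[p] == b[p]:             # the series starts here
            break
        k = p - 2                    # predecessor is at least two to the left
        while M[k] != M[p] - b[p]:
            k -= 1
        p = k
    return M[best], seq[::-1]
```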

A stone game - 2 players play perfectly

Recently I have learned about the nim game and grundy number
I am stuck on a problem. Please give me some ideas
Problem:
A and B play a game with a pile of stones. A starts the game and they alternate moves. In each move, a player has to remove at least one stone and at most the square root of the current number of stones in the pile. So, for example, if a pile contains 10 stones, then a player can take 1, 2, or 3 stones from the pile.
Both A and B play perfectly. The player who cannot make a valid move loses. Given the number of stones, you have to find the player who will win if both play optimally.
Example
n=3 A win,
n=5 B win
n<=10^12
I can solve this problem for a small number of stones by using Grundy numbers: https://www.topcoder.com/community/data-science/data-science-tutorials/algorithm-games/?
The Grundy function is g(x), where x is the number of remaining stones.
Call F(s) the set of positions reachable from a pile of s stones.
If s is a terminal position, g(s) = 0.
If s is not a terminal position, let X = {g(t) | t in F(s)}. Then the Grundy number of s is the smallest integer greater than or equal to 0 which is not in X.
For example, for x = 10, F(x) = {9, 8, 7}, by taking 1, 2, or 3 stones (floor(sqrt(10)) = 3).
If g(n) > 0, the first player wins.
g(0)=0
g(1)=1
g(2)=0
g(3)=1
g(4)=2
....
But this algorithm is too slow.
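A direct implementation of this Grundy computation looks roughly like the following Python sketch (fine for small n, hopeless for n up to 10^12):

```python
from math import isqrt

def grundy_table(limit):
    """g[s] is the Grundy number of a pile of s stones."""
    g = [0] * (limit + 1)
    for s in range(1, limit + 1):
        # positions reachable from s: remove 1 .. floor(sqrt(s)) stones
        seen = {g[s - k] for k in range(1, isqrt(s) + 1)}
        mex = 0                      # smallest non-negative integer not seen
        while mex in seen:
            mex += 1
        g[s] = mex
    return g
```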
Thanks in advance.
I'm adding a second answer because my first answer provides the background theory without the optimization. But since OP clearly is looking for some optimization and for a very fast solution without a lot of recursion, I took my own advice:
Of course, the really fast way to do this is to do some more math and figure out some simple properties of n you can check that will determine whether or not it is a winner or a loser.
I'm going to use the terminology I defined there, so if this isn't making sense, read that answer! Specifically, n is the pile size, k is the number of stones to remove, n is a winner if there is a winning strategy for player A starting with a pile of size n and it is a loser otherwise. Let's start with the following key insight:
Most numbers are winners.
Why is this true? It isn't obvious for small numbers: 0 is a loser, 1 is a winner, 2 is a loser, 3 is a winner, so is 4, but 5 is a loser again. But for larger numbers, it becomes more obvious.
If some integer p is large and is a loser then p+1, p+2, ... p+k_m are all winners for some k_m that is around the size of sqrt(p). This is because once I find a loser, for any pile that isn't too much larger than that, I can remove a few stones to leave my opponent with that loser. The key is just determining what the largest valid value of k is, since k is defined in terms of the starting pile size n rather than the final pile size p.
So the question becomes, given some integer p, for which values of k is it true that k <= sqrt(n) where n = p+k. In other words, given p, what starting pile sizes n allow me to remove k and leave my opponent with p. Well, since n = p+k and the values are all nonnegative, we must have
k <= sqrt(n) = sqrt(p+k) ==> k^2 <= p + k ==> k^2 - k - p <= 0.
This is a quadratic inequality in k for any fixed value of p. The endpoints of the solution interval can be found using the quadratic formula:
k = (1 +- sqrt(1 + 4p))/2
So, the inequality is satisfied for values of k between (1-sqrt(1+4p))/2 and (1+sqrt(1+4p))/2. Of course, sqrt(1+4p) is at least sqrt(5) > 2, so the left endpoint is negative. So then k_m = floor((1+sqrt(1+4p))/2).
More importantly, I claim that the next loser after p is the number L = p + k_m + 1. Let's try to prove this:
Theorem: If p is a loser, then L = p + k_m + 1 is also a loser and every integer p < n < L is a winner.
Proof: We have already shown that every integer n in the interval [p+1, p+k_m] is a winner, so we only need to prove that L is a loser.
Suppose, to the contrary, that L is a winner. Then there exists some 1 <= k <= sqrt(L) such that L - k is a loser (by definition). Since we have proven that the integers p+1, ..., p+k_m are winners, we must have that L - k <= p since no number smaller than L and larger than p can be a loser. But this means that L <= p + k and so k satisfies the equation k <= sqrt(L) <= sqrt(p + k). We have already shown that solutions to the equation k <= sqrt(p + k) are no larger than (1+sqrt(1+4p))/2, so any integer solution must satisfy k <= k_m. But then L - k = p + k_m + 1 - k >= p + k_m + 1 - k_m = p + 1. This is a contradiction since p < L - k < L and we have already proved that there are no losers larger than p and smaller than L.
QED
The above theorem gives us a nice approach since we now know that winners fall into intervals of integers separated by a single loser and we know how to calculate the interval between two losers. In particular, if p is a loser, then p + k_m + 1 is a loser where
k_m = floor((1+sqrt(1+4p))/2).
Now we can rewrite the function in a purely iterative manner that should be fast and requires constant space. The approach is simply to compute the sequence of losers until we either find n (in which case it is a loser) or determine that n lies in the interval between two losers.
bool is_winner(long long n) {
    long long p = 0;
    /* step through the losers until we find one at least as large as n */
    while (p < n) {
        long long km = (long long)floor((1.0 + sqrt(1.0 + 4.0 * p)) / 2.0);
        p = p + km + 1;
    }
    /* if we skipped n while computing losers, it lies strictly between
     * two losers, so n is a winner as long as it isn't equal to p */
    return p != n;
}
You have to think this game recursively from the end: Clearly, to win, you have to take the last stone.
1 stone: First player wins. It's A's turn to take the only stone.
2 stones: Second player wins. A cannot take two stones but has to take one. So A is forced to take one stone and leave the other one for B to take.
3 stones: First player wins. There is still no choice. A has to take one stone, and smiles because they know that B can't win with two stones.
4 stones: First player wins. Now A has the choice to leave two or three stones. From the above, A knows that B will win if given three stones, but B will lose if given two stones, so A takes two stones.
5 stones: Second player wins. Even though A has the choice to leave three or four stones, B will win if given either amount.
As you can see, you can easily calculate who will win a game with n stones by complete knowledge of the outcomes of the games with 1 to n-1 stones.
An algorithmic solution will thus create a boolean array wins, where wins[i] is true if the player given i stones will win the game. wins[0] is initialized to false. The rest of the array is then filled iteratively from the start by scanning the reachable portion of the array for a false entry. If a false entry is found, the current entry is set to true, because the player can leave the board in a losing state for the opponent; otherwise it is set to false.
I will build upon cmaster's answer because it is already pretty close. The question is how to efficiently calculate the values.
The answer is: We don't need the whole array. Only the false values are interesting. Let's analyze:
If we have a false value in the array, then the next few entries will be true because they can remove stones, such that the other player lands on the false value. The question is: How many true entries will be there?
If we are at the false entry z, then the entry x will be a true entry if x - sqrt(x) <= z. We can solve this for x and get:
x <= 1/2 * (1 + 2 * z + sqrt(1 + 4 * z))
This is the last true entry. E.g. for z = 2, this returns 4. The next entry will be false because the player can only remove stones, such that the opponent will come out at a true entry.
Knowing this, our algorithm is almost complete. Start at a known false value (e.g. 0). Then, iteratively move to the next false value until you reach n.
bool isWinner(long long n)
{
    long long loser = 0;
    while (n > loser)
        loser = (long long)floor(0.5 * (1 + 2 * loser + sqrt(1.0 + 4 * loser))) + 1;
    return n != loser;
}
Games like this (Towers of Hanoi is another classic example) are meant to illustrate the mathematical principles of induction and recursion, with recursion being particularly relevant in programming.
We want to determine whether a pile of n stones is a winning pile or a losing pile. Intuitively, a winning pile is one so that no matter what sequence of choices your opponent makes you can always take some number of stones to guarantee you will win. Similarly, a losing pile is one such that no matter what choice you make, you always leave your opponent a winning strategy.
Obviously, n = 0 is a losing pile; you've already lost. And n = 1 is a winning pile since you take one stone and leave your opponent n=0. What about n=2? Well, you are only allowed to take one stone, at which point you have given your opponent a winning pile (n=1), so n=2 is a losing number. We can make this mathematically more precise.
Definition: An integer n is a loser if n=0 or for every integer k between 1 and sqrt(n), n-k is a winner. An integer n is a winner if there exists some integer k between 1 and sqrt(n) such that n-k is a loser.
In this definition, n is the size of the pile, k is the number of stones you choose to take. A pile is a losing pile if every valid number of stones to remove gives your opponent a winning pile and a winning pile is one where some choice gives your opponent a losing pile.
Of course, that definition should make you a little uneasy because we actually have no idea if it makes sense for anything other than n=0,1,2, which we already checked. Perhaps some number fits the definition of both a winner and a loser, or neither definition. This would certainly be confusing. This is where induction comes in.
Theorem: Every nonnegative integer is either a winner or a loser, but not both.
Proof: We'll use the principle of Strong or Complete Induction. We know that n=0 is a loser (by definition) and we've already shown that n=1 is a winner and n=2 is a loser directly. Those are our base cases.
Now let's consider some integer n_0 > 2 and we'll assume (using strong induction) that every nonnegative integer less than n_0 is either a winner or a loser, but not both. Let s = floor(sqrt(n_0)) and consider the set of integers P = {n_0-s, n_0-s+1, ..., n_0 - 1}. (Since {1, 2, ..., s} is the set of possible choices of stones to remove, P is the set of piles I can leave my opponent with.) By strong induction, since each value in P is a nonnegative integer less than n_0, each of them is either a winner or a loser (but not both). If any value in P is a loser, then by definition, n_0 is a winner (because you remove enough stones to leave your opponent that losing pile). If not, then every value in P is a winner, so then n_0 is a loser (because no matter how many stones you take, your opponent is still left with a winning pile). Therefore, n_0 is either a winner or a loser, but not both.
By strong induction, we conclude that every nonnegative integer is either a winner or a loser but not both.
QED
OK, that was pretty straightforward if you are comfortable with induction. But all we've shown is that our very intuitive definition actually makes sense and that every pile you get is either a winner (if you play it right) or a loser (if your opponent plays it right). How do we determine which is which?
Well, induction leads right to recursion. Let's write recursive functions for our two definitions: is n a winner or a loser? Here's some C-like pseudocode without error checking.
bool is_winner(int n) {
// check all valid numbers of stones to remove (k)
for (int k = 1; k <= sqrt(n); k++) {
if (is_loser(n-k)) {
// I can force the loser n-k on my opponent, so n is a winner
return true;
}
}
// I didn't find a way to force a loser on my opponent, so this must be a loser.
return false;
}
bool is_loser(int n) {
if (n == 0) { // this is our base case
return true;
}
for (int k = 1; k <= sqrt(n); k++) {
if (!is_winner(n-k)) {
// we found a way to give them a pile that ISN'T a winner, so this isn't a loser
return false;
}
}
// Nope: every pile we can give our opponent is a winner, so this pile is a loser
return true;
}
Of course, the code above is somewhat redundant since we've already shown that every number is either a winner or a loser. Therefore, it makes more sense to implement is_loser as just returning !is_winner or vice-versa. Perhaps we'll just do is_winner as a stand-alone implementation.
bool is_winner(int n) {
if (n < 0) {
// raise error
} else if (n == 0) {
return false; // 0 is a loser
} else {
for (int k = 1; k <= sqrt(n); k++) {
if (!is_winner(n-k)) {
// we can give opponent a loser, this is a winner
return true;
}
}
// all choices give our opponent a winner, this is a loser
return false;
}
}
To use this function to answer the question, if the game starts with n stones and player A goes first and both players play optimally, player A wins if is_winner(n) and player B wins if !is_winner(n). To figure out what their plays should be, if you have a winning pile, you should choose some valid k such that n-k is a loser (it doesn't matter which one, but the largest value will make the game end fastest) and if you are given a losing pile, it doesn't matter what you choose -- that is the point of a loser, but again, choosing the largest value of k will make the game end sooner.
None of this really considers performance. Since n could be quite large, there are a number of things you could consider. Like, for example, pre-computing the common small values of n that you are going to consider, or using Memoization, at least within a single recursive call. Furthermore, as I suggested previously, removing the largest value of k ends the game with fewer turns. Similarly, if you reverse the loops and check the largest allowed values of k first, you should be able to reduce the number of recursive calls.
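As one concrete sketch of that memoization suggestion (Python, my own code; the cache turns the exponential recursion into one pass over the distinct pile sizes visited, and scanning k from largest to smallest tends to find a loser sooner):

```python
from functools import lru_cache
from math import isqrt

@lru_cache(maxsize=None)
def is_winner(n):
    # n is a winner iff removing some k in 1..floor(sqrt(n)) leaves a loser;
    # for n == 0 the range is empty and any() is False, i.e. a loser
    return any(not is_winner(n - k) for k in range(isqrt(n), 0, -1))
```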
Of course, the really fast way to do this is to do some more math and figure out some simple properties of n you can check that will determine whether or not it is a winner or a loser.
public class Solution {
    public boolean canWinNim(int n) {
        // first player wins the take-1-to-3 Nim variant iff n % 4 != 0
        return n % 4 != 0;
    }
}

Algorithmic complexity of examining all possible lines on a game board

What is the time complexity of examining all possible lines of length l on game board of n x m?
For instance, a tic-tac-toe board is 3x3 and lines are defined as length 3; there are 8 possible lines. We could also imagine a board that is 9x9 and a rule that you need a line of length 5 in order to win. How would you characterize the complexity of examining every possible line with different values of n, m and l?
Note this is not considering traversing the game tree into future game states, just examining the current state of the board to see if there is a winner or not in the present state.
Clearly, you need to check horizontal, vertical, and diagonal lines.
Let us assume that the board is laid out with n always being the bigger number if the two are not equal, and that it is lying on its side (LEGO style, not skyscraper style). So it is n across and m tall.
The horizontal lines will be m * (n - l + 1) in number (each of the m rows offers n - l + 1 starting positions).
The vertical lines will be n * (m - l + 1) in number.
The diagonal lines will be 2 * (n - l + 1) * (m - l + 1) in number, counting both diagonal directions.
If we assume n >= m > l, then the number of lines is within O(n^2), as we would expect for a two-dimensional board; examining each line costs a further O(l).
We know that l > n >= m will have no results. If n = m = l we have a constant number (2*n + 2). If n = l > m we are left with an even better case, as we don't have to check the diagonals or the verticals, and you only have to check m lines. If n > l > m, then we can again exclude the verticals, but must consider the diagonals. In any event, it will be less than doing the diagonals, verticals, and horizontals.
There is an optimization that can be made, however, based on phant0m's comment. It requires that you know what the last move made was.
You can safely assume that when a move was made, there was no winner on the board yet. Therefore, if there is a win condition on the board after the move, it must have been formed by the most recent move. The worst case given this information is that the winning line has the most recent move at one end, so you only need to check the four line segments that extend l - 1 cells in each direction from that move (horizontal, vertical, forward diagonal, and backward diagonal). This is a total of 4 * (2*l - 1) cells, putting the check safely in O(l). Considering you only need to store one additional coordinate (the most recent move), this is a most wise optimization to make.
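That last-move check can be sketched as follows (Python, my own code; `board` is assumed to be a list of strings with '.' for empty cells, and (r, c) is the move just made):

```python
def wins_after_move(board, r, c, l):
    """True if the stone just placed at (r, c) completes a line of length l."""
    rows, cols = len(board), len(board[0])
    player = board[r][c]
    # the four orientations: horizontal, vertical, and the two diagonals
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1                    # the stone at (r, c) itself
        for sign in (1, -1):         # extend up to l-1 cells both ways
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == player:
                count += 1
                rr += sign * dr
                cc += sign * dc
        if count >= l:
            return True
    return False
```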
