Question
A simple two-player game involves a pile of N matchsticks and two
players who have alternating turns. In each turn, a player removes 1,
2 or 3 matchsticks from the pile. The player who removes the last
matchstick loses the game.
A) What are the branching factor and depth of the game tree (give a general solution expressed in terms of N)? How large is the search
space?
B) How many unique states are there in the game? For large N what could be done to make the search more efficient?
Answer
A) I said the branching factor would be 3, and I justified this by the fact that a player can only ever remove up to 3 matches, so each node in the tree usually has three children. As for the second part, regarding the depth, I'm not sure.
B) N x 2, where N is the number of matches remaining. I am not sure how we could make the search more efficient, though. Maybe by introducing alpha-beta pruning?
A :
For the depth, just imagine what the longest possible game would look like. It is the game in which both players remove only 1 match on each turn. Since there are N matches, such a game takes N turns: the tree has depth N.
B :
There are only 2*N states, each of them reachable from at most 3 states with a higher matchstick count. Since the number of matches necessarily goes down as the game goes on, the graph of possible states is a DAG (Directed Acyclic Graph). A dynamic programming method is therefore possible to analyze this game. In the end, you will see that the optimal move only depends on N mod 4, with N the number of remaining matches.
EDIT : Proof idea for the N mod 4 :
Every position is either a losing or a winning position. A losing position is a situation where, no matter what you play, you will lose if your adversary plays optimally. Similarly, a winning position is a situation where, if you play the right moves, the adversary cannot win. N=1 is a losing position (by definition of the game). Therefore N=2, 3, 4 are winning positions, because by removing the right number of matches you put the adversary in a losing position. N=5 is a losing position, because no matter what admissible number of matches you remove, you put the adversary in a winning position. N=6, 7, 8 are winning positions... you get the idea.
Now it is just about making this proof formal: take as hypothesis that a position N is a losing position if and only if N mod 4 = 1. If that is true up to some integer k, you can prove that it is true for k+1. It is true up to k = 4, as we showed earlier. By induction, it is true for any N.
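A minimal sketch of that dynamic-programming analysis in Python (assuming the misère rule above, where taking the last match loses); it tabulates each pile size as winning or losing for the player to move:

def losing_positions(max_n):
    """lose[n] is True when the player to move from a pile of n matches loses under optimal play."""
    lose = [False] * (max_n + 1)
    lose[1] = True                      # forced to take the last match
    for n in range(2, max_n + 1):
        # Taking all n matches (possible only when n <= 3) means taking the last match
        # yourself, so only moves leaving at least one match can help: n is losing iff
        # every such move leaves the opponent in a winning position.
        lose[n] = all(not lose[n - k] for k in (1, 2, 3) if k < n)
    return [n for n in range(1, max_n + 1) if lose[n]]

print(losing_positions(20))   # [1, 5, 9, 13, 17], i.e. exactly the piles with N mod 4 == 1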
The state of the game at any time can be described by whose turn it is and the number of matches remaining in the pile. After n moves there are 3^n possible histories, but for large n there are many fewer than 3^n possible states, so you can save time by, for example, recognising that you are about to encounter a state that you have already encountered and worked out a value for before.
See also https://en.wikipedia.org/wiki/Nim - if this is Nim, or a variety of Nim, there are efficient strategies already worked out for it.
Related
Let's say we have N teams in a tournament, and based on historical data we know the probability of each team beating any other team. Let's put all the probabilities in a matrix called P, where P[a][b] is the probability of team a beating team b. Obviously P[a][a] = 0 and P[a][b] = 1 - P[b][a].
In this tournament, at every round two of the teams compete against each other and the loser is eliminated. These two teams are chosen uniformly at random. So in the first round we have N teams, in the next N-1 teams, and so on, until only one team remains and becomes the champion. What is the probability of each team becoming the champion? (1 <= N <= 18).
At first I didn't know how to approach the problem, but after some reading and searching, and keeping in mind that the maximum N is 18, I figured out that dynamic programming with a bitmask is the way to go. However, I couldn't figure out a solution. Here are my problems:
I have a really hard time figuring out what the subproblems are and which subproblems should not be recomputed; basically, I can't find a well-defined recursive (or non-recursive) equation for the problem.
In bitmask+DP problems we usually define something like dp[mask][n] or dp[n][mask]. I tried different approaches to defining the mask, but since the general solution is not clear to me, there was no success.
Some guidance on these two problems would be very helpful.
This is not really a dynamic programming problem.
If you have a vector V that gives the probability of each player being in the game after n rounds, then you can calculate the player probabilities for n+1 rounds by:
V'_i = 2/((18-n)(17-n)) * sum over all j != i of [ V_i * V_j * P[i][j] ]
That first factor is the probability that any given available match will be chosen, which depends on the number of previous rounds, because each successive round has fewer players to match up.
The second part is the probability of both players being available for the match, times the probability that the current player will win it.
Just do this calculation 17 times to get the player probabilities after 17 rounds, which is the answer you're looking for. You can even drop that first factor, and fix it at the end by normalizing the vector so that the probabilities sum to 1.
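If it helps to see that iteration spelled out, here is a direct transcription in Python; the 3-team matrix P below is a made-up example, the leading factor is dropped, and the vector is renormalized each round, as suggested above:

def champion_probabilities(P):
    """Iterate the survival update described above for len(P) teams; P[a][b] is the probability that team a beats team b."""
    n_teams = len(P)
    V = [1.0] * n_teams                         # every team starts in the game
    for _ in range(n_teams - 1):                # one update per elimination round
        V = [V[i] * sum(V[j] * P[i][j] for j in range(n_teams) if j != i)
             for i in range(n_teams)]
        total = sum(V)                          # dropped factor: renormalize instead
        V = [v / total for v in V]
    return V

# Made-up example: team 0 beats either opponent with probability 0.6.
P = [[0.0, 0.6, 0.6],
     [0.4, 0.0, 0.5],
     [0.4, 0.5, 0.0]]
print(champion_probabilities(P))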
It is ultimately a game of Nim with a certain modification.
Rules are as follows :
There are two players A and B.
The game is played with two piles of matches. Initially, the first pile contains N matches and the second one contains M matches.
The players alternate turns; A plays first.
On each turn, the current player must choose one pile and remove a positive number of matches (not exceeding the current number of matches on that pile) from it.
It is only allowed to remove X matches from a pile if the number of matches in the other pile divides X.
The player that takes the last match from any pile wins.
Both players play optimally.
My take :
Let us say we have two piles, 2 and 7. We have 3 ways to reduce the second pile, viz. 2 1, 2 3, 2 5. If A is playing optimally, he/she will go for 2 3, so that the only move left for B is 2 1, and then A can go to 0 1 and win the game. The crux of the solution is that if A or B ever encounters a situation where they could lose directly on the next step, they will try their best to avoid it and use the situation to their advantage by leaving the game in a state one step before that losing stage.
But this approach fails on some unknown test cases. Is there a better approach to finding the winner, or another test case that defies this logic?
This is a classic dynamic programming problem. First, find a recurrence relation that describes the outcome of a game in terms of smaller games. Your parameters here are X and Y, where X is the number of matches in one pile and Y in the other. How do I reduce this problem?
Well, suppose it is my turn with piles X and Y, and suppose that the multiples of Y not exceeding X are a1, a2 and a3, while the multiples of X not exceeding Y are b1, b2 and b3. Then I have six possible moves, and the problem reduces to solving (X-a1, Y), (X-a2, Y), (X-a3, Y), (X, Y-b1), (X, Y-b2), (X, Y-b3). Once these six smaller games are solved, if one of them is a losing position for the player to move (i.e. my opponent), then I make the corresponding move and win the game.
There is one more parameter, which is whose turn it is; this doubles the size of the state space.
The key is to find all possible moves and recurse on each of them, keeping a store of already-solved games for efficiency.
The base cases need to be figured out, naturally.
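Here is a small memoized sketch of that idea in Python, under the rules listed above (a move removes a positive multiple of the other pile's size, and emptying a pile wins); the function name is mine:

from functools import lru_cache

@lru_cache(maxsize=None)
def to_move_wins(x, y):
    """True if the player to move wins from non-empty piles (x, y)."""
    # From the first pile we may remove any positive multiple of y, up to x;
    # removing the whole pile takes its last match, which wins immediately.
    for k in range(y, x + 1, y):
        if k == x or not to_move_wins(x - k, y):
            return True
    # Symmetrically, from the second pile remove a positive multiple of x.
    for k in range(x, y + 1, x):
        if k == y or not to_move_wins(x, y - k):
            return True
    return False

print("A" if to_move_wins(2, 7) else "B")   # "A", matching the 2 7 analysis above

Whose turn it is does not need a separate table dimension here, because the function is always evaluated from the point of view of the player about to move. For large N and M this exhaustive recursion is far too slow, but it is handy for checking small cases against a conjectured rule.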
I have a list of lists of the form l = [[1,2,3],[3,4,6],...]. There are m sublists, each representing a player. Each player can perform a number of tasks (there are n tasks in total). I would like to find the shortest path through all the tasks by minimizing the number of switches between players, so basically have the same player perform consecutive tasks as often as possible. I'm trying to write an algorithm that optimizes this in polynomial time, but I'm having a bit of trouble coming up with a good scheme. I was thinking it could be like Dijkstra's algorithm, but I'm not exactly sure how to adapt it to my case. Below is a concrete example of what I want.
Example
n = 5 and m = 3 such that we have a list of lists l = [[1,2,5],[1,3,5],[2,3,4]]
The algorithm would return [0,2,2,2,0]
i.e. player 0 would be chosen first, then we swap to player 2 for 3 tasks, then back to player 0 for the last task.
I'm just looking for pseudocode or a push in the right direction. I'm really struggling, and brute force won't work for large inputs!
Since it is never beneficial to have a player perform fewer consecutive tasks than he can, a simple greedy algorithm suffices to find the optimal solution:
Starting with task 1, find the player that can execute the largest number of consecutive tasks starting with that first task.
Starting with the first task that the previously found player can't do, find the player that can execute the largest number of consecutive tasks starting with that task.
Repeat until all the tasks are done.
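A minimal sketch of that greedy in Python (assuming tasks are numbered 1..n and each sublist holds the tasks one player can perform; the names are mine):

def assign_tasks(task_lists, n):
    """Greedy: at each position pick a player who can cover the longest run of consecutive tasks starting there."""
    can_do = [set(tasks) for tasks in task_lists]
    assignment, t = [], 1
    while t <= n:
        best_player, best_run = None, 0
        for player, tasks in enumerate(can_do):
            run = 0
            while t + run <= n and (t + run) in tasks:
                run += 1
            if run > best_run:
                best_player, best_run = player, run
        if best_run == 0:
            raise ValueError("no player can perform task %d" % t)
        assignment.extend([best_player] * best_run)
        t += best_run
    return assignment

print(assign_tasks([[1, 2, 5], [1, 3, 5], [2, 3, 4]], 5))   # [0, 0, 2, 2, 0]

On the example above this prints [0, 0, 2, 2, 0] rather than [0, 2, 2, 2, 0], but both use two switches; as the proof below notes, every maximal-run choice is equally optimal.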
Here's a proof that this algorithm is optimal:
Let's say there's an optimal solution that has player A performing tasks i through j and then player B performing tasks j+1 through k.
If there is any player (including A) that can perform tasks i through j+1, then we can use that player to do those tasks instead, and the solution will be as good or better. Either B will perform tasks j+2 through k, and the number of player switches will be the same, or j+1 = k and we won't need player B at all.
Therefore there is an optimal solution in which every chosen player maximizes the number of consecutive moves that can be performed by that player. In fact, since every such solution is equivalent, they are all optimal.
EDIT: As I was writing this, Pham suggested using a segment tree, but no such complex data structure is necessary. If the sublists are sorted and you build an index from each task number to the sublist positions at which it can be found, then you can do this in O(N) time.
This is a simple game:
There is a set A = {a1,...,an}. The players alternately take either the first or the last element of the remaining sequence, and at the end the one who has collected the bigger total wins. Now, assuming each participant plays optimally, what I need to do is write a dynamic programming algorithm to compute their scores.
Any idea or clue is truly appreciated.
Here's a hint: to write a dynamic programming algorithm, you typically need a recurrence. Given
A={a1,...,an}
The recurrence would look something like this
f(A) = max( f({a1,...,a_{n-1}}), f({a2,...,a_n}) )
Actually, the recurrence relation given by dfb may not lead to the right answer, as it does not capture the right optimal substructure!
Assume player A begins the game:
the structure of the problem for him is [a1, a2, ..., an].
After he chooses an element, either a1 or an, it is player B's turn to play, and after that move it is player A's move again.
So player A's turn comes back after two moves, and that is the right sub-problem for him. The right recurrence relation will be:
Suppose elements i through j are left:
A(i,j) = max( a[i] + min(A(i+1,j-1), A(i+2,j)), a[j] + min(A(i,j-2), A(i+1,j-1)) )
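If it helps, here is a small memoized sketch of that recurrence in Python, where A(i, j) is read as the best total the player to move can collect from a[i..j] and empty ranges contribute 0 (the function name is mine):

from functools import lru_cache

def best_first_player_score(a):
    @lru_cache(maxsize=None)
    def A(i, j):
        if i > j:                 # empty range: nothing left to collect
            return 0
        # After my pick, the opponent replies so as to leave me the worse remainder.
        take_left = a[i] + min(A(i + 2, j), A(i + 1, j - 1))
        take_right = a[j] + min(A(i + 1, j - 1), A(i, j - 2))
        return max(take_left, take_right)
    return A(0, len(a) - 1)

print(best_first_player_score([3, 1, 1, 3, 1, 1, 3]))   # 6, out of a total of 13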
Refer to the following link :
http://people.csail.mit.edu/bdean/6.046/dp/
EXAMPLE CODE
Here is Python code to compute the optimal score for first and second players.
A = [3, 1, 1, 3, 1, 1, 3]
cache = {}

def go(a, b):
    """Find the greatest difference between player 1's coins and player 2's coins when choosing from A[a:b]."""
    if a == b: return 0          # no stacks left
    if a == b - 1: return A[a]   # only one stack left
    key = a, b
    if key in cache:
        return cache[key]
    v = A[a] - go(a + 1, b)              # taking the first stack
    v = max(v, A[b - 1] - go(a, b - 1))  # taking the last stack
    cache[key] = v
    return v

v = go(0, len(A))
n = sum(A)
print((n + v) // 2, (n - v) // 2)   # first player's total, second player's total
COUNTEREXAMPLE
Note that the code includes a counterexample to one of the other answers to this question.
Consider the case [3,1,1,3,1,1,3].
By symmetry, the first player's move always leaves the pattern [1,1,3,1,1,3].
For this position the sum of the even-indexed elements is 1+3+1=5 and the sum of the odd-indexed elements is 1+1+3=5, so the argument goes that from here the second player will always collect 5 and the first player will collect the other 5, so the first player will win (as he gets 5 in addition to the 3 from his first move).
However, this logic is flawed because the second player can actually get more.
First player takes 3, leaves [1,1,3,1,1,3] (only choice by symmetry)
Second player takes 3, leaves [1,1,3,1,1]
First player takes 1, leaves [1,3,1,1] (only choice by symmetry)
Second player takes 1, leaves [1,3,1]
First player takes 1, leaves [3,1] (only choice by symmetry)
Second player takes 3, leaves [1]
First player takes 1
So overall the first player gets 3+1+1+1=6, while the second gets 3+1+3=7, and the second player wins.
The flaw is that although it is true that the second player can play so that they win all the even-indexed or all the odd-indexed positions, this is not optimal play, and they can actually do better than this in some cases.
Actually you do not need dynamic programming, because it is easy to find an explicit solution for the game above.
Case n is even or n = 1.
The second player to move will always lose.
Case n odd and n > 1.
The second player has a winning strategy iff one of the following two scenarios holds:
The elements with even index have a bigger sum than the elements with odd index; or
All the odd-indexed elements except the last have a bigger sum than all the remaining elements, AND
all the odd-indexed elements except the first have a bigger sum than all the remaining elements.
Proof sketch:
Case n is even or n = 1: let S_odd and S_even be the sums of the elements with odd and even indexes respectively. Assume that S_odd > S_even; the same argument holds otherwise. The first player has a winning strategy, since he can play in such a way that he gets all the odd-indexed items.
The case n odd and n > 1 can also be resolved directly. In fact the first player has two options: he can take the first or the last element of the set. Of the remaining elements, partition them into the two subsets with odd and even indexes; by the argument above, the second player is going to take the subset with the larger sum. If you expand the game tree you will end up with the statement above.
So, this is a common interview question. There's already a topic up, which I have read, but it's dead, and no answer was ever accepted. On top of that, my interests lie in a slightly more constrained form of the question, with a couple practical applications.
Given a two dimensional array such that:
Elements are unique.
Elements are sorted along the x-axis and the y-axis.
Neither sort predominates, so neither sort is a secondary sorting parameter.
As a result, the diagonal is also sorted.
All of the sorts can be thought of as moving in the same direction. That is to say that they are all ascending, or that they are all descending.
Technically, I think as long as you have a >/=/< comparator, any total ordering should work.
Elements are numeric types, with a single-cycle comparator.
Thus, memory operations are the dominating factor in a big-O analysis.
How do you find an element? Only worst case analysis matters.
Solutions I am aware of:
A variety of approaches that are:
O(n log n), where you approach each row separately.
O(n log n) with strong best- and average-case performance.
One that is O(n+m):
Start in a non-extreme corner, which we will assume is the bottom right.
Let the target be J, and let M be the value at the current position.
If M is greater than J, move left.
If M is less than J, move up.
If you can do neither, you are done, and J is not present.
If M is equal to J, you are done.
Originally found elsewhere, most recently stolen from here.
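Here is a sketch of that corner walk in Python for a matrix whose rows and columns are both sorted ascending (in this row-major orientation the non-extreme starting corner is the top-right, and the two moves become left and down); the grid is a made-up example:

def saddleback_search(matrix, target):
    """O(n + m) search in a matrix with rows and columns sorted ascending. Returns (row, col) or None."""
    if not matrix or not matrix[0]:
        return None
    row, col = 0, len(matrix[0]) - 1          # start at the top-right corner
    while row < len(matrix) and col >= 0:
        value = matrix[row][col]
        if value == target:
            return (row, col)
        elif value > target:
            col -= 1                          # remaining entries in this column are all larger
        else:
            row += 1                          # remaining entries in this row are all smaller
    return None                               # walked off the matrix: target absent

grid = [[1, 4, 7],
        [2, 5, 8],
        [3, 6, 9]]
print(saddleback_search(grid, 6))    # (2, 1)
print(saddleback_search(grid, 10))   # None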
And I believe I've seen one with a worst case of O(n+m) but an optimal case of nearly O(log(n)).
What I am curious about:
Right now, I have proved to my satisfaction that a naive partitioning attack always devolves to n log(n). Partitioning attacks in general appear to have an optimal worst case of O(n+m), and most do not terminate early when the element is absent. I was also wondering, as a result, whether an interpolation probe might not be better than a binary probe, and it occurred to me that one might think of this as a set-intersection problem with a weak interaction between the sets. My mind cast immediately towards Baeza-Yates intersection, but I haven't had time to draft an adaptation of that approach. However, given my suspicion that the optimality of an O(n+m) worst case is provable, I thought I'd just go ahead and ask here, to see if anyone could bash together a counter-argument, or pull together a recurrence relation for interpolation search.
Here's a proof that it has to be at least Omega(min(n,m)). Let n >= m. Then consider the matrix which has all 0s at (i,j) where i+j < m, all 2s where i+j >= m, except for a single (i,j) with i+j = m which has a 1. This is a valid input matrix, and there are m possible placements for the 1. No query into the array (other than the actual location of the 1) can distinguish among those m possible placements. So you'll have to check all m locations in the worst case, and at least m/2 expected locations for any randomized algorithm.
One of your assumptions was that matrix elements have to be unique, and I didn't do that. It is easy to fix, however, because you just pick a big number X=n*m, replace all 0s with unique numbers less than X, all 2s with unique numbers greater than X, and 1 with X.
And because it is also Omega(lg n) (counting argument), it is Omega(m + lg n) where n>=m.
An optimal O(m+n) solution is to start at the top-left corner, that has minimal value. Move diagonally downwards to the right until you hit an element whose value >= value of the given element. If the element's value is equal to that of the given element, return found as true.
Otherwise, from here we can proceed in two ways.
Strategy 1:
Move up in the column and search for the given element until we reach the end. If found, return found as true
Move left in the row and search for the given element until we reach the end. If found, return found as true
return found as false
Strategy 2:
Let i denote the row index and j denote the column index of the diagonal element we have stopped at. (Here, we have i = j, BTW). Let k = 1.
Repeat the steps below while i-k >= 0:
Search if a[i-k][j] is equal to the given element. if yes, return found as true.
Search if a[i][j-k] is equal to the given element. if yes, return found as true.
Increment k
1 2 4 5 6
2 3 5 7 8
4 6 8 9 10
5 8 9 10 11