I will compete in the OBI (the Brazilian Olympiad of Informatics) and I am trying a few exercises from past years. But I can't find a solution for this exercise (I translated it, so there may be a few errors):
Chocolate Competition
Carlos and Paula just got a bag of chocolate balls. As they would eat
everything too quickly, they made a competition:
They will eat alternately, one after the other (Paula always starts).
Each time, a player can only eat from 1 to M balls; M is decided by Paula's mother, so they don't choke.
If one ate K balls in his/her turn, the next can't eat K balls.
Whoever can't play according to the rules above loses.
In the example below with M = 5 and 20 balls, Carlos has won:
Who plays | How many ate | Balls left
(start)   | -            | 20
Paula     | 5            | 15
Carlos    | 4            | 11
Paula     | 3            | 8
Carlos    | 4            | 4
Paula     | 2            | 2
Carlos    | 1            | 1
Note that at the end Carlos couldn't eat the 2 remaining balls to win, because Paula had eaten 2 in her previous turn, so he ate 1 instead. Then Paula couldn't eat the last ball, because Carlos had just eaten 1, so Paula can't play and loses.
Both are very smart and play optimally: if there is a sequence of moves that guarantees a player the victory regardless of the opponent's moves, he/she will play that sequence.
Task:
Your task is to find out who will win the competition if both play optimally.
Input:
The input contains only a single test group, which should be read from the standard input (usually the keyboard).
The input has 2 integers N (2 ≤ N ≤ 1000000) and M (2 ≤ M ≤ 1000), where N is the number of balls and M is the maximum number allowed per turn.
Output:
Your program should print, in the standard output, one line containing the name of the winner.
Examples:
Input | Output
5 3   | Paula
20 5  | Carlos
5 6   | Paula
I've been trying to solve the problem, but I have no idea how.
A solution in C can be found here: http://olimpiada.ic.unicamp.br/passadas/OBI2009/res_fase2_prog/programacao_n2/solucoes/chocolate.c.txt But I can't understand the algorithm. Someone posted a question about this problem on another site, but nobody replied.
Could you explain the algorithm to me?
Here are the expected outputs of the program: http://olimpiada.ic.unicamp.br/passadas/OBI2009/res_fase2_prog/programacao_n2/gabaritos/chocolate.zip
Let's say we have a boolean function FirstPlayerWin (FPW) that takes two arguments: the number of chocolates left (c) and the last move (l), i.e. the number of chocolates taken on the previous turn, which is 0 for the first move. The routine returns true if and only if the player to move in this situation is guaranteed a win.
The base case is that FPW(0, l) = false for any l, since the player to move cannot eat anything and therefore loses.
Otherwise, to calculate FPW(c, l): FPW(c, l) is true if there is some x with x <= M, x <= c and x != l such that FPW(c - x, x) is false; otherwise it is false. This is where dynamic programming kicks in, because the calculation of FPW is now reduced to calculating FPW for smaller values of c.
However, storing the entries for this formulation would require N * M table entries, whereas the solution you pointed to uses only 2N table entries.
The reason for this is that if FPW(c, 0) is true (the first player wins when every move is available at chocolate count c) but FPW(c, x) is false for some x > 0, then FPW(c, y) must be true for any y != x. This is because if denying the move x makes the player lose, i.e. the player could win only by playing x, then the move x is still available when y is banned instead. Therefore it is enough to store, for any count c, at most one forbidden move that causes the player to lose there. So you can refactor the dynamic programming problem so that instead of storing the full 2-dimensional array FPW(c, x) you keep two arrays: one stores the values FPW(c, 0), and the other stores, for each c, the single forbidden move that turns the first player's win into a loss, if any.
How you get to the exact text of the quoted C program is left as an exercise to the reader.
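To make the two-array formulation concrete, here is a minimal Python sketch (the function name winner is just for illustration): win[c] holds FPW(c, 0) and bad[c] holds the single forbidden move that makes the player to move lose at count c, or 0 if there is none.

def winner(n, m):
    win = [False] * (n + 1)  # win[c] = FPW(c, 0): player to move wins with no move banned
    bad = [0] * (n + 1)      # the unique banned move that would make them lose, or 0
    for c in range(1, n + 1):
        # A move y wins iff the opponent then loses at count c - y with last move y.
        winning = [y for y in range(1, min(m, c) + 1)
                   if not win[c - y] or y == bad[c - y]]
        win[c] = bool(winning)
        bad[c] = winning[0] if len(winning) == 1 else 0
    return "Paula" if win[n] else "Carlos"

print(winner(5, 3))   # Paula
print(winner(20, 5))  # Carlos
print(winner(5, 6))   # Paula

This agrees with the three sample cases in the question.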
I think this is yet another thinly disguised exercise in dynamic programming. The state of the game is described by two quantities: the number of balls remaining, and the number of balls eaten in the previous move. When the number of balls remaining is <= M and not equal to the number eaten in the previous move, the game is won immediately: you eat everything and your opponent cannot play. If the remaining count equals the previous move, you cannot finish in one go and have to look at the smaller positions your moves lead to.
If you have worked out the win/lose situation for all numbers of balls up to H, and all possible numbers of balls eaten by the previous player, and written this down in a table, then you can work out the answers for all numbers of balls up to H+1. A player facing H+1 balls with k balls eaten in the previous move will consider all possibilities: eat i balls for i = 1 to M, except for the illegal value i = k, leaving a position with H+1-i balls and a previous move of i. They can use the table giving the win/lose situation for up to H balls left to try to find a legal i that leaves the opponent in a losing position. If they can find such a value, the (H+1, k) position is a win; if not, it is a loss. Either way the table now covers up to H+1 balls, and so on.
I haven't worked through all the uncommented example code, but it looks like it could be doing something like this - using a dynamic programming like recursion to build a table.
Related
I have been working on the algorithm for this problem, but can't figure it out. The problem is below:
In a tournament with X players, each player is betting on the outcomes of basketball matches in the NBA.
Guessing the correct match outcome earns a player 3 points, guessing the MVP of the match earns 1 point, and guessing both wrong earns 0 points.
The algorithm needs to be able to determine if a certain player can't reach the number 1 spot in this betting game.
For example, let's say there are a total of 30 games in the league, so the max points a player can get for guessing right is (3+1)*30=120.
In the table below you can see players X,Y and Z.
Player X has so far guessed 20 matches correctly (outcome and MVP), so he has 80 points.
Players Y and Z have 26 and 15 points, and since there are only 10 matches left, even if they guess all of the remaining 10 correctly it would not be enough to reach the number 1 spot.
Therefore, the algorithm determined that they are eliminated from the game.
Team | Points | Points per match | Total Games | Max Points possible | Games left | Points Available | Eliminated?
X    | 80     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | N
Y    | 26     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | Y
Z    | 15     | 0-L 1-MVP 3-W    | 30          | 120                 | 10         | 0-40             | Y
The baseball elimination problem seems to be the most similar to this problem, but it's not exactly it.
How should I build the reduction of the maximum-flow problem to suit this problem?
Thank you.
I don't get why you are looking at very complex max-flow algorithms. Those might be needed for very complex things (especially when pairings lead to zero-sum results and order/remaining pairings start to matter -> much harder to do worst-case analysis).
Maybe the baseball problem you mention is one of those (did not check it). But your use-case sounds trivial.
1. Get current leader score LS
2. Get remaining matches N
3. For each player P
4. Get current player score PS
5. Eliminate iff PS + 4 * N < LS (4 = 3 + 1 is the maximum a player can still earn per match)
(assumes parallel progress: standings are synced, i.e. all players have guessed the same set of games so far
-> easy to generalize though)
This is simple. Given your description there is nothing preventing us from assuming worst-case performance from every other player, i.e. it is a valid scenario that every other player guesses wrong for all upcoming guesses -> the score S of a player P can stay at S for all remaining games.
Things might quickly change to NP-hard decision problems when there are more complex side constraints (e.g. statistical distributions / expectations).
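For illustration, here is a tiny Python sketch of that check, using the example numbers from the table in the question (the function name eliminated_players is made up):

def eliminated_players(scores, games_left, max_points_per_game=4):
    # A player is mathematically eliminated if even a perfect finish
    # cannot reach the current leader's score.
    leader = max(scores.values())
    return [p for p, s in scores.items()
            if s + max_points_per_game * games_left < leader]

scores = {"X": 80, "Y": 26, "Z": 15}
print(eliminated_players(scores, games_left=10))  # ['Y', 'Z']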
I've been practicing some probability problems for the last few weeks; around 10 per week usually did the job, but as the topics got harder I started struggling and now I'm completely stuck. I've been looking online and I did find similar examples, but nothing that touches my cases in particular. I will continue looking for an answer, so even if you can't answer my specific cases, links to online literature will be appreciated.
Answers are welcome; however, I'm more interested in an explanation of how each problem works.
A. An urn contains m white and k black balls. Two players draw balls one after another, without putting them back into the urn. The winner is the first person to draw a white ball. What is the probability that the second player will be the winner? (k = 4, m = 4)
B. Women vote with probability a, men do the same with probability b. The probability c tells us that, if we take a couple, one of them will not go voting. What are the chances that at least one of them will vote? (a = 0.49, b = 0.61, c = 0.75)
C. We are sending a message of n bytes. To obtain a higher chance of sending the entire message without ruining it, we use k different wires. What is the probability of sending the entire message through at least one of the wires without ruining it, if the probability of ruining any byte on any wire is p? (p = 0.06, n = 7, k = 6)
D. The basketball finals are played to N wins. After m + n games the result is m : n. What is the probability that the lagging team (the team that is behind) wins the finals, if it is known that the leading team wins each match with probability p? (m = 3, n = 2, N = 5, p = 0.36)
Sorry for my English; any help will be appreciated.
Just A
So given m = k and both = 4, with each player drawing one ball per turn: player 2 wins exactly when the first white ball appears on one of his draws, i.e. on the 2nd or the 4th draw overall (once all 4 black balls are gone, the 5th draw is guaranteed to be white and it belongs to player 1). Therefore you need to work out the probability for those two rounds.
Turn 1
- P1 draws black: 4/8
- P2 draws white: 4/7
Therefore the turn 1 chance = (4/8)*(4/7) = 2/7 ≈ 28.57%
Turn 2 (requires both players to have drawn black in turn 1, which happens with probability (4/8)*(3/7))
- P1 draws black: 2/6
- P2 draws white: 4/5
Therefore the turn 2 chance = (4/8)*(3/7)*(2/6)*(4/5) = 2/35 ≈ 5.71%
Turn 3 would be an instant win for player 1, therefore a loss for player 2: 0%.
Adding the two cases: 2/7 + 2/35 = 12/35 ≈ 34.3%. Top tip: be the first player!
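A minimal Python sketch to verify that figure exactly, for general m white and k black balls (the function name second_player_wins is just for illustration):

from fractions import Fraction

def second_player_wins(m, k):
    """Probability that player 2 draws the first white ball, with m white
    and k black balls drawn alternately without replacement."""
    prob = Fraction(0)
    all_black_so_far = Fraction(1)   # probability the first (draw - 1) balls were all black
    for draw in range(1, m + k + 1):
        black_left = k - (draw - 1)
        if black_left < 0:
            break
        total_left = m + black_left
        first_white_here = all_black_so_far * Fraction(m, total_left)
        if draw % 2 == 0:            # even-numbered draws belong to player 2
            prob += first_white_here
        all_black_so_far *= Fraction(black_left, total_left)
    return prob

print(second_player_wins(4, 4))  # 12/35, about 0.343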
It is ultimately a game of Nim with certain modifications.
Rules are as follows :
There are two players A and B.
The game is played with two piles of matches. Initially, the first pile contains N matches and the second one contains M matches.
The players alternate turns; A plays first.
On each turn, the current player must choose one pile and remove a positive number of matches (not exceeding the current number of matches on that pile) from it.
It is only allowed to remove X matches from a pile if the number of matches in the other pile divides X.
The player that takes the last match from any pile wins.
Both players play optimally.
My take :
Let us say we have two piles (2, 7). We have 3 ways to reduce the second pile, viz.: (2, 1), (2, 3), (2, 5). If A is playing optimally, he/she will go for (2, 3) so that the only move left for B is (2, 1), and then A can go to (0, 1) and win the game. The crux of the solution is that if A or B ever encounters a situation in which they could directly lose in the next step, they will try their best to avoid it and use the situation to their advantage by leaving the state one step before that losing stage.
But this approach fails for some unknown test cases. Is there any better approach to find the winner, or any other test case which defies this logic?
This is a classic dynamic programming problem. First, find a recurrence relationship that describes an outcome of a game in terms of smaller games. Your parameters here are X and Y, where X is the number of matches in one stack and Y in the other. How do I reduce this problem?
Well, suppose it is my turn with piles X and Y, and suppose that the multiples of Y not exceeding X are a1, a2, and a3, while the multiples of X not exceeding Y are b1, b2, b3. Then I have six possible moves. The problem reduces to solving (X-a1, Y), (X-a2, Y), (X-a3, Y), (X, Y-b1), (X, Y-b2), (X, Y-b3). Once these six smaller games are solved, if one of them is a losing position for the player who moves there (my opponent), then I make the corresponding move and win the game.
There is one more parameter you could track, namely whose turn it is; this would double the size of the state space, though since both players follow the same rules it is enough to ask whether the player to move wins.
The key is to find all possible moves, and recur for each of them, keeping a storage of already solved games for efficiency.
The base case needs to be figured out naturally.
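Here is a minimal memoized sketch of that idea in Python, under the reading of the rules that emptying either pile is an immediate win for the player who does it (the function name wins is illustrative):

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(x, y):
    """True iff the player to move can win from piles (x, y), both non-empty."""
    # From pile x we may remove any positive multiple of y, and vice versa.
    for take in range(y, x + 1, y):
        if take == x or not wins(x - take, y):   # emptying a pile wins immediately
            return True
    for take in range(x, y + 1, x):
        if take == y or not wins(x, y - take):
            return True
    return False

print("A" if wins(2, 7) else "B")  # A, matching the (2, 7) analysis above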
I am trying to find a solution to this board game problem; this is how it goes.
Mike and Bob are playing a board game on an m by n board. The board has at least 3 rows and 3 columns. The bottom row is row 0 and the top row is row m − 1; the left-most column is column 0 and the right-most column is column n − 1.
Mike moves first, then Bob, then Mike, then Bob, etc. until the game is over. The game is over when one of two things happens: Bob wins or Mike wins.
Bob wins if he lands on the same square as Mike before Mike reaches the top row. Note that this winning condition is checked only after Bob moves; Bob can never win right after Mike moves, even if Mike lands on the same square as Bob.
Mike wins if Mike reaches the top row before Bob wins, i.e., Mike reaches the top row without Bob ever landing on the same square as Mike. As soon as Mike reaches the top row, Mike wins (Bob cannot move anymore).
Mike's move on each turn is fixed according to the following rule: he goes 1 square up and then, if not already at the right edge, goes 1 square to the right as well.
Bob, by contrast, has eight choices of move to make on his turn:
1 up, 2 right
1 up, 2 left
1 down, 2 right
1 down, 2 left
2 up, 1 right
2 up, 1 left
2 down, 1 right
2 down, 1 left
Note that some moves may be unavailable depending on Bob's location; for example, if Bob is already in column n − 1, then any move that tries to go to the right is not allowed.
Given the starting locations of Mike and Bob, come up with a solution that determines the result of the game, as follows:
If it is possible for Bob to win, then report that Bob can win and give the minimum number of Bob moves required for him to win.
Otherwise, Mike wins; report the number of Bob moves that occur before Mike wins.
We can assume that:
Mike's starting location is never in the top row
Bob's starting location is never the same as Mike's
We can use any type of method we like; the only real restrictions on solving this problem are the ones mentioned above. How would we approach a question like this?
Notation
First note that Mike has a fixed, predefined movement strategy.
Let us denote by M0, M1, .., Mk the cells that Mike would go through in his path towards the top row. Here M0 is the starting cell of Mike, and Mk is the cell on the top row that Mike would reach if Bob would not intervene.
Additionally, let us denote by B0 the cell where Bob starts from. According to the statement it is guaranteed that B0 is different from M0.
Reformulation
The question is whether there is a sequence of i moves for Bob, B0 -> B1 -> .. -> Bi, such that Bi = Mi, for some i between 1 and k - 1.
Approach
One possible approach is to generate the list of all possible cells that can be reached in exactly i moves, for any i.
For i = 0, there is a single cell that can be reached by Bob, the starting position B0.
Afterwards, for i = 1, 2, .. k - 1, we should consider every cell that can be reached in i - 1 steps (we already know this set of cells), and perform each legal move for Bob. The resulting list of positions would be the set of cells reachable in i steps.
Now that we found the cells that can be reached in i steps by Bob, we should simply check whether Mi is in that set. If this happens for at least one i between 1 and k - 1, then Bob wins.
Implementation
For efficiency, it is important to make sure that if the same cell is reached from different sources in the same number of steps, i, then the duplicates are stored only once.
This removal of duplicates is essential, because otherwise the size of the list of reachable cells would grow exponentially with i.
One approach would be to maintain an m by n matrix of booleans for each i, specifying whether the respective cell can be reached in i steps. Improvements in terms of memory are possible, such that only the matrix of booleans for the current i is maintained (along with a temporary matrix to help advancing towards the next set), but a little bit more care would be needed in that case.
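As an illustration, here is a short Python sketch of this level-by-level reachability idea; the board size and the two starting cells below are made-up example inputs, and cells are (row, column) with row 0 at the bottom:

BOB_MOVES = [(1, 2), (1, -2), (-1, 2), (-1, -2), (2, 1), (2, -1), (-2, 1), (-2, -1)]

def play(m, n, mike_start, bob_start):
    # Mike's fixed path: 1 up, then 1 right unless already at the right edge.
    mike = [mike_start]
    while mike[-1][0] < m - 1:
        r, c = mike[-1]
        mike.append((r + 1, min(c + 1, n - 1)))
    k = len(mike) - 1                  # Mike reaches the top row on his k-th move
    reachable = {bob_start}            # cells Bob can occupy after i of his moves
    for i in range(1, k):              # Bob only moves while Mike is still below the top row
        nxt = set()
        for r, c in reachable:
            for dr, dc in BOB_MOVES:
                nr, nc = r + dr, c + dc
                if 0 <= nr < m and 0 <= nc < n:
                    nxt.add((nr, nc))
        reachable = nxt
        if mike[i] in reachable:
            return "Bob", i            # minimum number of Bob moves needed to win
    return "Mike", k - 1               # Bob moves made before Mike reaches the top

print(play(5, 5, (0, 0), (0, 4)))      # e.g. ('Mike', 3) for these example starts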
Old topic, but I feel like puzzling for a moment:
Start by general observations
On a 3 by 3 board Bob wins in exactly 4 starting positions, namely those where Bob can move to (1, 2) (order of coordinates here is row m before column n, i.e. not the usual chess notation) and Mike moves there as well.
If Bob moves onto Mike's square, Mike can win the very next move or never at all, because he generally needs two moves from Bob's square to Bob's next square.
So as long as Bob moves diagonally it is just a matter of distance and speed (Mike is 50% faster, just calculate distance to Bob's diagonal).
On the right border Mike is 100% faster. He still has to catch up before the top. Plus, Bob has no way of winning if the two players keep ending up on squares of different colors after one and the same move (Bob's moves change the square color every turn, while Mike's diagonal moves do not).
This is a simple game:
There is a sequence A={a1,...,an}. The opponents alternately choose either the first or the last element of what remains, and at the end the one who has collected the larger total wins. Now, say each participant does his best; what I need to do is write a dynamic programming algorithm to compute their scores.
Any idea or clue is truly appreciated.
Here's a hint: to write a dynamic programming algorithm, you typically need a recurrence. Given
A={a1,...,an}
The recurrence would look something like this
f(A)= max( f({a1,...,a_n-1}) , f({a2,...,a_n}) )
Actually, the recurrence relation given by dfb may not lead to the right answer, as it does not capture the right optimal substructure!
Assume player A begins the game:
the structure of the problem for him is [a1, a2, ..., an].
After choosing an element, either a1 or an, it is player B's turn to play, and after that move it is player A's move again.
So after two moves player A's turn will come again, and this is the right sub-problem for him. The right recurrence relation will be:
Suppose the elements from i to j are left:
A(i,j) = max( a[i] + min(A(i+2,j), A(i+1,j-1)), a[j] + min(A(i,j-2), A(i+1,j-1)) )
Refer to the following link :
http://people.csail.mit.edu/bdean/6.046/dp/
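Here is a small memoized Python sketch of that recurrence, writing A(i, j) as best(i, j): the best total the player to move can collect from a[i..j], where the min reflects the opponent replying optimally. The sample list is the same [3,1,1,3,1,1,3] used in the example code below.

from functools import lru_cache

a = [3, 1, 1, 3, 1, 1, 3]

@lru_cache(maxsize=None)
def best(i, j):
    if i > j:
        return 0
    if i == j:
        return a[i]
    # After our pick the opponent moves optimally, leaving us the worse sub-game.
    take_left = a[i] + min(best(i + 2, j), best(i + 1, j - 1))
    take_right = a[j] + min(best(i, j - 2), best(i + 1, j - 1))
    return max(take_left, take_right)

print(best(0, len(a) - 1))  # 6: the first player's optimal total here (the second player gets 7)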
EXAMPLE CODE
Here is Python code to compute the optimal score for first and second players.
A = [3, 1, 1, 3, 1, 1, 3]
cache = {}

def go(a, b):
    """Find greatest difference between player 1 coins and player 2 coins when choosing from A[a:b]"""
    if a == b: return 0           # no stacks left
    if a == b - 1: return A[a]    # only one stack left
    key = a, b
    if key in cache:
        return cache[key]
    v = A[a] - go(a + 1, b)               # taking first stack
    v = max(v, A[b - 1] - go(a, b - 1))   # taking last stack
    cache[key] = v
    return v

v = go(0, len(A))
n = sum(A)
print((n + v) // 2, (n - v) // 2)
COUNTEREXAMPLE
Note that the code includes a counterexample to one of the other answers to this question.
Consider the case [3,1,1,3,1,1,3].
By symmetry, the first player's move always leaves the pattern [1,1,3,1,1,3].
For this, the sum of the even-indexed elements is 1+3+1=5, while the sum of the odd-indexed ones is 1+1+3=5, so the argument is that from this position the second player will always win 5, and the first player will always win 5, so the first player will win (as he gets 5 in addition to the 3 from the first move).
However, this logic is flawed because the second player can actually get more.
First player takes 3, leaves [1,1,3,1,1,3] (only choice by symmetry)
Second player takes 3, leaves [1,1,3,1,1]
First player takes 1, leaves [1,3,1,1] (only choice by symmetry)
Second player takes 1, leaves [1,3,1]
First player takes 1, leaves [3,1] (only choice by symmetry)
Second player takes 3, leaves [1]
First player takes 1
So overall first player gets 3+1+1+1=6, while second gets 3+1+3=7 and second player wins.
The flaw is that although it is true that the second player can play such that they will win all even or all odd positions, this is not optimal play and they can actually do better than this in some cases.
Actually you do not need dynamic programming, because it is easy to find an explicit solution for the game above.
Case n is even or n = 1.
The second player to move will always lose.
Case n odd and n > 1.
The second player has a winning strategy iff one of the following 2 scenarios happen:
The elements with even index have a bigger sum than all the elements with odd index
All odd-indexed elements except the last have a bigger sum than all the remaining ones AND
All odd-indexed elements except the first have a bigger sum than all the remaining ones.
Proof sketch:
Case n is even or n = 1: Let Seven and Sodd be the sums of all elements with even and odd indexes, respectively. Assume that Sodd > Seven; the same argument holds otherwise. The first player has a winning strategy, since he can play in such a way that he will get all the odd-indexed items.
The case n is odd and n > 1 can also be resolved directly. In fact the first player has two options: he can take the first or the last element of the set. Of the remaining elements, partition them into the two subsets with odd and even indexes; by the argument above, the second player is going to take the subset with the largest sum. If you expand the game tree you will end up with the statement above.