Approaching algorithms with modulo usage - algorithm

I was doing some coding challenges and a problem came up that said roughly this:
"Two players each taking turns starting with player one. There are N
sticks given, each player takes 1, 2 or 3 sticks on their turn, the
player to take the last stick loses, the goal is to find an algorithm
that lets player one win with certainty (not always possible, player two is supposed to take turns that will ensure victory) and output 1, 2 or 3 as
the starting amount of sticks taken or 0 if it's impossible to win.
Input is N. Example: Input:2 Output:1"
I tried to think about it, but all I came up with is that it seemed to require checking every possible outcome, because of all the possibilities that can be chained together when N is big. I also figured that the only way to ensure victory is for player two to be forced to take the last stick: player one takes stick N-1 (whether by taking just N-1, or N-2 and N-1, or N-3, N-2 and N-1), leaving the Nth stick to player two.
It turned out that the solution was (N-1) mod 4, but I can't understand why that is the case.
So my question is: how do you approach a problem like that, and why is the solution a modulo? Also, is there a way to spot modulo problems like these? Other coders solved it fairly quickly, so I suppose practice makes perfect, but I have no idea where to start.

It is modulo 4 because, once one player has the advantage, he can keep it by making each round remove exactly 4 sticks: taking 3 sticks if the other player took 1, 2 if he took 2, and 1 if he took 3. The other player simply doesn't have any control anymore.
Take the problem backwards:
You don't have to care about a big N; you just need to analyze what the situation looks like when only 4 or fewer sticks are left.
Who will win when there are 1, 2, 3 or 4 sticks left?
Who will win when there are 4n+1, 4n+2, 4n+3 or 4n+4 sticks left?
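To make the backwards analysis concrete, here is a small brute-force sketch (my own, not from the original answer) that classifies positions and checks the result against the (N-1) mod 4 formula for small N:

from functools import lru_cache

@lru_cache(maxsize=None)
def loses(n):
    """True if the player to move from n sticks loses with optimal play
    (taking the last stick loses)."""
    # You win if some take of 1..3 sticks leaves the opponent in a losing position.
    return not any(loses(n - take) for take in (1, 2, 3) if take < n)

def first_move(n):
    """Return 1, 2 or 3 as a winning first take, or 0 if no win is possible."""
    for take in (1, 2, 3):
        if take < n and loses(n - take):
            return take
    return 0

for n in range(1, 50):
    assert first_move(n) == (n - 1) % 4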

Related

How to design an algorithm to (almost) always win in the Kayles game (Normal Play Convention)

I have encountered an algorithm design problem: design an algorithm to (almost) always win the Kayles game, i.e., always win when going first and have the best chance of winning when going second. The rules of the game are: there is a line of pins, each pin has an index, and two players take turns knocking them down. A player may knock down one or two pins per turn; if two pins are knocked down, they have to be adjacent to each other. For example, with four pins 1, 2, 3, 4, once 2 and 3 are down, pins 1 and 4 are not adjacent to each other, but 1 and 2, 2 and 3, etc. are. The player who knocks down the last pin(s) wins.
The hints of the problem are to represent the pins as an array of booleans, and the inputs may include the index or indices of the opponent's last move.
I have done some online research about this game, but so far I only know that the player who goes first can always win, by removing the middle pin(s), dividing the pins into two equal groups, and then imitating the opponent's moves in the other group. I also know that when going second, the best strategy is roughly to create an even number of groups of pins, such as 1101101110111, which has two pairs of groups, and then imitate the opponent's move in the matching group of the pair. But I'm not really sure how to design an algorithm from here.
Thanks for reading to here!
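For what it's worth, here is a minimal brute-force sketch (my own, using only the boolean-array representation the hints suggest) that decides whether the player to move can win from a given pin configuration:

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pins):
    """True if the player to move can win; `pins` is a tuple of booleans
    (True = pin still standing). The player who knocks down the last pin wins."""
    n = len(pins)
    moves = []
    for i in range(n):
        if pins[i]:
            moves.append((i,))                  # knock down one pin
            if i + 1 < n and pins[i + 1]:
                moves.append((i, i + 1))        # knock down two adjacent pins
    if not moves:
        return False                            # no pins left: the previous player won
    for move in moves:
        after = list(pins)
        for i in move:
            after[i] = False
        if not wins(tuple(after)):
            return True                         # leave the opponent a losing position
    return False

# Example: four standing pins; the first player can win by taking the middle two.
print(wins((True, True, True, True)))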

Finding better approach of Modified Nim

It is ultimately a game of Nim with certain modifications.
Rules are as follows :
There are two players A and B.
The game is played with two piles of matches. Initially, the first pile contains N matches and the second one contains M matches.
The players alternate turns; A plays first.
On each turn, the current player must choose one pile and remove a positive number of matches (not exceeding the current number of matches on that pile) from it.
It is only allowed to remove X matches from a pile if the number of matches in the other pile divides X.
The player that takes the last match from any pile wins.
Both players play optimally.
My take :
Let us say we have two piles, 2 and 7. We have three ways to reduce the second pile, viz.: 2 1, 2 3, 2 5. If A is playing optimally he/she will go for 2 3, so that the only move left for B is 2 1, and then A can go to 0 1 and win the game. The crux of the solution is that if A or B ever encounters a situation where they can directly lose on the next step, they will try their best to avoid it and use the situation to their advantage by leaving the game in a state one step before that losing stage.
But this approach fails for some unknown test cases. Is there any better approach to find the winner, or any other test case which defies this logic?
This is a classic dynamic programming problem. First, find a recurrence relation that describes the outcome of a game in terms of smaller games. Your parameters here are X and Y, where X is the number of matches in one pile and Y in the other. How do I reduce this problem?
Well, suppose it is my turn with piles (X, Y), and suppose the multiples of Y that do not exceed X are a1, a2 and a3, while the multiples of X that do not exceed Y are b1, b2 and b3. Then I have six possible moves, and the problem reduces to solving (X-a1, Y), (X-a2, Y), (X-a3, Y), (X, Y-b1), (X, Y-b2), (X, Y-b3). Once these six smaller games are solved, if one of them is a losing position for the player to move (i.e., my opponent), then I make the corresponding move and win the game.
There is one more parameter, which is whose turn it is; tracking it doubles the size of the state space.
The key is to find all possible moves, and recur for each of them, keeping a storage of already solved games for efficiency.
The base case needs to be figured out naturally.
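A memoized sketch along these lines (my own, with piles named x and y; the base case taken here is that once a pile is emptied, the player who emptied it has already won):

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(x, y):
    """True if the player to move wins from piles (x, y).
    A move removes a positive multiple of the other pile's size from one pile."""
    if x == 0 or y == 0:
        return False          # the previous player took the last match and won
    # Remove a multiple of y from pile x.
    for r in range(y, x + 1, y):
        if not wins(x - r, y):
            return True
    # Remove a multiple of x from pile y.
    for r in range(x, y + 1, x):
        if not wins(x, y - r):
            return True
    return False

print("A" if wins(2, 7) else "B")   # A wins the 2 7 example above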

How to use Manhattan Distance to solve this game?

Each player takes turns, removing 1 or 2 bananas from a basket of 50 bananas. The player who empties the basket wins.
What are the weights that should be used for the distances, and what should the matrix size be? Should the matrix change every time someone makes a move? Should player 1's moves be horizontal and player 2's moves vertical?
Thanks for the read
I'm not sure why you'd specifically want to use dynamic programming and/or Manhattan distance for this puzzle. This is a game for which you can find a fixed solution.
If you go first and there are 3 bananas, no matter what you play, I win. You pick one, I pick two, and vice versa. If there are six bananas, the same logic allows me to reduce the game to the 3 banana case. In fact, for any 3n bananas, I can reduce the game to 3(n-1) bananas. If the number of bananas isn't a multiple of three, then you can make it a multiple of three (By removing either one or two bananas), and ensure victory.
For k bananas, you always remove k % 3. If k % 3 == 0, you've lost unless your opponent makes a mistake, so play whatever you like. That's it.
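As a tiny sketch of that rule (the helper name is made up):

def banana_move(k):
    """Bananas to take from a basket of k: k % 3 if that wins, otherwise an arbitrary legal move."""
    r = k % 3
    return r if r in (1, 2) else 1   # k % 3 == 0 is a lost position; play anything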
I agree with #pdpi, but if you insist on solving this problem with dynamic programming, then you should do something like this:
from functools import lru_cache

@lru_cache(maxsize=None)
def f(left_in_the_basket, mine):
    # Base cases: with 1 or 2 bananas left, the player to move takes them all and wins.
    if left_in_the_basket in (1, 2):
        return 1 if mine else -1
    if mine:
        # My turn: choose the move that is best for me.
        return max(f(left_in_the_basket - 1, False), f(left_in_the_basket - 2, False))
    # Opponent's turn: they choose the move that is worst for me.
    return min(f(left_in_the_basket - 1, True), f(left_in_the_basket - 2, True))
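For example, f(50, True) returns 1 (a first-player win), which agrees with the k % 3 rule since 50 % 3 != 0; with the memoization the recursion is fast even for 50 bananas.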

picking element game

This is a simple game:
There is a sequence A = {a1, ..., an}. The two players take turns choosing either the first or the last remaining element, and at the end the one who has collected the larger total wins. Assuming each participant plays optimally, what I need to do is write a dynamic programming algorithm to compute their scores.
Any idea or clue is truly appreciated.
Here's a hint: to write a dynamic programming algorithm, you typically need a recurrence. Given
A={a1,...,an}
The recurrence would look something like this
f(A) = max( f({a1, ..., a_(n-1)}), f({a2, ..., a_n}) )
Actually, the recurrence relation given by dfb may not lead to the right answer,
as it does not capture the right subproblem structure.
Assume player A begins the game:
the structure of the problem for him is [a1, a2, ..., an].
After choosing an element, either a1 or an, it is player B's turn to play, and only after B's move is it player A's turn again.
So player A's turn comes back after two moves, and that is the right subproblem for him. The right recurrence relation,
supposing the elements from i to j are left, is:
A(i,j) = max( a[i] + min(A(i+2,j), A(i+1,j-1)), a[j] + min(A(i+1,j-1), A(i,j-2)) )
Refer to the following link :
http://people.csail.mit.edu/bdean/6.046/dp/
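A short memoized sketch of that recurrence (my own; best(i, j) is the largest total the player to move can guarantee from a[i..j], 0-indexed and inclusive):

from functools import lru_cache

a = [3, 1, 1, 3, 1, 1, 3]

@lru_cache(maxsize=None)
def best(i, j):
    if i > j:
        return 0
    if i == j:
        return a[i]
    # Take a[i] or a[j]; the opponent then replies so as to minimise what we get next.
    take_left = a[i] + min(best(i + 2, j), best(i + 1, j - 1))
    take_right = a[j] + min(best(i + 1, j - 1), best(i, j - 2))
    return max(take_left, take_right)

print(best(0, len(a) - 1))   # the first player's guaranteed total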
EXAMPLE CODE
Here is Python code to compute the optimal score for first and second players.
A=[3,1,1,3,1,1,3]
cache={}

def go(a,b):
    """Find the greatest difference between player 1's total and player 2's total when choosing from A[a:b]."""
    if a==b: return 0        # nothing left
    if a==b-1: return A[a]   # only one element left
    key=a,b
    if key in cache:
        return cache[key]
    v=A[a]-go(a+1,b)             # taking the first element
    v=max(v,A[b-1]-go(a,b-1))    # taking the last element
    cache[key]=v
    return v

v = go(0,len(A))
n = sum(A)
print((n+v)//2, (n-v)//2)        # player 1's total, player 2's total
COUNTEREXAMPLE
Note that the code includes a counter example to one of the other answers to this question.
Consider the case [3,1,1,3,1,1,3].
By symmetry, the first player's move always leaves the pattern [1,1,3,1,1,3].
For this remainder the sum of the even-indexed elements is 1+3+1=5, while the sum of the odd-indexed elements is 1+1+3=5, so the argument goes that from this position the second player can secure 5, the first player gets the other 5, and therefore the first player wins (getting 5 in addition to the 3 from the first move).
However, this logic is flawed because the second player can actually get more.
First player takes 3, leaves [1,1,3,1,1,3] (only choice by symmetry)
Second player takes 3, leaves [1,1,3,1,1]
First player takes 1, leaves [1,3,1,1] (only choice by symmetry)
Second player takes 1, leaves [1,3,1]
First player takes 1, leaves [3,1] (only choice by symmetry)
Second player takes 3, leaves [1]
First player takes 1
So overall first player gets 3+1+1+1=6, while second gets 3+1+3=7 and second player wins.
The flaw is that although it is true that the second player can play such that they will win all even or all odd positions, this is not optimal play and they can actually do better than this in some cases.
Actually you do not need dynamic programming, because it is easy to find an explicit solution for the game above.
Case n even or n = 1:
The second player will always lose.
Case n odd and n > 1:
The second player has a winning strategy iff one of the following two scenarios happens:
The elements with even index have a bigger sum than the elements with odd index, or
all odd elements except the last have a bigger sum than all the remaining elements AND
all odd elements except the first have a bigger sum than all the remaining elements.
Proof sketch:
Case n even or n = 1: Let Sodd and Seven be the sums of the elements with odd and even indexes, respectively. Assume that Sodd > Seven; the same argument holds otherwise. The first player has a winning strategy, since he can play in such a way that he gets all the odd-indexed items.
The case n odd and n > 1 can also be resolved directly. The first player has two options: he can take the first or the last element of the set. Partition the remaining elements into the two subsets with odd and even indexes; by the argument above, the second player is going to take the subset with the larger sum. If you expand the game tree you end up with the statement above.

Generic solution to towers of Hanoi with variable number of poles?

Given D discs, P poles, the initial starting positions of the discs, and the required final destination for each disc, how can we write a generic solution to the problem?
For example,
Given D=6 and P=4, and the initial starting position looks like this:
5     1
6 2 4 3
Where the number represents the disk's radius, the poles are numbered 1-4 left-right, and we want to stack all the disks on pole 1.
How do we choose the next move?
The solution is (worked out by hand):
3 1
4 3
4 1
2 1
3 1
(format: <from-pole> <to-pole>)
The first move is obvious: move the "4" on top of the "5", because that is its required position in the final solution.
Next, we probably want to move the next largest number, which would be the "3". But first we have to unbury it, which means we should move the "1" next. But how do we decide where to place it?
That's as far as I've gotten. I could write a recursive algorithm that tries all possible places, but I'm not sure if that's optimal.
We can't.
More precisely, as http://en.wikipedia.org/wiki/Tower_of_Hanoi#Four_pegs_and_beyond says, for 4+ pegs, proving which solution is optimal is an open problem. There is a known very good algorithm, widely believed to be optimal, for the simple case where the whole pile of disks starts on one peg and must be transferred to another. However, we do not have such an algorithm, or even a known heuristic, for an arbitrary starting position.
If we did have a proposed algorithm, then the open problem would presumably be much easier.
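For reference, the algorithm alluded to for the simple case is presumably Frame-Stewart; here is a minimal sketch (my own) that computes its conjectured-optimal move count, not an actual move sequence:

from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, p):
    """Conjectured-optimal number of moves to transfer n disks between pegs when p pegs are available."""
    if n <= 1:
        return n
    if p == 3:
        return 2 ** n - 1        # classic three-peg count
    if p < 3:
        return float('inf')      # impossible with more than one disk
    # Park the top k disks using all p pegs, move the rest with p-1 pegs, then bring the k back.
    return min(2 * frame_stewart(k, p) + frame_stewart(n - k, p - 1)
               for k in range(1, n))

print(frame_stewart(6, 4))   # 17 moves for 6 disks and 4 pegs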
