I'm working on a task to develop predicates to help play the two-player game outlined below.
Problem
A 2 person game is played with a set of identical stones arranged into a number of heaps. There may be any number of stones and any number of heaps. A move in this game consists of either removing any number of stones from one heap, or removing an equal number of stones from each of 2 heaps. The loser of this game is the player who picks up the last stone.
E.g.
Starting from three heaps of sizes 3, 2, 1, there are 10 possible moves, leading to the states below:
Taking from the 1st heap only: [2,2,1], [1,2,1], [2,1]
Taking from the 2nd heap only: [3,1,1], [3,1]
Taking from the 3rd heap only: [3,2]
Taking from the 1st and 2nd heaps: [2,1,1], [1,1]
Taking from the 1st and 3rd heaps: [2,2]
Taking from the 2nd and 3rd heaps: [3,1]
[3,1] occurs twice because there are two different ways of reaching it in one move.
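For reference, here is a small Python sketch (separate from the Prolog predicate the task asks for; the function name moves is just illustrative) that enumerates the one-move successors described by the rules above, dropping emptied heaps the way the example does:

from itertools import combinations

def moves(heaps):
    """Enumerate every state reachable from `heaps` in one move:
    take any number of stones from one heap, or take an equal number
    of stones from each of two heaps.  Emptied heaps are dropped,
    matching the [2,1], [1,1], [2,2], [3,1] states listed above."""
    # Take 1..h stones from a single heap.
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            new = heaps[:i] + [h - take] + heaps[i + 1:]
            yield [x for x in new if x > 0]
    # Take an equal number of stones from two heaps.
    for i, j in combinations(range(len(heaps)), 2):
        for take in range(1, min(heaps[i], heaps[j]) + 1):
            new = list(heaps)
            new[i] -= take
            new[j] -= take
            yield [x for x in new if x > 0]

print(list(moves([3, 2, 1])))   # 10 successor states, as in the example above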
Task 1
Create a predicate move(S1,S2) that, on backtracking, returns all states S2 reachable from S1 in one move.
What we have so far
change([_|T], T).                 % remove the whole first heap
change([H|T], [H1|T]) :-          % remove W stones (1 =< W < H) from the first heap
    between(1, H, W),
    W < H,
    H1 is H - W.
change([H|T], [H|T1]) :-          % leave the first heap alone and change a later one
    change(T, T1).

change2([H|T], [H1|T1]) :-        % take 0..H-1 stones from the first heap AND
    between(0, H, W),             % change a later heap as well, possibly by a
    W < H,                        % different amount (which is the problem)
    H1 is H - W,
    change(T, T1).
So this produces results that take any number of stones from any individual heap, and it will currently also take differing numbers of stones from different heaps. What I can't get to work is taking the same number of stones from two different heaps.
E.g. if I had [1,4,1,3] I'd like to be able to end up with [4,3] (taking one stone each from the 1st and 3rd heaps).
Any help would be really appreciated on this task :).
I am making an algorithm which allocates badminton players into games (2 x 2) in the following way:
Players are divided into pairs.
All possible pair combinations must be done so everyone plays with everyone. If there are 10 players, everyone will belong to 9 pairs.
So far, this is simple to implement.
Then, games should be allocated to 2 courts. This means 2 games can be going on simultaneously, but of course a player can't be part of two games at the same time.
My algorithm idea was:
Create an array containing all possible pairs.
Allocate two pairs from the array into court 1 to play against each other. If the second pair has overlapping members with pair 1, take the next pair from the array. Iterate the array in order and remove allocated pairs from it.
Do the same for court 2 but also make sure that the pairs do not contain overlapping members with players in court 1.
Make a new round and continue from step 2.
This works quite well, but as a side effect, in the last rounds only court 1 will be used, because it is impossible to find any more pairs for court 2 that fulfil the condition in step 3. So the capacity of court 2 is wasted. I am a little unsure whether a perfect solution to this problem is even possible. If it is, how could the algorithm be improved?
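For anyone who wants to experiment, here is a rough Python sketch of the greedy described in steps 1–4 (the function name, the two-court default, and the decision to simply report pairs that can no longer form a full game are my own assumptions, not part of the original description):

from itertools import combinations

def schedule(players, courts=2):
    """Greedy allocation following steps 1-4: walk the pair list in order,
    fill each court with a game of two pairs whose players do not overlap
    with anyone already playing this round, then start a new round."""
    pairs = list(combinations(players, 2))   # step 1: every possible pair, in order
    rounds = []
    while pairs:
        busy = set()                         # players already on a court this round
        games = []
        for _ in range(courts):              # steps 2-3: court 1, then court 2
            game = []
            for p in pairs:
                if busy.isdisjoint(p) and all(not (set(p) & set(q)) for q in game):
                    game.append(p)
                    if len(game) == 2:
                        break
            if len(game) == 2:               # found a full pair-vs-pair game
                games.append(game)
                busy.update(*game)
                for p in game:
                    pairs.remove(p)
        if not games:                        # only overlapping pairs remain
            break
        rounds.append(games)                 # step 4: new round
    return rounds, pairs                     # the schedule plus any leftover pairs

rounds, leftover = schedule(range(1, 11))
for i, games in enumerate(rounds, 1):
    print("round", i, games)
print("pairs that could not be scheduled:", leftover)

Running it for 10 players makes it easy to see which rounds end up using only one court.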
I took part in a programming competition and got stuck on a question that I have been trying to solve for the past few days. I really want to know the answer, but I'm not able to work it out. Any explanation or answer would be appreciated. Here is the question:
There's a concert happening at a hall. The hall is an infinitely long 1D number line, and each of the N students is standing at a certain point on the number line. Since they are all scattered around different positions, it's getting harder to manage them. So the organizer Kyle wants to move them so that they all end up in consecutive positions, for example 2,3,4,5 instead of being scattered around like 1,3,5,6. In simpler words, there is no gap between them. In one move Kyle can pick one person at the outermost position on either side. For example, in 1,3,5,6 the outermost positions are 1 and 6. He can pick either of them and move them to any unoccupied position such that they are no longer at an outermost position. This brings them closer and closer with each move until they are all in consecutive positions. Find the maximum and minimum number of moves that can accomplish this task.
The maximum is the count of gaps, since the most inefficient move will reduce the count of gaps between the outermost pair by one.
For the minimum, given k people, in linear time (using a sliding window) find the stretch of k consecutive positions with the most people, and call this count c. Then the minimum is k-c, where each move puts someone from outside this window to inside it.
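A minimal Python sketch of both bounds, following the two observations above (positions are assumed to be distinct integers; the function name is illustrative):

def min_max_moves(positions):
    """Return (minimum, maximum) number of moves to make the positions consecutive."""
    pos = sorted(positions)
    k = len(pos)
    # Maximum: every empty cell between the two outermost people is a gap,
    # and the least efficient legal move closes exactly one gap.
    maximum = (pos[-1] - pos[0] + 1) - k
    # Minimum: slide a window of k consecutive cells over the sorted positions,
    # find the window already containing the most people (count c);
    # each of the remaining k - c people can be fixed with one move.
    best = 0
    lo = 0
    for hi in range(k):
        while pos[hi] - pos[lo] >= k:   # shrink until both ends fit in a length-k window
            lo += 1
        best = max(best, hi - lo + 1)
    minimum = k - best
    return minimum, maximum

print(min_max_moves([1, 3, 5, 6]))   # (1, 2) for the example in the statement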
I have an idea for you, but I'm not totally sure I've got it right.
So:
If number 1 is currently the outermost, it then jumps to the highest number,
and number 2 becomes the new outermost; this goes on indefinitely. The same goes for the other end: the outermost on that side moves inward to become the last, and so on. Does that make sense?
It is ultimately a game of Nim with certain modifications.
The rules are as follows:
There are two players A and B.
The game is played with two piles of matches. Initially, the first pile contains N matches and the second one contains M matches.
The players alternate turns; A plays first.
On each turn, the current player must choose one pile and remove a positive number of matches (not exceeding the current number of matches on that pile) from it.
It is only allowed to remove X matches from a pile if the number of matches in the other pile divides X.
The player that takes the last match from any pile wins.
Both players play optimally.
My take:
Let us say we have two piles 2 7. There are 3 ways to reduce the second pile, viz. 2 1, 2 3, 2 5. If A is playing optimally, he/she will go for 2 3, so that the only move left for B is 2 1, after which A can go to 0 1 and win the game. The crux of the solution is that if A or B ever encounters a situation from which they could lose directly on the next step, they will do their best to avoid it and use the situation to their advantage by leaving the state one step before that losing stage.
But this approach fails some unknown test cases. Is there a better approach to find the winner, or another test case that defies this logic?
This is a classic dynamic programming problem. First, find a recurrence relation that describes the outcome of a game in terms of smaller games. Your parameters here are X and Y, where X is the number of matches in one pile and Y in the other. How do I reduce this problem?
Well, suppose it is my turn, and suppose a1, a2, a3 are the multiples of Y that do not exceed X, while b1, b2, b3 are the multiples of X that do not exceed Y. Then I have six possible moves. The problem reduces to solving (X-a1, Y), (X-a2, Y), (X-a3, Y), (X, Y-b1), (X, Y-b2), (X, Y-b3). Once these six smaller games are solved, if one of them is a losing game for the player to move there (my opponent), then I make the corresponding move and win the game.
There is one more parameter, which is whose turn it is. This doubles the number of subproblems to solve.
The key is to find all possible moves and recurse on each of them, keeping a cache of already-solved games for efficiency.
The base case needs to be figured out naturally.
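Here is a minimal memoized sketch of that search in Python. It assumes both piles start non-empty and is only feasible for small N and M (the actual constraints are probably far larger), so treat it as a way to explore small cases rather than a complete solution:

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(x, y):
    """True if the player to move wins the position (x, y) with optimal play.
    Assumes x >= 1 and y >= 1 (the game ends as soon as a pile is emptied)."""
    # Remove a positive multiple of y from the pile of x matches.
    take = y
    while take <= x:
        if take == x or not wins(x - take, y):
            return True        # this empties a pile, or leaves the opponent a losing position
        take += y
    # Remove a positive multiple of x from the pile of y matches.
    take = x
    while take <= y:
        if take == y or not wins(x, y - take):
            return True
        take += x
    return False

print(wins(2, 7))   # True: the first player wins, e.g. by moving 2 7 -> 2 3 as analysed above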
I have a list of lists of the form l = [[1,2,3],[3,4,6],...]. There are m sublists, each representing a player. Each player can perform a number of tasks (there are n tasks in total). I would like to find the shortest path through all the tasks by minimizing the number of switches between players, i.e. have the same player perform tasks consecutively as often as possible. I'm trying to write an algorithm that optimizes this in polynomial time, but I'm having trouble coming up with a good scheme. I was thinking it could be like Dijkstra's algorithm, but I'm not exactly sure how to adapt it to my case. Below is a concrete example of what I want.
Example
n = 5 and m = 3 such that we have a list of lists l = [[1,2,5],[1,3,5],[2,3,4]]
The algorithm would return [0,2,2,2,0],
i.e. player 0 is chosen first, then we swap to player 2 for 3 tasks, then back to player 0 for the last task.
I'm just looking for pseudo code or a push in the right direction. Really struggling and brute force won't work for large numbers!
Since it is never beneficial to have a player perform fewer consecutive tasks than he can, a simple greedy algorithm suffices to find the optimal solution:
Starting with task 1, find the player that can execute the largest number of consecutive tasks starting with that first task.
Starting with the first task that the previously found player can't do, find the player that can execute the largest number of consecutive tasks starting with that task.
Repeat until all the tasks are done.
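A short sketch of this greedy in Python (the function name and error handling are illustrative; when several players tie for the longest run it simply takes the first, so for the example it returns [0,0,2,2,0] rather than [0,2,2,2,0], which has the same number of switches):

def assign(l, n):
    """Greedily assign tasks 1..n to players so that each chosen player
    covers the longest possible run of consecutive tasks."""
    can = [set(tasks) for tasks in l]          # O(1) membership tests per player
    result = []
    t = 1
    while t <= n:
        best_player, best_len = None, 0
        for p, tasks in enumerate(can):
            length = 0
            while t + length <= n and (t + length) in tasks:
                length += 1
            if length > best_len:
                best_player, best_len = p, length
        if best_len == 0:
            raise ValueError("no player can perform task %d" % t)
        result.extend([best_player] * best_len)
        t += best_len
    return result

print(assign([[1, 2, 5], [1, 3, 5], [2, 3, 4]], 5))   # [0, 0, 2, 2, 0]: 2 switches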
Here's a proof that this algorithm is optimal:
Let's say there's an optimal solution that has player A performing tasks i through j and then player B performing tasks j+1 through k.
If there is any player (including A) that can perform tasks i through j+1, then we can use that player to do those tasks instead, and the solution will be as good or better. Either B will perform tasks j+2 through k, and the number of player switches will be the same, or j+1 = k and we won't need player B at all.
Therefore there is an optimal solution in which every chosen player maximizes the number of consecutive tasks performed by that player. In fact, since all such solutions are equivalent, they are all optimal.
EDIT: As I was writing this, Pham suggested using a segment tree, but no such complex data structure is necessary. If the sublists are sorted and you build an index from each task number to the sublist positions in which it can be found, then you can do this in O(N) time.
This is a simple game:
There is a set A={a1,...,an}; the opponents alternately choose either the first or the last element of the remaining set, and at the end the one who has collected the bigger total wins. Now, assuming each participant does his best, what I need to do is write a dynamic programming algorithm to compute their scores.
Any idea or clue is truly appreciated.
Here's a hint: to write a dynamic programming algorithm, you typically need a recurrence. Given
A={a1,...,an}
The recurrence would look something like this
f(A) = max( f({a1,...,a_{n-1}}), f({a2,...,a_n}) )
Actually, the recurrence relation given by dfb may not lead to the right answer,
as it does not capture the right optimal substructure!
Assume player A begins the game:
the structure of the problem for him is [a1,a2,...,an].
After he chooses an element, either a1 or an, it is player B's turn to play, and only after that move does player A move again.
So player A's turn comes round again after two moves, and that is the right sub-problem for him. The right recurrence relation is:
Suppose elements i to j are left:
A(i,j) = max( min(A(i+2,j), A(i+1,j-1)) + a[i], min(A(i,j-2), A(i+1,j-1)) + a[j] )
Refer to the following link :
http://people.csail.mit.edu/bdean/6.046/dp/
EXAMPLE CODE
Here is Python code to compute the optimal scores for the first and second players.
A = [3, 1, 1, 3, 1, 1, 3]
cache = {}

def go(a, b):
    """Find the greatest difference between player 1's coins and player 2's
    coins when both choose optimally from A[a:b]."""
    if a == b:
        return 0            # no stacks left
    if a == b - 1:
        return A[a]         # only one stack left
    key = a, b
    if key in cache:
        return cache[key]
    v = A[a] - go(a + 1, b)                # taking the first stack
    v = max(v, A[b - 1] - go(a, b - 1))    # taking the last stack
    cache[key] = v
    return v

v = go(0, len(A))
n = sum(A)
print((n + v) // 2, (n - v) // 2)   # first player's and second player's optimal totals
COUNTEREXAMPLE
Note that the code includes a counterexample to one of the other answers to this question.
Consider the case [3,1,1,3,1,1,3].
By symmetry, the first player's move always leaves the pattern [1,1,3,1,1,3].
For this position the sum of the even-indexed elements is 1+3+1=5 and the sum of the odd-indexed elements is 1+1+3=5, so the argument goes that from here the second player will always win 5 and the first player will always win 5, hence the first player wins (as he gets 5 in addition to the 3 from his first move).
However, this logic is flawed because the second player can actually get more.
First player takes 3, leaves [1,1,3,1,1,3] (only choice by symmetry)
Second player takes 3, leaves [1,1,3,1,1]
First player takes 1, leaves [1,3,1,1] (only choice by symmetry)
Second player takes 1, leaves [1,3,1]
First player takes 1, leaves [3,1] (only choice by symmetry)
Second player takes 3, leaves [1]
First player takes 1
So overall first player gets 3+1+1+1=6, while second gets 3+1+3=7 and second player wins.
The flaw is that although it is true that the second player can play such that they will win all even or all odd positions, this is not optimal play and they can actually do better than this in some cases.
Actually you do not need dynamic programming, because it is easy to find an explicit solution for the game above.
Case n even or n = 1:
The player who moves second will always lose.
Case n odd and n > 1:
The second player has a winning strategy iff one of the following two scenarios holds:
either the elements with even index have a bigger sum than the elements with odd index,
or the odd-indexed elements excluding the last have a bigger sum than all the remaining elements AND the odd-indexed elements excluding the first have a bigger sum than all the remaining elements.
Proof sketch:
Case n is even or n = 1: let Sodd and Seven be the sums of the elements at odd and even indexes, respectively. Assume Sodd > Seven; the same argument holds otherwise. The first player has a winning strategy, since he can play in such a way that he gets all the odd-indexed items.
The case n odd and n > 1 can also be resolved directly. The first player has two options: take the first or the last element of the set. Partition the remaining elements into the two subsets with odd and even indexes; by the argument above, the second player is going to take the subset with the larger sum. If you expand the game tree you end up with the statement above.