Consider a game where the game board is a grid with n rows and m columns. You begin at the bottom left corner of the grid (assume this has coordinates (0,0)). You start with a score of 0, and inventory space I > 0.
Each cell of the grid is either empty or contains some loot. Loot has a size x and a value v; you can think of each potential piece of loot as being indexed by its cell, i.e. L(i, j) = (x(i, j), v(i, j)), where L(i, j) = (0, 0) if the cell is empty.
From cell (i, j) the player can move up one row to one of three cells: (i + 1, j − 1), (i + 1, j), or (i + 1, j + 1). The player cannot move back down the rows or move side to side. When the player reaches a cell with loot, they can choose to take it or leave it. Taking the loot decreases the player's inventory space by its size x while increasing their score by its value v.
How would a dynamic programming algorithm apply to this?
I figure the subproblems would correspond to moving up one row at a time: as long as there is a row above, consider the three reachable cells (up-left, straight up, up-right), take whichever loot has the highest value v that still fits in the remaining inventory space, and otherwise just leave it.
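One way to make the subproblems well-defined is to include the remaining inventory space in the state, since whether a piece of loot fits depends on what was taken earlier; a purely greedy per-row choice can be suboptimal. Below is a minimal memoized sketch of that idea; the grid contents and sizes are made-up example data, not from the question.

```python
from functools import lru_cache

# Hypothetical example data: a 3x3 grid (row 0 is the bottom), inventory I = 5.
n, m, I = 3, 3, 5
size  = [[0, 2, 0], [1, 0, 3], [0, 0, 2]]   # x(i, j); 0 means the cell is empty
value = [[0, 5, 0], [2, 0, 9], [0, 0, 4]]   # v(i, j)

@lru_cache(maxsize=None)
def best(i, j, space):
    """Maximum score reachable from cell (i, j) with `space` inventory left."""
    if j < 0 or j >= m:
        return float('-inf')                 # stepped off the board
    options = [(0, 0)]                       # (score gained, space used): leave the loot
    if 0 < size[i][j] <= space:
        options.append((value[i][j], size[i][j]))   # take the loot
    top = float('-inf')
    for gained, used in options:
        if i == n - 1:                       # top row: the path ends here
            top = max(top, gained)
        else:
            for dj in (-1, 0, 1):            # up-left, straight up, up-right
                top = max(top, gained + best(i + 1, j + dj, space - used))
    return top

print(best(0, 0, I))   # → 4
```

On this example the greedy choice of grabbing the first valuable loot at (1, 0) yields only 2, while the DP finds the path through (1, 1) to the loot at (2, 2) worth 4.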
For a better understanding of this question, you can check out: https://math.stackexchange.com/questions/3100336/how-to-calculate-the-probability-in-this-case
John is playing a game against a magician. In this game, there are initially N identical boxes in front of him and one of them contains a magic pill ― after eating this pill, he becomes immortal.
He has to determine which box contains the pill. He is allowed to perform at most 'M' moves. In each move, he may do one of the following:
1) Choose one of the boxes that are in front of him uniformly at random and guess that this box contains the pill. If the guess is correct, the game ends and he gets the pill. Otherwise, after this guess, the magician adds K empty boxes in front of him in such a way that John cannot determine which boxes were added; the box he guessed also remains in front of him, and he cannot distinguish it from the other boxes in subsequent moves either.
2) Choose a number X such that X is a positive multiple of K, but strictly less than the current number of boxes in front of John. The magician then removes X empty boxes. Of course, John must not perform this move if the current number of boxes is ≤K.
If John plays optimally, what will be the maximum probability of him getting the pill? (N is always less than K.)
Example: Let M = 3, so 3 moves are allowed; K = 20, N = 3.
In his first move, John selects a box with probability x = 1/3. If he fails, 20 boxes are added (3 + 20 = 23 boxes). In the second move he again selects a box, this time with probability y = (2/3)·(1/23), where 2/3 is the probability of failure in the first move. In the third move he does the same, with probability z = (22/23)·(2/3)·(1/43).
So the total probability is x + y + z = l1.
Let's say that instead, in the second move, he chooses to remove 20 boxes and do nothing else. Then the new total probability is 1/3 + 0 (nothing is guessed in the second move!) + (2/3)·(1/3) = l2. Now, since l2 > l1, l2 is the answer to our question.
Basically, we have to determine which sequence of moves will yield the maximum probability. Also,
P(Winning) = P(game ends in 1st move) + P(game ends in 2nd move) + P(game ends in 3rd move) = (1/3) + 0 + (2/3)·(1/3) = 5/9
Given N, K, M, how can we find the maximum probability?
Do we have to apply dynamic programming?
Let p(N, K, M) be John's probability if he plays optimally. We have the following recurrence relations:
p(N, K, 0) = 0
If there are no remaining moves, then he loses.
if M > 0 and N ≤ K (so option #2 is not allowed), then p(N, K, M) = 1/N + (N−1)/N · p(N+K, K, M−1)
If there's at least one remaining move, and option #2 is not allowed, then his probability of winning is the probability that he guesses correctly in this round, plus the probability that he guesses wrongly in this round but he wins in a later turn.
if M > 0 and N > K, then p(N, K, M) is the greater of these two:
1/N + (N−1)/N · p(N+K, K, M−1)
If he takes option #1, then this is the same as the case where he was forced to take option #1.
p(N % K, K, M−1), where '%' is the "remainder" or "modulus" operator
If he takes option #2, then he certainly won't win in this step, so his probability of winning is equal to the probability that he wins in a later turn.
Note that we only need to consider N % K, because he should certainly choose the largest value of X that he's allowed to. There's never any benefit to letting the pool of boxes remain unnecessarily large.
Dynamic programming, or recursion plus memoization, is well-suited to this; you can apply the above recurrence relations directly.
Note that K never changes, so you don't need an array dimension for it; and N only changes by adding or subtracting integer multiples of K, so you're best off using array indices n such that N = (N0 % K) + nK.
Additionally, note that M decreases by exactly 1 in each turn, so if you're using a dynamic-programming approach and you only want the final probability, then you don't need to retain probabilities for all values of M; rather, when building the array for a given value of M, you only need to keep the array for M−1.
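Putting the recurrence relations together, here is a memoized sketch in Python. Option #2 always shrinks to N % K boxes, which is safe here because the box count stays congruent to N modulo K, and 0 < N < K.

```python
from functools import lru_cache

def win_probability(N, K, M):
    """p(N, K, M): John's optimal probability of getting the pill."""
    @lru_cache(maxsize=None)
    def p(n, moves):
        if moves == 0:
            return 0.0                                    # no moves left: he loses
        # Option #1: guess now; on failure the magician adds K boxes.
        guess = 1.0 / n + (n - 1) / n * p(n + K, moves - 1)
        if n > K:
            # Option #2: remove the largest allowed multiple of K, leaving n % K.
            return max(guess, p(n % K, moves - 1))
        return guess
    return p(N, M)

print(win_probability(3, 20, 3))   # ≈ 0.5556 (= 5/9, matching the example)
```

For the example N = 3, K = 20, M = 3, this reproduces the 5/9 computed in the question: guessing in move 1, shrinking back to 3 boxes in move 2, and guessing again in move 3.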
There is an N x N array filled with random numbers (-100 <= x <= 100).
Starting from A[0][0], we move step by step to adjacent indices.
Restrictions:
Can't move to an already-visited index.
Can't move upward (the row index never decreases).
I have to get the biggest possible sum by the time the path finishes at A[N-1][N-1].
The values of all visited indices are added to the sum.
What is the methodology to approach this problem?
[edit]
A more compact statement of a problem: given a square N*N matrix, find the maximum sum of the visited elements along any exploration path passing through adjacent nodes (no diagonals) starting from [0][0] and ending in [N-1][N-1] within the restrictions of:
when changing rows, row index will always increase
while on a row, the col index will always either only decrease or only increase (i.e. the path does not backtrack onto already-visited nodes)
You need a 2D state D[i][j], which keeps track of the maximum sum just before leaving row i at column j. The first row is easy to fill: it is just the prefix sums of the matrix's first row.
For all subsequent rows, you can use the following idea: You may have left the previous row at any column. If you know the exit column of the previous row and the exit column of the current row (defined by the state you want to calculate), you know that the sum consists of the accumulated value at the previous row's exit column plus all the values in the current row between the two exit columns. And from all possible exit columns of the previous row, choose the one that results in the maximum sum:
D[i][j] = max_k (D[i - 1][k] + Sum{m from j to k} A[i][m])
Note that this sum can be calculated incrementally for all possible k. The notation Sum{m from j to k} should also be valid for k smaller than j, which then means to traverse the row backwards.
Calculate these states row by row until you end up at D[N-1][N-1], which then holds the solution for your problem.
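A direct transcription of this DP in Python (the inner sum is expanded naively here, giving O(N^3); the incremental trick mentioned above brings it down to O(N^2)). The example matrix is made up for illustration.

```python
def max_path_sum(A):
    n = len(A)
    # D[j] = maximum sum just before leaving the current row at column j.
    D = [0] * n
    D[0] = A[0][0]
    for j in range(1, n):                    # first row: prefix sums
        D[j] = D[j - 1] + A[0][j]
    for i in range(1, n):
        nxt = [float('-inf')] * n
        for k in range(n):                   # exit column of the previous row
            s = D[k] + A[i][k]               # enter row i at column k
            nxt[k] = max(nxt[k], s)
            for j in range(k + 1, n):        # traverse row i to the right
                s += A[i][j]
                nxt[j] = max(nxt[j], s)
            s = D[k] + A[i][k]
            for j in range(k - 1, -1, -1):   # traverse row i to the left
                s += A[i][j]
                nxt[j] = max(nxt[j], s)
        D = nxt
    return D[n - 1]

print(max_path_sum([[1, -2, 3], [4, 5, -6], [0, 2, 1]]))   # → 13
```

On this example the best path is (0,0) → (1,0) → (1,1) → (2,1) → (2,2), collecting 1 + 4 + 5 + 2 + 1 = 13.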
This is an optimization question I've simplified from a more specific problem I'm having, but I'm not sure where this problem is classified, or what method would obtain a solution (brute force, simulated annealing, linear programming?). Any help or references are appreciated!
We have two MxN matrices M1 and M2, where each entry is either 1 or 0.
I'm trying to get from matrix M1 to matrix M2 in the least amount of time possible.
The goal is to minimize the total time, where time is defined by the following:
0 -> 1 transition = 1s
1 -> 0 transition = 0.1s
The only way the matrix can be changed is by selecting a set of rows and columns, and all the elements at the intersections of the picked rows and columns are switched to 0 / 1, with the entire transition taking the time specified above.
Example:
M1
1 1 1
1 1 0
1 0 0
M2
0 0 1
0 1 1
1 1 1
First iteration:
Select rows 2 and 3, and columns 2 and 3 of M1.
Convert all intersecting elements to 1
takes 1s
M1
1 1 1
1 1 1
1 1 1
Second iteration:
Select rows 1, and columns 1 and 2 of M1.
Convert all intersecting elements to 0
takes 0.1s
M1
0 0 1
1 1 1
1 1 1
Third iteration:
Select row 2 and column 1 of M1.
Convert the selected element to 0
takes 0.1s
M1
0 0 1
0 1 1
1 1 1
Here, the total time is 1.2s.
For the sizes given, this looks like it will be very hard even to approximate. Anyway, here are a couple of ideas.
When a cell needs to change from 0 to 1, I'll write +, when it needs to change in the other direction I'll write -, and when it needs to stay as-is, I'll write either 0 or 1 (i.e. whatever it currently is). So e.g. the problem instance in the OP's question looks like
- - 1
- 1 +
1 + +
Let's consider a slightly easier monotone version of the problem, in which we never change a cell twice.
This generally requires more moves, but it gives a useful starting point and an upper bound.
In this version of the problem, it doesn't matter in which order we perform the moves.
Simple variations might be more effective as heuristics, e.g. performing a small number of initial 0->1 moves in which every + cell is changed to 1 and other cells are possibly changed too, followed by a series of 1->0 moves to change/fix all other cells.
Shrinking the problem safely
[EDIT 11/12/2014: Fixed the 3rd rule below. Unfortunately it's likely to apply much less often.]
The following tricks never cause a solution to become suboptimal, and may simplify the problem:
Delete any rows or columns that contain no +-cell or --cell: no move will ever use them.
If there are any identical rows or columns, collapse them: whatever you do to this single collapsed row or column, you can do to all rows or columns separately.
If there is any row with just a single +-cell and no 1-cells, you can immediately fix all +-cells in the entire column containing it with a single 0->1 move, since in the monotone problem it's not possible to fix this cell in the same 0->1 move as any +-cell in a different column. Likewise with rows and columns swapped, and with a single --cell and no 0-cells.
Applying these rules multiple times may yield further simplification.
A very simple heuristic
You can correct all the +-cells (or all the --cells) in a single row or column with one move. Therefore it is always possible to solve the problem in at most 2*min(width, height) moves (the 2 is there because we may need both 0->1 and 1->0 moves). A slightly better approach is to greedily find the row or column with the most cells needing the same correction, fix them in a single move, and switch between rows and columns freely.
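A sketch of that greedy heuristic in Python, using the move costs from the question (1s for a move that sets cells to 1, 0.1s for one that sets cells to 0). A move here selects one row (or column) plus exactly the columns (rows) of its still-wrong cells, so no correct cell is ever disturbed; the function and its return shape are my own naming, not from the question.

```python
def greedy_fix(M1, M2):
    """Transform a copy of M1 into M2; returns (moves, total time). Heuristic only."""
    cur = [row[:] for row in M1]
    R, C = len(cur), len(cur[0])
    move_time = {1: 1.0, 0: 0.1}            # setting cells to 1 costs 1s, to 0 costs 0.1s
    moves, total = [], 0.0
    while cur != M2:
        best = None                         # (count, kind, bit, line index, targets)
        for i in range(R):                  # candidate row moves
            for bit in (0, 1):
                js = [j for j in range(C) if cur[i][j] != M2[i][j] == bit]
                if js and (best is None or len(js) > best[0]):
                    best = (len(js), 'row', bit, i, js)
        for j in range(C):                  # candidate column moves
            for bit in (0, 1):
                rs = [i for i in range(R) if cur[i][j] != M2[i][j] == bit]
                if rs and (best is None or len(rs) > best[0]):
                    best = (len(rs), 'col', bit, j, rs)
        _, kind, bit, idx, targets = best
        for t in targets:                   # apply the chosen move
            if kind == 'row':
                cur[idx][t] = bit
            else:
                cur[t][idx] = bit
        moves.append((kind, idx, targets, bit))
        total += move_time[bit]
    return moves, total

moves, total = greedy_fix([[1, 1, 1], [1, 1, 0], [1, 0, 0]],
                          [[0, 0, 1], [0, 1, 1], [1, 1, 1]])
print(len(moves), total)
```

On the question's 3x3 example this heuristic takes 4 moves totalling 2.2s, worse than the 1.2s hand-crafted solution in the question, which is exactly why it is only a baseline.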
The best possible move
Suppose we have two +-cells (i, j) and (k, l), with i <= k and j <= l. They can be changed in the same 0->1 move exactly when both of their "opposite corners" (i, l) and (k, j) are either + or 1. Also notice that if either or both of (i, j) and (k, l) are 1 (instead of +), then they could still be included in the same move, even though that move would have no effect for one or both of them. So if we build a graph G in which we have a vertex for every cell and an edge between two vertices (i, j) and (k, l) whenever (i, j), (k, l), (i, l) and (k, j) are all either + or 1, a clique in this graph corresponds to a set of cells that can all be changed to (or left at) 1 in a single 0->1 move. To find the best possible move -- that is, the move that changes the most possible 0s to 1s -- we don't quite want the maximum-sized clique in the graph; what we actually want is the clique that contains the largest number of +-cell vertices. We can formulate an ILP that will find this, using a 0/1 variable x_i_j to represent whether vertex (i, j) is in the clique:
Maximise the sum over all variables x_i_j such that (i, j) is a `+`-cell
Subject to
x_i_j + x_k_l <= 1 for all i, j, k, l s.t. there is no edge (i, j)-(k, l)
x_i_j in {0, 1} for all i, j
The constraints prevent any pair of vertices from both being included if there is no edge between them, and the objective function tries to find as large a subset of +-cell vertices as possible that satisfies them.
Of course, the same procedure works for finding 1->0 moves.
(You will already run into problems simply constructing a graph this size: with N and M around 1000, there are around a million vertices, and up to a million million edges. And finding a maximum clique is an NP-hard problem, so it's slow even for graphs with hundreds of edges...)
The fewest possible moves
A similar approach can tell us the smallest number of 0->1 (or 1->0) moves required, and at the same time give us a representative cell from each move. This time we look for the largest independent set in the same graph G:
Maximise the sum over all variables x_i_j such that (i, j) is a `+`-cell
Subject to
x_i_j + x_k_l <= 1 for all i, j, k, l s.t. there is an edge (i, j)-(k, l)
x_i_j in {0, 1} for all i, j
All that changed in the problem was that "no edge" changed to "an edge". This now finds a (there may be more than one) maximum-sized set of +-cell vertices that share no edge between them. No pair of such cells can be changed by the same 0->1 move (without also changing a 0-cell or --cell to a 1, which we forbid in the monotone version of the problem, because it would then need to be changed a second time), so however many vertices are returned, at least that many separate 0->1 moves are required. And because we have asked for the maximum independent set, no more moves are needed (if more moves were needed, there would be a larger independent set having that many vertices in it).
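For intuition, here is a tiny brute-force version of the clique search (feasible only for small instances; the ILP above is for anything bigger). Pairwise compatibility suffices because every cell of the rows-times-columns rectangle spanned by a set of cells shows up as an "opposite corner" of some pair. The pattern below encodes the OP's 3x3 instance (M1 -> M2) in the +/- notation.

```python
from itertools import combinations

def compatible(pattern, a, b):
    """Two '+'-cells can share one 0->1 move iff both opposite corners are '+' or '1'."""
    (i, j), (k, l) = a, b
    return pattern[i][l] in '+1' and pattern[k][j] in '+1'

def best_single_move(pattern):
    """Largest set of '+'-cells fixable by a single 0->1 move (brute force)."""
    plus = [(i, j) for i, row in enumerate(pattern)
                   for j, c in enumerate(row) if c == '+']
    for size in range(len(plus), 0, -1):
        for subset in combinations(plus, size):
            if all(compatible(pattern, a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

# The OP's 3x3 instance (M1 -> M2) in +/- notation.
pattern = ["--1",
           "-1+",
           "1++"]
print(best_single_move(pattern))   # → [(1, 2), (2, 1), (2, 2)]
```

Here all three +-cells form a clique, so one 0->1 move (rows 2-3 with columns 2-3, in 1-based terms) fixes them all, matching the first move of the hand-crafted solution in the question.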
I have an n x n array. Each field has a cost associated with it (a natural number), and here's my problem:
I start in the first column. I need to find the cheapest way to move through an array (from any field in the first column to any in the last column) following these two rules:
I can only make moves to the right, to the top right, to the bottom right, and to the bottom.
In a path I can only make k (some constant) moves to the bottom.
Meaning, when I'm at the cell marked x, I can move to the cells marked o:

. . o
. x o
. o o
How do I find the cheapest way to move through an array? I thought of this:
- For each field of the n x n array, I keep a helper array recording how many bottom moves the cheapest path to that field used. For the first column it's all 0's.
- We go through the fields in this order: columns left to right, and within a column, rows top to bottom.
- For each field, we check which of its neighbours is 'the cheapest'. If it's the upper one (meaning we would arrive via a bottom move), we check whether the cheapest path to it already took k bottom moves; if not, we assign the cost of the analyzed field as the cost of reaching the upper field plus the field's own cost, and in the auxiliary array we record the number of bottom moves as x + 1, where x is the number of bottom moves taken to reach the upper neighbour.
- If the upper neighbour is not the cheapest (or its path already used k bottom moves), we assign the cost via the cheapest of the other neighbours and record the number of bottom moves as the number taken to reach that neighbour.
Time complexity is O(n^2), and so is memory.
Is this correct?
Here is a DP solution in O(N^2) time and O(N) memory:
Dist(i,j) = distance from point(i,j) to last column.
Dist(i,j) = cost(i,j) + min { Dist(i+1,j),Dist(i,j+1),Dist(i+1,j+1),Dist(i-1,j+1) }
Dist(i,N) = cost[i][N]
Cheapest = min { Dist(i, first column) } over all rows i
This DP equation shows that you only need the next column's values (plus the already-computed entries below you in the current column) to fill the current column, so O(N) space suffices. It also shows that, within a column, rows with larger index must be evaluated first.
Pseudo code:
int Prev[M], Curr[M];

// last column => base case for the DP
for (i = 0; i < M; i++)
    Prev[i] = cost[i][N-1];

// evaluate columns right to left, and rows bottom to top within a column
for (j = N-2; j >= 0; j--) {
    for (i = M-1; i >= 0; i--) {
        // treat out-of-range entries (i-1 < 0 or i+1 >= M) as +infinity
        Curr[i] = cost[i][j] + min(Curr[i+1], Prev[i], Prev[i-1], Prev[i+1]);
    }
    copy Curr into Prev;
}

// find the cheapest starting row in the first column
Cheapest = min(Prev[0..M-1]);
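For reference, here is a runnable Python version of the same column-by-column DP; out-of-range neighbours are treated as infinite. Note that, like the recurrence above, this does not enforce the question's limit of k bottom moves.

```python
def cheapest(cost):
    """Cheapest path cost from any first-column cell to any last-column cell."""
    M, N = len(cost), len(cost[0])
    INF = float('inf')
    prev = [cost[i][N - 1] for i in range(M)]        # base case: last column
    for j in range(N - 2, -1, -1):                   # columns right to left
        curr = [INF] * M
        for i in range(M - 1, -1, -1):               # rows bottom to top
            down  = curr[i + 1] if i + 1 < M else INF   # move down, same column
            right = prev[i]                              # move right
            up_r  = prev[i - 1] if i - 1 >= 0 else INF   # move top right
            dn_r  = prev[i + 1] if i + 1 < M else INF    # move bottom right
            curr[i] = cost[i][j] + min(down, right, up_r, dn_r)
        prev = curr
    return min(prev)

print(cheapest([[1, 9, 9], [1, 1, 9], [9, 1, 1]]))   # → 3
```

In the example grid the cheapest path starts at (1,0), moves right to (1,1), then bottom right to (2,2), for a total of 3.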
I'd like to generate the adjacency matrix for a Boggle board. A Boggle board is one where you have letters in an n x n matrix like this:
http://www.wordtwist.org/sample.gif
Each cell is connected to its neighbouring cells; basically we can move up/down/left/right/diagonally to connect to another cell.
If each cell is viewed as a vertex in a graph, then we can find the adjacency matrix of the boggle board.
I came up with the following formulas to find the adjacent cells:
Assume the index of the cells start with 0 and are numbered from left to right.
i = cell index, n = number of rows/columns. So in a 3x3 matrix, i=0 would be the first cell and n is 3.
up = i-n
down = i+n
left = i-1
right = i+1
diagonal 1 = i-(n+1), i+(n+1)
diagonal 2 = i-(n-1), i+(n-1)
The above formulas fail for cells on the border (corners and edges). How do I exclude the invalid cells in those cases?
You shouldn't have to "exclude" anything; merely check whether your result is in bounds, and if it isn't, there is no valid cell. For example, if you are at the top left of your 3x3 matrix (i = 0), then up (i − n) is 0 − 3 = −3; since −3 is outside the bounds of your matrix, there is no valid cell.
So if you are doing a search and want to travel along the "up" adjacent cell, first check whether that location is in bounds, if it is not then you are at the end.
To check if you are on the left or right edge of the matrix, use:
if i % n == n - 1 // Right edge
if i % n == 0 // Left edge
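Putting the formulas and bounds checks together, here is a sketch in Python that sidesteps the edge-case arithmetic entirely by converting the flat index to (row, column) first and converting back afterwards:

```python
def neighbours(i, n):
    """Adjacent cell indices of cell i in an n x n board, excluding off-board cells."""
    r, c = divmod(i, n)                  # flat index -> (row, column)
    result = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue                 # skip the cell itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                result.append(nr * n + nc)   # (row, column) -> flat index
    return result

print(neighbours(0, 3))   # → [1, 3, 4]
```

The corner cell 0 of a 3x3 board gets exactly its three valid neighbours, while the centre cell gets all eight, with no special-casing of edges.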