There is an N x N array filled with random numbers (-100 <= x <= 100).
Starting from A[0][0],
it moves to adjacent indices in stages.
Restrictions:
Can't move to an already visited index.
Can't move upward.
I have to get the biggest possible sum when it finishes moving at A[N-1][N-1].
The values of the indices I have visited are added to the sum.
What is the methodology to approach this problem?
[edit]
A more compact statement of a problem: given a square N*N matrix, find the maximum sum of the visited elements along any exploration path passing through adjacent nodes (no diagonals) starting from [0][0] and ending in [N-1][N-1] within the restrictions of:
when changing rows, the row index will always increase
while on a row, the column index will always either decrease or increase (i.e. the path does not backtrack onto already visited nodes)
You need a 2D state D[i][j], which keeps track of the maximum sum just before leaving row i at column j. The first row is easy to fill: it is just the prefix sums of the matrix's first row.
For all subsequent rows, you can use the following idea: You may have left the previous row at any column. If you know the exit column of the previous row and the exit column of the current row (defined by the state you want to calculate), you know that the sum consists of the accumulated value at the previous row's exit column plus all the values in the current row between the two exit columns. And from all possible exit columns of the previous row, choose the one that results in the maximum sum:
D[i][j] = max_k (D[i - 1][k] + Sum{m from j to k} A[i][m])
Note that this sum can be calculated incrementally for all possible k. The notation Sum{m from j to k} should also be valid for k smaller than j, which then means to traverse the row backwards.
Calculate these states row by row until you end up at D[N-1][N-1], which then holds the solution for your problem.
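For illustration, here is a minimal Java sketch of this row-exit DP (the class and method names are mine, not from the answer); it implements the recurrence directly, doing O(N^2) work per row by sweeping left and right from every possible entry column k:

    import java.util.Arrays;

    class SnakePathSum {
        // D[j] = maximum sum accumulated when leaving the current row at column j.
        static int maxPathSum(int[][] A) {
            int n = A.length;
            long[] D = new long[n];
            D[0] = A[0][0];
            for (int j = 1; j < n; j++) D[j] = D[j - 1] + A[0][j];   // prefix sums of row 0
            for (int i = 1; i < n; i++) {
                long[] next = new long[n];
                Arrays.fill(next, Long.MIN_VALUE);
                for (int k = 0; k < n; k++) {          // k = exit column of the previous row
                    long sum = D[k];
                    for (int j = k; j < n; j++) {      // enter row i at column k, sweep right
                        sum += A[i][j];
                        next[j] = Math.max(next[j], sum);
                    }
                    sum = D[k];
                    for (int j = k; j >= 0; j--) {     // enter row i at column k, sweep left
                        sum += A[i][j];
                        next[j] = Math.max(next[j], sum);
                    }
                }
                D = next;
            }
            return (int) D[n - 1];                     // D[N-1] after the last row is the answer
        }
    }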
I am trying to work through this question in LeetCode.
119. Pascal's Triangle II
Given a non-negative index k where k ≤ 33, return the kth index row of the Pascal's triangle.
Note that the row index starts from 0.
In Pascal's triangle, each number is the sum of the two numbers directly above it.
Example:
Input: 3
Output: [1,3,3,1]
Follow up:
Could you optimize your algorithm to use only O(k) extra space?
import java.util.Arrays;
import java.util.List;

class Solution {
    public List<Integer> getRow(int rowIndex) {
        Integer[] dp = new Integer[rowIndex + 1];
        Arrays.fill(dp, 1);
        for (int i = 2; i <= rowIndex; i++) {
            for (int j = i - 1; j > 0; j--) {
                dp[j] = dp[j - 1] + dp[j];
            }
        }
        return Arrays.asList(dp);
    }
}
I saw someone giving this working solution.
I can understand why it is correct.
But I am still quite unclear about why the array is updated in this order.
In this case I know the state transition is:
P(n) = P(n-1) + P(n)
But how does this give a clue about which direction to update the array in?
Why exactly does ascending order not work here, if we think about it in DP terms? I know that essentially it causes duplicated calculation.
I know this may be subtle, but could anyone cast at least a little light on it?
Possibly the formula P(n) = P(n-1) + P(n) brings confusion, as it is not a true recurrence relation. If it were, the recursion would be infinite.
The true recurrence relationship is given by:
P(row, n) = P(row-1, n-1) + P(row-1, n)
Or in more complete terms:
∀ n ∈ {1, ..., row-1}: P(row, n) = P(row-1, n-1) + P(row-1, n)
P(row, 0) = 1
P(row, row) = 1
If you would implement this naively, you would create a 2-dimensional DP matrix. Starting with row 0, you would build up the DP matrix going from one row to the next, using the above recurrence relationship.
You then find that you only need the previous row's DP data to calculate the current row's. All the DP rows that come before the previous one are idle: they don't serve any purpose anymore. They are a waste of space.
So then you decide to not create a whole DP matrix, but just two rows. Once you have completed the second row of this DP structure, you make that row the first, and reset the second row in the DP structure. And then you can continue filling that second row, until it is complete again, and you repeat this "shift" of rows...
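As a sketch of this intermediate two-row stage (this is not the code posted above, just an illustration; the names are mine):

    import java.util.ArrayList;
    import java.util.List;

    class TwoRowPascal {
        static List<Integer> getRow(int rowIndex) {
            int[] prev = new int[rowIndex + 1];   // previous Pascal row
            int[] curr = new int[rowIndex + 1];   // row currently being built
            prev[0] = 1;                          // row 0 is just [1]
            for (int row = 1; row <= rowIndex; row++) {
                curr[0] = 1;
                curr[row] = 1;
                for (int n = 1; n < row; n++) {
                    curr[n] = prev[n - 1] + prev[n];   // P(row, n) = P(row-1, n-1) + P(row-1, n)
                }
                int[] tmp = prev; prev = curr; curr = tmp;   // the "shift" of rows
            }
            List<Integer> result = new ArrayList<>();
            for (int value : prev) result.add(value);
            return result;
        }
    }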
And now we come to the last optimisation, which brings us to your question:
You can actually do it with one DP row. That row will represent both the previous DP row and the current one. For that to work, you need to update that row from right to left.
Every value that is updated is considered "current row", and every value that you read is considered "previous row". That way the right side of the recurrence relation refers to the previous DP row, and the left side (that is assigned) to the current.
This works only from right to left, because the recurrence formula never refers to n+1, only to n at the most. If it had referred to n and n+1 (but never n-1), you would have had to go from left to right instead.
At the moment we read the value at n, it is still the value that corresponds to the previous DP row, and once we write to it, we will not need that previous value anymore. And when we read the value at n-1 we are sure it is still the previous row's value, since we come from the right, and did not update it yet.
You can imagine how we wipe-and-replace the values of the "previous" row with the new values of the "current" row.
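For example, going from row [1, 3, 3, 1] to row [1, 4, 6, 4, 1] with the posted code: updating right to left, dp[3] = dp[2] + dp[3] = 3 + 1 = 4, then dp[2] = dp[1] + dp[2] = 3 + 3 = 6, then dp[1] = dp[0] + dp[1] = 1 + 3 = 4, and every value that is read still belongs to the previous row. Updating left to right instead would first set dp[1] = 4 and then compute dp[2] = dp[1] + dp[2] = 4 + 3 = 7, reading a value that has already been overwritten with a current-row value.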
Hope this clarifies it a bit.
I have an n x n array. Each field has a cost associated with it (a natural number), and here's my problem:
I start in the first column. I need to find the cheapest way to move through an array (from any field in the first column to any in the last column) following these two rules:
I can only make moves to the right, to the top right, to the lower right and to the bottom.
In a path I can only make k (some constant) moves to the bottom.
Meaning, when I'm at cell x, I can move to the cell directly to the right, to the upper right, to the lower right, or directly below it.
How do I find the cheapest way to move through an array? I thought of this:
- For each field of the n x n array I keep a helper entry recording how many bottom moves the cheapest path to that field uses. For the first column it's all 0's.
- We go through the fields in this order: columns left to right, and rows top to bottom within a column.
- For each field we check which of its neighbours is the cheapest to come from. If it's the one above (meaning we would use a bottom move to reach the current field from it), we first check whether the cheapest path to it already used k bottom moves. If not, we set the cost of the current field to the cost of reaching the field above plus the current field's own cost, and in the auxiliary array we record the number of bottom moves as x + 1, where x is the number of bottom moves used to reach that upper neighbour.
- If the upper neighbour is not the cheapest, we take the cost via the cheapest other neighbour and copy its count of bottom moves.
Time complexity is O(n^2), and so is memory.
Is this correct?
Here is a DP solution in O(N^2) time and O(N) memory:
Dist(i, j) = cheapest cost of getting from cell (i, j) to the last column.
Dist(i, j) = cost[i][j] + min { Dist(i+1, j), Dist(i, j+1), Dist(i+1, j+1), Dist(i-1, j+1) }
Dist(i, N-1) = cost[i][N-1]
Cheapest = min { Dist(i, 0) : i = 0 .. M-1 }
This DP equation shows that you only need the values of the next column to compute the current column, so O(N) space suffices to hold the previous calculation. It also shows that, within a column, the cells with higher row index must be evaluated first.
Pseudocode:
int Prev[M], Curr[M];

// base case: the last column
for (i = 0; i < M; i++)
    Prev[i] = cost[i][N-1];

// evaluate columns right to left, and rows bottom to top within each column
for (j = N-2; j >= 0; j--) {
    for (i = M-1; i >= 0; i--) {
        Curr[i] = cost[i][j] + min( Curr[i+1], Prev[i], Prev[i-1], Prev[i+1] );
    }
    Prev = Curr;
}

// the answer is the cheapest entry of the first column
Cheapest = min(Prev);
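For concreteness, a hedged, runnable Java version of the recurrence above, with boundary checks added; like the pseudocode, it ignores the limit of k bottom moves, and the identifiers are mine:

    class CheapestPath {
        // Computes min over all start rows of Dist(i, 0), column by column from the right.
        static int cheapest(int[][] cost) {
            int m = cost.length, n = cost[0].length;
            int[] prev = new int[m];
            for (int i = 0; i < m; i++) prev[i] = cost[i][n - 1];      // base case: last column
            for (int j = n - 2; j >= 0; j--) {
                int[] curr = new int[m];
                for (int i = m - 1; i >= 0; i--) {
                    int best = prev[i];                                 // move right
                    if (i + 1 < m) {
                        best = Math.min(best, curr[i + 1]);             // move down
                        best = Math.min(best, prev[i + 1]);             // move down-right
                    }
                    if (i > 0) best = Math.min(best, prev[i - 1]);      // move up-right
                    curr[i] = cost[i][j] + best;
                }
                prev = curr;
            }
            int cheapest = Integer.MAX_VALUE;
            for (int v : prev) cheapest = Math.min(cheapest, v);        // start anywhere in column 0
            return cheapest;
        }
    }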
Let M be an n x n matrix with each entry equal to either 0 or 1. Let m[i][j]
denote the entry in row i and column j. A diagonal entry is one of the
form m[i][i] for some i. Swapping rows i and j of the matrix M denotes the following action:
we swap the values m[i][k] and m[j][k] for k = 1, 2, ..., n. Swapping two columns
is defined analogously. We say that M is rearrangeable if it is possible to swap some of the pairs of rows and some of the pairs of columns (in any sequence) so that,
after all the swapping, all the diagonal entries of M are equal to 1.
(a) Give an example of a matrix M that is not rearrangeable, but for
which at least one entry in each row and each column is equal to 1.
(b) Give a polynomial-time algorithm that determines whether a matrix
M with 0-1 entries is re-arrangeable.
I tried a lot but could not reach any conclusion. Please suggest an algorithm for this.
I think this post is on topic here because I think the answer is http://en.wikipedia.org/wiki/Assignment_problem. Consider the job of putting a 1 in column i, for each i. Each row could do some subset of those jobs. If you can find an assignment of rows such that there is a different row capable of putting a 1 in each column then you can make the matrix diagonal by rearranging the rows so that row i puts a 1 on column i.
Suppose that there is an assignment that solves the problem. Paint the cells that hold the 1s for the solution red. Notice that permuting rows leaves a single red cell in each row and in each column. Similarly permuting columns leaves a single red cell in each row and each column. Therefore no matter how much you permute rows and columns I can restore the diagonal by permuting rows. Therefore if there is any solution which places 1s on all the diagonals, no matter how much you try to disguise it by permuting both rows and columns I can restore a diagonal by permuting only rows. Therefore the assignment algorithm fails to solve this problem exactly when there is no solution, for example if the only 1s are in the top row and the leftmost column.
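As a sketch of this matching idea (using a standard augmenting-path bipartite matching, not anything taken from the original post): connect row r to column c whenever m[r][c] = 1; the matrix is rearrangeable exactly when a perfect matching exists.

    import java.util.Arrays;

    class Rearrangeable {
        static boolean isRearrangeable(int[][] m) {
            int n = m.length;
            int[] matchedRow = new int[n];        // matchedRow[c] = row assigned to column c, or -1
            Arrays.fill(matchedRow, -1);
            int matched = 0;
            for (int r = 0; r < n; r++) {
                if (assign(m, r, new boolean[n], matchedRow)) matched++;
            }
            return matched == n;                  // perfect matching <=> rearrangeable
        }

        // Try to assign row r to some column, possibly reassigning other rows along an augmenting path.
        private static boolean assign(int[][] m, int r, boolean[] visited, int[] matchedRow) {
            for (int c = 0; c < m.length; c++) {
                if (m[r][c] == 1 && !visited[c]) {
                    visited[c] = true;
                    if (matchedRow[c] == -1 || assign(m, matchedRow[c], visited, matchedRow)) {
                        matchedRow[c] = r;
                        return true;
                    }
                }
            }
            return false;
        }
    }

If a perfect matching is found, permuting the rows so that the row matched to column i becomes row i puts a 1 on every diagonal entry.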
I have a table of dimension m * n as given below
2 6 9 13
1 4 12 21
10 14 16 -1
A few constraints about this table:
Elements in each row are sorted in increasing order (natural ordering).
A -1 means the cell is of no significance for the purpose of calculation, i.e. no element exists there.
No element can appear in a row after a -1.
All the cells hold either a positive number between 0 and N or a -1.
No two cells hold the same positive number, i.e. a -1 can appear multiple times but no other number can.
Question: I would like to find a set S of numbers from the table, containing exactly one number from each row, such that max(S) - min(S) is as small as possible.
E.g. the above table gives S = {12, 13, 14}.
I would really appreciate it if this could be solved efficiently. My solution is complicated and takes O(m^n), which is too much. I want an optimal solution.
Here is a brute force O((m*n)^2 * nlog(m)) algorithm that I can prove works:
min <- INFINITY
for each pair of numbers a, b taken from two different rows:
    for each other row:
        check if there is a number between a and b
    if there is a matching number in every other row:
        min <- min{ min, |a-b| }
Explanation:
Checking if there is a number between a and b can be done using binary search, and is O(logm)
There are O((n*m)^2) different possibilities for a,b.
The idea is to exhaustively try every pair (a, b) as the candidate minimum and maximum of a solution, check whether that pair is "feasible" (every other row contains an element in the range [a, b]), and among all feasible pairs take the one that minimizes the difference.
EDIT: removed the 2nd solution I proposed, which was greedy and wrong.
1. Put the positions of the first element of each row into a priority queue (min-heap).
2. Remove the smallest element from the queue and replace it with the next element from the same row.
3. Repeat step 2 until some row has no more elements different from "-1". Calculate max(S) - min(S) for each iteration, and if it is smaller than any previous value, update the best-so-far set S.
Time complexity is O(m*n*log(m)).
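A minimal Java sketch of this heap procedure, assuming every row contains at least one element before any -1; class and variable names are illustrative:

    import java.util.PriorityQueue;

    class SmallestSpread {
        // Returns the smallest possible max(S) - min(S) over valid sets S.
        static int smallestSpread(int[][] table) {
            // Heap entries are {value, row, columnIndex}, ordered by value.
            PriorityQueue<int[]> heap = new PriorityQueue<>((x, y) -> Integer.compare(x[0], y[0]));
            int currentMax = Integer.MIN_VALUE;
            for (int r = 0; r < table.length; r++) {
                heap.add(new int[]{table[r][0], r, 0});
                currentMax = Math.max(currentMax, table[r][0]);
            }
            int best = Integer.MAX_VALUE;
            while (true) {
                int[] min = heap.poll();                        // smallest element of the current set S
                best = Math.min(best, currentMax - min[0]);     // max(S) - min(S) for this iteration
                int r = min[1], c = min[2] + 1;
                if (c >= table[r].length || table[r][c] == -1)  // this row has run out of elements
                    return best;
                heap.add(new int[]{table[r][c], r, c});
                currentMax = Math.max(currentMax, table[r][c]);
            }
        }
    }

On the example table this returns 2, corresponding to S = {12, 13, 14}.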
I'm trying to figure out this problem. I have a matrix with integer values. The goal is to get it so that every row sum and every column sum is non-negative. The only things I can do are change the signs of an entire row or an entire column.
Here's what I've tried. I look for a row or column with a negative sum, and I flip it. This works on all the examples that I tried, but now I have to explain why, and I'm not sure. Sometimes the number of negative sums goes up when I do this: when I flip a row, there can be more bad columns afterwards. But I can't find an example where this doesn't work, and I don't know how else to do the problem.
Flipping a row or column with negative sum is correct and will always lead to a situation where all rows and columns have nonnegative (not necessarily positive -- consider the all 0's matrix) sums.
The problem is that you should not keep track of how many rows or columns you need to flip, but of what the sum of all the entries is. Let A be the matrix, and let a be the sum of all the entries. When you flip a row or column with sum -s (s positive), this adds 2s to a, so a strictly increases with every flip. Since a is bounded above (it can never exceed the sum of the absolute values of all the entries), eventually this process must terminate.
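A small sketch of the procedure that this argument justifies (repeatedly flip any row or column with a negative sum); the strictly increasing total sum is exactly why the outer loop must terminate:

    class SignFlipper {
        static void makeAllSumsNonnegative(int[][] a) {
            int n = a.length;
            boolean flipped = true;
            while (flipped) {
                flipped = false;
                for (int i = 0; i < n; i++) {                  // flip any row with a negative sum
                    int sum = 0;
                    for (int j = 0; j < n; j++) sum += a[i][j];
                    if (sum < 0) {
                        for (int j = 0; j < n; j++) a[i][j] = -a[i][j];
                        flipped = true;
                    }
                }
                for (int j = 0; j < n; j++) {                  // flip any column with a negative sum
                    int sum = 0;
                    for (int i = 0; i < n; i++) sum += a[i][j];
                    if (sum < 0) {
                        for (int i = 0; i < n; i++) a[i][j] = -a[i][j];
                        flipped = true;
                    }
                }
            }
        }
    }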
Suppose you have a row of integers, a, b, c... etc.
If a + b + c + ... = n, then by flipping all the signs you get
(-a) + (-b) + (-c) + ... = -(a + b + c + ...) = -n
If n is negative, then -n is positive, so you have just made the row sum positive by flipping all the signs. That's the math behind your method, at least.
Is this what you're looking for?