What is the order of growth for this code segment?

int sum = 0;
for (int i = 1; i < N; i *= 2)
    for (int j = 0; j < N; j++)
        sum++;
I need a good explanation for this. Thanks in advance.

So, basically you are visiting cells of an N×N matrix (a 2D table). A cell is where a row and a column meet; i picks the row and j picks the column, and each sum++ visits one cell. The inner loop always sweeps a full row of N cells. The outer loop, however, does not visit every row: i starts at 1 and is doubled on each pass, so only rows 1, 2, 4, 8, … get visited, stopping once i reaches N.
Can we count this with a different approach?
Yes, we can do it in a faster, closed-form way.
Each visited row contributes N cells (the number of columns), and since i doubles on every pass, the number of visited rows is the number of powers of 2 below N, which is ⌈log₂(N)⌉ for N > 1. So the final result is:
sum = ⌈log₂(N)⌉ * N
which makes the order of growth Θ(N log N).
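If you want to sanity-check this, you can count the iterations directly and compare them with N * ceil(log2(N)). This is a small sketch I am adding here, not part of the original question:

```java
public class GrowthCheck {
    // Counts how many times sum++ runs for a given N (N >= 2).
    static int countIterations(int N) {
        int sum = 0;
        for (int i = 1; i < N; i *= 2)
            for (int j = 0; j < N; j++)
                sum++;
        return sum;
    }

    public static void main(String[] args) {
        for (int N : new int[]{8, 9, 1000}) {
            // ceil(log2(N)) for N >= 2, computed via the bit length of N-1
            int ceilLog2 = 32 - Integer.numberOfLeadingZeros(N - 1);
            // both numbers on each line come out equal
            System.out.println(N + ": " + countIterations(N) + " vs " + N * ceilLog2);
        }
    }
}
```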


Dynamic programming: why update array in the reversed order

I am trying to work through this question in LeetCode.
119. Pascal's Triangle II
Given a non-negative index k where k ≤ 33, return the kth index row of the Pascal's triangle.
Note that the row index starts from 0.
In Pascal's triangle, each number is the sum of the two numbers directly above it.
Example:
Input: 3
Output: [1,3,3,1]
Follow up:
Could you optimize your algorithm to use only O(k) extra space?
import java.util.Arrays;
import java.util.List;

class Solution {
    public List<Integer> getRow(int rowIndex) {
        Integer[] dp = new Integer[rowIndex + 1];
        Arrays.fill(dp, 1);
        // update each row in place, from right to left
        for (int i = 2; i <= rowIndex; i++) {
            for (int j = i - 1; j > 0; j--) {
                dp[j] = dp[j - 1] + dp[j];
            }
        }
        return Arrays.asList(dp);
    }
}
And I saw someone give this working solution.
I can understand why it is correct.
But I am still quite unclear on why the array is updated in this order.
In this case I know the transition of state looks like:
P(n) = P(n-1) + P(n)
But how does this give clues about which direction to update the array in?
Why exactly does ascending order not work in this case, if we think of it as DP? I gather that, in essence, it would cause values to be counted twice.
I know this may be subtle, but could anyone cast at least a little light on that?
Possibly the formula P(n) = P(n-1) + P(n) brings confusion, as it is not a true recurrence relationship. If it were, it would be infinite.
The true recurrence relationship is given by:
P(row, n) = P(row-1, n-1) + P(row-1, n)
Or in more complete terms:
∀ n ∈ {1, …, row-1}: P(row, n) = P(row-1, n-1) + P(row-1, n)
P(row, 0) = 1
P(n, n) = 1
If you would implement this naively, you would create a 2-dimensional DP matrix. Starting with row 0, you would build up the DP matrix going from one row to the next, using the above recurrence relationship.
You then find that you only need the previous row's DP data to calculate the current row's. All the DP rows that come before the previous one are idle: they don't serve any purpose anymore. They are a waste of space.
So then you decide to not create a whole DP matrix, but just two rows. Once you have completed the second row of this DP structure, you make that row the first, and reset the second row in the DP structure. And then you can continue filling that second row, until it is complete again, and you repeat this "shift" of rows...
And now we come to the last optimisation, which brings us to your question:
You can actually do it with one DP row. That row will represent both the previous DP row and the current one. For that to work, you need to update that row from right to left.
Every value that is updated is considered "current row", and every value that you read is considered "previous row". That way the right side of the recurrence relation refers to the previous DP row, and the left side (that is assigned) to the current.
This works only from right to left, because the recurrence formula never refers to n+1, only to n at the most. Had it referred to n and n+1, you would have had to go from left to right.
At the moment we read the value at n, it is still the value that corresponds to the previous DP row, and once we write to it, we will not need that previous value anymore. And when we read the value at n-1 we are sure it is still the previous row's value, since we come from the right, and did not update it yet.
You can imagine how we wipe-and-replace the values of the "previous" row with the new values of the "current" row.
Hope this clarifies it a bit.
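To make the direction issue concrete, here is a minimal sketch (my own illustration) that applies one row transition, row 3 to row 4 of Pascal's triangle, in both directions on a single array:

```java
import java.util.Arrays;

public class DirectionDemo {
    // Applies one Pascal row transition in place, in the given direction.
    // dp initially holds the previous row, padded with a trailing 1.
    static int[] oneTransition(int[] dp, boolean rightToLeft) {
        int last = dp.length - 1;
        if (rightToLeft) {
            for (int j = last - 1; j > 0; j--)   // dp[j-1] still holds the previous row
                dp[j] = dp[j - 1] + dp[j];
        } else {
            for (int j = 1; j < last; j++)       // dp[j-1] was already overwritten
                dp[j] = dp[j - 1] + dp[j];
        }
        return dp;
    }

    public static void main(String[] args) {
        // Row 3 of Pascal's triangle is [1, 3, 3, 1]; row 4 should be [1, 4, 6, 4, 1].
        System.out.println(Arrays.toString(oneTransition(new int[]{1, 3, 3, 1, 1}, true)));
        // prints [1, 4, 6, 4, 1] -- correct
        System.out.println(Arrays.toString(oneTransition(new int[]{1, 3, 3, 1, 1}, false)));
        // prints [1, 4, 7, 8, 1] -- wrong: current-row values were reused
    }
}
```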

How to approach routing path with Dynamic Programming

There is an N x N array filled with random numbers (-100 <= x <= 100).
Starting from A[0][0], it moves to adjacent indices in stages.
Restrictions:
Can't move to a visited index.
Can't move upwards.
I have to get the biggest possible sum by the time the path reaches A[N-1][N-1]; the values at the indices I have visited are added to the sum.
What is the methodology for approaching this problem?
[edit]
A more compact statement of the problem: given a square N*N matrix, find the maximum sum of the visited elements along any path through adjacent nodes (no diagonals) starting from [0][0] and ending at [N-1][N-1], under the restrictions that:
when changing rows, the row index will always increase
while on a row, the column index will always either decrease or increase (i.e. the path does not backtrack over already visited nodes)
You need a 2D state D[i][j], which keeps track of the maximum sum just before leaving row i at column j. The first row is easy to fill: it is just the prefix sum of the matrix's first row.
For all subsequent rows, you can use the following idea: You may have left the previous row at any column. If you know the exit column of the previous row and the exit column of the current row (defined by the state you want to calculate), you know that the sum consists of the accumulated value at the previous row's exit column plus all the values in the current row between the two exit columns. And from all possible exit columns of the previous row, choose the one that results in the maximum sum:
D[i][j] = max_k (D[i - 1][k] + Sum{m from j to k} A[i][m])
Note that this sum can be calculated incrementally for all possible k. The notation Sum{m from j to k} should also be valid for k smaller than j, which then means to traverse the row backwards.
Calculate these states row by row until you end up at D[N-1][N-1], which then holds the solution for your problem.
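A sketch of this DP in Java follows. The incremental left/right sweeps below are one way to realize the "calculated incrementally" remark, giving O(N) work per row; the class and method names are my own:

```java
public class MaxPathSum {
    // D[j] = best achievable sum when leaving the current row at column j.
    static long maxPathSum(int[][] A) {
        int n = A.length;
        long[] D = new long[n];
        D[0] = A[0][0];                                   // first row: prefix sums
        for (int j = 1; j < n; j++) D[j] = D[j - 1] + A[0][j];
        for (int i = 1; i < n; i++) {
            long[] right = new long[n];                   // entered at some k <= j, walked right to j
            long[] left = new long[n];                    // entered at some k >= j, walked left to j
            right[0] = D[0] + A[i][0];
            for (int j = 1; j < n; j++)
                right[j] = Math.max(right[j - 1], D[j]) + A[i][j];
            left[n - 1] = D[n - 1] + A[i][n - 1];
            for (int j = n - 2; j >= 0; j--)
                left[j] = Math.max(left[j + 1], D[j]) + A[i][j];
            for (int j = 0; j < n; j++)
                D[j] = Math.max(right[j], left[j]);
        }
        return D[n - 1];
    }

    public static void main(String[] args) {
        System.out.println(maxPathSum(new int[][]{{1, 2}, {3, 4}}));   // 8
        System.out.println(maxPathSum(new int[][]{{1, -2}, {-3, 4}})); // 3
    }
}
```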

Finding Squares on the chessboard attackable by a rook

The question is like this..
There is an NxN chessboard. Each square on the chessboard can be either empty or can have a rook. A rook as we know can attack either horizontally or vertically. Given a 2D matrix where 0 represents an empty square and 1 represents a rook, we have to fill in all the cells in the matrix with 1 which represent squares that can be attacked by any rook present on the chessboard.
Now, I could do this easily in O(n^3) time with constant space, and then in O(n^2) time with O(n) space. But I need to figure out a solution in O(n^2) time and constant space. Can somebody please help?
Assume that you knew that all your rooks were either in the first column or in the first row. Then you would have an O(n^2) solution with no space overhead: just traverse the first row/column and fill your matrix every time you see a rook (leaving the first row / first column itself for a last step).
The same holds if all rooks are in the last column / last row, first column / last row, or last column / first row.
Now take your initial matrix and iterate over it until you find a rook. Let i be the index of this rook's row and j the index of its column. Continue iterating over the matrix, and for each rook that you find at position (i',j'), remove it and replace it with a rook at position (i,j') and another rook at position (i',j).
The matrix you end up with will have ones only along the i-th row and the j-th column. Let A_1 be the submatrix of A formed by its first i rows and first j columns. Then A_1 has the property that it contains ones only on its last row / last column, and thus you can solve A_1 without space overhead. Do the same for the three other submatrices of A (the one with the last n-i+1 rows and first j columns, and so on).
If this is not clear please tell me and I will give more details.
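For a concrete picture, here is a sketch of the same idea in Java, in a condensed form: instead of solving four submatrices separately, row i and column j are used directly as the marker storage (the names and exact structure are my own):

```java
public class RookFill {
    // Marks every square attacked (or occupied) by a rook, in place, with
    // constant extra space: the first rook's row ri and column rj serve as
    // the marker arrays that other solutions keep in separate hor[]/ver[].
    static void markAttacked(int[][] chess) {
        int n = chess.length, ri = -1, rj = -1;
        // find the first rook
        outer:
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                if (chess[r][c] == 1) { ri = r; rj = c; break outer; }
        if (ri == -1) return; // no rooks: nothing to mark
        // project every rook onto row ri and column rj
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                if (chess[r][c] == 1) { chess[ri][c] = 1; chess[r][rj] = 1; }
        // fill every other cell from the markers
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                if (r != ri && c != rj)
                    chess[r][c] = (chess[r][rj] == 1 || chess[ri][c] == 1) ? 1 : 0;
        // row ri and column rj are themselves fully attacked by the first rook
        for (int c = 0; c < n; c++) chess[ri][c] = 1;
        for (int r = 0; r < n; r++) chess[r][rj] = 1;
    }

    public static void main(String[] args) {
        int[][] board = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        markAttacked(board);
        System.out.println(java.util.Arrays.deepToString(board));
        // prints [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
    }
}
```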
Try this solution; it counts the number of squares that lie on a row or column containing a rook:

#include <iostream>
using namespace std;

int main() {
    int n;
    int chess[64][64];
    int hor[64] = {0}, ver[64] = {0};
    // read the chessboard
    cin >> n;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> chess[i][j];
    // mark the rows and columns that contain a rook
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (chess[i][j] == 1) {
                hor[i] = 1;
                ver[j] = 1;
            }
    int cntHor = 0;
    int cntVer = 0;
    for (int i = 0; i < n; i++) {
        if (hor[i] == 1) cntHor++;
        if (ver[i] == 1) cntVer++;
    }
    // squares in attacked rows plus attacked columns, minus the intersections counted twice
    int result = (cntHor + cntVer) * n - cntHor * cntVer;
    cout << result;
    return 0;
}

Possibility of making diagonal elements of a square matrix 1,if matrix has only 0 or 1

Let M be an n x n matrix with each entry equal to either 0 or 1. Let m[i][j]
denote the entry in row i and column j. A diagonal entry is one of the
form m[i][i] for some i. Swapping rows i and j of the matrix M denotes the following action:
we swap the values m[i][k] and m[j][k] for k = 1, 2, ..., n. Swapping two columns
is defined analogously. We say that M is re-arrangeable if it is possible to swap some pairs of rows and some pairs of columns (in any sequence) so that,
after all the swapping, all the diagonal entries of M are equal to 1.
(a) Give an example of a matrix M that is not re-arrangeable, but for
which at least one entry in each row and each column is equal to 1.
(b) Give a polynomial-time algorithm that determines whether a matrix
M with 0-1 entries is re-arrangeable.
I tried a lot but could not reach any conclusion. Please suggest an algorithm for this.
I think this post is on topic here because the answer is the assignment problem: http://en.wikipedia.org/wiki/Assignment_problem. Consider the job of putting a 1 in column i, for each i. Each row can do some subset of those jobs. If you can find an assignment of rows such that a different row is capable of putting a 1 in each column, then you can make the diagonal all 1s by rearranging the rows so that row i puts a 1 in column i.
Suppose that there is an assignment that solves the problem. Paint the cells that hold the 1s for the solution red. Notice that permuting rows leaves a single red cell in each row and in each column. Similarly permuting columns leaves a single red cell in each row and each column. Therefore no matter how much you permute rows and columns I can restore the diagonal by permuting rows. Therefore if there is any solution which places 1s on all the diagonals, no matter how much you try to disguise it by permuting both rows and columns I can restore a diagonal by permuting only rows. Therefore the assignment algorithm fails to solve this problem exactly when there is no solution, for example if the only 1s are in the top row and the leftmost column.
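This reduction can be sketched with Kuhn's augmenting-path algorithm for bipartite matching (a minimal illustration; the names are my own):

```java
public class Rearrangeable {
    // Tries to claim a column for row r, displacing other rows along
    // an augmenting path if necessary (Kuhn's algorithm).
    static boolean tryAssign(int r, int[][] m, int[] colOwner, boolean[] seen) {
        for (int c = 0; c < m.length; c++) {
            if (m[r][c] == 1 && !seen[c]) {
                seen[c] = true;
                // column c is free, or its current owner can move elsewhere
                if (colOwner[c] == -1 || tryAssign(colOwner[c], m, colOwner, seen)) {
                    colOwner[c] = r; // row r supplies the 1 for column c
                    return true;
                }
            }
        }
        return false;
    }

    // M is re-arrangeable iff a perfect matching of rows to columns exists.
    static boolean isRearrangeable(int[][] m) {
        int n = m.length;
        int[] colOwner = new int[n];
        java.util.Arrays.fill(colOwner, -1);
        for (int r = 0; r < n; r++)
            if (!tryAssign(r, m, colOwner, new boolean[n]))
                return false; // some row cannot claim any column
        return true;
    }

    public static void main(String[] args) {
        // The part (a) style example: 1s only in the top row and leftmost column.
        System.out.println(isRearrangeable(new int[][]{{1, 1, 1}, {1, 0, 0}, {1, 0, 0}})); // false
        System.out.println(isRearrangeable(new int[][]{{0, 1}, {1, 0}}));                  // true
    }
}
```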

Need an algorithm to make all rows and columns have non-negative sums

I'm trying to figure out this problem. I have a matrix with integer values. The goal is to get it so that every row sum and every column sum is non-negative. The only things I can do are change the signs of an entire row or an entire column.
Here's what I've tried. I look for a row or column with a negative sum and flip it. This works on all the examples I tried, but now I have to explain why, and I'm not sure. Sometimes when I do this the number of negative sums even goes up: when I flip a row, there are sometimes more bad columns afterwards. But I can't find an example where this doesn't work, and I don't know how else to approach the problem.
Flipping a row or column with negative sum is correct and will always lead to a situation where all rows and columns have nonnegative (not necessarily positive -- consider the all 0's matrix) sums.
The problem is that you should not keep track of how many rows or columns still need flipping, but of the sum of all the entries. Let A be the matrix, and let a be the sum of all its entries. When you flip a row or column with sum -s (s positive), this adds 2s to a. Since a is bounded above (by the sum of the absolute values of all entries) and each flip strictly increases it, eventually this process must terminate.
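That argument translates almost directly into code; here is a minimal sketch (not tuned for efficiency):

```java
public class SignFlip {
    // Repeatedly flips any row or column with a negative sum.
    // Each flip strictly increases the total sum, which is bounded above,
    // so the loop must terminate with all row and column sums non-negative.
    static int[][] fix(int[][] a) {
        int n = a.length, m = a[0].length;
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < n; i++) {              // rows
                int s = 0;
                for (int j = 0; j < m; j++) s += a[i][j];
                if (s < 0) { for (int j = 0; j < m; j++) a[i][j] = -a[i][j]; changed = true; }
            }
            for (int j = 0; j < m; j++) {              // columns
                int s = 0;
                for (int i = 0; i < n; i++) s += a[i][j];
                if (s < 0) { for (int i = 0; i < n; i++) a[i][j] = -a[i][j]; changed = true; }
            }
        }
        return a;
    }

    public static void main(String[] args) {
        int[][] result = fix(new int[][]{{-1, -2}, {-3, 4}});
        System.out.println(java.util.Arrays.deepToString(result));
        // prints [[-1, 2], [3, 4]] -- every row and column sum is now non-negative
    }
}
```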
Suppose you have a row of integers, a, b, c... etc.
If a + b + c + ... = n, then by flipping all the signs you get
(-a) + (-b) + (-c) + ... = -(a + b + c + ...) = -n
If n is negative, then -n is positive, so you just made the row sum positive by flipping all the signs. That's the math behind your method, at least.
Is this what you're looking for?
