Given an N*N matrix containing 1's and 0's, and given an integer k, what is the best method to find a rectangular region that contains exactly k 1's?
I can do it in O(N^3 * log(N)), but surely the best solution is faster. First, create another N*N matrix B (the initial matrix is A), where B[i][j] is the number of ones in the rectangle of A with corners (0,0) and (i,j). B can be computed in O(N^2) by dynamic programming: B[i][j] = B[i-1][j] + B[i][j-1] - B[i-1][j-1] + A[i][j].
Now it is easy to solve this problem in O(N^4): iterate over all bottom-right corners (i = 1..N, j = 1..N, O(N^2)), all left columns (z = 1..j, O(N)), and all top rows (t = 1..i, O(N)), and read off the number of ones in each rectangle from B:
sum_of_ones = B[i][j] - B[i][z-1] - B[t-1][j] + B[t-1][z-1].
If you get exactly k, i.e. sum_of_ones == k, output the rectangle.
To bring this down to O(N^3 * log(N)), find the top row t by binary search instead of iterating over all possible values: for fixed i, j, z, the rectangle sum is monotone in t, so binary search applies.
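As a sketch in Python (0-indexed lists of lists; the function names are mine), the prefix table and the O(1) rectangle query look like this:

```python
def build_prefix(A):
    """B[i][j] = number of ones in A[0..i-1][0..j-1]; B has an extra zero row/column."""
    n = len(A)
    B = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            B[i][j] = B[i-1][j] + B[i][j-1] - B[i-1][j-1] + A[i-1][j-1]
    return B

def rect_sum(B, t, z, i, j):
    """Number of ones in the rectangle with corners (t, z) and (i, j), inclusive."""
    return B[i+1][j+1] - B[t][j+1] - B[i+1][z] + B[t][z]
```

The extra zero row and column in B avoid special-casing rectangles that touch the matrix border.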
Consider this simpler problem:
Given a vector of size N containing only the values 1 and 0, find a contiguous subsequence that contains exactly k 1s.
Let A be the given vector and S[i] = A[1] + A[2] + A[3] + ... + A[i], i.e. the number of 1s in the prefix A[1..i].
For each i, we are interested in the existence of a j <= i such that S[i] - S[j-1] == k.
We can find such a j for every i in O(N) total with a hash table, using the relation:
S[i] - S[j-1] == k  =>  S[j-1] == S[i] - k

let H = a hash table, initialized with S[0] = 0
for i = 1 to N do
    if H.Contains(S[i] - k) then a valid subsequence ends at i
    H.Add(S[i])

(Note that H must start out containing S[0] = 0, so that subsequences beginning at position 1 are found, and S[i] is added unconditionally on every iteration.)
Now we can use this to solve your given problem in O(N^3): for each contiguous range of rows in your given matrix (there are O(N^2) such ranges), treat the column sums of that range as a vector and apply the previous algorithm to it. The computation of S is a bit more involved in the matrix case, but it's not that hard to figure out. Let me know if you need more details.
Update:
Here's how the algorithm would work on the following matrix, assuming k = 12:
0 1 1 1 1 0
0 1 1 1 1 0
0 1 1 1 1 0
Consider the first row alone:
0 1 1 1 1 0
Consider it to be the vector 0 1 1 1 1 0 and apply the algorithm for the simpler problem on it: we find that there's no subsequence adding up to 12, so we move on.
Consider the first two rows:
0 1 1 1 1 0
0 1 1 1 1 0
Consider them to be the vector 0+0 1+1 1+1 1+1 1+1 0+0 = 0 2 2 2 2 0 and apply the algorithm for the simpler problem on it: again, no subsequence that adds up to 12, so move on.
Consider the first three rows:
0 1 1 1 1 0
0 1 1 1 1 0
0 1 1 1 1 0
Consider them to be the vector 0 3 3 3 3 0 and apply the algorithm for the simpler problem on it: we find the sequence starting at position 2 and ending at position 5 to be the solution. From this we can get the entire rectangle with simple bookkeeping.
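Putting the pieces together, here is a sketch of the full O(N^3) algorithm in Python (the function name and return convention are mine): for each top row, grow the range of rows downward while maintaining the column sums, and run the hash-table scan on that vector.

```python
def find_rect_with_k_ones(A, k):
    """Return ((top, left), (bottom, right)) of a rectangle with exactly k ones, else None."""
    n_rows, n_cols = len(A), len(A[0])
    for top in range(n_rows):
        col_sums = [0] * n_cols                  # ones per column in rows top..bottom
        for bottom in range(top, n_rows):
            for c in range(n_cols):
                col_sums[c] += A[bottom][c]
            # 1-D scan: find a window of col_sums summing to exactly k
            seen = {0: -1}                       # prefix sum -> first index it occurs at
            prefix = 0
            for c in range(n_cols):
                prefix += col_sums[c]
                if prefix - k in seen:
                    return (top, seen[prefix - k] + 1), (bottom, c)
                seen.setdefault(prefix, c)
    return None
```

On the example matrix above with k = 12, this finds a rectangle spanning all three rows; the particular corner columns it reports may differ from the walkthrough, since zero columns at the edge do not change the count.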
Related
Suppose there is a 2D array (m x n) of bits.
For example:
1 0 0 1 0
1 0 1 0 0
1 0 1 1 0
0 0 0 0 1
here, m = 4, n = 5.
I can flip the bits in any row (0 becomes 1, 1 becomes 0); flipping a row flips every bit in it.
My goal is to get the max OR value between a given pair of rows.
That is, if the given pair of rows is (r1, r2), then I can flip any number of rows between r1 and r2, and I should find the maximum possible OR value of all the rows between r1 and r2.
In the above example (consider arrays with 1-based index), if r1 = 1 and r2 = 4, I can flip the 1st row to get 0 1 1 0 1. Now, if I find the OR value of all the rows from 1 to 4, I get the value 31 as the maximum possible OR value (there can be other solutions).
Also, it would be nice to compute the answer for (r1, r1), (r1, r1+1), (r1, r1+2), ... , (r1, r2-1) while calculating the same for (r1, r2).
Constraints
1 <= m x n <= 10^6
1 <= r1 <= r2 <= m
A simple brute force solution would have a time complexity of O(2^m).
Is there a faster way to compute this?
Since A <= A | B, the value of a number A can only go up as we OR more numbers into it.
Therefore, we can use binary search.
We can use a function that ORs two rows together (taking each row either as-is or flipped) and saves the result as a third row; then combine pairs of these third rows into higher-level rows, and so on, until only one row is left.
Using your example:
array1 = 1 0 0 1 0 [0]
1 0 1 0 0 [1]
1 0 1 1 0 [2]
0 0 0 0 1 [3]
array2 = 1 1 0 1 1 <-- [0] | ~[1]
1 1 1 1 0 <-- [2] | ~[3]
array3 = 1 1 1 1 1 <-- array2[0] | array2[1]
And obviously you can truncate branches as necessary when m is not a power of 2.
So this would be O(m) time. And keep in mind that for large numbers of rows, there are likely not unique solutions. More than likely, the result would be 2 ^ n - 1.
An important optimization: if m >= n, then the output must be 2^n - 1. Suppose we have two numbers A and B; if B has k missing bits, then A or ~A is guaranteed to fill at least one of them. By a similar token, if m >= log n, then the output must also be 2^n - 1, since each A or ~A is guaranteed to fill at least half of the unfilled bits in B.
Using these shortcuts, you can get away with a brute-force search if you wanted. I'm not 100% sure the binary search algorithm works in every single case.
Considering the problem of flipping rows in the entire matrix and then OR-ing them together to get as many 1s as possible: I claim this is tractable when the number of columns is less than 2^m, where m is the number of rows. Consider the rows one by one. At stage i (counting from 0) you have fewer than 2^(m-i) zeros left to fill. Because flipping a row turns 0s into 1s and vice versa, either the current row or the flipped row fills in at least half of those zeros. After working through all the rows, you have fewer than one zero left to fill, so this procedure is guaranteed to produce a perfect answer.
I claim this is tractable when the number of columns is at least 2^m, where m is the number of rows. There are 2^m possible patterns of flipped rows, but this is only O(N) where N is the number of columns. So trying all possible patterns of flipped rows gives you an O(N^2) algorithm in this case.
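The halving argument translates directly into a greedy sketch in Python (rows represented as n-bit integers; the names are mine). It is guaranteed to reach all ones when the number of columns is less than 2^m, and is only a heuristic otherwise:

```python
def max_or_with_flips(rows, n):
    """Greedily choose each row or its bitwise complement to maximize the OR of all rows."""
    mask = (1 << n) - 1
    acc = 0
    for r in rows:
        missing = ~acc & mask                    # bits still zero in the accumulated OR
        flipped = ~r & mask
        # r and flipped partition the missing bits, so the better one covers >= half
        if bin(r & missing).count("1") >= bin(flipped & missing).count("1"):
            acc |= r
        else:
            acc |= flipped
    return acc
```

On the example matrix (rows 10010, 10100, 10110, 00001), this reaches the full value 31.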
Imagine I have a roulette wheel, and I want to feed my algorithm three parameters:
p := probability to win one game
m := number of times the wheel is spun
k := number of consecutive wins I am interested in
and I am interested in the probability P that after spinning the wheel m times I win at least k consecutive games.
Let's go through an example where m = 5 and k = 3 and let's say 1 is a win and 0 a loss.
1 1 1 1 1
0 1 1 1 1
1 1 1 1 0
1 1 1 0 0
0 1 1 1 0
0 0 1 1 1
In my understanding, these would be all the outcomes that win at least 3 consecutive games. For every k, I have (m - k + 1) possible winning outcomes.
First question, is this true? Or would also 1 1 1 0 1 and 1 0 1 1 1 be possible solutions?
Next, what would a handy computation for this problem look like? First, I thought about the binomial distribution to solve this problem, where I just iterate over all k:
C(n, k) * p^k * (1 - p)^(n - k)
But this somehow doesn't guarantee that the wins are consecutive. Is the binomial distribution somehow adjustable to produce the output P I am looking for?
The following is an option you might want to consider:
You generate an array of length m + 1, where entry i represents the probability that by the ith spin you have seen k consecutive 1s.
All slots before index k have probability 0: there is no chance of k consecutive wins yet.
Slot k has the probability p^k.
All later positions are computed with a dynamic programming approach: each position i >= k + 1 is the sum of position i - 1 plus p^k * (1 - p) * (1 - the probability at position i - 1 - k).
This way you iterate through the array, and the last position holds the probability of at least k consecutive 1s.
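As a sketch in Python (the function name is mine), the recurrence reads: a run of k wins first completes at spin i when spin i - k is a loss, the next k spins are all wins, and no run occurred among the first i - k - 1 spins.

```python
def prob_at_least_k_consecutive(p, m, k):
    """Probability of at least k consecutive wins in m spins, each won with probability p."""
    # P[i] = probability that a run of k wins has appeared within the first i spins
    P = [0.0] * (m + 1)
    if k <= m:
        P[k] = p ** k
    for i in range(k + 1, m + 1):
        P[i] = P[i - 1] + (1 - P[i - k - 1]) * (1 - p) * p ** k
    return P[m]
```

For p = 0.5, m = 5, k = 3 this gives 0.25, matching direct enumeration: 8 of the 32 length-5 outcomes contain a run of three wins.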
Or would also 1 1 1 0 1 and 1 0 1 1 1 be possible solutions?
Yes, they would, since you want to win at least k consecutive games.
First, I thought about the binomial distribution to solve this problem, where I just iterate over all k: C(n, k) * p^k * (1 - p)^(n - k)
That might work if you combine it with the acceptance-rejection method: generate an outcome from the binomial, check whether it has at least k consecutive wins, accept it if yes, drop it otherwise.
Is the binomial distribution somehow adjustable to produce the output P I am looking for?
Frankly, I would take a look at the geometric distribution, which describes the number of successful wins before a loss (or vice versa).
Problem: You are given a natural number N and a set of elements of size M. Your task is to generate all possible values of a list of size N where each element belongs to set M (both with or without repetition).
Example: Let N = 2 and M = {0, 1}.
With repetition: N = [0,1] or N = [1,0] or N = [0,0] or N = [1,1]
Without repetition: N = [1,0] or N = [0,1]
I came up with a solution (EDIT: which is wrong) for the with-repetition case as follows.
It is in pseudocode so that it isn't biased toward any particular language.

Let I be an auxiliary list of size N.
Let l denote the last position changed in I, initialized to the value N.
Fill I with the value 1.

while l != 0
    for i = 1 to N
        N[i] = M[I[i]]
    do_something(N)
    if I[l] != M
        I[l] += 1
    elseif l == 1
        break // so that it does not become undefined in the else clause
    else
        l -= 1
        I[l] += 1
It takes O(N^2 * M) time and O(N + M) space. If you have a better one, please post it.
I was not able to come up with a good solution for without repetition case.
You would see that your algorithm has a problem if you had actually tried it on some very simple cases, for example the one you gave, with M = N = 2. (Why didn't you do this first?) It will not generate the list 1 0.
Why? Because after decrementing l and incrementing the new I[l], you never decrease any of the later (further to the right) values in I[]. So e.g. for N = 5 you would generate only the lists
0 0 0 0 0
0 0 0 0 1
0 0 0 1 1
0 0 1 1 1
0 1 1 1 1
1 1 1 1 1
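For what it's worth, both variants can be generated correctly with the Python standard library (a sketch, not a fix of the pseudocode above):

```python
from itertools import product, permutations

M = [0, 1]
N = 2

# with repetition: every length-N list over M
with_repetition = list(product(M, repeat=N))
print(with_repetition)     # [(0, 0), (0, 1), (1, 0), (1, 1)]

# without repetition: length-N arrangements of distinct elements of M
without_repetition = list(permutations(M, N))
print(without_repetition)  # [(0, 1), (1, 0)]
```

Both generators run in constant space per emitted list, so materializing the full list is only needed if you want all results at once.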
I have an array consisting only of non-negative integers, and I want to reduce every element to zero. The only operation allowed is 'decrement each element in the range i..j by 1', and each such operation costs 1. The question is how to find the minimum number of such operations that transforms this array into an all-zero array.
example: [1 2 3 0]
---> [0 1 2 0] (decrement all elements in the range 0 to 2)
---> [0 0 1 0] (decrement all elements in the range 1 to 2)
---> [0 0 0 0] (decrement all elements in the range 2 to 2)
This feels like a duplicate, but I can find no evidence of that, so I'll answer instead.
In every optimal solution, there exists no pair of operations [a, b) and [b, c), since we could unite them as [a, c). There exists, moreover, an optimal solution that is laminar, namely, for each pair of operations, their scopes are nested or disjoint, but not partially overlapping. This is because we can convert operations on [a, c) and [b, d) to [a, d) and [b, c).
Subject to the latter restriction, there is only one optimal strategy up to permuting the operations, derived by repeatedly decreasing a maximal nonzero interval. (Proof by induction: consider the decrease operation whose interval argument is leftmost and maximal among other interval arguments. By the leftmost assumption, this interval must include the leftmost nonzero. If it excludes a contiguous nonzero element to its right, then how could that element get decreased? Not by an interval that starts to the left (that wouldn't be laminar) and not by an interval that starts with that element (that wouldn't be optimal), so not at all.)
All that we have to do algorithmically is to construct this optimal solution. In Python:
cost = 0
stackheight = 0
for x in lst:
    # each rise above the current height must open that many new intervals;
    # drops in height cost nothing, since open intervals can simply end here
    cost += max(x - stackheight, 0)
    stackheight = x
I tried a few small examples, and so far I haven't found any where a simple greedy algorithm does not produce the best solution. The approach I used is:
loop
    find largest element
    if largest element is zero
        break
    expand interval with non-zero values left and right around largest, until zero/boundary is encountered
    decrement values in this interval
I found cases where there are other solutions with the same number of steps, but not better. For example, with this algorithm:
2 3 1 2 3
1 2 0 1 2
0 1 0 1 2
0 1 0 0 1
0 0 0 0 1
0 0 0 0 0
So that's 5 steps. This ties it, with a different sequence:
2 3 1 2 3
1 2 1 2 3
1 2 1 1 2
1 1 1 1 2
1 1 1 1 1
0 0 0 0 0
I would be interested in seeing counter-examples where this strategy does not produce a best solution.
I am trying to compute the number of n x m binary matrices with at most k consecutive 1s in each column. After some research, I figured out that it is enough to count the valid single columns of n entries: if there are p such column vectors, the required number of matrices is p^m.
Because n and m are very large (up to 2,000,000), I can't find a suitable solution by brute force. I am trying to find a recurrence formula in order to build a matrix that helps me compute the answer. Could you suggest a solution?
There's a (k + 1)-state dynamic program (state = length of the current run of trailing 1s, from 0 to k). To make a long story short, you can compute large terms of it quickly by taking the nth power of the (k + 1)-by-(k + 1) integer matrix below (example for k = 4)
1 1 0 0 0
1 0 1 0 0
1 0 0 1 0
1 0 0 0 1
1 0 0 0 0
modulo c and summing the first row.
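A sketch of that computation in Python (the function names, and taking the modulus c as a parameter, are mine). With p = count_valid_columns(n, k, c), the answer for the full matrix would then be pow(p, m, c):

```python
def count_valid_columns(n, k, c):
    """Number of binary vectors of length n with at most k consecutive 1s, mod c."""
    size = k + 1                      # state = length of the current trailing run of 1s
    T = [[0] * size for _ in range(size)]
    for s in range(size):
        T[s][0] = 1                   # append a 0: the run resets to length 0
        if s < k:
            T[s][s + 1] = 1           # append a 1: the run grows by one (capped at k)

    def mul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) % c
                 for col in zip(*B)] for row in A]

    def mat_pow(A, e):
        R = [[int(i == j) for j in range(size)] for i in range(size)]
        while e:
            if e & 1:
                R = mul(R, A)
            A = mul(A, A)
            e >>= 1
        return R

    return sum(mat_pow(T, n)[0]) % c  # start in state 0, sum over all end states
```

For k = 1 this reduces to the Fibonacci count of strings with no two consecutive 1s (5 vectors of length 3), a quick sanity check on the transfer matrix.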