Suppose there is a 2D array (m x n) of bits.
For example:
1 0 0 1 0
1 0 1 0 0
1 0 1 1 0
0 0 0 0 1
here, m = 4, n = 5.
I can flip (0 becomes 1, 1 becomes 0) the bits in any row; flipping a row flips every bit in it.
My goal is to maximize the OR value over a given range of rows.
That is, given a pair of rows (r1, r2), I may flip any subset of the rows between r1 and r2, and I should find the maximum possible OR value of all the rows between r1 and r2.
In the above example (consider arrays with 1-based index), if r1 = 1 and r2 = 4, I can flip the 1st row to get 0 1 1 0 1. Now, if I find the OR value of all the rows from 1 to 4, I get the value 31 as the maximum possible OR value (there can be other solutions).
Also, it would be nice to compute the answers for (r1, r1), (r1, r1+1), (r1, r1+2), ..., (r1, r2-1) along the way while calculating the answer for (r1, r2).
Constraints
1 <= m x n <= 10^6
1 <= r1 <= r2 <= m
A simple brute force solution would have a time complexity of O(2^m).
Is there a faster way to compute this?
Since A <= A | B, the value of a number A will only go up as we OR more numbers to A.
Therefore, we can merge rows pairwise, tournament-style.
We can use a function that takes two rows, tries the flip combinations, and saves the best ORed result as a third row. Then merge two of these third rows into a higher-level row, and so on, until only one row is left.
Using your example:
array1 = 1 0 0 1 0   [0]
         1 0 1 0 0   [1]
         1 0 1 1 0   [2]
         0 0 0 0 1   [3]
array2 = 1 1 0 1 1   <-- array1[0] | ~array1[1]
         1 1 1 1 0   <-- array1[2] | ~array1[3]
array3 = 1 1 1 1 1   <-- array2[0] | array2[1]
And obviously you can truncate branches as necessary when m is not a power of 2.
So this takes O(m) row merges, i.e. O(m*n) bit operations. And keep in mind that for large numbers of rows there is likely no unique solution. More than likely, the result will be 2^n - 1.
An important optimization: if m >= n, then the output must be 2^n - 1. Suppose the accumulated value B has k missing bits; for any row A, either A or ~A is guaranteed to fill at least one of those bits. By a sharper version of the same argument, if m >= log2(n), the output must also be 2^n - 1: every missing bit of B is covered by A or by ~A, so one of the two fills at least half of the unfilled bits.
Using these shortcuts, you could even get away with a brute-force search if you wanted. I'm not 100% sure the pairwise-merge algorithm works in every single case.
Considering the problem of flipping the rows in the entire matrix and then OR-ing them together to get as many 1s as possible, I claim this is tractable when the number of columns is less than 2^m, where m is the number of rows. Consider the rows one by one. At stage i, counting from 0, you have fewer than 2^(m-i) zeros left to fill. Because flipping a row turns 0s into 1s and vice versa, either the current row or the flipped row fills in at least half of those zeros. When you have worked through all the rows, you will have fewer than one zero left to fill, i.e. none, so this procedure is guaranteed to produce a perfect answer.
I claim this is also tractable when the number of columns is at least 2^m, where m is the number of rows. There are 2^m possible patterns of flipped rows, but that is only O(N) patterns, where N is the number of columns. So trying all possible patterns of flipped rows gives roughly an O(N^2) algorithm in this case.
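The halving argument above can be sketched directly in Python (the function name max_or and the representation of rows as integers are my own; this is a greedy heuristic that is only guaranteed to reach 2^n - 1 when n < 2^m):

```python
def max_or(rows, n):
    # Greedy from the halving argument: for each row, keep either the row
    # or its flip, whichever covers more of the still-missing bit positions.
    full = (1 << n) - 1
    acc = 0
    for r in rows:
        missing = full & ~acc
        flipped = full & ~r
        if bin(r & missing).count("1") >= bin(flipped & missing).count("1"):
            acc |= r
        else:
            acc |= flipped
    return acc

# The question's example: rows 10010, 10100, 10110, 00001 (m = 4, n = 5)
print(max_or([0b10010, 0b10100, 0b10110, 0b00001], 5))  # reaches 31
```

On the question's example it flips the first row and reaches the full value 31, matching the answer given there.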
I found this problem in a hiring contest (which is over now). Here it is:
You are given two natural numbers N and X. You must create an array of N natural numbers whose bitwise XOR equals X, such that the sum of the numbers in the array is as small as possible.
If multiple such arrays exist, print the lexicographically smallest one:
Array A < Array B if, at the first index i where they differ, A[i] < B[i].
Sample Input: N=3, X=2
Sample output : 1 1 2
Explanation: We must print 3 natural numbers whose XOR is 2, with the minimum possible sum; [1 1 2] achieves this.
My approach:
If N is odd, I put N-1 ones in the array (so that their XOR is zero) and then put X.
If N is even, I again put N-1 ones and then put X-1 (if X is odd) or X+1 (if X is even).
But this algorithm failed most of the test cases. For example, for N=4 and X=6 my output is
1 1 1 7, but it should be 1 1 2 4.
Anyone knows how to make the array sum minimum?
In order to achieve the minimum sum, you need to make sure that you are not cancelling bits of the target X and recreating them again, because that increases the sum. To avoid it, you create the bits of X one by one, ideally from the end of the array. So, for your example of N=4 and X=6 (I use ^ for XOR):
X = 6 = 110 (binary) = 2 + 4. Note that 2^4 = 6 as well, because these numbers share no common bits. So the output is 1 1 2 4.
So, we start by creating the most significant bits of X from the end of the output array. Then we also have to handle the corner cases for different values of N. Here are a few examples to make the idea clear:
A) X=14, N=5:
X=1110=8+4+2. So, the array is 1 1 2 4 8.
B) X=14, N=6:
X=8+4+2. The array should be 1 1 1 1 2 12.
C) X=15, N=6:
X=8+4+2+1. The array should be 1 1 1 2 4 8.
D) X=15, N=5:
The array should be 1 1 1 2 12.
E) X=14, N=2:
The array should be 2 12. Because 12 = 4^8
So, we proceed as follows. Let k be the number of powers of 2 in X (its set bits).
Case 1 - If k >= N (example E): pick the N-1 smallest powers individually, from left to right, and merge all the remaining powers into the last position of the array.
Case 2 - If k < N (examples A, B, C, D): compute h = N - k. If h is odd, set h = N - k + 1. Put h 1's at the beginning of the array; an even number of 1's cancels in the XOR. The number of places left is now at most k, so we can follow the idea of Case 1 for the remaining positions. Note that in Case 2 we deliberately use an even number of added 1's and then do some merging at the end; this guarantees that the array is the smallest it can be.
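A sketch of this construction in Python, reconstructed from examples A-E (the function name min_xor_array is mine; the corner case where X has a single set bit and N - k is odd is left unhandled, as the answer's note about corner cases suggests):

```python
def min_xor_array(n, x):
    # Split x into its powers of two; distinct powers XOR to their sum.
    bits = [1 << i for i in range(x.bit_length()) if (x >> i) & 1]
    k = len(bits)
    if k >= n:
        # Too many bits: n-1 smallest stand alone, the rest merge into one value.
        return bits[:n - 1] + [sum(bits[n - 1:])]
    # Too few bits: pad with 1s; an even count of 1s cancels in the XOR.
    h = n - k
    if h % 2 == 1:
        h += 1  # one extra 1 used a slot, so two bits merge later (needs k >= 2)
    slots = n - h
    return [1] * h + bits[:slots - 1] + [sum(bits[slots - 1:])]
```

This reproduces the sample and examples A-E, e.g. min_xor_array(3, 2) gives [1, 1, 2], min_xor_array(2, 14) gives [2, 12], and min_xor_array(4, 6) gives [1, 1, 2, 4].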
The key point is that we have to minimize the sum of the array.
First calculate the set bits of the target X (using the question's notation: X is the target, N the array length). If N is less than or equal to the number of set bits, split X into N integers along its set bits, like so:
X = 15, N = 2: 15 has 4 set bits, so the solution is 1 14.
If N = 3, the solution is 1 2 12.
This minimizes the array sum too.
In the other case, where N is greater than the number of set bits, calculate difference = N - setbits(X).
If the difference is even, add 1's as needed and apply the algorithm above; the added 1's cancel out in pairs.
If the difference is odd, also add 1's, but now you have to take care of the one extra 1 in the answer array.
Check the corner cases too.
Count the number of arrays of N elements in which the product of every pair of adjacent elements is at most M. Every element is an integer and should have a value of at least 1.
Constraints: 2 ≤ N ≤ 1000 and 1 ≤ M ≤ 10^9.
We need to find the answer modulo 1000000007.
Maybe we can calculate dp[len][type][typeValue], where type has only two states:
type = 0: the last number in a sequence of length len is at most sqrt(M); we save that number in typeValue.
type = 1: the last number in the sequence is greater than sqrt(M); we save in typeValue the number k = M / lastNumber (rounded down), which is not greater than sqrt(M).
So this dp has O(N sqrt(M)) states, but how can we calculate each cell of this dp?
First, consider a cell dp[len][0][number]. Its value can be calculated as follows:
dp[len][0][number] = sum[1 <= i <= sqrt(M)] (dp[len - 1][0][i]) + sum[number <= i <= sqrt(M)] (dp[len - 1][1][i])
A short explanation: because type = 0 means number <= sqrt(M), any small previous number is allowed (the product stays within M), while a large previous number with M / lastNumber = i is allowed only when number <= i.
For dp[len][1][k] we can use the following equation:
dp[len][1][k] = cntInGroup(k) * sum[1 <= i <= k] (dp[len - 1][0][i]), where cntInGroup(k) is the count of numbers x such that M / x = k.
We can easily calculate cntInGroup(k) for all 1 <= k <= sqrt(M) using binary search or direct formulas.
But another problem is that each cell now needs O(sqrt(M)) operations, so the resulting complexity is O(N M). We can improve on that.
Note that we only ever need sums of values over segments computed in the previous step, so we can precalculate prefix sums in advance and then fill each cell of the dp in O(1) time.
With this optimization we can solve the problem in O(N sqrt(M)).
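Before the sqrt decomposition, the plain O(N*M) version with the prefix-sum trick is easy to write down and already shows the structure. A minimal sketch, assuming the problem is counting arrays whose adjacent products are at most M (the function name is mine):

```python
def count_arrays_dp(n, m, mod=10**9 + 7):
    # dp[v] = number of arrays of the current length ending in value v.
    # For a new last value v, the previous element must be <= m // v
    # so that their product stays within m.
    dp = [0] + [1] * m          # length 1: one array per last value
    for _ in range(n - 1):
        prefix = [0] * (m + 1)  # prefix[v] = dp[1] + ... + dp[v]
        for v in range(1, m + 1):
            prefix[v] = (prefix[v - 1] + dp[v]) % mod
        dp = [0] + [prefix[m // v] for v in range(1, m + 1)]
    return sum(dp) % mod
```

For the worked example below, count_arrays_dp(4, 10) returns 544; the sqrt decomposition then compresses the dp index from M values down to O(sqrt(M)) classes.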
Here is an example for N = 4, M = 10:
1 number (1) divides 10 into 10 parts: floor(10/1) = 10
1 number (2) divides 10 into 5 parts: floor(10/2) = 5
1 number (3) divides 10 into 3 parts: floor(10/3) = 3
2 numbers (4 and 5) divide 10 into 2 parts: floor(10/4) = floor(10/5) = 2
5 numbers (6 through 10) divide 10 into 1 part: floor(10/x) = 1
Make an array and update it for each value of n:
N 1 1 1 2 5
----------------------
2 10 5 3 2 1 // 10 div 1 ; 10 div 2 ; 10 div 3 ; 10 div 5,4 ; 10 div 6,7,8,9,10
3 27 22 18 15 10 // 10+5+3+2*2+5*1 ; 10+5+3+2*2 ; 10+5+3 ; 10+5 ; 10
4 147 97 67 49 27 // 27+22+18+2*15+5*10 ; 27+22+18+2*15 ; 27+22+18 ; 27+22 ; 27
The solution for N = 4, M = 10 is therefore:
147 + 97 + 67 + 2*49 + 5*27 = 544
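The figure 544 can be cross-checked by brute force for these small values (again assuming the adjacent-product reading of the problem; the function name is mine):

```python
from itertools import product

def count_arrays_bruteforce(n, m):
    # Every element has a neighbor >= 1, so any valid element is at most m;
    # enumerating values 1..m per position is therefore exhaustive.
    return sum(
        all(arr[i] * arr[i + 1] <= m for i in range(n - 1))
        for arr in product(range(1, m + 1), repeat=n)
    )
```

count_arrays_bruteforce(4, 10) gives 544, matching the table above.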
My thought process:
For each number in the first array position, respectively, the following could be in the second:
1 -> 1,2..10
2 -> 1,2..5
3 -> 1,2,3
4 -> 1,2
5 -> 1,2
6 -> 1
7 -> 1
8 -> 1
9 -> 1
10 -> 1
Array position 3:
For each of 10 1's in col 2, there could be 1 of 1,2..10
For each of 5 2's in col 2, there could be 1 of 1,2..5
For each of 3 3's in col 2, there could be 1 of 1,2,3
For each of 2 4's in col 2, there could be 1 of 1,2
For each of 2 5's in col 2, there could be 1 of 1,2
For each of 1 6,7..10 in col 2, there could be one 1
27 1's; 22 2's; 18 3's; 15 4's; 15 5's; 10 x 6's,7's,8's,9's,10's
Array position 4:
1's = 27+22+18+15+15+10*5
2's = 27+22+18+15+15
3's = 27+22+18
4's = 27+22
5's = 27+22
6,7..10's = 27 each
Create a graph with vertices labelled 0 to M. An edge exists between two vertices if their product is not greater than M. The number of different arrays is then the number of walks of N steps starting at the vertex labelled 0 (vertex 0 acts as a virtual start: its product with anything is 0, so it connects to every value). This number can be computed with a simple depth-first search.
The question is now whether this is efficient enough and whether it can be made more efficient. One way is to restructure the solution using matrix multiplication. The multiplying matrix represents the edges above: it has a 1 where there is an edge and a 0 otherwise. The initial matrix on the left represents the starting vertex: it has a 1 at position (0, 0) and zeros everywhere else.
Based on this, you can multiply the right matrix by itself to represent two steps through the graph. This means you can square repeatedly, so you only need about log(N) multiplications instead of N. However, make sure you use known efficient matrix multiplication algorithms to implement this; the naive one will only be feasible for small M.
I have an array consisting only of non-negative integers, and I want to reduce every element to zero. The only operation allowed is "decrement each element in the range i..j by 1", and each such operation costs 1. The question is: what is the minimum number of such operations needed to transform this array into an all-zero array?
Example: [1 2 3 0]
---> [0 1 2 0] (decrement all elements in the range 0 to 2)
---> [0 0 1 0] (range 1 to 2)
---> [0 0 0 0] (range 2 to 2)
This feels like a duplicate, but I can find no evidence of that, so I'll answer instead.
In every optimal solution, there exists no pair of operations [a, b) and [b, c), since we could unite them as [a, c). There exists, moreover, an optimal solution that is laminar, namely, for each pair of operations, their scopes are nested or disjoint, but not partially overlapping. This is because we can convert operations on [a, c) and [b, d) to [a, d) and [b, c).
Subject to the latter restriction, there is only one optimal strategy up to permuting the operations, derived by repeatedly decreasing a maximal nonzero interval. (Proof by induction: consider the decrease operation whose interval argument is leftmost and maximal among other interval arguments. By the leftmost assumption, this interval must include the leftmost nonzero. If it excludes a contiguous nonzero element to its right, then how could that element get decreased? Not by an interval that starts to the left (that wouldn't be laminar) and not by an interval that starts with that element (that wouldn't be optimal), so not at all.)
All that we have to do algorithmically is to construct this optimal solution. In Python:
cost = 0
stackheight = 0
for x in lst:
    cost += max(x - stackheight, 0)
    stackheight = x
I tried a few small examples, and so far I haven't found any where a simple greedy algorithm does not produce the best solution. The approach I used is:
loop
    find largest element
    if largest element is zero
        break
    expand interval with non-zero values left and right around largest, until zero/boundary is encountered
    decrement values in this interval
I found cases where there are other solutions with the same number of steps, but not better. For example, with this algorithm:
2 3 1 2 3
1 2 0 1 2
0 1 0 1 2
0 1 0 0 1
0 0 0 0 1
0 0 0 0 0
So that's 5 steps. This ties it, with a different sequence:
2 3 1 2 3
1 2 1 2 3
1 2 1 1 2
1 1 1 1 2
1 1 1 1 1
0 0 0 0 0
I would be interested in seeing counter-examples where this strategy does not produce a best solution.
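For experimenting with counter-example candidates, here is a small Python harness (names are mine): it simulates the greedy above and compares it with the closed form from the other answer, the sum of the positive rises in the profile.

```python
def greedy_steps(a):
    # Repeatedly decrement the maximal nonzero interval around the largest element.
    a = list(a)
    steps = 0
    while max(a) > 0:
        i = a.index(max(a))
        lo, hi = i, i
        while lo > 0 and a[lo - 1] > 0:
            lo -= 1
        while hi < len(a) - 1 and a[hi + 1] > 0:
            hi += 1
        for j in range(lo, hi + 1):
            a[j] -= 1
        steps += 1
    return steps

def rises(a):
    # Closed form: every operation's left edge sits at a rise in the profile.
    prev = cost = 0
    for x in a:
        cost += max(x - prev, 0)
        prev = x
    return cost

print(greedy_steps([2, 3, 1, 2, 3]), rises([2, 3, 1, 2, 3]))  # both 5
```

On the examples above the two agree; no proof of the greedy's optimality is claimed here either.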
I am trying to count the n x m binary matrices that have at most k consecutive 1s in each column. After some research, I figured out that it is enough to count the valid single-column vectors of n entries: if there are p such vectors, the required number of matrices is p^m, since the columns are independent.
Because n and m are very large (up to 2,000,000) I can't find a suitable solution. I am trying to find a recurrence formula in order to build a matrix that helps me compute the answer. Could you suggest a solution?
There's a (k + 1)-state dynamic program (state = number of trailing consecutive 1s, from 0 to k). To make a long story short, you can compute large terms of it quickly by taking the nth power of the (k + 1) by (k + 1) integer matrix like (example for k = 4)
1 1 0 0 0
1 0 1 0 0
1 0 0 1 0
1 0 0 0 1
1 0 0 0 0
modulo c and summing the first row.
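A sketch of that computation in Python (helper names are mine; the modulus c is left as a parameter). For k = 2 and n = 4 it counts the 13 binary columns of length 4 with no run of three 1s, which matches direct enumeration:

```python
def mat_mul(A, B, mod):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % mod
             for j in range(n)] for i in range(n)]

def mat_pow(A, e, mod):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def count_columns(n, k, mod=10**9 + 7):
    # State = length of the current trailing run of 1s (0..k).
    # Appending 0 resets to state 0; appending 1 moves s -> s + 1 if s < k.
    T = [[0] * (k + 1) for _ in range(k + 1)]
    for s in range(k + 1):
        T[s][0] = 1
        if s < k:
            T[s][s + 1] = 1
    return sum(mat_pow(T, n, mod)[0]) % mod
```

The answer to the original question would then be pow(count_columns(n, k), m, mod): one independent choice per column.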
Given an N*N matrix containing 1's and 0's, and given an integer k, what is the best method to find a rectangular region containing exactly k 1's?
I can do it in O(N^3*log(N)), but surely the best solution is faster. First create another N*N matrix B from the initial matrix A, defined as follows:
B[i][j] is the number of ones in the rectangle of A with corners (0,0) and (i,j).
You can compute B in O(N^2) by dynamic programming: B[i][j] = B[i-1][j] + B[i][j-1] - B[i-1][j-1] + A[i][j].
Now it is easy to solve the problem in O(N^4): iterate over all bottom-right corners (i = 1..N, j = 1..N, O(N^2)), left columns (z = 1..j, O(N)), and top rows (t = 1..i, O(N)), and read off the number of ones in each rectangle from B:
sum_of_ones = B[i][j] - B[i][z-1] - B[t-1][j] + B[t-1][z-1].
If you get exactly k (k == sum_of_ones), output the result.
To reach O(N^3*log(N)), find the top row t by binary search rather than iterating over all possible values; this works because the sum is monotone in t.
Consider this simpler problem:
Given a vector of size N containing only the values 1 and 0, find a subsequence that contains exactly k values of 1 in it.
Let A be the given vector and S[i] = A[1] + A[2] + A[3] + ... + A[i], meaning how many 1s there are in the subsequence A[1..i].
For each i, we are interested in the existence of a j <= i such that S[i] - S[j-1] == k.
We can find this in O(n) with a hash table by using the following relation:
S[i] - S[j-1] == k => S[j-1] = S[i] - k
let H = an empty hash table
H.Add(0)   // S[0] = 0, so a matching sequence may start at index 1
for i = 1 to N do
    if H.Contains(S[i] - k) then your sequence ends at i
    else
        H.Add(S[i])
Now we can use this to solve your given problem in O(N^3): for each sequence of rows in your given matrix (there are O(N^2) sequences of rows), consider that sequence to represent a vector and apply the previous algorithm on it. The computation of S is a bit more difficult in the matrix case, but it's not that hard to figure out. Let me know if you need more details.
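The simpler problem in runnable Python (0-based indices; the function name is mine; seeding the map with prefix 0 lets a match start at the first element):

```python
def subarray_with_k_ones(a, k):
    # Returns (start, end), inclusive 0-based bounds of a contiguous
    # stretch whose sum is exactly k, or None if there is none.
    seen = {0: -1}          # prefix sum -> earliest index it occurs at
    prefix = 0
    for i, x in enumerate(a):
        prefix += x
        if prefix - k in seen:
            return seen[prefix - k] + 1, i
        seen.setdefault(prefix, i)
    return None
```

For example, subarray_with_k_ones([0, 1, 1, 1, 1, 0], 3) returns (0, 3), a stretch containing exactly three 1s.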
Update:
Here's how the algorithm would work on the following matrix, assuming k = 12:
0 1 1 1 1 0
0 1 1 1 1 0
0 1 1 1 1 0
Consider the first row alone:
0 1 1 1 1 0
Consider it to be the vector 0 1 1 1 1 0 and apply the algorithm for the simpler problem on it: we find that there's no subsequence adding up to 12, so we move on.
Consider the first two rows:
0 1 1 1 1 0
0 1 1 1 1 0
Consider them to be the vector 0+0 1+1 1+1 1+1 1+1 0+0 = 0 2 2 2 2 0 and apply the algorithm for the simpler problem on it: again, no subsequence that adds up to 12, so move on.
Consider the first three rows:
0 1 1 1 1 0
0 1 1 1 1 0
0 1 1 1 1 0
Consider them to be the vector 0 3 3 3 3 0 and apply the algorithm for the simpler problem on it: we find the sequence starting at position 2 and ending at position 5 to be the solution. From this we can get the entire rectangle with simple bookkeeping.
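Putting it together, a sketch of the full O(N^3) procedure in Python (0-based indices, names mine; the rectangle is returned as (top, left, bottom, right)). On the 3x6 example with k = 12 it reports a rectangle that also includes the leading zero column, which is an equally valid answer:

```python
def rectangle_with_k_ones(matrix, k):
    # For each pair of row bounds, collapse the rows into per-column sums,
    # then run the 1-D prefix-sum/hash-map search on that vector.
    rows, cols = len(matrix), len(matrix[0])
    for top in range(rows):
        col_sums = [0] * cols
        for bottom in range(top, rows):
            for c in range(cols):
                col_sums[c] += matrix[bottom][c]   # extend sums by one row
            seen = {0: -1}
            prefix = 0
            for c in range(cols):
                prefix += col_sums[c]
                if prefix - k in seen:
                    return top, seen[prefix - k] + 1, bottom, c
                seen.setdefault(prefix, c)
    return None

grid = [[0, 1, 1, 1, 1, 0]] * 3
print(rectangle_with_k_ones(grid, 12))  # (0, 0, 2, 4): rows 0..2, cols 0..4
```

Each of the O(N^2) row ranges costs O(N) thanks to the reused column sums and the hash map, giving O(N^3) overall.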