I want to maximize the length of a zigzag sequence in an array (without reordering).
I have a main array containing a random sequence of integers. I want the sub-array of indexes of the main array that has the zigzag pattern.
A sequence of integers is called a zigzag sequence if each of its elements is either strictly less than or strictly greater than both of its adjacent neighbors.
Example: the sequence 4 2 3 1 5 2 forms a zigzag, but 7 3 5 5 2 and 3 8 6 4 5 don't.
For a given array of integers we need to find the longest (contiguous) sub-array that forms a zigzag sequence.
Can this be done in O(N)?
Yes, this would seem to be solvable in O(n) time. I'll describe the algorithm as a dynamic program.
Setup
Let the array containing potential zig-zags be called Z.
Let U be an array such that len(U) == len(Z), where U[i] is the length of the longest contiguous left-to-right zigzag starting at i such that Z[i] < Z[i+1] (it zigs up).
Let D be similar to U, except that D[i] is the length of the longest contiguous left-to-right zigzag starting at i such that Z[i] > Z[i+1] (it zags down).
Subproblem
The subproblem is to find both U[i] and D[i] at each i. This can be done as follows:
U[i] = {
1 + D[i+1] if Z[i] < Z[i+1]
0 otherwise
}
D[i] = {
1 + U[i+1] if Z[i] > Z[i+1]
0 otherwise
}
The first case says that if we're looking for the largest sequence beginning with an up-zig, we check whether the next element is larger (goes up), and if so add a single zig to the size of the next down-zag sequence. The second case is the reverse.
Base Cases
If i == len(Z) - 1 (it is the last element), U[i] = D[i] = 0. The last element cannot have a left-to-right sequence after it because there is nothing after it.
Solution
To get the solution, first find max(U[i]) and max(D[i]) over every i. Then take the larger of those two values, store its index i, and store the length of this largest zigzag (in a variable called length). The sequence begins at index i and ends at index i + length.
Runtime
There are n indexes, so there are 2n subproblems between U and D. Each subproblem takes O(1) time to solve, given that solutions to previously solved subproblems are memoized. Finally, iterating through U and D to get the final answer takes O(2n) time.
We thus have O(2n) + O(2n) time, or O(n).
This may be an overly complex solution, but it demonstrates that it can be done in O(n).
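For concreteness, here is a minimal Python sketch of the recurrence above (the function name and the right-to-left fill are my choices; as in the write-up, length counts steps rather than elements):

```python
def longest_zigzag(Z):
    """Return (start, length) of the longest contiguous zigzag in Z,
    so the sub-array is Z[start..start+length]."""
    n = len(Z)
    U = [0] * n  # U[i]: longest zigzag starting at i that zigs up first
    D = [0] * n  # D[i]: longest zigzag starting at i that zags down first
    # Base case U[n-1] = D[n-1] = 0 is already in place; fill right to left.
    for i in range(n - 2, -1, -1):
        if Z[i] < Z[i + 1]:
            U[i] = 1 + D[i + 1]
        elif Z[i] > Z[i + 1]:
            D[i] = 1 + U[i + 1]
    start = max(range(n), key=lambda i: max(U[i], D[i]))
    return start, max(U[start], D[start])

print(longest_zigzag([4, 2, 3, 1, 5, 2]))  # (0, 5): the whole array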
I found this problem in a hiring contest (which is over now). Here it is:
You are given two natural numbers N and X. You are required to create an array of N natural numbers such that the bitwise XOR of these numbers is equal to X, and the sum of the numbers in the array is as small as possible.
If there exist multiple such arrays, print the lexicographically smallest one:
Array A < Array B if, at the first index i where they differ, A[i] < B[i] (i.e. A[j] = B[j] for all j < i).
Sample Input: N=3, X=2
Sample output: 1 1 2
Explanation: we have to print 3 natural numbers whose XOR is 2 with the minimum possible sum; hence the answer is [1 1 2].
My approach:
If N is odd, I put N-1 ones in the array (so that their XOR is zero) and then put X.
If N is even, I put N-1 ones again and then put X-1 (if X is odd) or X+1 (if X is even).
But this algorithm failed most of the test cases. For example, when N=4 and X=6 my output is
1 1 1 7, but it should be 1 1 2 4.
Does anyone know how to make the array sum minimal?
In order to have the minimum sum, you need to make sure that when your target is X, you are not cancelling bits of X and recreating them again, because that would increase the sum. For this, you have to create the bits of X one by one, ideally from the end of the array. So, for your example of N=4 and X=6 we have (I use ^ to denote XOR):
X = 6 = 110 (binary) = 2 + 4. Note that 2^4 = 6 as well, because these numbers don't share any common bits. So, the output is 1 1 2 4.
So, we start by creating the most significant bits of X from the end of the output array. Then we also have to handle the corner cases for different values of N. I'll go through a number of different examples to make the idea clear:
```
A) X=14, N=5:
X=1110=8+4+2. So, the array is 1 1 2 4 8.
B) X=14, N=6:
X=8+4+2. The array should be 1 1 1 1 2 12.
C) X=15, N=6:
X=8+4+2+1. The array should be 1 1 1 2 4 8.
D) X=15, N=5:
The array should be 1 1 1 2 12.
E) X=14, N=2:
The array should be 2 12. Because 12 = 4^8
```
So, we go as follows. We compute the number of set bits (powers of 2) in X. Let this number be k.
Case 1 - If k >= n (example E): we pick the smallest powers from left to right and merge the remaining ones into the last position of the array.
Case 2 - If k < n (examples A, B, C, D): we compute h = n - k. If h is odd we put h = n - k + 1. We start by putting h 1's at the beginning of the array. Then the number of places left is at most k, so we can follow the idea of Case 1 for the remaining positions. Note that in Case 2, instead of adding an odd number of 1's we add an even number of 1's and then do some merging at the end; this guarantees that the array is the smallest it can be.
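Here is a short Python sketch of this construction (the function name is mine, and it assumes X >= 1; some corner cases, e.g. X with a single set bit when n - k is odd, still need the separate handling mentioned above):

```python
def min_sum_xor_array(n, x):
    """n natural numbers with XOR x, minimal sum, smallest-first."""
    bits = [1 << b for b in range(x.bit_length()) if (x >> b) & 1]
    k = len(bits)
    if k >= n:
        # Case 1: smallest set bits first, merge the rest into the last slot.
        # The merged bits are disjoint, so their sum equals their XOR.
        return bits[:n - 1] + [sum(bits[n - 1:])]
    # Case 2: pad with an even number of 1s (they cancel), then as in Case 1.
    h = n - k
    if h % 2 == 1:
        h += 1  # one extra 1: compensate by merging the two largest bits
    r = n - h    # r == k or k - 1 slots remain for the set bits
    return [1] * h + bits[:r - 1] + [sum(bits[r - 1:])]

print(min_sum_xor_array(3, 2))   # [1, 1, 2]
print(min_sum_xor_array(4, 6))   # [1, 1, 2, 4]
print(min_sum_xor_array(6, 14))  # [1, 1, 1, 1, 2, 12]
```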
We have to keep in mind that we must minimize the sum of the array; that is the key point.
First calculate the set bits in X. If the count of set bits is greater than or equal to N, then split X into N integers based on its set bits, like:
X = 15, N = 2
set bits in 15 are 4, so the solution is 1 14;
for N = 3 the solution is 1 2 12.
This minimizes the array sum too.
In the other case, when the set bits are fewer than N:
calculate difference = N - setbits(X).
If the difference is even, then add 1s as needed and apply the above algorithm; all the added 1s will cancel out.
If the difference is odd, then add 1s, but now you have to take care of that one extra 1 in the answer array.
Check for the corner cases too.
Given a binary array, I need to find the minimum number of swaps required to group all the 1s together.
Eg: Array: [0,1,0,1,1,0,0]
Final Array: [0,0,1,1,1,0,0], so swaps required = 1.
I need an O(n) or O(n log n) solution.
You can do it in O(n):
In one pass through the data, determine the number of 1s. Call this k (it is just the sum of the elements in the list).
In a second pass through the data, use a sliding window of width k to find the number m, which is the maximum number of 1s in any window of size k. Since this is homework, I'll leave the details to you, but it can be done in O(n).
Then: the minimal number of swaps is k-m.
EDIT This answer assumes that the two swapped cells can be arbitrarily far apart. If only two neighboring cells can be swapped, see @JohnColeman's answer.
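For reference, one way to write that sliding window in Python (names are mine; the details are otherwise as the answer describes):

```python
def min_swaps(a):
    """k = total 1s; m = max 1s in any window of width k; answer is k - m."""
    k = sum(a)
    window = sum(a[:k])  # 1s in the first window of width k
    m = window
    for i in range(k, len(a)):
        window += a[i] - a[i - k]  # slide the window right by one
        m = max(m, window)
    return k - m

print(min_swaps([0, 1, 0, 1, 1, 0, 0]))  # 1
```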
This can be done easily in linear time.
Suppose that the array is called a and its size is n.
Allocate an integer array b of size n. Walk from left to right, saving in b[i] the number of ones seen so far in a[0], ..., a[i].
Allocate an integer array c of size n. Walk from right to left, saving in c[i] the number of ones seen so far in a[i], ..., a[n - 1].
Initialize an integer res = 0. Walk through a one last time. For each i with a[i] = 0, add res += min(b[i], c[i]).
Output res.
Why does this work? Each zero must somehow bubble out of the block of ones. So every zero must either bubble up past all ones to the right of it, or bubble down past all ones to the left of it. Swapping zeros with zeros is a waste of time, therefore the process of zero-eviction from the homogeneous block of ones must start with those zeros that are as close to the first 1 or the last 1 as possible. This means that every zero has to make exactly min(b[i], c[i]) swaps with 1s to exit the homogeneous block of ones.
Example:
```
a = [0,1,0,1,1,0,1,0,1,0,1,0]
b = [0,1,1,2,3,3,4,4,5,5,6,6]
c = [6,6,5,5,4,3,3,2,2,1,1,0]

m = [0,1,1,2,3,3,3,2,2,1,1,0]   (m = min(b,c); no need to compute it explicitly)
     ^   ^     ^   ^   ^   ^
```
The interesting values of min(b[i], c[i]), the ones that correspond to 0s in a, are marked with ^. Summing them up yields: 0 + 1 + 3 + 2 + 1 + 0 = 7.
Indeed:
```
[0,1,0,1,1,0,1,0,1,0,1,0]
[0,0,1,1,1,0,1,0,1,0,1,0]   1
[0,0,1,1,1,0,1,0,1,1,0,0]   2 = 1 + 1
[0,0,1,1,1,0,1,1,0,1,0,0]   3
[0,0,1,1,1,0,1,1,1,0,0,0]   4 = 1 + 1 + 2
[0,0,1,1,0,1,1,1,1,0,0,0]   5
[0,0,1,0,1,1,1,1,1,0,0,0]   6
[0,0,0,1,1,1,1,1,1,0,0,0]   7 = 1 + 1 + 2 + 3
```
done: the block of ones is now homogeneous.
The runtime for computing the number of swaps res is obviously O(n). (Note: this does NOT say that the number of swaps itself is O(n).)
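The procedure translates almost line-for-line into Python (using the same names a, b, c, res as above):

```python
def min_adjacent_swaps(a):
    n = len(a)
    b = [0] * n  # b[i]: number of 1s in a[0..i]
    c = [0] * n  # c[i]: number of 1s in a[i..n-1]
    count = 0
    for i in range(n):
        count += a[i]
        b[i] = count
    count = 0
    for i in range(n - 1, -1, -1):
        count += a[i]
        c[i] = count
    res = 0
    for i in range(n):
        if a[i] == 0:
            res += min(b[i], c[i])  # this zero bubbles past the nearer ones
    return res

print(min_adjacent_swaps([0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0]))  # 7
```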
Let's consider each 1 as a potential static point. Then the cost for the left side of the static point would be the number of 1's to its left minus the number of 1's already inside the section it would naturally extend to (the length of which is the number of 1's on the left). Similarly for the right side.
Now find a way to do it efficiently for each potential static 1 :) Hint: think about how we could update those values as we iterate across the array.
```
1 0 1 0 1 1 0 0 1 0 1 1
          x              <- potential static point (the 1 at index 5)
    <--->                <- left side would extend to indices 2..4
            <--->        <- right side would extend to indices 6..8

left cost at x:  3 - 2 = 1   (three 1s to the left, two already in the section)
right cost at x: 3 - 1 = 2   (three 1s to the right, one already in the section)
```
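Without giving the whole exercise away, here is one way those costs can be maintained with prefix counts in Python (all names are mine); taking the minimum of left cost + right cost over all static points would then give the final answer:

```python
def static_point_costs(a):
    n = len(a)
    prefix = [0] * (n + 1)  # prefix[i]: number of 1s in a[0..i-1]
    for i in range(n):
        prefix[i + 1] = prefix[i] + a[i]
    total = prefix[n]
    costs = []
    for i in range(n):
        if a[i] == 1:
            L = prefix[i]              # 1s strictly to the left of i
            R = total - prefix[i + 1]  # 1s strictly to the right of i
            # left block would occupy indices i-L..i-1; count 1s already there
            in_left = prefix[i] - prefix[i - L]
            # right block would occupy indices i+1..i+R
            in_right = prefix[i + R + 1] - prefix[i + 1]
            costs.append((i, L - in_left, R - in_right))
    return costs

# the 1 at index 5 has left cost 1 and right cost 2, as in the diagram above
print(static_point_costs([1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]))
```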
Suppose there is a 2D array (m x n) of bits.
For example:
1 0 0 1 0
1 0 1 0 0
1 0 1 1 0
0 0 0 0 1
here, m = 4, n = 5.
I can flip the bits in any row (0 becomes 1, 1 becomes 0). When I flip the bits in a particular row, I flip all of its bits.
My goal is to get the max OR value between a given pair of rows.
That is, if the given pair of rows is (r1, r2), then I can flip any number of rows between r1 and r2, and I should find the maximum possible OR value of all the rows between r1 and r2.
In the above example (consider arrays with 1-based index), if r1 = 1 and r2 = 4, I can flip the 1st row to get 0 1 1 0 1. Now, if I find the OR value of all the rows from 1 to 4, I get the value 31 as the maximum possible OR value (there can be other solutions).
Also, it would be nice to compute the answer for (r1, r1), (r1, r1+1), (r1, r1+2), ... , (r1, r2-1) while calculating the same for (r1, r2).
Constraints
1 <= m x n <= 10^6
1 <= r1 <= r2 <= m
A simple brute force solution would have a time complexity of O(2^m).
Is there a faster way to compute this?
Since A <= A | B, the value of a number A will only go up as we OR more numbers to A.
Therefore, we can reduce the rows pairwise, tournament style.
Use a function that takes two rows, tries each of them flipped and unflipped, and saves the best ORed result as a third row. Then combine two of these third rows to get a higher-level row, and then combine two of these higher-level rows, and so on, until only one is left.
Using your example:
```
array1 = 1 0 0 1 0   [0]
         1 0 1 0 0   [1]
         1 0 1 1 0   [2]
         0 0 0 0 1   [3]

array2 = 1 1 0 1 1   <-- array1[0] | ~array1[1]
         1 1 1 1 0   <-- array1[2] | ~array1[3]

array3 = 1 1 1 1 1   <-- array2[0] | array2[1]
```
And obviously you can truncate branches as necessary when m is not a power of 2.
So this would be O(m) combining steps (O(mn) bit operations in total). And keep in mind that for large numbers of rows there is likely no unique solution; more than likely, the result will be 2^n - 1.
An important optimization: if m >= n, then the output must be 2^n - 1. Suppose we have an accumulated value B and a row A. If B has k missing bits, then either A or ~A is guaranteed to fill at least one of those bits. By a similar token, if m >= log2(n) + 1, the output must also be 2^n - 1, since either A or ~A is guaranteed to fill at least half of the unfilled bits in B.
Using these shortcuts, you can get away with a brute-force search if you wanted. I'm not 100% sure the tournament reduction works in every single case.
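Here is a Python sketch of the pairwise tournament on rows represented as n-bit ints (names are mine). Note it only ever flips whole original rows, never merged results, and it is greedy, so, per the caveat above, it is not guaranteed to be optimal in every case:

```python
def popcount(v):
    return bin(v).count("1")

def tournament_or(rows, n):
    """Pairwise tournament: each original row enters with both of its
    orientations; each merge keeps the OR with the most set bits.
    Merged rows are never flipped again (only original rows may be)."""
    mask = (1 << n) - 1
    level = [(r, ~r & mask) for r in rows]  # (row, flipped row) candidates
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            best = max((x | y for x in level[i] for y in level[i + 1]),
                       key=popcount)
            nxt.append((best,))  # merged result: one candidate only
        if len(level) % 2:       # odd row out: carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return max(level[0], key=popcount)

rows = [0b10010, 0b10100, 0b10110, 0b00001]
print(tournament_or(rows, 5))  # 31, matching the example above
```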
Considering the problem of flipping rows in the entire matrix and then OR-ing them together to get as many 1s as possible: I claim this is tractable when the number of columns is less than 2^m, where m is the number of rows. Consider the rows one by one. At stage i (counting from 0) you have fewer than 2^(m-i) zeros left to fill. Because flipping a row turns 0s into 1s and vice versa, either the current row or the flipped row will fill in at least half of those zeros. When you have worked through all the rows, you will have fewer than one zero left to fill, so this procedure is guaranteed to produce a perfect answer.
I also claim this is tractable when the number of columns is at least 2^m. There are 2^m possible patterns of flipped rows, which is only O(N), where N is the number of columns. So trying all possible patterns of flipped rows gives roughly an O(N^2) algorithm in this case (each pattern needs an OR of m rows, adding a factor of at most m, which is about log2 N here).
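A Python sketch of the halving procedure from the first paragraph (names are mine); when the number of columns is less than 2^m it is guaranteed to end with all ones:

```python
def greedy_fill(rows, n):
    """Walk the rows once; take each row as-is or flipped, whichever
    covers more of the still-missing bit positions. One of the two
    choices always covers at least half, so zeros at least halve
    at every step."""
    mask = (1 << n) - 1
    acc = 0
    for r in rows:
        flipped = ~r & mask
        missing = ~acc & mask
        if bin(r & missing).count("1") >= bin(flipped & missing).count("1"):
            acc |= r
        else:
            acc |= flipped
    return acc

rows = [0b10010, 0b10100, 0b10110, 0b00001]
print(greedy_fill(rows, 5))  # 31 == 0b11111, since n = 5 < 2**4
```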