I have to swap numbers in an array 'd' times so that a left rotation of the array is performed; 'd' is the number of rotations. For example, if the array is 1->2->3->4->5 and d=1, then after one left rotation the array will be 2->3->4->5->1.
I have used the following code for performing the above operation:
for (int rotation = 0; rotation < d; rotation++) {
for (int i = 1; i < a.length; i++) {
int bucket = a[i - 1];
a[i - 1] = a[i];
a[i] = bucket;
}
}
But the time complexity of this algorithm is too high, probably O(n*d) in the worst case. How can I improve the efficiency of the algorithm, especially in the worst case?
I am looking for a RECURSIVE approach for this algorithm. I came up with:
public static void swapIt(int[] array, int rotations){
for(int i=1; i<array.length; i++){
int bucket = array[i-1];
array[i-1] = array[i];
array[i] = bucket;
}
rotations--;
if(rotations>0){
swapIt(array,rotations);
}
else{
for(int i=0; i<array.length; i++){
System.out.print(array[i]+" ");
}
}
}
This RECURSIVE algorithm worked, but again efficiency is the issue; I cannot use it for larger arrays.
The complexity of your algorithm looks like O(n*d) to me.
My approach would be not to rotate by one d times, but to rotate by d once.
You can calculate the destination of each element directly. So instead of
a[i - 1] = a[i];
you would do this:
a[(i + a.length - d) % a.length] = a[i];
The term (i + a.length - d) % a.length ensures that you always get values in the interval 0 ... a.length-1.
Explanation:
i + a.length - d is always positive (as long as d <= a.length),
but it could be greater than or equal to a.length, which would not be a valid index.
So take the remainder of the division by a.length.
This way you get, for every i = 0 ... a.length-1, the right new position.
As mentioned by Satyarth Agrahari:
If d > a.length you need to reduce d: d = d % a.length ensures that (i + a.length - d) % a.length stays in the wanted interval 0 ... a.length-1. The result is the same, because a rotation by a.length is like doing nothing at all.
To add to the answer by #mrsmith42: you should probably check that d lies in the range 1 <= d <= N-1; you can trim it down by taking the modulo d = d % N.
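Putting the two pieces together, here is a minimal Java sketch of rotating by d in a single pass (the method name and the temporary array are my own additions; writing directly back into a would overwrite elements that are still needed):
public static void rotateLeft(int[] a, int d) {
    int n = a.length;
    d = ((d % n) + n) % n;                 // trim d into the range 0 ... n-1
    int[] rotated = new int[n];
    for (int i = 0; i < n; i++) {
        // the element at index i moves d positions to the left
        rotated[(i + n - d) % n] = a[i];
    }
    System.arraycopy(rotated, 0, a, 0, n);
}
This runs in O(n) time regardless of d, at the cost of O(n) extra space.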
Let L be a list of positive integers.
We are allowed to merge two elements of L if they have adjacent indices.
The cost of this operation is the sum of both elements.
For example: [1,2,3,4] -> [3,3,4] with a cost of 3.
We are looking for the minimum cost to merge L into one integer.
Is there a fast way of doing this? I came up with this naive recursive approach, but that should be O(n!).
I have noticed that it benefits a lot from memoization so I think there must be a way to avoid trying all possible permutations which will always result in O(n!).
def solveR(l):
if len(l) <= 2:
return sum(l)
else:
return sum(l) + min(solveR(l[1:]), solveR(l[:-1]),
solveR(l[len(l) // 2:]) + solveR(l[:len(l) // 2]))
This is much like the LeetCode problem Minimum Cost to Merge Stones, but with K = 2. The comments there suggest that the time complexity is O(N^3). Here is some C++ code that implements the algorithm:
class Solution {
public:
    int mergeStones(vector<int>& stones, int K) {
        K = 2;                                   // this variant always merges two piles
        int N = stones.size();
        if ((N - 1) % (K - 1) > 0) return -1;    // always false for K = 2
        // prefix sums: sum[i] = stones[0] + ... + stones[i-1]
        vector<int> sum(N + 1, 0);
        for (int i = 1; i <= N; i++)
            sum[i] = sum[i - 1] + stones[i - 1];
        // dp[i][j] = minimum cost to merge stones[i..j] into one pile
        vector<vector<int>> dp(N, vector<int>(N, 0));
        for (int L = K; L <= N; L++)
            for (int i = 0, j = i + L - 1; j < N; i++, j++) {
                dp[i][j] = INT_MAX;
                for (int k = i; k < j; k += (K - 1))
                    dp[i][j] = min(dp[i][j], dp[i][k] + dp[k + 1][j]);
                if ((L - 1) % (K - 1) == 0)
                    dp[i][j] += (sum[j + 1] - sum[i]); // add sum in [i, j]
            }
        return dp[0][N - 1];
    }
};
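Since the question fixes K = 2, the recurrence collapses to the classic chain-merge DP. A minimal Java sketch of that special case (method and variable names are mine; a sketch rather than a drop-in replacement for the class above):
// minimum cost to merge adjacent elements of l into one integer (the K = 2 case)
// O(n^3) time, O(n^2) space
static int minMergeCost(int[] l) {
    int n = l.length;
    int[] prefix = new int[n + 1];                  // prefix[i] = l[0] + ... + l[i-1]
    for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + l[i];
    int[][] dp = new int[n][n];                     // dp[i][j] = min cost to merge l[i..j]
    for (int len = 2; len <= n; len++) {
        for (int i = 0, j = len - 1; j < n; i++, j++) {
            dp[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++)
                dp[i][j] = Math.min(dp[i][j], dp[i][k] + dp[k + 1][j]);
            dp[i][j] += prefix[j + 1] - prefix[i];  // cost of the final merge over l[i..j]
        }
    }
    return dp[0][n - 1];
}
For the example in the question, minMergeCost(new int[]{1, 2, 3, 4}) returns 19.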
I recently came across this problem:
You are given the heights of n histograms, each of width 1. You have to choose any two histograms such that if it starts raining and all other histograms (except the two you have selected) are removed, the water collected between the two histograms is maximised.
Input:
9
3 2 5 9 7 8 1 4 6
Output:
25
Between the third and the last histogram.
This is a variant of the Trapping Rain Water problem.
I tried two solutions, but both have a worst-case complexity of O(N^2). How can we optimise further?
Sol1: Brute force for every pair.
int maxWaterCollected(vector<int> hist, int n) {
int ans = 0;
for (int i= 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
ans = max(ans, min(hist[i], hist[j]) * (j - i - 1));
}
}
return ans;
}
Sol2: Keep a sequence of histograms in increasing order of height. For every histogram, find the best histogram to pair it with in this sequence. However, if all histograms are in increasing order, this solution also becomes O(N^2).
int maxWaterCollected(vector<int> hist, int n) {
vector< pair<int, int> > increasingSeq(1, make_pair(hist[0], 0)); // initialised with 1st element.
int ans = 0;
for (int i = 1; i < n; i++) {
// compute best result from current increasing sequence
for (int j = 0; j < increasingSeq.size(); j++) {
ans = max(ans, min(hist[i], increasingSeq[j].first) * (i - increasingSeq[j].second - 1));
}
// add this histogram to sequence
if (hist[i] > increasingSeq.back().first) {
increasingSeq.push_back(make_pair(hist[i], i));
}
}
return ans;
}
Use 2 iterators, one from begin() and one from end() - 1.
Until the 2 iterators are equal:
Compare the current result with the max, and keep the max.
Move the iterator with the smaller value (begin -> end or end -> begin).
Complexity: O(n).
Jarod42 has the right idea, but it's unclear from his terse post why his algorithm, described below in Python, is correct:
def candidates(hist):
l = 0
r = len(hist) - 1
while l < r:
yield (r - l - 1) * min(hist[l], hist[r])
if hist[l] <= hist[r]:
l += 1
else:
r -= 1
def maxwater(hist):
return max(candidates(hist))
The proof of correctness is by induction: the optimal solution either (1) belongs to the candidates yielded so far or (2) chooses histograms inside [l, r]. The base case is simple, because all histograms are inside [0, len(hist) - 1].
Inductively, suppose that we're about to advance either l or r. These cases are symmetric, so let's assume that we're about to advance l. We know that hist[l] <= hist[r], so the value is (r - l - 1) * hist[l]. Given any other right endpoint r1 < r, the value is (r1 - l - 1) * min(hist[l], hist[r1]), which is less because r - l - 1 > r1 - l - 1 and hist[l] >= min(hist[l], hist[r1]). We can rule out all of these solutions as suboptimal, so it's safe to advance l.
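For completeness, a direct Java transcription of the same two-pointer scan (a sketch; the method name mirrors the one in the question, and it reproduces the 25 from the example above):
static int maxWaterCollected(int[] hist) {
    int l = 0, r = hist.length - 1, best = 0;
    while (l < r) {
        // water held between bars l and r, with the shorter bar as the water level
        best = Math.max(best, (r - l - 1) * Math.min(hist[l], hist[r]));
        if (hist[l] <= hist[r]) l++; else r--;
    }
    return best;
}
For the input 3 2 5 9 7 8 1 4 6 this returns 25, matching the expected output.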
I have to sort an array in O(n) time and O(1) space. The array contains the numbers 1 to n, with one number missing and another number repeated in its place.
I know how to sort an array that is a plain permutation of 1 to n in O(n), but that doesn't work with missing and repeated numbers. If I find the repeated and missing numbers first (which can be done in O(n)) and then sort, that seems costly.
static void sort(int[] arr)
{
for(int i=0;i<arr.length;i++)
{
if(i>=arr.length)
break;
if(arr[i]-1 == i)
continue;
else
{
while(arr[i]-1 != i)
{
int temp = arr[arr[i]-1];
arr[arr[i]-1] = arr[i];
arr[i] = temp;
}
}
}
}
First, you need to find the missing and repeated numbers, m and r. You do this by solving the following system of equations:
sum(A) - (1 + 2 + ... + n) = r - m
sum(A[i]^2) - (1^2 + 2^2 + ... + n^2) = r^2 - m^2
The left sums are computed simultaneously by making one pass over the array. The right sums are even simpler -- you may use the closed formulas n(n+1)/2 and n(n+1)(2n+1)/6 to avoid looping. So, now you have a system of two equations with two unknowns: the missing number m and the repeated number r. Solve it.
Next, you "sort" the array by filling it with the numbers 1 to n from left to right, omitting m and duplicating r. Thus, the overall algorithm requires only two passes over the array.
void sort() {
for (int i = 1; i <= N; ++i) {
while (a[i] != a[a[i]]) {
std::swap(a[i], a[a[i]]);
}
}
for (int i = 1; i <= N; ++i) {
if (a[i] == i) continue;
for (int j = a[i] - 1; j >= i; --j) a[j] = j + 1;
for (int j = a[i] + 1; j <= i; ++j) a[j] = j - 1;
break;
}
}
Explanation:
Let's denote by m the missing number and by d the duplicated number.
Please note that in the while loop the condition is a[i] != a[a[i]]; the loop stops when a[i] == a[a[i]], which covers both the case a[i] == i and the case where a[i] is a duplicate.
After the first for, every non-duplicate number i is encountered once or twice and moved into the i-th position of the array at most once.
The first occurrence of d that is found is moved to the d-th position, at most once.
The second d is moved around at most N-1 times and ends up in the m-th position, because every other i-th slot is occupied by the number i.
The second outer for locates the first i where a[i] != i. The only i satisfying that is i = m.
The two inner fors handle the two cases m < d and m > d respectively.
Full implementation at http://ideone.com/VDuLka
After
int temp = arr[arr[i]-1];
add a check for duplicate in the loop:
if((temp-1) == i){ // found duplicate
...
} else {
arr[arr[i]-1] = arr[i];
arr[i] = temp;
}
See if you can figure out the rest of the code.
I was wondering how could I get the longest positive-sum subsequence in a sequence:
For example, given -6 3 -4 4 -5, the longest positive subsequence is 3 -4 4. Its sum is positive (3), and we couldn't add -6 nor -5 or the sum would become negative.
It can easily be solved in O(N^2), but I think something much faster, like O(N log N), might exist.
Do you have any idea?
EDIT: the order must be preserved, and you can skip any number from the substring
EDIT2: I'm sorry if I caused confusion by using the term "subsequence"; as #beaker pointed out, I meant substring.
O(n) space and time solution, will start with the code (sorry, Java ;-) and try to explain it later:
public static int[] longestSubarray(int[] inp) {
// array containing prefix sums up to a certain index i
int[] p = new int[inp.length];
p[0] = inp[0];
for (int i = 1; i < inp.length; i++) {
p[i] = p[i - 1] + inp[i];
}
// array Q from the description below
int[] q = new int[inp.length];
q[inp.length - 1] = p[inp.length - 1];
for (int i = inp.length - 2; i >= 0; i--) {
q[i] = Math.max(q[i + 1], p[i]);
}
int a = 0;
int b = 0;
int maxLen = 0;
int curr;
int[] res = new int[] {-1,-1};
while (b < inp.length) {
curr = a > 0 ? q[b] - p[a-1] : q[b];
if (curr >= 0) {
if(b-a > maxLen) {
maxLen = b-a;
res = new int[] {a,b};
}
b++;
} else {
a++;
}
}
return res;
}
We are operating on an input array A of size n.
Let's define the array P as the array containing the prefix sums, so P[i] = sum(A[0..i]) where i = 0, 1, ..., n-1.
Let's notice that if u < v and P[u] <= P[v], then u will never be our best ending point.
Because of the above we can define an array Q which has Q[n-1] = P[n-1] and Q[i] = max(P[i], Q[i+1]).
Now let's consider M_{a,b}, which is the maximum sum of a subarray starting at a and ending at b or beyond. We know that M_{0,b} = Q[b] and that M_{a,b} = Q[b] - P[a-1].
With the above information we can now initialise a = b = 0 and start moving them. If the current value of M is greater than or equal to 0, then we know we will find (or have already found) a subarray with sum >= 0, and we just need to compare b - a with the previously found length. Otherwise there is no subarray that starts at a and adheres to our constraints, so we need to increment a.
Let's make a naive implementation and then improve it.
We move from left to right calculating partial sums, and for each position we find the left-most partial sum such that the current partial sum is greater than it.
input a
int partialSums[len(a)]
for i in range(len(a)):
partialSums[i] = (i == 0 ? 0 : partialSums[i - 1]) + a[i]
if partialSums[i] > 0:
answer = max(answer, i + 1)
else:
for j in range(i):
if partialSums[i] - partialSums[j] > 0:
answer = max(answer, i - j)
break
This is O(n^2). Now, the search for the left-most "good" partial sum can actually be maintained via a BST, where each node is represented as a pair (partial sum, index), compared by partial sum. Each node should also support a special field min holding the minimum index in its subtree.
Now, instead of the straightforward search for an appropriate partial sum, we can descend the BST using the current partial sum as a key, following the next three rules (assuming C is the current node, L and R are the roots of the left and the right subtrees respectively):
Maintain the current minimal index of "good" partial sums found in curMin, initially +∞.
If C.partial_sum is "good" then update curMin with C.index.
If we go to R then update curMin with L.min.
And then update the answer with i - curMin, also add the current partial sum to the BST.
That would give us O(n * log n).
We can easily have an O(n log n) solution for the longest subsequence.
First, sort the array, remembering the original indexes.
Pick the largest numbers one by one, stop as soon as adding the next one would make the sum non-positive, and you have your answer.
Recover their original order.
Pseudo code
sort(data);
int length = 0;
long sum = 0;
boolean[] result = new boolean[n];
for(int i = n ; i >= 1; i--){
if(sum + data[i] <= 0)
break;
sum += data[i];
result[data[i].index] = true;
length++;
}
for(int i = 1; i <= n; i++)
if(result[i])
print i;
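A concrete Java sketch of this greedy step (names are mine; it returns only the length, since the original order can be recovered from the chosen indexes as in the pseudo code):
static int longestPositiveSubsequence(int[] data) {
    // sort the indexes by value, descending, so we can greedily take the largest values first
    Integer[] idx = new Integer[data.length];
    for (int i = 0; i < data.length; i++) idx[i] = i;
    java.util.Arrays.sort(idx, (x, y) -> Integer.compare(data[y], data[x]));
    long sum = 0;
    int length = 0;
    for (int i : idx) {
        if (sum + data[i] <= 0) break;  // every remaining value is at most data[i], so stop
        sum += data[i];
        length++;
    }
    return length;
}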
So, rather than waiting, I will propose an O(n log n) solution for the longest positive substring.
First, we create an array prefix which is the prefix sum of the array.
Second, we use binary search to look for the longest length that has a positive sum.
Pseudocode
int[]prefix = new int[n];
for(int i = 1; i <= n; i++)
prefix[i] = data[i];
if(i - 1 >= 1)
prefix[i] += prefix[i - 1];
int min = 0;
int max = n;
int result = 0;
while(min <= max){
int mid = (min + max)/2;
boolean ok = false;
for(int i = 1; i <= n; i++){
if(i > mid && prefix[i] - prefix[i - mid] > 0){ // sum of the segment of length mid ending at index i
ok = true;
break;
}
}
if(ok){
result = max(result, mid)
min = mid + 1;
}else{
max = mid - 1;
}
}
OK, so the above algorithm is wrong, as pointed out by piotrekg2. What we need to do is:
Create an array prefix which is the prefix sum of the array.
Sort the prefix array, remembering the original index of each entry.
Iterate through the sorted prefix array, keeping track of the minimum index seen so far; the maximum difference between the current index and that minimum is the answer.
Note: when comparing values in prefix, if two entries have equal values, the one with the smaller index is considered larger; this avoids counting a segment whose sum is 0.
Pseudo code:
class Node{
int val, index;
}
Node[]prefix = new Node[n];
for(int i = 1; i <= n; i++)
prefix[i] = new Node(data[i],i);
if(i - 1 >= 1)
prefix[i].val += prefix[i - 1].val;
sort(prefix);
int min = prefix[1].index;
int result = 0;
for(int i = 2; i <= n; i ++)
if(prefix[i].index > min)
result = max(prefix[i].index - min, result)
min = min(min, prefix[i].index);
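A hedged Java sketch of this sorted-prefix approach (names are mine; it adds a sentinel prefix sum of 0 at index 0 so that substrings starting at the first element are also considered):
static int longestPositiveSubstring(int[] data) {
    int n = data.length;
    // entries are {prefix sum, index}; prefix[0] = {0, 0} is the sentinel
    long[][] prefix = new long[n + 1][2];
    for (int i = 1; i <= n; i++) {
        prefix[i][0] = prefix[i - 1][0] + data[i - 1];
        prefix[i][1] = i;
    }
    // sort by sum; for equal sums the larger index comes first,
    // so a zero-sum pair is never counted as a positive substring
    java.util.Arrays.sort(prefix, (x, y) -> x[0] != y[0] ? Long.compare(x[0], y[0])
                                                         : Long.compare(y[1], x[1]));
    int best = 0;
    long minIndex = prefix[0][1];
    for (int i = 1; i <= n; i++) {
        if (prefix[i][1] > minIndex)
            best = Math.max(best, (int) (prefix[i][1] - minIndex));
        minIndex = Math.min(minIndex, prefix[i][1]);
    }
    return best;
}
For the question's example -6 3 -4 4 -5 this returns 3 (the substring 3 -4 4).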
I have this problem where, given an array of positive numbers, I have to find the maximum sum of elements such that no two adjacent elements are picked. The maximum has to be less than a certain given K. I tried thinking along the lines of the similar problem without the K, but I have failed so far. I have the following DP-ish solution for the latter problem:
int sum1,sum2 = 0;
int sum = sum1 = a[0];
for(int i=1; i<n; i++)
{
sum = max(sum2 + a[i], sum1);
sum2 = sum1;
sum1 = sum;
}
Could someone give me tips on how to proceed with my present problem??
The best I can think of off the top of my head is an O(n*K) dp:
int sums[n][K+1] = {{0}};
int i, j;
for(j = a[0]; j <= K; ++j) {
sums[0][j] = a[0];
}
if (a[1] > a[0]) {
for(j = a[0]; j < a[1]; ++j) {
sums[1][j] = a[0];
}
for(j = a[1]; j <= K; ++j) {
sums[1][j] = a[1];
}
} else {
for(j = a[1]; j < a[0]; ++j) {
sums[1][j] = a[1];
}
for(j = a[0]; j <= K; ++j) {
sums[1][j] = a[0];
}
}
for(i = 2; i < n; ++i) {
for(j = 0; j <= K && j < a[i]; ++j) {
sums[i][j] = max(sums[i-1][j],sums[i-2][j]);
}
for(j = a[i]; j <= K; ++j) {
sums[i][j] = max(sums[i-1][j],a[i] + sums[i-2][j-a[i]]);
}
}
sums[i][j] contains the maximal sum of non-adjacent elements of a[0..i] not exceeding j. The solution is then sums[n-1][K] at the end.
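The same DP can be written a little more compactly by shifting the row index so that the two base rows are simply zero; a Java sketch (the method name is mine, a[i] is assumed non-negative, and the bound is treated as "at most K", as above):
static int maxNonAdjacentSumAtMostK(int[] a, int K) {
    int n = a.length;
    // dp[i + 2][j] = maximal sum of non-adjacent elements of a[0..i] not exceeding j;
    // rows 0 and 1 stand for the empty prefixes, so "i - 2" never goes out of bounds
    int[][] dp = new int[n + 2][K + 1];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= K; j++) {
            int skip = dp[i + 1][j];                              // do not take a[i]
            int take = (a[i] <= j) ? a[i] + dp[i][j - a[i]] : 0;  // take a[i], so a[i-1] is skipped
            dp[i + 2][j] = Math.max(skip, take);
        }
    }
    return dp[n + 1][K];
}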
0. Make a copy (A2) of the original array (A1).
1. Find the largest value in the array (A2).
2. Extract all values before its preceding neighbour and all values after its next neighbour into a new array (A3).
3. Find the largest value in the new array (A3).
4. Check whether the sum is larger than k. If the sum passes the check (it does not exceed k), you are done.
5. If not, you will need to go back to the copied array (A2), remove the second largest value (the one found in step 3), and start over with step 3.
6. Once there are no combinations of numbers that can be used with the largest number (i.e. the number found in step 1 plus any other number in the array is larger than k), remove it from the original array (A1) and start over with step 0.
7. If for some reason there are no valid combinations (e.g. the array is only three numbers, or no combination of numbers is lower than k), then throw an exception or return null if that seems more appropriate.
First idea: Brute force
Iterate over all legal combinations of indexes and build the sum on the fly.
Stop with a sequence as soon as its sum gets over K.
Keep the sequence until you find a larger one that is still smaller than K.
Second idea: maybe one can force this into a divide and conquer approach ...
Here is a solution to the problem without the "k" constraint which you set out to do as the first step: https://stackoverflow.com/a/13022021/1110808
The above solution can in my view be easily extended to have the k constraint by simply amending the if condition in the following for loop to include the constraint: possibleMax < k
// Subproblem solutions, DP
for (int i = start; i <= end; i++) {
int possibleMaxSub1 = maxSum(a, i + 2, end);
int possibleMaxSub2 = maxSum(a, start, i - 2);
int possibleMax = possibleMaxSub1 + possibleMaxSub2 + a[i];
/*
if (possibleMax > maxSum) {
maxSum = possibleMax;
}
*/
if (possibleMax > maxSum && possibleMax < k) {
maxSum = possibleMax;
}
}
As posted in the original link, this approach can be improved by adding memoization so that solutions to repeated subproblems are not recomputed, or by using a bottom-up dynamic programming approach (the current approach is recursive top-down).
You can refer to a bottom up approach here: https://stackoverflow.com/a/4487594/1110808