What is the runtime/memory complexity of the Maximum subarray problem using brute force?
Can they be optimized more? Especially the memory complexity?
Thanks,
Brute force takes Theta(n^2) time: there are Theta(n^2) subarrays, and keeping a running sum lets you evaluate each one in O(1), using only O(1) extra memory. Using divide and conquer you can do it in Theta(n lg n) time. Further details are available in many books, such as Introduction to Algorithms, or in various resources on the Web, such as this lecture.
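A minimal sketch of that Theta(n^2) brute force in Java (the method name bruteForceMaxSum is illustrative, not from the question):

public static int bruteForceMaxSum(int[] array) {
    int best = array[0];
    for (int i = 0; i < array.length; i++) {
        int running = 0;                    // sum of array[i..j], maintained incrementally
        for (int j = i; j < array.length; j++) {
            running += array[j];            // O(1) per subarray thanks to the running sum
            best = Math.max(best, running);
        }
    }
    return best;
}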
As suggested in this answer, you can use Kadane's algorithm, which runs in O(n) time with O(1) auxiliary memory (not counting the returned subarray). An implementation in Java:
import java.util.Arrays; // for Arrays.copyOfRange

public int[] kadanesAlgorithm(int[] array) {
    int start_old = 0;        // start of the best subarray found so far
    int end = 0;              // end of the best subarray found so far
    int start = 0;            // start of the subarray ending at the current index
    int found_max = array[0]; // best sum found so far
    int max = array[0];       // best sum of a subarray ending at the current index
    for (int i = 1; i < array.length; i++) {
        if (max < 0)          // a negative running sum can only hurt; restart at i
            start = i;
        max = Math.max(array[i], max + array[i]);
        if (max > found_max) {
            found_max = max;
            start_old = start;
            end = i;
        }
    }
    return Arrays.copyOfRange(array, start_old, end + 1);
}
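For example, on {-2, 1, -3, 4, -1, 2, 1, -5, 4} the method returns {4, -1, 2, 1}, whose sum of 6 is the maximum over all subarrays.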
I wrote some code that finds the median value of an unsorted array. What is this code's big O, and can you explain why? Can the runtime complexity be improved?
public static int medianElement(int[] array, int low, int high) {
    // copy array[low..high] into a temporary array; note <= so the last element is included
    int[] tmpArray = new int[high - low + 1];
    for (int i = 0; i <= high - low; i++) {
        tmpArray[i] = array[low + i];
    }
    // bubble sort: keep sweeping until a full pass makes no swaps
    boolean changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < high - low; i++) {
            if (tmpArray[i] > tmpArray[i + 1]) {
                changed = true;
                swap(tmpArray, i, i + 1);
            }
        }
    }
    return tmpArray[(high - low + 1) / 2];
}
public static void swap(int[] arr, int i, int j) {
    int temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}
Your sorting algorithm is called bubble sort (https://en.wikipedia.org/wiki/Bubble_sort). Thanks to the changed flag, its runtime is O(n^2) in the worst case and O(n) in the best case (an already sorted input needs only one pass).
To improve the time this code takes to run, you could use a sorting algorithm with better worst-case performance, such as merge sort (https://en.wikipedia.org/wiki/Merge_sort), which runs in O(n log n) even in the worst case.
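A minimal sketch of the same medianElement on top of merge sort, keeping the signature from the question (the mergeSort helper is written here for illustration):

import java.util.Arrays;

public static int medianElement(int[] array, int low, int high) {
    int[] tmp = Arrays.copyOfRange(array, low, high + 1);
    mergeSort(tmp, 0, tmp.length - 1);
    return tmp[tmp.length / 2]; // same median index convention as the question's code
}

static void mergeSort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);     // sort the left half
    mergeSort(a, mid + 1, hi); // sort the right half
    // merge the two sorted halves
    int[] merged = new int[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi) merged[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) merged[k++] = a[i++];
    while (j <= hi) merged[k++] = a[j++];
    System.arraycopy(merged, 0, a, lo, merged.length);
}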
I want to rewrite this simple function in a way that makes its big O notation O(n^2). How can I do that?
int getSum(int n) {
    int sum = (n * (n + 1)) / 2;
    return sum;
}
any ideas?
I'm not really sure why you want this, but you could do it with two nested loops:
int getSum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        int x = 0;
        while (x++ < i) { // the inner loop runs i times
            sum++;
        }
    }
    return sum;
}
This runs sum++ a total of 1+2+3+...+n times, which simplifies to (n^2+n)/2, hence O(n^2).
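The return value is unchanged: getSum(5), for instance, performs 1 + 2 + 3 + 4 + 5 = 15 increments and returns 15, exactly what the closed-form (n*(n+1))/2 gives.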
Let L be a list of positive integers.
We are allowed to merge two elements of L if they have adjacent indices.
The cost of this operation is the sum of both elements.
For example: [1,2,3,4] -> [3,3,4] with a cost of 3.
We are looking for the minimum cost to merge L into one integer.
Is there a fast way of doing this? I came up with this naive recursive approach, but it should be O(n!).
I have noticed that it benefits a lot from memoization, so I think there must be a way to avoid trying all possible permutations, which would always result in O(n!).
def solveR(l):
    if len(l) <= 2:
        return sum(l)
    else:
        return sum(l) + min(solveR(l[1:]), solveR(l[:-1]),
                            solveR(l[len(l) // 2:]) + solveR(l[:len(l) // 2]))
This is much like this LeetCode problem ("Minimum Cost to Merge Stones"), but with K = 2. The comments suggest that the time complexity is O(n^3): dp[i][j] is the minimum cost of merging stones[i..j], the last merge joins some [i..k] with [k+1..j], and that final merge costs the sum of the whole range. Here is some C++ code that implements the algorithm:
class Solution { // needs <vector> and <climits>
public:
    int mergeStones(vector<int>& stones, int K) {
        K = 2;                                // the question's problem is the K = 2 case
        int N = stones.size();
        if ((N - 1) % (K - 1) > 0) return -1; // impossible for general K (always fine for K = 2)
        vector<int> sum(N + 1, 0);            // prefix sums: sum[i] = stones[0] + ... + stones[i-1]
        for (int i = 1; i <= N; i++)
            sum[i] = sum[i - 1] + stones[i - 1];
        // dp[i][j] = minimum cost to merge stones[i..j]
        vector<vector<int>> dp(N, vector<int>(N, 0));
        for (int L = K; L <= N; L++)
            for (int i = 0, j = i + L - 1; j < N; i++, j++) {
                dp[i][j] = INT_MAX;
                for (int k = i; k < j; k += (K - 1)) // last merge joins [i..k] and [k+1..j]
                    dp[i][j] = min(dp[i][j], dp[i][k] + dp[k + 1][j]);
                if ((L - 1) % (K - 1) == 0)
                    dp[i][j] += (sum[j + 1] - sum[i]); // add sum in [i, j]
            }
        return dp[0][N - 1];
    }
};
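For the K = 2 case in the question, the same O(n^3) interval DP can be written more directly. A minimal Java sketch under that assumption (minMergeCost and its variable names are illustrative):

static int minMergeCost(int[] stones) { // assumes stones.length >= 1
    int n = stones.length;
    int[] prefix = new int[n + 1];      // prefix[i] = stones[0] + ... + stones[i-1]
    for (int i = 0; i < n; i++)
        prefix[i + 1] = prefix[i] + stones[i];
    int[][] dp = new int[n][n];         // dp[i][j] = min cost to merge stones[i..j] into one
    for (int len = 2; len <= n; len++)
        for (int i = 0, j = len - 1; j < n; i++, j++) {
            dp[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) // last merge joins [i..k] and [k+1..j]
                dp[i][j] = Math.min(dp[i][j], dp[i][k] + dp[k + 1][j]);
            dp[i][j] += prefix[j + 1] - prefix[i]; // that merge costs the sum of [i..j]
        }
    return dp[0][n - 1];
}

For the example in the question, minMergeCost(new int[]{1, 2, 3, 4}) returns 19: merge 1+2 (cost 3), then 3+3 (cost 6), then 6+4 (cost 10).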
I am trying to learn Big O notation. While searching for some articles online, I came across two different articles, A and B.
Strictly in terms of loops, they seem to have almost the same kind of flow.
For example
[A]'s code is as follows (it's done in JS):
function allPairs(arr) {
    var pairs = [];
    for (var i = 0; i < arr.length; i++) {
        for (var j = i + 1; j < arr.length; j++) {
            pairs.push([arr[i], arr[j]]);
        }
    }
    return pairs;
}
[B]'s code is as follows (it's done in C); the entire code is here:
for (int i = 0; i < n - 1; i++) {
    char min = A[i];   // minimal element seen so far
    int min_pos = i;   // memorize its position
    // search for min starting from position i+1
    for (int j = i + 1; j < n; j++)
        if (A[j] < min) {
            min = A[j];
            min_pos = j;
        }
    // swap elements at positions i and min_pos
    A[min_pos] = A[i];
    A[i] = min;
}
The article on site A says that the time complexity is O(n^2), while the article on site B says that it is O(1/2·n^2).
Which one is right?
Thanks
The two time complexities are equal. Remember that Big O notation does not care about constant factors: f(n) = (1/2)·n^2 satisfies f(n) <= c·n^2 for all n with c = 1/2, so both algorithms are O(n^2).
You didn't read carefully. Article B says that the algorithm performs about N²/2 comparisons and goes on to explain that this is O(N²).
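Concretely, B's inner loop performs (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, which is roughly n^2/2; that factor of 1/2 is exactly the constant that the O(n^2) notation absorbs.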
So I've been trying to get a handle on Big O calculations. I feel I have the basics down but am stumped on what seems like a really easy calculation. If the function below has a big O of O(n log n) (I really hope I've at least got that right), what does changing the order of the loops do to the complexity? Thanks so much in advance for your time.
int ONLogN(int N) // O(N log N)
{
    int iIterations = 0;
    for (int i = 0; i < N; ++i)            // outer loop: N iterations
    {
        ++iIterations;
        for (int j = 1; j < N + 1; j *= 2) // inner loop: floor(log2(N)) + 1 iterations
            ++iIterations;
    }
    return iIterations;
}

int WhatBigOhIsThis(int N) // ???
{
    int iIterations = 0;
    for (int j = 1; j < N + 1; j *= 2)     // outer loop: floor(log2(N)) + 1 iterations
    {
        ++iIterations;
        for (int i = 0; i < N; ++i)        // inner loop: N iterations
            ++iIterations;
    }
    return iIterations;
}
The bounds of the two loops are independent of each other's index variables, so swapping them multiplies the same two iteration counts; the resulting complexity is necessarily the same.
You're still doing the same amount of work: the innermost ++iIterations executes N·(floor(log2 N) + 1) times either way. Changing the order of the loops has no effect on the complexity; both functions are O(N log N).
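For example, ONLogN(8) returns 40 (8 outer increments plus 8·4 inner ones), while WhatBigOhIsThis(8) returns 36 (4 outer increments plus 4·8 inner ones); the dominant N·log N term is identical.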