Is Kadane's Algorithm Greedy or Optimised DP?

I feel like Kadane's algorithm is a modified version of the true dynamic programming solution to the maximum subarray problem. Why do I feel so?
Because the maximum subarray can also be computed this way:
for (i = 0; i < N; i++)
{
    DP[i][A[i]] = true;  // start a new subarray at i
    for (j = -MAX_SUM; j <= +MAX_SUM; j++)  // j ranges over every possible sum
        if (DP[i-1][j])
            DP[i][j + A[i]] = true;  // extend a subarray ending at i-1
}
The recurrence: if it is possible to form sum j with a subarray ending at position i-1, then I can form j + A[i] with a subarray ending at the i-th element, and I can also form A[i] alone by starting a new subarray at position i.
And at the end we can search this DP array for the maximum j that is marked true!
Note: DP[i][j] represents whether it is possible to make sum j using a subarray ending at i. Here I assume j can be negative too! Now one can easily see that sum + (a negative number) < sum. That implies that carrying a negative partial sum can never lead to a better total, which is why we can drop those states! Moreover, we only care about the maximum j reachable up to position i-1 and connect it with the i-th element, which makes me feel it is a kind of greedy choice (just because maximum + element gives me a maximum).
NOTE: I haven't studied greedy algorithms yet, but I have an idea of what a greedy choice is!
EDIT: Someone said my algorithm doesn't make any sense, so I am posting my code to make myself clear. I haven't taken j as negative, as negative sums aren't fruitful.
I repeat: my state is defined as whether it is possible to make sum j using a subarray ending at i.
#include <bits/stdc++.h>
using namespace std;

int DP[101][101];

int main()
{
    int i, j, ans = INT_MIN;
    int A[] = {3, -1, 2, -1, 5, -3};
    int N = sizeof(A) / sizeof(int);
    for (i = 1; i <= N; i++)
    {
        // start a new subarray at position i (only non-negative sums are kept)
        if (A[i-1] >= 0)
            DP[i][A[i-1]]++;
        for (j = 0; j <= 100; j++)
        {
            // extend a subarray ending at i-1 that sums to j
            if (DP[i-1][j])
            {
                if (j + A[i-1] >= 0 && j + A[i-1] <= 100)
                    DP[i][j + A[i-1]]++;
            }
            if (DP[i][j])
                ans = max(ans, j);
        }
    }
    cout << ans << "\n";
    return 0;
}
Output: 8

Kadane's is an iterative dynamic programming algorithm.
It is very common to optimize iterative DP algorithms to remove one dimension of the DP matrix along the major axis of the algorithm's progression.
The usual 'longest common subsequence' algorithm, for example, is usually described with a 2D matrix, but if the algorithm progresses from left to right, then you really only need space for 2 columns.
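For instance, a two-row LCS length computation might look like this in Python (my sketch for illustration, using rows rather than columns, but the idea is identical):

def lcs_length(a, b):
    # The full LCS DP needs a (len(a)+1) x (len(b)+1) table, but each row
    # only depends on the previous one, so two rows suffice.
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            if x == y:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4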
Kadane's algorithm is a similar optimization applied to a 1D problem, so the whole DP array disappears. The DP code in your question has a 2D matrix for some reason. I don't know why -- it doesn't really make sense.
This site does a pretty good job of explaining the derivation: https://hackernoon.com/kadanes-algorithm-explained-50316f4fd8a6
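To make the optimization concrete, here is a sketch of my own (not taken from the linked article) showing the 1D DP formulation and the collapsed single-variable version side by side:

def max_subarray_dp(A):
    # dp[i] = best sum of a subarray ending at index i
    dp = [0] * len(A)
    dp[0] = A[0]
    for i in range(1, len(A)):
        dp[i] = max(A[i], dp[i - 1] + A[i])
    return max(dp)

def max_subarray_kadane(A):
    # Same recurrence; dp[i] only needs dp[i-1], so the array collapses
    # to one variable.
    best = ending_here = A[0]
    for x in A[1:]:
        ending_here = max(x, ending_here + x)
        best = max(best, ending_here)
    return best

print(max_subarray_dp([3, -1, 2, -1, 5, -3]))      # -> 8
print(max_subarray_kadane([3, -1, 2, -1, 5, -3]))  # -> 8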

I think it is a greedy algorithm, because Kadane's algorithm finds the maximum sum at each step and then finds the overall solution.

Kadane's algorithm can be viewed as both greedy and DP. As we can see, we keep a running sum of integers, and when it becomes less than 0 we reset it to 0 (the greedy part). This is because continuing with a negative sum is far worse than restarting with a new range. Now it can also be viewed as DP: at each stage we have 2 choices, either take the current element and continue with the previous sum, or restart a new range. Both choices are taken care of in the implementation.
Greedy Sol
# Python program to find the maximum contiguous subarray sum
from sys import maxsize

# Function to find the maximum contiguous subarray sum
def maxSubArraySum(a, size):
    max_so_far = -maxsize - 1
    max_ending_here = 0
    for i in range(0, size):
        max_ending_here = max_ending_here + a[i]
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
        if max_ending_here < 0:
            max_ending_here = 0
    return max_so_far

# Driver code to check the above function
a = [-13, -3, -25, -20, -3, -16, -23, -12, -5, -22, -15, -4, -7]
print("Maximum contiguous sum is", maxSubArraySum(a, len(a)))
DP Sol
# Python program to print the largest contiguous array sum
from sys import maxsize

# Function to find the maximum contiguous subarray
# and print its starting and ending index
def maxSubArraySum(a, size):
    max_so_far = -maxsize - 1
    max_ending_here = 0
    start = 0
    end = 0
    s = 0
    for i in range(0, size):
        max_ending_here += a[i]
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
            start = s
            end = i
        if max_ending_here < 0:
            max_ending_here = 0
            s = i + 1
    print("Maximum contiguous sum is %d" % (max_so_far))
    print("Starting Index %d" % (start))
    print("Ending Index %d" % (end))

# Driver program to test maxSubArraySum
a = [-2, -3, 4, -1, -2, 1, 5, -3]
maxSubArraySum(a, len(a))

As defined in Introduction to Algorithms, "A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution."
Kadane's algorithm does make a locally optimal choice via current_sum = max(0, current_sum + x); at the same time, it can also be seen as a space-optimized dynamic programming solution: dp[i] only relies on dp[i-1], hence we use a single integer variable to save space.
Therefore I feel the DP transition function happens to have a greedy flavor, which makes the algorithm look like both DP and greedy.

I think it is hard to say what exactly this algorithm is.
But most books classify this algorithm in the DP section, because you combine the solution for dp[n-1] to make the solution for dp[n].
Note: I don't understand why you use this version of the algorithm, which is O(n^2).
You can simplify this algorithm to O(n)
curmax = 0
sol = 0
for x in array
    curmax += x
    if (curmax < 0) curmax = 0
    if (curmax > sol) sol = curmax

Related

Subset with smallest sum greater or equal to k

I am trying to write a python algorithm to do the following.
Given a set of positive integers S, find the subset with the smallest sum, greater or equal to k.
For example:
S = [50, 103, 85, 21, 30]
k = 140
subset = [103, 50] (with sum = 153)
The numbers in the initial set are all integers, and k can be arbitrarily large. Usually there will be about 100 numbers in the set.
Of course there's the brute force solution of going through all possible subsets, but that runs in O(2^n), which is infeasible. I have been told that this problem is NP-complete, but that there should be a dynamic programming approach that allows it to run in pseudo-polynomial time, like the knapsack problem; so far, though, my attempts at using DP still lead to O(2^n) solutions.
Is there a way to apply DP to this problem? If so, how? I find DP hard to understand, so I might have missed something.
Any help is much appreciated.
Well, seeing that in the original question the numbers were reals rather than integers, the best I can think of is O(2^(n/2) log(2^(n/2))).
It might look bad at first glance, but notice that 2^(n/2) == sqrt(2^n).
To achieve such complexity we will use the technique known as meet in the middle:
Split the set into 2 parts of sizes n/2 and n - n/2.
Use brute force to generate all subsets (including the empty one) of each part and store their sums in arrays; let's call them A and B.
Sort array B.
Now, for each element a in A, if B[-1] + a >= k, we can use binary search to find the smallest element b in B that satisfies a + b >= k.
Out of all such a + b pairs found, choose the smallest; a sketch follows below.
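Here is a rough Python rendering of those steps (function and variable names are my own):

from bisect import bisect_left
from itertools import combinations

def smallest_sum_at_least(S, k):
    half = len(S) // 2
    first, second = S[:half], S[half:]

    def subset_sums(part):
        # Brute force: all 2^(len(part)) subset sums, empty subset included.
        return [sum(combo)
                for r in range(len(part) + 1)
                for combo in combinations(part, r)]

    A = subset_sums(first)
    B = sorted(subset_sums(second))
    best = None
    for a in A:
        if B[-1] + a >= k:
            b = B[bisect_left(B, k - a)]  # smallest b with a + b >= k
            if best is None or a + b < best:
                best = a + b
    return best

print(smallest_sum_at_least([50, 103, 85, 21, 30], 140))  # -> 153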
The OP changed the question a little; it's integers now, so here goes the dynamic solution.
Well, not much to say: classical knapsack.
For each i in [1, n] we have 2 options for set item i:
1. Include it in the subset; the state changes from (i, w) to (i+1, w + S[i]).
2. Skip it; the state changes from (i, w) to (i+1, w).
Every time we reach some w that's >= k, we update the answer.
Pseudo-code:
visited = Set() //some set/hashtable object to store visited states
S = [...]//set of integers from input
int ats = -1;
void solve(int i, int w) //theres atmost n*k different states so complexity is O(n*k)
{
if(w >= k)
{
if(ats==-1)ats=w;
else ats=min(ats,w);
return;
}
if(i>n)return;
if(visited.count(i,w))return; //we already visited this state, can skip
visited.insert(i,w)=1;
solve(i+1, w + S[i]); //take item
solve(i+1, w); //skip item
}
solve(1,0);
print(ats);
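For comparison, a bottom-up version of the same state space might look like this in Python (a sketch assuming all elements are positive integers, as in the edited question):

def smallest_sum_at_least_k(S, k):
    # Track the sums strictly below k that are reachable so far. Once a
    # sum crosses k, adding more positive numbers only makes it worse,
    # so it immediately becomes an answer candidate.
    reachable = {0}
    best = -1  # mirrors 'ats' in the pseudocode above
    for x in S:
        for w in list(reachable):  # snapshot: use each item at most once
            t = w + x
            if t >= k:
                best = t if best == -1 else min(best, t)
            else:
                reachable.add(t)
    return best

print(smallest_sum_at_least_k([50, 103, 85, 21, 30], 140))  # -> 153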

Count combinations for 0-1 knapsack

I wonder what is the most efficient (time and memory) way to count the number of subsets with a sum less than or equal to some limit. For example, for the set {1, 2, 4} and a limit of 3, that number would be 4 (the subsets are {}, {1}, {2}, {1, 2}). I tried encoding subsets in a bit vector (mask) and finding the answer in the following way (pseudocode):
solve(mask, sum, limit)
    if visited[mask]
        return
    if sum <= limit
        count = count + 1
    visited[mask] = true
    for i in 0..n-1
        if mask has the i-th bit set
            solve(mask without the i-th bit, sum - array[i], limit)

solve(2^n - 1, knapsack sum, knapsack limit)
Arrays are zero-based, count can be a global variable, and visited is an array of length 2^n. I understand that the problem has exponential complexity, but is there a better approach or an improvement to my idea? The algorithm runs fast for n ≤ 24, but my approach is pretty brute-force and I was wondering whether there is some clever way to find the answer for, say, n = 30.
The most efficient for space is a recursive traversal of all subsets that just keeps a count. This will be O(2^n) time and O(n) memory where n is the size of the overall set.
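A minimal sketch of that recursive count in Python (names are mine):

def count_subsets_within(elements, limit):
    # Walk all 2^n include/exclude decisions, keeping only a running
    # sum: O(2^n) time, O(n) stack space.
    def go(i, total):
        if i == len(elements):
            return 1 if total <= limit else 0
        return go(i + 1, total + elements[i]) + go(i + 1, total)
    return go(0, 0)

print(count_subsets_within([1, 2, 4], 3))  # -> 4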
All known solutions can be exponential in time because your program is a variation of subset-sum, which is known to be NP-complete. But a pretty efficient DP solution follows, in pseudocode with comments.
# Calculate the lowest possible sum and turn all elements positive.
# This turns the limit problem into one with only non-negative elements:
# "including" a flipped element now corresponds to excluding the
# original negative one.
lowest_sum = 0
for element in elements:
    if element < 0:
        lowest_sum += element
        element = -element

# Sort and calculate trailing sums. This allows us to finish off
# lots of ways to be below our limit in one step.
elements = sort elements from largest to smallest
total = sum(elements)
trailing_sums = []
for element in elements:
    total -= element
    push total onto trailing_sums  # trailing_sums[i] = sum of elements after i

# Now do dp
answer = 0
ways_to_reach_sum = {lowest_sum: 1}
n = length(elements)
for i in range(0, n):
    new_ways_to_reach_sum = {}
    for (sum, count) in ways_to_reach_sum:
        # Ways that add this element, if doing so stays within the limit
        if sum + elements[i] <= limit:
            new_ways_to_reach_sum[sum + elements[i]] += count
        # Ways that do not add this element
        if sum + trailing_sums[i] <= limit:
            # Even taking every remaining element stays within the limit,
            # so all 2^(n-i-1) completions are part of the answer at once
            answer += count * 2**(n - i - 1)
        else:
            new_ways_to_reach_sum[sum] += count
    # And finish processing this element.
    ways_to_reach_sum = new_ways_to_reach_sum

# And just to be sure
for (sum, count) in ways_to_reach_sum:
    if sum <= limit:
        answer += count

# And now answer has our answer!

Coin change(Dynamic programming)

I have a question about the coin change problem where we not only have to print the number of ways to change $n with the given coin denominations, e.g. {1,5,10,25}, but also print the ways themselves.
For example, if the target = $50 and the coins are {1,5,10,25}, then some of the ways to actually use the coins to reach the target are
2 × $25
1 × $25 + 2 × $10 + 1 × $5
etc.
What is the best time complexity we could get to solve this problem?
I tried to modify the dynamic programming solution for the coin change problem where we only need the number of ways, not the actual ways.
I am having trouble figuring out the time complexity.
I do use memoization so that I don't have to solve the same subproblem again for a given coin and sum value, but we still need to iterate through all the solutions and print them. So the time complexity is definitely more than O(ns), where n is the number of coins and s is the target.
Is it exponential? Any help will be much appreciated.
Printing Combinations
def coin_change_solutions(coins, S):
    # create an (S+1) x (N+1) table for memoization
    N = len(coins)
    sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
    for n in range(0, N + 1):
        sols[0][n].append([])
    # fill the table using bottom-up dynamic programming
    for s in range(1, S + 1):
        for n in range(1, N + 1):
            without_last = sols[s][n - 1]
            if coins[n - 1] <= s:
                with_last = [list(sol) + [coins[n - 1]]
                             for sol in sols[s - coins[n - 1]][n]]
            else:
                with_last = []
            sols[s][n] = without_last + with_last
    return sols[S][N]

print(coin_change_solutions([1, 2], 4))
# => [[1, 1, 1, 1], [1, 1, 2], [2, 2]]
without: we don't need the last coin to make the sum. All those solutions are found directly by recursing to sols[s][n - 1]. We take all those coin combinations and copy them to without_last.
with: we do need the last coin, so that coin must be in our solution. The remaining coins are found recursively via sols[s - coins[n - 1]][n]. Reading this entry gives us many possible choices for what the remaining coins should be. For each possible choice sol, we append the last coin, coins[n - 1]:

# For example, suppose the target is s = 4.
# We're finding solutions that use the last coin.
# Suppose the last coin has a value of 2:
#
# find possible combinations that add up to 4 - 2 = 2:
# ===> [[1,1], [2]]
# then for each combination, add the last coin
# so that the combination adds up to 4:
# ===> [[1,1,2], [2,2]]

The final list of combinations is found by taking the combinations for the first case and the second case and concatenating the two lists.

without_last = [[1,1,1,1]]
with_last = [[1,1,2], [2,2]]
without_last + with_last = [[1,1,1,1], [1,1,2], [2,2]]
Time Complexity
In the worst case we have a coin set with all coins from 1 to n, coins = [1,2,3,4,...,n]. The number of possible coin-sum combinations, num_solutions, is then the number of integer partitions of s, p(s).
It can be shown that the number of integer partitions p(s) grows exponentially.
Hence num_solutions = p(s) = O(2^s). Any solution must take at least this long just to print out all these possible combinations, hence the problem is exponential in nature.
We have two loops: one loop for s and the other loop for n.
For each s and n, we compute sols[s][n]:
without: we look at the O(2^s) combinations in sols[s][n - 1]. Copying each combination takes O(n) time, so overall this takes O(n×2^s) time.
with: we look at the O(2^s) combinations in sols[s - coins[n - 1]][n]. For each combination list sol, we create a copy of that list in O(n) time and then append the last coin. Overall this case also takes O(n×2^s).
Hence the time complexity is O(s×n)×O(n2^s + n2^s) = O(s×n^2×2^s).
Space Complexity
The space complexity is O(s×n^2×2^s), because we have an s×n table with each entry storing O(2^s) possible combinations (e.g. [[1, 1, 1, 1], [1, 1, 2], [2, 2]]), with each combination (e.g. [1,1,1,1]) taking O(n) space.
What I tend to do is solve the problem recursively and then build a memoization solution from there.
Starting with a recursive one, the approach is simple: either pick a coin and subtract it from the target, or don't pick it.
While you pick a coin you add it to a vector or list; when you don't pick one, you pop the one you added before. The code looks something like:
#include <iostream>
#include <vector>
using namespace std;

void print(vector<int>& coinsUsed)
{
    for (auto c : coinsUsed)
        cout << c << ",";
    cout << endl;
}

int helper(vector<int>& coins, int target, int index, vector<int>& coinsUsed)
{
    if (index >= coins.size() || target < 0) return 0;
    if (target == 0)
    {
        print(coinsUsed);
        return 1;
    }
    coinsUsed.push_back(coins[index]);
    int with = helper(coins, target - coins[index], index, coinsUsed);  // use coins[index] again
    coinsUsed.pop_back();
    int without = helper(coins, target, index + 1, coinsUsed);          // move past coins[index]
    return with + without;
}

int coinChange(vector<int>& coins, int target)
{
    vector<int> coinsUsed;
    return helper(coins, target, 0, coinsUsed);
}
You can call it like:
vector<int> coins = {1,5,10,25};
cout << "Total Ways:" << coinChange(coins, 10);
So this gives you the total number of ways, with the coins used along the way to reach the target stored in coinsUsed. You can now memoize this as you please by caching the passed-in values.
The time complexity of the recursive solution is exponential.
link to the running program: http://coliru.stacked-crooked.com/a/5ef0ed76b7a496fe
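If you only need the count (printing every combination is what forces the exponential work), a memoized version might look like this, sketched in Python rather than C++ (names are mine):

from functools import lru_cache

def count_coin_change(coins, target):
    @lru_cache(maxsize=None)
    def ways(index, remaining):
        # Same two branches as helper() above, but cached on (index, remaining).
        if remaining == 0:
            return 1
        if remaining < 0 or index == len(coins):
            return 0
        return ways(index, remaining - coins[index]) + ways(index + 1, remaining)
    return ways(0, target)

print(count_coin_change([1, 5, 10, 25], 10))  # -> 4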
Let d_i be a denomination, the value of a coin in cents. In your example d_i = {1, 5, 10, 25}.
Let k be the number of denominations (coins), here k = 4.
We will use a 2D array numberOfCoins[0..k][0..n] to determine the minimum number of coins required to make change. The optimal solution is given by:
numberOfCoins[i][j] = min(numberOfCoins[i − 1][j], numberOfCoins[i][j − d_i] + 1)
The equation above represents the fact that to build an optimal solution we either do not use d_i, so we only use smaller coins (this is why i is decremented below):
numberOfCoins[i][j] = numberOfCoins[i − 1][j] // eq1
or we use d_i, so we add +1 to the number of coins needed and decrement j by d_i (the value of the coin we just used):
numberOfCoins[i][j] = numberOfCoins[i][j − d_i] + 1 // eq2
The time complexity is O(kn) but in cases where k is small, as is the case in your example, we have O(4n) = O(n).
We will use another 2D array, coinUsed, with the same dimensions as numberOfCoins, to mark which coins were used. Each entry either tells us that we did not use coin d_i, by setting "^" in coinUsed[i][j] (this corresponds to eq1), or that the coin was used, by setting "<" in that position (corresponding to eq2).
Both arrays can be built as the algorithm runs. This only adds a constant number of instructions to the inner loop, so the time complexity of building both arrays is still O(kn).
To print the solution we need to iterate, in the worst case, over k + n + 1 entries, for example when the optimal solution uses all 1-cent coins. Note that printing is done after building, so the overall time complexity is O(kn) + O(k + n + 1). As before, if k is small the complexity is O(kn) + O(k + n + 1) = O(kn) + O(n) = O((k+1)n) = O(n).
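A possible Python rendering of the two tables described above (the indexing details and names are my own choices, not from the answer):

def min_coins_with_trace(denoms, n):
    # Assumes change can always be made (e.g. a 1-cent coin exists).
    INF = float('inf')
    k = len(denoms)
    number_of_coins = [[0] * (n + 1) for _ in range(k + 1)]
    coin_used = [[' '] * (n + 1) for _ in range(k + 1)]
    for j in range(1, n + 1):
        number_of_coins[0][j] = INF  # no denominations available yet
    for i in range(1, k + 1):
        for j in range(1, n + 1):
            skip = number_of_coins[i - 1][j]  # eq1: don't use d_i
            take = (number_of_coins[i][j - denoms[i - 1]] + 1
                    if j >= denoms[i - 1] else INF)  # eq2: use d_i
            if take <= skip:
                number_of_coins[i][j] = take
                coin_used[i][j] = '<'
            else:
                number_of_coins[i][j] = skip
                coin_used[i][j] = '^'
    # Walk coin_used backwards to recover which coins were used.
    coins, i, j = [], k, n
    while j > 0:
        if coin_used[i][j] == '<':
            coins.append(denoms[i - 1])
            j -= denoms[i - 1]
        else:
            i -= 1
    return number_of_coins[k][n], coins

print(min_coins_with_trace([1, 5, 10, 25], 40))  # -> (3, [25, 10, 5])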

Find subset with elements that are furthest apart from each other

I have an interview question that I can't seem to figure out. Given an array of size N, find the subset of size k such that the elements in the subset are the furthest apart from each other. In other words, maximize the minimum pairwise distance between the elements.
Example:
Array = [1,2,6,10]
k = 3
answer = [1,6,10]
The brute-force way requires checking all subsets of size k, which is exponential in runtime.
One idea I had was to take values evenly spaced from the array. What I mean by this is
Take the 1st and last element
find the difference between them (in this case 10-1) and divide that by k ((10-1)/3=3)
move 2 pointers inward from both ends, picking out elements that are +/- 3 from your previous pick. So in this case, you start from 1 and 10 and find the closest elements to 4 and 7. That would be 6.
This is based on the intuition that the elements should be as evenly spread as possible. I have no idea how to prove it works/doesn't work. If anyone knows how or has a better algorithm please do share. Thanks!
This can be solved in polynomial time using DP.
The first step is, as you mentioned, to sort the list A. Let X[i, j] be the solution for selecting j elements from the first i elements of A.
Now, X[i+1, j+1] = max over all k <= i of min(X[k, j], A[i+1] − A[k]).
I will leave the initialization step and the reconstruction of the chosen subset for you to work on.
In your example (1, 2, 6, 10) it works the following way:

       1   2   6   10
  1    -   -   -   -
  2    -   1   5   9
  3    -   -   1   4
  4    -   -   -   1
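A rough Python sketch of this recurrence (the initialization and names are my own, since the answer leaves them as an exercise):

def furthest_apart(A, k):
    # X[i][j]: best achievable minimum gap when picking j elements from
    # the first i (sorted), with A[i-1] forced to be the last pick.
    # Assumes 2 <= k <= len(A).
    A = sorted(A)
    n = len(A)
    INF = float('inf')
    X = [[-INF] * (k + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        X[i][1] = INF  # a single element has no pair, so no constraint
    for j in range(2, k + 1):
        for i in range(j, n + 1):
            for p in range(j - 1, i):  # p: index of the previous pick
                X[i][j] = max(X[i][j], min(X[p][j - 1], A[i - 1] - A[p - 1]))
    return max(X[i][k] for i in range(k, n + 1))

print(furthest_apart([1, 2, 6, 10], 3))  # -> 4, e.g. {1, 6, 10}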
The basic idea is right, I think. You should start by sorting the array, then take the first and the last elements, then determine the rest.
I cannot think of a polynomial algorithm to solve this, so I would suggest one of the two options.
One is to use a search algorithm, branch-and-bound style, since you have a nice heuristic at hand: the upper bound for any solution is the minimum size of the gap between the elements picked so far, so the first guess (evenly spaced cells, as you suggested) can give you a good baseline, which will help prune most of the branches right away. This will work fine for smaller values of k, although the worst case performance is O(N^k).
The other option is to start with the same baseline, calculate the minimum pairwise distance for it and then try to improve it. Say you have a subset with minimum distance of 10, now try to get one with 11. This can be easily done by a greedy algorithm -- pick the first item in the sorted sequence such that the distance between it and the previous item is bigger-or-equal to the distance you want. If you succeed, try increasing further, if you fail -- there is no such subset.
The latter solution can be faster when the array is large and k is relatively large as well, but the elements in the array are relatively small. If they are bound by some value M, this algorithm will take O(N*M) time, or, with a small improvement, O(N*log(M)), where N is the size of the array.
As Evgeny Kluev suggests in his answer, there is also a good upper bound on the maximum pairwise distance, which can be used in either one of these algorithms. So the complexity of the latter is actually O(N*log(M/k)).
You can do this in O(n*(log n) + n*log(M)), where M is max(A) - min(A).
The idea is to use binary search to find the maximum separation possible.
First, sort the array. Then, we just need a helper function that takes in a distance d, and greedily builds the longest subarray possible with consecutive elements separated by at least d. We can do this in O(n) time.
If the generated array has length at least k, then the maximum separation possible is >=d. Otherwise, it's strictly less than d. This means we can use binary search to find the maximum value. With some cleverness, you can shrink the 'low' and 'high' bounds of the binary search, but it's already so fast that sorting would become the bottleneck.
Python code:
from typing import List

def maximize_distance(nums: List[int], k: int) -> List[int]:
    """Given an array of numbers and size k, uses binary search
    to find a subset of size k with maximum min-pairwise-distance."""
    assert len(nums) >= k
    if k == 1:
        return [nums[0]]
    nums.sort()

    def longest_separated_array(desired_distance: int) -> List[int]:
        """Given a distance, returns a subarray of nums of length up
        to k with pairwise differences at least that distance (if one
        exists)."""
        answer = [nums[0]]
        for x in nums[1:]:
            if x - answer[-1] >= desired_distance:
                answer.append(x)
                if len(answer) == k:
                    break
        return answer

    low, high = 0, (nums[-1] - nums[0])
    while low < high:
        mid = (low + high + 1) // 2
        if len(longest_separated_array(mid)) == k:
            low = mid
        else:
            high = mid - 1
    return longest_separated_array(low)
I suppose your set is sorted. If not, my answer changes slightly.
Let's suppose you have an array X = (X1, X2, ..., Xn).

Energy(Xi) = min(|X(i-1) - Xi|, |X(i+1) - Xi|), 1 < i < n

j <- 1
while j < n - k do
    X.Exclude(min(Energy(Xi)), 1 < i < n)
    j <- j + 1
    n <- n - 1
end while
$length = length($array);
sort($array);  // sorts the list in ascending order
$differences = ($array << 1) - $array;  // difference between each value and the next largest value
sort($differences);  // sorts the list in ascending order
$max = ($array[$length-1] - $array[0]) / $M;  // theoretical max of how large the result can be ($M is the subset size)
$result = array();
for ($i = 0; $i < $length - 1; $i++) {
    $count += $differences[$i];
    if ($length - $i == $M - 1 || $count >= $max) {
        // either no more elements can be taken or we have gone past the theoretical max, so add a point
        $result.push_back($count);
        $count = 0;
        $M--;
    }
}
return min($result);
For the non-code people: sort the list, find the differences between each pair of sequential elements, sort that list (in ascending order), then loop through it, summing up sequential values until you either pass the theoretical max or there aren't enough elements remaining; then add that value to a new array and continue until you hit the end of the array. Then return the minimum of the newly created array.
This is just a quick draft, though. At a quick glance, every operation here can be done in linear time (radix sort for the sorts).
For example, with 1, 4, 7, 100, and 200 and M=3, we get:
$differences = 3, 3, 93, 100
$max = (200-1)/3 ~ 67
then we loop:
$count = 3, 3+3=6, 6+93=99 > 67 so we push 99
$count = 100 > 67 so we push 100
min(99,100) = 99
It is a simple exercise to convert this to the set solution, which I leave to the reader. (P.S. After all the times reading that in a book, I've always wanted to say it :P)

Algorithm required for this task?

Inputs: n (int) and n values (float) that represent exchange rates (all different from each other), each a random value between 4 and 5.
Output: compute the maximum number of values that can be used (in the same order) to form an ascending then descending curve.
e.g. The eight values
4.5 4.6 4.3 4.0 4.8 4.4 4.7 4.1
should output
5 (4.5 4.6 4.8 4.4 4.1)
My approach
If I try successive ifs, I get a random array that respects the curve condition, but not the longest one.
I have not tried backtracking because I am not that familiar with it, but something tells me I would have to compute all the solutions with it and then pick the longest.
And lastly: brute force, but since it is an assignment for algorithm design, I may as well not hand it in. :)
Is there a simpler/more efficient/faster method?
Here's my try, based on Daniel Lemire's answer. It seems it doesn't take into account the positions 0, i and n. I'm sure the ifs are the problem; how can I fix them?
for (int i = 0; i < n - 1; i++) {
    int countp = 0; // count ascending
    int countn = 0; // count descending
    for (int j = 0; j <= i; j++) {
        if (currency[j] < currency[j + 1]) {
            countp++;
            System.out.print(j + " ");
        }
    }
    System.out.print("|| ");
    for (int j = i; j < n - 1; j++) {
        if (currency[j] > currency[j + 1]) {
            countn++;
            System.out.print(j + " ");
        }
    }
    System.out.println();
    if (countn + countp > maxcount) maxcount = countn + countp;
}
Firstly, you want to be able to compute the longest monotonic subsequence from one point to another. (Whether it is increasing or decreasing does not affect the problem much.) To do this, you may use dynamic programming. For example, to solve the problem given indexes 0 to i, you start by solving the problem from 0 to 0 (trivial!), then from 0 to 1, then from 0 to 2, and so on, each time recording (in an array) your best solution.
For example, here is some code in python to compute the longest non-decreasing sequence going from index 0 to index i. We use an array (bbest) to store the solution from 0 to j for all j's from 0 to i: that is, the length of the longest non-decreasing subsequence from 0 to j. (The strategy used is dynamic programming.)
def countasc(array, i):
    mmin = array[0]  # must start with mmin
    mmax = array[i]  # must end with mmax
    bbest = [1]      # going from 0 to 0, the best we can do is length 1
    for j in range(1, i + 1):  # j goes from 1 to i
        if array[j] > mmax:
            bbest.append(0)    # can't be used
            continue
        best = 0               # store best result
        for k in range(j - 1, -1, -1):  # count backward from j-1 to 0
            if array[k] > array[j]:
                continue       # can't be used
            if bbest[k] + 1 > best:
                best = bbest[k] + 1
        bbest.append(best)
    return bbest[-1]  # return the last value of the array bbest
or equivalently in Java (provided on request):

int countasc(float[] array, int i) {
    float mmin = array[0];
    float mmax = array[i];
    ArrayList<Integer> bbest = new ArrayList<Integer>();
    bbest.add(1);
    for (int j = 1; j <= i; ++j) {
        if (array[j] > mmax) {
            bbest.add(0);
            continue;
        }
        int best = 0;
        for (int k = j - 1; k >= 0; --k) {
            if (array[k] > array[j])
                continue;
            if (bbest.get(k).intValue() + 1 > best)
                best = bbest.get(k).intValue() + 1;
        }
        bbest.add(best);
    }
    return bbest.get(bbest.size() - 1);
}
You can write the same type of function to find the longest non-increasing sequence from i to n-1 (left as an exercise).
Note that countasc as written takes quadratic time in i, because of its nested loops.
Now, we can solve the actual problem:
Start with S, an empty array
For i an index that goes from 0 to n-1 :
compute the length of the longest increasing subsequence from 0 to i (see function countasc above)
compute the length of the longest decreasing subsequence from n-1 to i
add these two numbers, subtract one (the element at position i is counted by both), and add the result to S
return the max of S
As written this is cubic overall (n calls to a quadratic countasc), and there is a lot of redundancy in this approach. For example, for speed, you should probably not repeatedly call countasc, rebuilding the array bbest from scratch each time: it can be computed incrementally, which brings the whole thing down to quadratic. Possibly you can bring the complexity down to O(n log n) with some more work; a compact sketch follows.
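For reference, here is the quadratic version of the whole recipe in Python, using the standard LIS-ending-at-i formulation instead of the start-anchored countasc above (names are mine):

def longest_bitonic(array):
    n = len(array)
    asc = [1] * n   # asc[i]: longest non-decreasing subsequence ending at i
    desc = [1] * n  # desc[i]: longest non-increasing subsequence starting at i
    for i in range(n):
        for j in range(i):
            if array[j] <= array[i]:
                asc[i] = max(asc[i], asc[j] + 1)
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if array[j] <= array[i]:
                desc[i] = max(desc[i], desc[j] + 1)
    # The peak element i is counted by both asc and desc, hence the -1.
    return max(asc[i] + desc[i] - 1 for i in range(n))

rates = [4.5, 4.6, 4.3, 4.0, 4.8, 4.4, 4.7, 4.1]
print(longest_bitonic(rates))  # -> 5, e.g. 4.5 4.6 4.8 4.4 4.1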
A first step is to understand how to solve the related longest increasing subsequence problem. For this problem, there is a simple algorithm that is O(n^2) though the optimal algorithm is O(n log n). Understanding these algorithms should put you on the right track to a solution.
