Efficient algorithm for finding a set of non-adjacent subarrays maximizing their total sum

I came across this problem on a programming contest site and have been trying different approaches for a few days, but none of them seem to be efficient enough.
Here is the question: You are given a large array of integers and a number k. The goal is to divide the array into subarrays, each containing no more than k elements, such that the sum of all the elements in all the subarrays is maximal. Another condition is that none of these subarrays can be adjacent to each other. In other words, we have to drop a few terms from the original array.
It's been bugging me for a while, and I would like to hear your perspective on approaching this problem.

Dynamic programming should do the trick. A short explanation of why:
The key property of a problem susceptible to dynamic programming is that an optimal solution to the problem (here: the whole array) can always be expressed as a composition of two optimal solutions to subproblems (here: two subarrays). Not every split needs to have this property; it is sufficient that one such split exists for any optimal solution.
Clearly, if you split an optimal solution between subarrays (at an element that has been dropped), then the subsolutions are optimal within both subarrays.
The algorithm:
Try every element of the array in turn as the splitting element, looking for the one that yields the best result. Solve the problem recursively for both parts of the array (the recursion stops when the subarray is no longer than k). Memoize solutions to avoid exponential time (the recursion will obviously encounter the same subarray many times).
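For concreteness, here is a minimal sketch of that memoized splitting idea in Python, assuming non-negative values (so a subarray of length <= k is simply taken whole); the function names are mine. With memoization there are O(n^2) subarrays and O(n) split points each, so this runs in O(n^3):

from functools import lru_cache

def best_total(a, k):
    # Sketch of the splitting recursion; assumes a[i] >= 0 for all i.
    @lru_cache(maxsize=None)
    def best(i, j):                      # optimum for the subarray a[i..j]
        if i > j:
            return 0
        if j - i + 1 <= k:               # short enough: take the whole subarray
            return sum(a[i:j + 1])
        # Some element must be dropped; try every element m as the split point.
        return max(best(i, m - 1) + best(m + 1, j) for m in range(i, j + 1))
    return best(0, len(a) - 1)

For example, best_total([3, 4, 5], 2) returns 9 (drop the 3, keep [4, 5]).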

This is not a solution, but a clue.
Consider solving the following problem:
From an array X, choose a subset of elements such that no two of them are adjacent to each other and their sum is maximal.
Now, the above problem is the special case of your problem where k=1. Think about how you can extend the solution to the general case. Let me know if you don't know how to solve the simpler case.
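For reference, one standard way to solve the k=1 case looks like the following sketch (the incl/excl names are mine): track the best sum that includes the current element and the best sum that excludes it.

def max_nonadjacent_sum(x):
    incl, excl = 0, 0                 # best sum with / without the current element
    for v in x:
        # Including v forces the previous element to be excluded.
        incl, excl = excl + v, max(incl, excl)
    return max(incl, excl)

For example, max_nonadjacent_sum([3, 4, 5]) returns 8 (pick 3 and 5).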

I don't have time to explain why this works, but it should be the accepted answer:

def maxK(a, k):
    # myList keeps k+1 states: one per possible length (0..k) of the run
    # of kept elements ending at the current position.
    states = k + 1
    myList = [0] * states
    for i in range(len(a)):
        maxV = max(myList)                      # best total if a[i] is dropped
        myList = [a[i] + v for v in myList]     # keeping a[i] extends every run by one
        # Recycle the slot whose run would now exceed k: it becomes the
        # "a[i] dropped" state. The rotation cycles over all k+1 slots.
        myList[(states - i) % states] = maxV
    return max(myList)

This works with negative numbers too. It is linear in len(a) times k. The language I used is Python because, at this level, it can be read as if it were pseudocode.

Related

Reverse Huffman's algorithm?

I have a problem similar to Huffman encoding; I'm not sure exactly how it can be solved, or whether it is a reverse of Huffman encoding. But it definitely can be solved using a greedy approach.
Consider a set of lengths, each associated with a probability, i.e.
X={a1=(100,1/4),a2=(500,1/4),a3=(200,1/2)}
Obviously, the sum of all the probabilities = 1.
Arrange the lengths together on a line one after the other from a starting point.
For example: {a2,a1,a3} in that order from start to finish.
Define the cost of an element a_i as the total length from the starting point to the end of this element, multiplied by its probability.
So from the previous arrangement:
cost(a2) = (500)*(1/4)
cost(a1) = (500+100)*(1/4)
cost(a3) = (500+100+200)*(1/2)
Define the total cost as the sum of all costs, e.g. cost(X) = cost(a2) + cost(a1) + cost(a3). Give an algorithm that finds an arrangement that minimizes cost(X).
I've tried forming some alternative Huffman trees, but it doesn't work.
Sorting by probability will fail (consider X={(100,0.4),(300,0.6)}).
Sorting by length will also fail (consider X={(100,0.1),(300,0.9)}).
If anyone can help or hint towards an optimal solution algorithm, it would be great.
Consider what happens if you swap two adjacent elements. The calculations before and after the two elements are the same, so it just depends on the two elements.
Taking two elements in isolation, the costs are P1*L1 + P2*(L1 + L2) with element 1 first, and P2*L2 + P1*(L1 + L2) with element 2 first. If you subtract these and simplify (assuming I have the algebra right), you want element 1 first when L1/P1 < L2/P2. Check: this at least gets the right answer when L1 = 0.
So I think you want to sort the elements into increasing order of Li/Pi, because if that is not the case you can improve the answer by swapping adjacent elements.
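A minimal sketch of that rule in Python (the function and variable names are mine): sort by L/P in increasing order, then accumulate prefix lengths to compute the total cost.

def min_total_cost(items):
    # items: list of (length, probability) pairs with positive probabilities.
    ordered = sorted(items, key=lambda lp: lp[0] / lp[1])   # increasing L/P
    total = 0.0
    end = 0.0        # distance from the start to the current element's end
    for length, prob in ordered:
        end += length
        total += end * prob
    return total

For the example X above, min_total_cost([(100, 0.25), (500, 0.25), (200, 0.5)]) orders the two ratio-400 elements before a2 and returns 375.0.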

Minimal non-contiguous sequence of exactly k elements

The problem I'm having can be reduced to:
Given an array of N positive numbers, find the non-contiguous sequence of exactly K elements with the minimal sum.
Ok-ish: report the sum only. Bonus: the picked elements can be identified (at least one set of indices, if many can realize the same sum).
(in layman terms: pick any K non-neighbouring elements from N values so that their sum is minimal)
Of course, 2*K <= N+1 (otherwise no solution is possible), and the problem is insensitive to positive/negative values (just shift the array values by MIN=min(A...), then add K*MIN back to the answer).
What I've got so far (the naive approach):
select the K+2 indices of the values closest to the minimum. I'm not sure about this; for K=2 this seems to be required to cover all the particular cases, but I don't know if it is sufficient for K>2**
brute-force the minimal sum from the values at the indices resulting from the previous step, respecting the non-contiguity criterion - if I'm right and K+2 is enough, I can live with brute-forcing a (K+1)*(K+2) solution space but, as I said, I'm not sure K+2 is enough for K>2 (if in fact 2*K points are necessary, then brute-forcing goes out the window - the binomial coefficient C(2*K, K) grows prohibitively fast)
Any clever idea of how this can be done with minimal time/space complexity?
** for K=2, a non-trivial example where the 4 values closest to the absolute minimum are necessary to select the objective sum: [4,1,0,1,4,3,4] - one cannot use the 0 value to build the minimal sum, as that would break the non-contiguity criterion.
PS - if you feel like showing code snippets, C/C++ and/or Java will be appreciated, but any language with decent syntax or pseudo-code will do (I reckon "decent syntax" excludes Perl, doesn't it?)
Let's assume the input numbers are stored in an array a[N].
The generic approach is DP: f(n, k) = min(f(n-1, k), f(n-2, k-1) + a[n]), where f(n, k) is the minimal sum of k non-adjacent elements chosen from the first n.
It takes O(N*K) time, and there are two options:
O(N*K) space for a lazy backtracking recursive solution
O(K) space for a forward iteration
In the special case of big K there is another possibility:
use recursive backtracking
instead of a helper array of N*K size, use map(n, map(k, pair(answer, list(answer indexes))))
save the answer and the list of indexes for that answer
instantly return MAX_INT if 2*k > n+1 (no valid placement exists)
This way you'll get lower time than O(N*K) for K close to N/2, something like O(N*log(N)). It increases up to O(N*log(N)*K*log(K)) for small K, so the decision between the general approach and the special-case algorithm is important.
There should be a dynamic programming approach to this.
Work along the array from left to right. At each point i, for each value of j from 1..k, find the value of the best answer for picking j non-contiguous elements from 1..i. You can work out the answers at i by looking at the answers at i-1 and i-2 and the value of array[i]. The answer you want is the answer at n for an array of length n. Once you have done this, you can work out which elements were picked by backtracking along the array, determining whether the best decision at each point involved selecting the array element there, and therefore whether it came from answer[i-1][j] or answer[i-2][j-1]. A sketch follows below.
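Here is a minimal sketch of that DP in Python (names are mine). It uses the recurrence f(i, j) = min(f(i-1, j), f(i-2, j-1) + a[i-1]) over prefixes and keeps the full O(N*K) table so the picked indices can be recovered; two rolling rows would give O(K) space if only the sum is needed.

INF = float('inf')

def min_sum_k_nonadjacent(a, k):
    n = len(a)
    # f[i][j]: minimal sum of j pairwise non-adjacent elements among a[0..i-1]
    f = [[INF] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        f[i][0] = 0
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            skip = f[i - 1][j]                    # a[i-1] not picked
            # Picking a[i-1] forces a[i-2] out, hence f[i-2][j-1].
            prev = f[i - 2][j - 1] if i >= 2 else (0 if j == 1 else INF)
            f[i][j] = min(skip, prev + a[i - 1])
    if f[n][k] == INF:
        return INF, None                          # no valid selection exists
    # Backtrack to recover one optimal index set.
    picked, i, j = [], n, k
    while j > 0:
        if f[i][j] == f[i - 1][j]:                # a[i-1] was skipped
            i -= 1
        else:                                     # a[i-1] was picked
            picked.append(i - 1)
            i, j = i - 2, j - 1
    return f[n][k], picked[::-1]

For the footnoted example, min_sum_k_nonadjacent([4, 1, 0, 1, 4, 3, 4], 2) returns (2, [1, 3]).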

Can I get an explanation for how optimal substructure is used to find the longest increasing subsequence in this powerpoint slide?

I'm learning about finding optimal solutions in my algorithms class at the moment and one of the topics is about finding optimal substructures in problems.
My understanding of it so far is that we see if we can find an optimal solution for a problem of size n. If we can, then we increase the size of the problem by 1 so it's n+1. If the optimal solution for n+1 includes the entire optimal solution of n plus the new solution introduced by the +1, then we have optimal substructure.
I was given an example of using optimal substructure to find the longest increasing subsequence given a set of numbers. This is shown on the PowerPoint slide here:
Can someone explain to me the notation on the bottom of the slide and give me a proof that this problem can be solved using optimal substructure?
Lower(i) is the set of positions j in S to the left of the current index i such that Sj is less than Si. In other words, elements Sj and Si are in increasing order, even though there may be other elements in between them.
The expression with the brace on the left explains how we construct the answer; written out, it is L(i) = 1 if Lower(i) is empty, and L(i) = 1 + max{ L(j) : j in Lower(i) } otherwise:
The first line says that if the set Lower(i) is empty (i.e. Si is the smallest number in the sequence so far), then the answer is 1. This is the base case: a single number is treated as a one-element sequence.
The second line says that if Lower(i) is not empty, then we pick the position j in Lower(i) with the largest L(j), and add 1. In other words, we look to the left of the number Si for a smaller number Sj that ends the longest ascending subsequence among the positions in Lower(i).
All of this is an incredibly long way of writing these six lines of pseudocode:

L[0] = 1
for i = 1..N
    L[i] = 1
    for j = 0..i-1
        if S[i] > S[j]          // member of Lower(i)?
            L[i] = MAX(L[i], L[j] + 1)
Just to add to @dasblinkenlight's answer:
This is an iterative approach based on optimal substructure because at any given iteration i we compute the length of the longest increasing subsequence ending at index i. By the time we reach this iteration, the corresponding LIS values are already established for every index j < i. Using this information we find the answer for index i, then i+1, and so on. The original question asks for the LIS of the whole sequence, but every LIS has to end at some index, so it is enough to take the maximum among all indexes.
This approach is closely related to mathematical induction and to the broad programming/algorithm method of dynamic programming.
P.S.
There exists another, slightly more complicated approach which computes the LIS more efficiently using binary search. The algorithm from the slides is O(n^2), while an O(n*log(n)) algorithm exists as well.
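For completeness, a sketch of that O(n*log(n)) approach (the patience-sorting idea; names are mine):

from bisect import bisect_left

def lis_length(s):
    # tails[m] = smallest possible tail of an increasing subsequence of length m+1
    tails = []
    for x in s:
        pos = bisect_left(tails, x)     # first tail >= x
        if pos == len(tails):
            tails.append(x)             # x extends the longest subsequence so far
        else:
            tails[pos] = x              # x yields a smaller tail for length pos+1
    return len(tails)

Each element costs one binary search, hence O(n*log(n)) overall.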

Knapsack with unique elements

I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with is generating all the combinations of 1 to n elements and trying the sums out (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an element makes this problem different from the coin change problem and from the general knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
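A minimal sketch of that recurrence in Python (names are mine): f[a][b] is the maximum sum <= b achievable with the first a integers, and the target is hit exactly when f[n][T] == T.

def knapsack_exact(s, T):
    n = len(s)
    # f[a][b]: maximum sum <= b using a subset of s[:a]
    f = [[0] * (T + 1) for _ in range(n + 1)]
    for a in range(1, n + 1):
        for b in range(T + 1):
            f[a][b] = f[a - 1][b]                 # skip s[a-1]
            if s[a - 1] <= b:                     # take s[a-1] (at most once)
                f[a][b] = max(f[a][b], f[a - 1][b - s[a - 1]] + s[a - 1])
    return f[n][T] == T

With the question's example, knapsack_exact([1, 2, 5, 9, 10], 22) is True and knapsack_exact([1, 2, 5, 9, 10], 23) is False. Processing each element exactly once is what prevents reuse.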
I did find a solution, but with O(T*n^2) time complexity: build the table from the bottom up. In other words, sort the array, start with the greatest number available, and make a table where the columns are the target values and the rows are the provided numbers. For each cell we need to consider the sum of all possible ways of making i - cost[j] + j, which takes n^2 time, and this is multiplied by the target.

How can I find the maximum sum of a sub-sequence using dynamic programming?

I'm re-reading Skiena's Algorithm Design Manual to catch up on some stuff I've forgotten since school, and I'm a little baffled by his descriptions of Dynamic Programming. I've looked it up on Wikipedia and various other sites, and while the descriptions all make sense, I'm having trouble figuring out specific problems myself. Currently, I'm working on problem 3-5 from the Skiena book. (Given an array of n real numbers, find the maximum sum in any contiguous subvector of the input.) I have an O(n^2) solution, such as described in this answer. But I'm stuck on the O(N) solution using dynamic programming. It's not clear to me what the recurrence relation should be.
I see that the subsequences form a set of sums, like so:
S = {a,b,c,d}
a a+b a+b+c a+b+c+d
b b+c b+c+d
c c+d
d
What I don't get is how to pick which one is the greatest in linear time. I've tried doing things like keeping track of the greatest sum so far, and if the current value is positive, add it to the sum. But when you have larger sequences, this becomes problematic because there may be stretches of negative numbers that would decrease the sum, but a later large positive number may bring it back to being the maximum.
I'm also reminded of summed area tables. You can calculate all the sums using only the cumulative sums: a, a+b, a+b+c, a+b+c+d, etc. (For example, if you need b+c, it's just (a+b+c) - (a).) But I don't see an O(N) way to get the maximum from them.
Can anyone explain to me what the O(N) dynamic programming solution is for this particular problem? I feel like I almost get it, but that I'm missing something.
You should take a look at this PDF, back from school, at http://castle.eiu.edu; here it is:
The explanation of the pseudocode is also in the PDF.
There is a solution like this: first sort the array into some auxiliary memory, then apply the Longest Common Subsequence method to the original array and the sorted array, with the sum (not the length) of the common subsequence in the two arrays as the entry in the table (memoization). This can also solve the problem.
Total running time is O(n*log(n)) + O(n^2) => O(n^2)
Space is O(n) + O(n^2) => O(n^2)
This is not a good solution when memory comes into the picture. It is just to give a glimpse of how problems can be reduced to one another.
My understanding of DP is about "making a table". In fact, the original meaning of "programming" in DP is simply about making tables.
The key is to figure out what to put in the table, or in modern terms: what state to track, or what the vertex key/value in the DAG is (ignore these terms if they sound strange to you).
How about choosing dp[i] to be the largest sum of a sub-sequence ending at index i of the array? For example, take the array [5, 15, -30, 10].
The second important key is "optimal substructure": we "assume" dp[i-1] already stores the largest sum for sub-sequences ending at index i-1; then the only step at i is to decide whether or not to include a[i] in the sub-sequence
dp[i] = max(dp[i-1], dp[i-1] + a[i])
The first term in max is to "not include a[i]", the second term is to "include a[i]". Notice, if we don't include a[i], the largest sum so far remains dp[i-1], which comes from the "optimal substructure" argument.
So the whole program looks like this (in Python):
a = [5, 15, -30, 10]
dp = [0] * len(a)
dp[0] = max(0, a[0])  # include a[0] or not
for i in range(1, len(a)):
    dp[i] = max(dp[i-1], dp[i-1] + a[i])  # for a sub-sequence, choose to add a[i] or not
print(dp, max(dp))
The result: the largest sum of a sub-sequence is the largest item in the dp table after i has iterated through the array a. But take a close look at dp: it holds all the information.
Since it only goes through the items in array a once, it's an O(n) algorithm.
This problem seems silly, because as long as a[i] is positive we should always include it in the sub-sequence, since it can only increase the sum. This intuition matches the code
dp[i] = max(dp[i-1], dp[i-1] + a[i])
So the max-sum sub-sequence problem is easy, and doesn't need DP at all. Simply:

total = 0
for v in a:
    if v > 0:
        total += v
However, what about the largest sum of the "contiguous sub-array" problem? All we need to change is a single line of code:

dp[i] = max(dp[i-1] + a[i], a[i])

The first term includes a[i] in the contiguous sub-array; the second term decides to start a new sub-array beginning at a[i].
In this case, dp[i] is the max-sum contiguous sub-array ending at index i.
This is certainly better than the naive O(n^2)*O(n) approach of running for j in range(0, i) inside the i-loop and summing all the possible sub-arrays.
One small caveat: because of the way dp[0] is set, if all items in a are negative we won't select any. So for the max-sum contiguous sub-array, we change that to
dp[0] = a[0]
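Putting the pieces together, a compact sketch of the contiguous version (this is Kadane's algorithm; the dp table collapses to two scalars):

def max_subarray(a):
    # Largest sum of a contiguous sub-array, O(n); assumes a is non-empty.
    best_here = a[0]            # max-sum sub-array ending at the current index
    best = a[0]                 # max over all indexes seen so far
    for x in a[1:]:
        best_here = max(best_here + x, x)   # extend the run, or start fresh at x
        best = max(best, best_here)
    return best

For example, max_subarray([5, 15, -30, 10]) returns 20 (the prefix [5, 15]).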
