I'm studying for an upcoming exam and practicing a problem that asks me to implement a greedy algorithm.
I am given an unsorted array of different weights, where 0 < weight_i for all i. I have to place all of them using the fewest piles possible. I cannot place two weights in a pile where the one on top is greater than the one below. I also have to respect the ordering of the weights, so they must be placed in order. There is no height limit for a pile.
An example: If I have the weights {53, 21, 40, 10, 18}, I cannot place 40 above 21 because the pile must be in descending order, and I cannot place 21 above 40 because that does not respect the order. An optimal solution would have pile 1: 53, 21, 10 and pile 2: 40, 18.
My general solution is to iterate through the array and always place each weight on the first pile it is allowed to go on. I believe this gives an optimal solution (although I haven't proved it yet); I could not find a counterexample. But this would be O(n^2), because in the worst case I have to check every pile for every element (I think).
My question is, is there a way to get this down to O(n) or O(nlogn)? If there is I'm just not seeing it and need some help.
Your algorithm will give a correct result.
Now note the following: when you visit the piles in order and stop at the first one where the next value can be stacked, the piles always remain ordered by their current top (last) value, in ascending order.
You can use this property to avoid iterating over the piles from left to right: instead, binary search among the pile tops to find the first pile that can take the next value.
This gives you O(nlogn) time complexity.
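A minimal Python sketch of that idea (names are mine): keep the pile tops in a list, which stays sorted ascending, and use bisect_left to find the leftmost pile whose top can take the next weight.

from bisect import bisect_left

def min_piles(weights):
    tops = []                     # tops[p] = current top weight of pile p, kept in ascending order
    for w in weights:
        p = bisect_left(tops, w)  # leftmost pile whose top is >= w, i.e. the first pile w may go on
        if p == len(tops):
            tops.append(w)        # no existing pile can take w: start a new pile
        else:
            tops[p] = w           # place w on pile p; ascending order of tops is preserved
    return len(tops)

For the example weights {53, 21, 40, 10, 18} this returns 2 piles.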
Believe it or not, the problem you describe is equivalent to computing the length of the longest increasing subsequence. There's a neat little greedy idea as to why.
Consider the longest increasing subsequence (LIS) of the array. Because its elements are ascending both in index and in value, no two of them can share a pile, so the minimum number of piles is at least the length of the LIS. The greedy placement described below achieves this bound, so the minimum number of piles is exactly the length of the LIS.
LIS is easily solvable in O(NlogN) using dynamic programming and a binary search.
Note that the algorithm you describe does the same thing as the code below: it finds the first pile you can put the item on (with binary search), or it creates a new pile. This serves both as a "proof" of correctness for your algorithm and as a way to reduce your complexity.
Let dp[i] be the minimum value that can end an increasing subsequence of length (i + 1). To reframe it in terms of your question, dp[i] is also the weight of the stone currently on top of the ith pile.
from bisect import bisect_left

def lengthOfLIS(nums):
    arr = []  # arr[k] = smallest value that currently ends an increasing subsequence of length k + 1
    for i in range(len(nums)):
        idx = bisect_left(arr, nums[i])  # first position whose value is >= nums[i]
        if idx == len(arr):
            arr.append(nums[i])          # nums[i] extends the longest subsequence found so far
        else:
            arr[idx] = nums[i]           # nums[i] becomes a smaller ending value for length idx + 1
    return len(arr)
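For the example in the first question, lengthOfLIS([53, 21, 40, 10, 18]) returns 2, which matches the two piles in the optimal arrangement.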
I have a problem similar to Huffman encoding. I'm not sure exactly how it can be solved, or whether it is a reverse Huffman encoding, but it definitely can be solved using a greedy approach.
Consider a set of lengths, each associated with a probability, i.e.
X={a1=(100,1/4),a2=(500,1/4),a3=(200,1/2)}
Obviously, the sum of all the probabilities = 1.
Arrange the lengths together on a line one after the other from a starting point.
For example: {a2,a1,a3} in that order from start to finish.
Define the cost of an element a_i as the total length from the starting point to the end of this element, multiplied by its probability.
So from the previous arrangement:
cost(a2) = (500)*(1/4)
cost(a1) = (500+100)*(1/4)
cost(a3) = (500+100+200)*(1/2)
Define the total cost as the sum of all costs, e.g. cost(X) = cost(a2) + cost(a1) + cost(a3). Give an algorithm that finds an arrangement that minimizes cost(X).
I've tried forming some alternative Huffman trees, but it doesn't work.
Sorting by probability will fail (consider X={(100,0.4),(300,0.6)}).
Sorting by length will also fail (consider X={(100,0.1),(300,0.9)}).
If anyone can help or hint towards an optimal solution algorithm, it would be great.
Consider what happens if you swap two adjacent elements. The calculations before and after the two elements are the same, so it just depends on the two elements.
Taking two adjacent elements in isolation, the two possible costs are P1*L1 + P2*(L1 + L2) (element 1 first) and P2*L2 + P1*(L1 + L2) (element 2 first). Subtracting and simplifying (if I've got the algebra right), element 1 should come first when L1/P1 < L2/P2. Check: this at least gets the right answer when L1 = 0.
So I think you want to sort the elements into increasing order of Li/Pi, because if that is not the case you can improve the answer by swapping adjacent elements.
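A small Python sketch of that rule (names are mine, and it assumes every probability is positive): sort by L/P, then walk the arrangement accumulating the prefix length to compute the total cost.

def min_cost_arrangement(items):
    # items: list of (length, probability) pairs; probabilities assumed positive
    order = sorted(items, key=lambda lp: lp[0] / lp[1])  # increasing L/P, per the exchange argument
    total = prefix = 0.0
    for length, prob in order:
        prefix += length          # distance from the start to the end of this element
        total += prefix * prob    # this element's contribution to cost(X)
    return order, total

For X = {(100, 1/4), (500, 1/4), (200, 1/2)} this puts a2 last and gives a total cost of 375; a1 and a3 tie at L/P = 400, so either may come first.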
I'm learning about finding optimal solutions in my algorithms class at the moment and one of the topics is about finding optimal substructures in problems.
My understanding of it so far is that we see if we can find an optimal solution for a problem of size n. If we can, then we increase the size of the problem by 1 so it's n+1. If the optimal solution for n+1 includes the entire optimal solution of n plus the new solution introduced by the +1, then we have optimal substructure.
I was given an example of using optimal substructure to find the longest increasing subsequence given a set of numbers. This is shown on the powerpoint slide here:
Can someone explain to me the notation on the bottom of the slide and give me a proof that this problem can be solved using optimal substructure?
Lower(i) denotes the set of positions j in S to the left of the current index i such that Sj is less than Si. In other words, elements Sj and Si are in increasing order, even though there may be other elements in between them.
The expression with the brace on the left explains how we construct the answer:
The first line says that if the set Lower(i) is empty (i.e. no number to the left of Si is smaller than it), then the answer is 1. This is the base case: a single number is treated as a one-element ascending subsequence.
The second line says that if Lower(i) is not empty, we take the maximum L value over its positions and add 1. In other words, we look to the left of Si for a smaller number Sj that ends the longest ascending subsequence among the positions in Lower(i), and append Si to that subsequence.
All of this is an incredibly long way of writing these six lines of pseudocode:
L[0] = 1
for i = 1..N-1
    L[i] = 1
    for j = 0..i-1
        if S[i] > S[j]          // is j a member of Lower(i)?
            L[i] = MAX(L[i], L[j]+1)
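For reference, a direct Python translation of the pseudocode above (a sketch; the final max is my addition, since the overall answer is the best LIS ending at any index):

def lis_length(S):
    n = len(S)
    L = [1] * n                    # L[i] = length of the longest ascending subsequence ending at i
    for i in range(1, n):
        for j in range(i):         # every j < i is a candidate member of Lower(i)
            if S[j] < S[i]:        # j is in Lower(i)
                L[i] = max(L[i], L[j] + 1)
    return max(L) if n else 0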
Just to add to dasblinkenlight's answer:
This is an iterative approach based on optimal substructure because at any given iteration i, we will figure out the length of the longest increasing subsequence ending at index i. Hence by the time we reach this iteration all corresponding LIS are already established for any index j < i. Using this information we find the answer for index i, i+1 and so on. Now the original question is asking for the LIS, but it has to have an ending index, so it is enough to take the maximum LIS among all indexes.
Such an approach is strongly related to mathematical induction and to Dynamic Programming, a quite broad programming/algorithm technique.
P.S.
There exists another, slightly more complicated approach, which allows computing the LIS more efficiently using binary search. The algorithm from the slides is O(n^2), while an O(n*log(n)) algorithm exists as well.
I came across the solution that uses Patience sort to obtain the length of the Longest Increasing Subsequence (LIS). http://www-stat.stanford.edu/~cgates/PERSI/papers/Longest.pdf, and here - http://en.wikipedia.org/wiki/Patience_sorting.
The proof that following the greedy strategy actually gives the correct length has 2 parts:
1) proves that the number of piles is at least equal to the length of the LIS.
2) proves that the number of piles using the greedy strategy is at most equal to the length of the LIS.
Thus by virtue of both 1) and 2), the solution gives the length of LIS correctly.
I get the explanation for 1), but I just cannot intuitively realize part 2). Can someone maybe use a different example to convince me that this is indeed true? Or you could even use a different proof technique.
I just read over the paper and I agree that the proof is a bit, um, terse. (I'd say that it's missing some pretty important steps!)
Intuitively, the idea behind the proof is to show that if you play with the greedy strategy and at the end of the game pick any card in a pile numbered p, you can find an increasing subsequence in the original array whose length is p. If you can prove this fact, then you can conclude that the maximum number of piles produced by the greedy strategy is the length of the longest increasing subsequence.
To formally prove this, we're going to argue that the following two invariants hold at each step:
The top cards in each pile, when read from left to right, are in sorted order.
At any point in time, every card in every pile is part of an increasing subsequence whose length is given by the pile index.
Part (1) is easy to see from the greedy strategy - every element is placed as far to the left as possible without violating the rule that smaller cards must always be placed on top of larger cards. This means that if a card is put into pile p, we are effectively taking a sorted sequence and reducing the value of the pth element to a value that's greater than whatever is in position p - 1 (if it exists).
To see part (2), we'll go inductively. The first placed card is put into pile 1, and it's also part of an increasing subsequence of length 1 (the card by itself). For the inductive step, assume that this property holds after placing n cards and consider the (n+1)st. Suppose that it ends up in pile p. If p = 1, then the claim still holds because this card forms an increasing subsequence of length 1 all by itself. Otherwise, p > 1. Now, look at the card on top of pile p - 1. We know that this card's value is less than the value of the card we just placed, since otherwise we would have placed the card on top of that pile. We also know that the card on top of that pile precedes the card we just placed in the original ordering, since we're playing the cards in order. By our existing invariant, we know that the card on top of pile p - 1 is part of an increasing subsequence of length p - 1, so that subsequence, with this new card added into it, forms an increasing subsequence of length p, as required.
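If it helps to see invariant (2) concretely, here is a small Python sketch (names are mine, and it assumes distinct card values) that plays the greedy strategy and records, for every card, the card that was on top of the pile immediately to its left at the moment of placement. Following those back-pointers from any card in pile p recovers an increasing subsequence of length p.

from bisect import bisect_left

def greedy_piles_with_witness(cards):
    if not cards:
        return 0, []
    tops = []       # top card of each pile, ascending left to right (invariant 1)
    top_idx = []    # index in cards of each pile's current top card
    back = {}       # back[i] = index of the card on top of the pile to the left when cards[i] was placed
    for i, c in enumerate(cards):
        p = bisect_left(tops, c)                      # leftmost pile whose top is >= c
        back[i] = top_idx[p - 1] if p > 0 else None   # the witness used in invariant (2)
        if p == len(tops):
            tops.append(c)                            # start a new pile
            top_idx.append(i)
        else:
            tops[p] = c                               # place c on pile p
            top_idx[p] = i
    # Walk the back-pointers from the top of the rightmost pile: this yields
    # an increasing subsequence whose length equals the number of piles.
    seq, i = [], top_idx[-1]
    while i is not None:
        seq.append(cards[i])
        i = back[i]
    return len(tops), seq[::-1]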
I'm re-reading Skiena's Algorithm Design Manual to catch up on some stuff I've forgotten since school, and I'm a little baffled by his descriptions of Dynamic Programming. I've looked it up on Wikipedia and various other sites, and while the descriptions all make sense, I'm having trouble figuring out specific problems myself. Currently, I'm working on problem 3-5 from the Skiena book. (Given an array of n real numbers, find the maximum sum in any contiguous subvector of the input.) I have an O(n^2) solution, such as described in this answer. But I'm stuck on the O(N) solution using dynamic programming. It's not clear to me what the recurrence relation should be.
I see that the contiguous subvectors form a set of sums, like so:
S = {a,b,c,d}
a a+b a+b+c a+b+c+d
b b+c b+c+d
c c+d
d
What I don't get is how to pick which one is the greatest in linear time. I've tried doing things like keeping track of the greatest sum so far, and if the current value is positive, add it to the sum. But when you have larger sequences, this becomes problematic because there may be stretches of negative numbers that would decrease the sum, but a later large positive number may bring it back to being the maximum.
I'm also reminded of summed area tables. You can calculate all the sums using only the cumulative sums: a, a+b, a+b+c, a+b+c+d, etc. (For example, if you need b+c, it's just (a+b+c) - (a).) But I don't see an O(N) way to get it.
Can anyone explain to me what the O(N) dynamic programming solution is for this particular problem? I feel like I almost get it, but that I'm missing something.
You should take a look at this PDF from back in school, hosted at http://castle.eiu.edu; here it is:
The explanation of the following pseudocode is also in the PDF.
There is a solution like this: first sort the array into some auxiliary memory, then apply the Longest Common Subsequence method to the original array and the sorted array, with the sum (not the length) of the common subsequence of the two arrays as the entry in the table (memoization). This can also solve the problem.
Total running time is O(nlogn)+O(n^2) => O(n^2)
Space is O(n) + O(n^2) => O(n^2)
This is not a good solution when memory comes into the picture. It is just to give a glimpse of how problems can be reduced to one another.
My understanding of DP is that it is about "making a table". In fact, the original meaning of "programming" in DP is simply about making tables.
The key is to figure out what to put in the table, or modern terms: what state to track, or what's the vertex key/value in DAG (ignore these terms if they sound strange to you).
How about choosing dp[i] to be the largest sub-sequence sum over the first i+1 elements of the array? For example, take the array [5, 15, -30, 10].
The second important key is "optimal substructure": we "assume" dp[i-1] already stores the largest sum over sub-sequences of the first i elements, which is why the only step at i is to decide whether or not to include a[i] in the sub-sequence:
dp[i] = max(dp[i-1], dp[i-1] + a[i])
The first term in max is to "not include a[i]", the second term is to "include a[i]". Notice, if we don't include a[i], the largest sum so far remains dp[i-1], which comes from the "optimal substructure" argument.
So the whole program looks like this (in Python):
a = [5,15,-30,10]
dp = [0]*len(a)
dp[0] = max(0,a[0])  # include a[0] or not
for i in range(1,len(a)):
    dp[i] = max(dp[i-1], dp[i-1]+a[i])  # for sub-sequence, choose to add or not
print(dp, max(dp))
The result: the largest sub-sequence sum is the largest item in the dp table after i iterates through the array a. But take a close look at dp: it holds all the information.
Since it only goes through the items in array a once, it's an O(n) algorithm.
This problem seems silly, because as long as a[i] is positive, we should always include it in the sub-sequence, because it will only increase the sum. This intuition matches the code
dp[i] = max(dp[i-1], dp[i-1] + a[i])
So the max. sum of sub-sequence problem is easy, and doesn't need DP at all. Simply,
sum = 0
for v in a:
    if v > 0:      # keep every positive value
        sum += v
However, what about the largest sum of a "continuous sub-array"? All we need to change is a single line of code:
dp[i] = max(dp[i-1]+a[i], a[i])
The first term is to "include a[i] in the continuous sub-array"; the second term decides to start a new sub-array beginning at a[i].
In this case, dp[i] is the max. sum continuous sub-array ending with index-i.
This is certainly better than the naive approach of nesting for j in range(0,i): inside the i-loop and summing all the possible sub-arrays, which costs O(n^2) sub-arrays times O(n) per sum.
One small caveat: because of the way dp[0] is set, if all items in a are negative, we won't select any of them. So for the max-sum continuous sub-array, we change that to
dp[0] = a[0]
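Putting the two changes together (the new recurrence plus dp[0] = a[0]), the whole O(n) routine for the max-sum continuous sub-array looks like this (essentially Kadane's algorithm, written here without the explicit dp list):

def max_contiguous_sum(a):
    best = cur = a[0]             # cur = best sum of a sub-array ending at the current index
    for x in a[1:]:
        cur = max(cur + x, x)     # extend the running sub-array or start a new one at x
        best = max(best, cur)     # best answer over all ending positions
    return best

For a = [5,15,-30,10] this returns 20 (the sub-array [5, 15]); if every element is negative it simply returns the largest single element.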
So, this is a common interview question. There's already a topic up, which I have read, but it's dead, and no answer was ever accepted. On top of that, my interests lie in a slightly more constrained form of the question, with a couple practical applications.
Given a two dimensional array such that:
Elements are unique.
Elements are sorted along the x-axis and the y-axis.
Neither sort predominates, so neither sort is a secondary sorting parameter.
As a result, the diagonal is also sorted.
All of the sorts can be thought of as moving in the same direction. That is to say that they are all ascending, or that they are all descending.
Technically, I think as long as you have a >/=/< comparator, any total ordering should work.
Elements are numeric types, with a single-cycle comparator.
Thus, memory operations are the dominating factor in a big-O analysis.
How do you find an element? Only worst case analysis matters.
Solutions I am aware of:
A variety of approaches that are:
O(nlog(n)), where you approach each row separately.
O(nlog(n)) with strong best and average performance.
One that is O(n+m):
Start in a non-extreme corner, which we will assume is the bottom right.
Let the target be J. Cur Pos is M.
If M is greater than J, move left.
If M is less than J, move up.
If you can do neither, you are done, and J is not present.
If M is equal to J, you are done.
Originally found elsewhere, most recently stolen from here; a sketch of this corner walk is given after this list.
And I believe I've seen one with a worst-case O(n+m) but an optimal case of nearly O(log(n)).
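For what it's worth, here is a minimal Python sketch of the O(n+m) corner walk listed above (names are mine). In the usual row-major layout the "non-extreme" corner is the top-right one (the question's axes put it at the bottom right): it is the largest element of its row and the smallest of its column, so each comparison discards a whole row or a whole column.

def corner_walk_search(matrix, target):
    # matrix: rows and columns both sorted ascending; returns (row, col) or None
    if not matrix or not matrix[0]:
        return None
    i, j = 0, len(matrix[0]) - 1          # start at the top-right corner
    while i < len(matrix) and j >= 0:
        cur = matrix[i][j]
        if cur == target:
            return (i, j)
        if cur > target:
            j -= 1                        # everything below cur in this column is larger: discard the column
        else:
            i += 1                        # everything left of cur in this row is smaller: discard the row
    return None                           # walked off the matrix: target is absent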
What I am curious about:
Right now, I have proved to my satisfaction that the naive partitioning attack always devolves to nlog(n). Partitioning attacks in general appear to have an optimal worst case of O(n+m), and most do not terminate early in cases of absence. I was also wondering, as a result, whether an interpolation probe might not be better than a binary probe, and thus it occurred to me that one might think of this as a set intersection problem with a weak interaction between sets. My mind immediately went to Baeza-Yates intersection, but I haven't had time to draft an adaptation of that approach. However, given my suspicion that the optimality of an O(N+M) worst case is provable, I thought I'd just go ahead and ask here, to see if anyone could bash together a counter-argument, or pull together a recurrence relation for interpolation search.
Here's a proof that it has to be at least Omega(min(n,m)). Let n >= m. Then consider the matrix which has all 0s at (i,j) where i+j < m, all 2s where i+j >= m, except for a single (i,j) with i+j = m which has a 1. This is a valid input matrix, and there are m possible placements for the 1. No query into the array (other than the actual location of the 1) can distinguish among those m possible placements. So you'll have to check all m locations in the worst case, and at least m/2 expected locations for any randomized algorithm.
One of your assumptions was that matrix elements have to be unique, and my construction doesn't satisfy that. It is easy to fix, however: just pick a big number X = n*m, replace all 0s with distinct numbers less than X, all 2s with distinct numbers greater than X, and the 1 with X.
And because it is also Omega(lg n) (counting argument), it is Omega(m + lg n) where n>=m.
An optimal O(m+n) solution is to start at the top-left corner, which has the minimal value. Move diagonally down and to the right until you hit an element whose value is >= the value of the given element. If that element's value equals the value of the given element, return found as true.
Otherwise, from here we can proceed in two ways.
Strategy 1:
Move up in the column and search for the given element until we reach the end. If found, return found as true
Move left in the row and search for the given element until we reach the end. If found, return found as true
return found as false
Strategy 2:
Let i denote the row index and j denote the column index of the diagonal element we have stopped at. (Here, we have i = j, BTW). Let k = 1.
Repeat the steps below while i-k >= 0:
Search if a[i-k][j] is equal to the given element. if yes, return found as true.
Search if a[i][j-k] is equal to the given element. if yes, return found as true.
Increment k
An example matrix:

1  2  4  5  6
2  3  5  7  8
4  6  8  9 10
5  8  9 10 11