Problem:
Given a sorted array A of length n; each number is the location of a house in a 1D line "city".
Given a number k<=n, you need to place k "supermarkets" on the 1D city.
For every element a in A, its min distance is defined as the distance to the nearest supermarket c: |a - c|.
The cost of a city is defined as the max of all min distances.
You need to find the minimum (optimal) cost for a given A of length n and k <= n.
I can't find a solution for this problem. The solution should use dynamic programming. I'm thinking about how to write the recursive formula, and I think I have already come up with the base cases:
if k = n, the result is obviously 0, since you can place a supermarket at every house
if k = 1, I think the solution should be: (A[n] - A[1])/2.
But I can't come up with the actual formula (or the whole dynamic program). Also, I can't seem to find a name for this problem; I didn't find any other example of this exact problem online.
To minimize the maximum distance from k supermarkets, you divide the houses into consecutive groups so that you minimize the maximum distance between the starting and ending houses in each group. Then you just put a supermarket in the middle of each group.
Solving the problem this way makes it much easier for dynamic programming, since it removes the continuous variable of supermarket position.
I came up with this recursive function for the problem:
if there are at least as many supermarkets as houses, the answer is 0
if there is only one supermarket, we place it midway between the two extreme houses, giving (A[n] - A[1])/2
Otherwise:
for every possible first group A[i..j], take the maximum of that group's cost, (A[j] - A[i])/2, and the optimal cost for the remaining houses with one fewer supermarket; then take the minimum over all choices of j.
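For illustration, here is a minimal memoized-recursion sketch of that formula in Python (the function name and interface are mine; A is assumed sorted, as in the question):

from functools import lru_cache

def min_max_cost(A, k):
    # A: sorted house positions; k: number of supermarkets.
    # Returns the minimum possible value of the maximum distance
    # from a house to its nearest supermarket.
    n = len(A)
    if k >= n:
        return 0.0  # one supermarket per house

    @lru_cache(maxsize=None)
    def solve(i, j):
        # Minimum cost to serve houses A[i:] with j supermarkets.
        if j == 1:
            return (A[n - 1] - A[i]) / 2  # midway between the extremes
        best = float('inf')
        # The first supermarket serves the consecutive group A[i..m].
        for m in range(i, n - j + 1):
            group = (A[m] - A[i]) / 2
            best = min(best, max(group, solve(m + 1, j - 1)))
        return best

    return solve(0, k)

print(min_max_cost([0, 4, 10], 2))  # 2.0: groups {0, 4} and {10}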
Related
I have to reorder a sequence of elements based on their pairwise similarity (expressed by a coefficient) so that each element is as similar as possible to each of its neighbors. I am looking for an algorithm rather than code.
Example with 10 elements and similarity coefficients calculated for each pair of the elements below:
The excel file can be find here : https://1drv.ms/x/s!AtmZN4-kjgrPms99fqgaDwAS_F4uYw
What I have tried:
1. Find the pair with the highest coefficient. In the example: 0.98 for T3 (left end) and T5 (right end).
2. Find the maximum coefficient between the left end and the remaining elements.
3. Find the maximum coefficient between the right end and the remaining elements.
4. Take the maximum of 2. and 3.
5. If the maximum comes from 2., add the element with that coefficient on the left; otherwise, add the element with that coefficient on the right.
6. Repeat steps 2-5 until no elements are left.
Here is the result:
The result isn't bad. One disadvantage I see is that 0.99 > 0.98 is treated the same way as 0.99 > 0.01.
The second option I thought about was maximizing the sum of coefficients between all neighbors, but I don't really know where to start, especially if there are significantly more than 10 elements. Moreover, it could result in a "flatter" order: better similarities overall, but some extremely similar elements could be placed far from each other.
Being really new to this kind of problem, I am pretty sure this is a rather standard issue with existing solutions. Could you please point me to those?
Thank you!
After researching I have found that my problem can be seen as the "Travelling Salesman Problem" (TSP). More here: https://en.wikipedia.org/wiki/Travelling_salesman_problem
To apply it you can see "elements" in my example as "cities" in TSP and (1-Similarity coefficient) as "distances".
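As a sketch of this mapping, the following nearest-neighbor heuristic grows a path exactly as described in the question, using distance = 1 - similarity (the sim dictionary keyed by unordered pairs is an assumed input format, and at least two elements are assumed):

def order_by_similarity(elements, sim):
    # sim maps frozenset({a, b}) -> similarity coefficient (assumed format).
    # Distance for the TSP view is 1 - similarity.
    def dist(a, b):
        return 1.0 - sim[frozenset((a, b))]

    # Step 1: start from the most similar pair.
    a, b = max(
        ((x, y) for i, x in enumerate(elements) for y in elements[i + 1:]),
        key=lambda p: sim[frozenset(p)],
    )
    path, remaining = [a, b], set(elements) - {a, b}
    # Steps 2-5: repeatedly attach the closest remaining element
    # to whichever end of the path it fits best.
    while remaining:
        left = min(remaining, key=lambda e: dist(path[0], e))
        right = min(remaining, key=lambda e: dist(path[-1], e))
        if dist(path[0], left) <= dist(path[-1], right):
            path.insert(0, left)
            remaining.discard(left)
        else:
            path.append(right)
            remaining.discard(right)
    return path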
I'm learning about finding optimal solutions in my algorithms class at the moment and one of the topics is about finding optimal substructures in problems.
My understanding of it so far is that we see if we can find an optimal solution for a problem of size n. If we can, then we increase the size of the problem by 1 so it's n+1. If the optimal solution for n+1 includes the entire optimal solution of n plus the new solution introduced by the +1, then we have optimal substructure.
I was given an example of using optimal substructure to find the longest increasing subsequence given a set of numbers. This is shown on the powerpoint slide here:
Can someone explain to me the notation on the bottom of the slide and give me a proof that this problem can be solved using optimal substructure?
Lower(i) means the set of positions j in S to the left of the current index i such that S[j] is less than S[i]. In other words, elements S[j] and S[i] are in increasing order, even though there may be other elements in between them.
The expression with the brace on the left explains how we construct the answer:
The first line says that if the set Lower(i) is empty (i.e. no number to the left of S[i] is smaller than it), then the answer is 1. This is the base case: a single number is treated as a one-element subsequence.
The second line says that if Lower(i) is not empty, we pick the position j in Lower(i) with the maximum value L[j], and add 1. In other words, we look to the left of S[i] for a smaller number S[j] that ends the longest ascending subsequence so far, and extend that subsequence with S[i].
All of this is an incredibly long way of writing these six lines of pseudocode:
L[0] = 1
for i = 1..N-1
    L[i] = 1
    for j = 0..i-1
        if S[i] > S[j] // is j a member of Lower(i)?
            L[i] = MAX(L[i], L[j]+1)
Just to add to @dasblinkenlight's answer:
This is an iterative approach based on optimal substructure: at each iteration i we determine the length of the longest increasing subsequence ending at index i, and by the time we reach that iteration, the corresponding LIS lengths are already established for every index j < i. Using this information we find the answer for index i, then i+1, and so on. The original question asks for the LIS of the whole sequence; since every LIS must end at some index, it is enough to take the maximum over all indexes.
This approach is closely related to mathematical induction and to the broad programming/algorithm technique of dynamic programming.
P.S.
There exists another, slightly more complicated approach, which computes the LIS more efficiently using binary search. The algorithm from the slides is O(n^2), while an O(n log n) algorithm exists as well.
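For reference, a minimal Python sketch of that O(n log n) approach using binary search (tails[t] holds the smallest possible last element of an increasing subsequence of length t+1 seen so far):

import bisect

def lis_length(S):
    # O(n log n) length of the longest strictly increasing subsequence.
    tails = []
    for x in S:
        t = bisect.bisect_left(tails, x)
        if t == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[t] = x      # x is a better (smaller) tail for length t+1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4 (e.g. 3, 4, 5, 6)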
Say I have a Group data structure which contains a list of Element objects, such that each group has a unique set of elements:
public class Group
{
    public List<Element> Elements;
}
and say I have a list of populations who require certain elements, in such a way that each population has a unique set of required elements:
public class Population
{
    public List<Element> RequiredElements;
}
I have an unlimited quantity of each defined Group, i.e. they are not consumed by populations.
Say I am looking at a particular Population. I want to find the best possible match of groups such that there are minimal excess elements and no unmatched elements.
For example: I have a population which needs wood, steel, grain, and coal. The only groups available are {wood, herbs}, {steel, coal, oil}, {grain, steel}, and {herbs, meat}.
The last group, {herbs, meat}, isn't required at all by my population, so it isn't used. All the others are needed, but herbs and oil are not required, so they are wasted. Furthermore, steel occurs twice in the minimum set, so one lot of steel is also wasted. The best match in this example has a wastage of 3.
So for a few hundred Population objects, I need to find the minimum wastage best match and compute how many elements are wasted.
How do I even begin to solve this? Once I have found a match, counting the wastage is trivial. Finding the match in the first place is hard. I could enumerate all possibilities but with a few thousand populations and many hundreds of groups, it's quite a task. Especially considering this whole thing sits inside each iteration of a simulated annealing algorithm.
I'm wondering whether I can formulate the whole thing as a mixed-integer program and call a solver like GLPK at each iteration.
I hope I have explained the problem correctly. I can clarify anything that's unclear.
Here's my binary program, for those of you interested...
x is the decision vector, an element of {0,1}, which says that the population in question does/doesn't receive from group i. There is an entry for each group.
b is the column vector, an element of {0,1}, which says which resources the population in question does/doesn't need. There is an entry for each resource.
A is a matrix, an element of {0,1}, which says what resources are in what groups.
The program is:
Minimise: ((Ax - b)' * 1-vector) + (x' * 1-vector);
Subject to: Ax >= b;
The constraint just says that all required resources must be satisfied. The objective is to minimise all excess and the total number of groups used. (i.e. 0 excess with 1 group used is better than 0 excess with 5 groups used).
You can formulate an integer program for each population P as follows. Use a binary variable x_j to denote whether group j is chosen or not. Let A be a binary matrix such that A_ij is 1 if and only if item i is present in group j. Then the integer program is:

min Σ_{i,j} A_ij x_j
s.t. Σ_j A_ij x_j >= 1 for all i in P
x_j ∈ {0,1} for all j
Note that you can obtain the minimum wastage by subtracting |P| from the optimal solution of the above IP.
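For illustration, here is a hypothetical sketch of this IP in Python, using PuLP purely as an example solver interface (the question mentions GLPK; the input format for A and required is an assumption):

import pulp

def min_waste_groups(A, required):
    # A[i][j] == 1 iff item i is present in group j (assumed input format);
    # required lists the item indices i in P. Returns the chosen group indices.
    n_items, n_groups = len(A), len(A[0])
    prob = pulp.LpProblem("min_waste", pulp.LpMinimize)
    x = [pulp.LpVariable("x%d" % j, cat="Binary") for j in range(n_groups)]
    # Objective: total number of items delivered; waste = this - |P|.
    prob += pulp.lpSum(A[i][j] * x[j]
                       for i in range(n_items) for j in range(n_groups))
    # Every required item must be covered at least once.
    for i in required:
        prob += pulp.lpSum(A[i][j] * x[j] for j in range(n_groups)) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_groups) if x[j].value() == 1]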
Do you mean the Maximum matching problem?
You need to build a bipartite graph, where one side is your populations and the other is the groups, and an edge exists between group A and population B if B requires an element contained in A.
To find a maximum matching you can use Kuhn's algorithm, which is described well on TopCoder.
But if you want to find a minimum edge dominating set (the smallest set of edges covering all the vertices), the problem becomes NP-hard and can't be solved in polynomial time (unless P = NP).
Take a look at the weighted set cover problem; I think this is exactly what you described above. A basic description of the (unweighted) problem can be found here.
Finding the minimal waste as you defined it above is equivalent to finding a set cover such that the sum of the cardinalities of the covering sets is minimal. Hence, the weight of each set (= a group of elements) should be defined as its cardinality.
Since even the unweighted set cover problem is NP-complete, it is unlikely that an efficient exact algorithm exists for your problem instances. Maybe a good greedy approximation algorithm will be sufficient for your purpose? Googling "weighted set cover" provides several promising results, e.g. this script.
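A minimal sketch of the standard greedy heuristic for weighted set cover, with each group's weight equal to its cardinality as suggested above (function name and input format are mine), run on the wood/steel/grain/coal example from the question:

def greedy_cover(groups, required):
    # groups: list of sets; required: the set of needed elements.
    # Returns indices of chosen groups (an approximation, not guaranteed
    # optimal). Assumes the required elements are coverable at all.
    uncovered, chosen = set(required), []
    while uncovered:
        # Pick the group with the best (weight / newly covered) ratio.
        j = min(
            (j for j, g in enumerate(groups) if g & uncovered),
            key=lambda j: len(groups[j]) / len(groups[j] & uncovered),
        )
        chosen.append(j)
        uncovered -= groups[j]
    return chosen

groups = [{"wood", "herbs"}, {"steel", "coal", "oil"},
          {"grain", "steel"}, {"herbs", "meat"}]
print(greedy_cover(groups, {"wood", "steel", "grain", "coal"}))
# [2, 0, 1]: here greedy happens to find the optimal wastage of 3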
You have a set of n objects for which integer positions are given. A group of objects is a set of objects at the same position (not necessarily all the objects at that position: there might be multiple groups at a single position). The objects can be moved to the left or right, and the goal is to move these objects so as to form k groups, and to do so with the minimum distance moved.
For example:
With initial positions at [4,4,7], and k = 3: the minimum cost is 0.
[4,4,7] and k = 2: minimum cost is 0
[1,2,5,7] and k = 2: minimum cost is 1 + 2 = 3
I've been trying to use a greedy approach (by calculating which move would be shortest) but that wouldn't work because every move involves two elements which could be moved either way. I haven't been able to formulate a dynamic programming approach as yet but I'm working on it.
This problem is a one-dimensional instance of the k-medians problem, which can be stated as follows. Given a set of points x_1...x_n, partition these points into k sets S_1...S_k and choose k locations y_1...y_k in a way that minimizes the sum over all x_i of |x_i - y_f(i)|, where y_f(i) is the location of the set to which x_i is assigned.
Due to the fact that the median is the population minimizer for absolute distance (i.e. L_1 norm), it follows that each location y_j will be the median of the elements x in the corresponding set S_j (hence the name k-medians). Since you are looking at integer values, there is the technicality that if S_j contains an even number of elements, the median might not be an integer, but in such cases choosing either the next integer above or below the median will give the same sum of absolute distances.
The standard heuristic for solving k-medians (and the related and more common k-means problem) is iterative, but this is not guaranteed to produce an optimal or even good solution. Solving the k-medians problem for general metric spaces is NP-hard, and finding efficient approximations for k-medians is an open research problem. Googling "k-medians approximation", for example, will lead to a bunch of papers giving approximation schemes.
http://www.cis.upenn.edu/~sudipto/mypapers/kmedian_jcss.pdf
http://graphics.stanford.edu/courses/cs468-06-winter/Papers/arr-clustering.pdf
In one dimension things become easier, and you can use a dynamic programming approach. A DP solution to the related one-dimensional k-means problem is described in this paper, and the source code in R is available here. See the paper for details, but the idea is essentially the same as what @SajalJain proposed, and can easily be adapted to solve the k-medians problem rather than k-means. For j <= k and m <= n, let D(j,m) denote the cost of an optimal j-medians solution to x_1...x_m, where the x_i are assumed to be in sorted order. We have the recurrence
D(j,m) = min_q ( D(j-1,q) + Cost(x_{q+1}, ..., x_m) )
where q ranges from j-1 to m-1 and Cost is equal to the sum of absolute distances from the median. With a naive O(n) implementation of Cost, this would yield an O(n^3k) DP solution to the whole problem. However, this can be improved to O(n^2k) due to the fact that the Cost can be updated in constant time rather than computed from scratch every time, using the fact that, for a sorted sequence:
Cost(x_1,...,x_h) = Cost(x_2,...,x_h) + median(x_1...x_h)-x_1 if h is odd
Cost(x_1,...,x_h) = Cost(x_2,...,x_h) + median(x_2...x_h)-x_1 if h is even
See the writeup for more details. Except for the fact that the update of the Cost function is different, the implementation will be the same for k-medians as for k-means.
http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Wang+Song.pdf
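A minimal Python sketch of this recurrence for the 1-D k-medians variant (names are mine; x is assumed sorted and k <= n), using prefix sums so that Cost is O(1) per interval, giving the O(n^2 k) behavior described above:

def k_medians_1d(x, k):
    # DP: D[j][m] = optimal cost of covering x[0..m-1] with j medians.
    n = len(x)
    P = [0]
    for v in x:
        P.append(P[-1] + v)  # prefix sums of x

    def cost(a, b):
        # Sum of |x[i] - median| over x[a..b] (inclusive), in O(1).
        mid = (a + b) // 2
        left = x[mid] * (mid - a + 1) - (P[mid + 1] - P[a])
        right = (P[b + 1] - P[mid + 1]) - x[mid] * (b - mid)
        return left + right

    INF = float('inf')
    D = [[INF] * (n + 1) for _ in range(k + 1)]
    D[0][0] = 0
    for j in range(1, k + 1):
        for m in range(j, n + 1):
            D[j][m] = min(D[j - 1][q] + cost(q, m - 1)
                          for q in range(j - 1, m))
    return D[k][n]

print(k_medians_1d([1, 2, 5, 7], 2))  # 3, matching the example above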
As I understand it, the problem is:
we have n points on a line.
we want to place k positions on the line; I call them destinations.
move each of the n points to one of the k destinations so that the sum of distances is minimal; I call this sum the total cost.
destinations can overlap.
An obvious fact is that for each point we should look at the nearest destination on its left and the nearest destination on its right, and choose the nearer of the two.
Another important fact is that all destinations can be assumed to lie on points, because a destination can be slid left or right onto a point without increasing the total distance.
By these facts consider following DP solution:
DP[i][j] means the minimum total cost needed for the first i points, when we can use only j destinations and have to put a destination on the i-th point.
To calculate DP[i][j], fix the last destination before the i-th point (we have i choices); for each choice, say the k-th point, calculate the movement cost for the points between the k-th point and the i-th point, add it to DP[k][j - 1], and take the minimum over all k.
the calculation of initial states (e.g. j = 1) and final answer is left as an exercise!
Task 0 - sort the positions of the objects in non-decreasing order
Let us define the 'center' of a group as the position to which its objects are shifted.
Now we have two observations:
For N positions forming one group, the 'center' is a median of those positions, since the median minimizes the sum of absolute moving distances (for an even count, either middle position works). Example: let 1, 3, 6, 10 be the positions; either 3 or 6 serves as the center, each with total cost 12. This gives the minimum cost of moving when all elements must form one group.
Let N positions be grouped into K groups "optimally". When the (N+1)-th object is added, it disturbs only the K-th group, i.e., the first K-1 groups remain unchanged.
From these observations, we build a dynamic programming approach.
Let Cost[i][k] and Center[i][k] be two 2D arrays.
Cost[i][k] = minimum cost when first 'i' objects are partitioned into 'k' groups
Center[i][k] stores the center of the 'i-th' object when Cost[i][k] is computed.
Let L be the number of preceding elements i-L, i-L+1, ..., i-1 that share the same center, i.e. Center[i-L][k] = Center[i-L+1][k] = ... = Center[i-1][k]. By observation 2, these are the only objects that need to be considered in the computation for the i-th element.
Now
Cost[i][k] will be
min(Cost[i-1][k-1] , Cost[i-L-1][k-1] + computecost(i-L, i-L+1, ... ,i))
Update Center[i-L ... i][k]
computecost() can be evaluated directly by finding the center (observation 1) and summing the distances to it
Time Complexity:
Sorting: O(N log N)
Cost matrix computation: (number of entries) × (cost per entry) = O(NK × N)
Total: O(N log N + N^2 K) = O(N^2 K)
Let's look at k=1.
For k=1 and n odd, all points should move to the center point. For k=1 and n even, all points should move to either of the center points or any spot between them. By 'center' I mean in terms of number of points to either side, i.e. the median.
You can see this because if you select a target spot x with more points to its right than its left, then a new target one to the right of x would reduce the cost (unless there is exactly one more point on the right than on the left and the target spot is itself a point, in which case n is even and the target is on/between the two center points).
If your points are already sorted, this is an O(1) operation. If not, I believe it's O(n) (via an order statistic algorithm).
Once you've found the spot that all points are moving to, it's O(n) to find the cost.
Thus regardless of whether the points are sorted or not, this is O(n).
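A minimal sketch of this k = 1 case (shown with a sort for brevity, O(n log n); a selection algorithm would make it O(n) overall, as stated above):

def one_group_cost(points):
    # k = 1: everything moves to a median; for even n either middle
    # point (or anything between them) gives the same total cost.
    pts = sorted(points)
    median = pts[len(pts) // 2]
    return sum(abs(p - median) for p in pts)

print(one_group_cost([1, 2, 5, 7]))  # 9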
If there is more than one constraint (for example, both a volume limit and a weight limit, where the volume and weight of each item are not related), we get the multiply-constrained knapsack problem, multi-dimensional knapsack problem, or m-dimensional knapsack problem.
How do I code this in the most optimized fashion? Well, one can develop a brute-force recursive solution, maybe with branch and bound, but essentially it stays exponential most of the time until you add some sort of memoization or use dynamic programming, which in turn takes a huge amount of memory if not done well.
The problem I am facing is this
I have my knapsack function
KnapSack(Capacity, Value, i) instead of the common
KnapSack(Capacity, i), since I have upper limits on both of those. Can anyone guide me with this, or provide suitable resources for solving these problems for reasonably large n?
Or is this NP-complete?
Thanks
Merge the constraints. Look at http://www.diku.dk/~pisinger/95-1.pdf, chapter 1.3.1, called "Merging the Constraints".
For example, say you have
variable , constraint1 , constraint2
1 , 43 , 66
2 , 65 , 54
3 , 34 , 49
4 , 99 , 32
5 , 2 , 88
Multiply the first constraint by some big number then add it to the second constraint.
So you have
variable , merged constraint
1 , 430066
2 , 650054
3 , 340049
4 , 990032
5 , 20088
From there, run whatever single-constraint algorithm you wanted to use. The main limitation that comes to mind is how many digits your variable type can hold.
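A sketch of the merge on the numbers above (SCALE is my name for the "big number"; note that a single merged-capacity test is in general a relaxation of the two separate constraints, not an exact equivalent):

# Scale the first constraint by a factor larger than any value the
# second constraint can reach, so both survive inside one number.
weights1 = [43, 65, 34, 99, 2]
weights2 = [66, 54, 49, 32, 88]

SCALE = 10_000  # must exceed the largest possible second-constraint total
merged = [w1 * SCALE + w2 for w1, w2 in zip(weights1, weights2)]
print(merged)  # [430066, 650054, 340049, 990032, 20088]
# A merged capacity would be built the same way: cap1 * SCALE + cap2.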
The following problem serves as a good example:
Given an undirected graph G having positive weights and N vertices.
You start with a sum of M money. For passing through a vertex i, you must pay S[i] money. If you don't have enough money, you can't pass through that vertex. Find the shortest path from vertex 1 to vertex N respecting the above conditions, or state that such a path doesn't exist. If there exists more than one path of the same length, output the cheapest one. Restrictions: 1
Pseudocode:
Set states (i,j) as unvisited for all (i,j)
Set Min[i][j] to Infinity for all (i,j)
Min[0][M] = 0

While (TRUE)
    Among all unvisited states (i,j), find the one for which Min[i][j]
    is the smallest. Let this state be (k,l).
    If no state (k,l) with Min[k][l] less than Infinity was found,
    exit the While loop.
    Mark state (k,l) as visited
    For all neighbors p of vertex k:
        If (l - S[p] >= 0 AND
            Min[p][l - S[p]] > Min[k][l] + Dist[k][p])
        Then Min[p][l - S[p]] = Min[k][l] + Dist[k][p]
        i.e. if from state (k,l) there is enough money left to pass
        to vertex p (l - S[p] is the money remaining after doing so),
        and the shortest path found so far for state (p, l - S[p]) is
        longer than [the shortest path found for state (k,l)] +
        [the distance from vertex k to vertex p], then set the
        shortest path for state (p, l - S[p]) equal to this sum.
    End For
End While

Find the smallest number among Min[N-1][j] (for all j, 0 <= j <= M);
if more than one state achieves it, take the one with the greater j.
If no state (N-1,j) has a value less than Infinity, then such a path
doesn't exist.
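For concreteness, a Python sketch of the same state-space Dijkstra (the function signature and edge representation are mine; vertices are 0-indexed as in the pseudocode, and a priority queue replaces the linear scan for the smallest unvisited state):

import heapq

def cheapest_shortest_path(n, edges, S, M):
    # Dijkstra over states (vertex, money left). edges: dict vertex ->
    # list of (neighbor, distance), with both directions listed for an
    # undirected graph; S[v] is the toll for passing through vertex v;
    # M is the starting money. The start's toll is not charged here,
    # mirroring Min[0][M] = 0 in the pseudocode.
    INF = float('inf')
    dist = [[INF] * (M + 1) for _ in range(n)]
    dist[0][M] = 0
    pq = [(0, 0, M)]                    # (distance, vertex, money left)
    while pq:
        d, k, l = heapq.heappop(pq)
        if d > dist[k][l]:
            continue                    # stale queue entry, skip
        for p, w in edges.get(k, []):
            l2 = l - S[p]               # money remaining after entering p
            if l2 >= 0 and d + w < dist[p][l2]:
                dist[p][l2] = d + w
                heapq.heappush(pq, (d + w, p, l2))
    # Smallest distance to vertex n-1; among ties, prefer larger j
    # (more money left means a cheaper path).
    candidates = [(dist[n - 1][j], -j) for j in range(M + 1)
                  if dist[n - 1][j] < INF]
    if not candidates:
        return None                     # no such path exists
    d, neg_j = min(candidates)
    return d, -neg_j                    # (shortest distance, money left)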
Knapsack with multiple constraints is a packing problem. Read up. http://en.wikipedia.org/wiki/Packing_problem
There are greedy like heuristics that calculate an "efficiency" for each item, that run quickly and yield approximate solutions.
You can use a branch and bound algorithm. You can get an initial lower bound using a greedy-like heuristic, which can be used to initialize the incumbent solution. You can calculate upper bounds for various sub-problems by considering each of the m constraints one at a time (relaxing the other constraints in the problem), then use the lowest of these bounds as an upper bound for the original problem. This technique is due to Shih. However, this technique probably won't work well if no particular constraint tends to dominate the solution, or if the initial solution from the greedy-like heuristic is not close to the optimum.
There are better, more modern algorithms that are harder to implement; see the "multidimensional knapsack problem" papers by J. Puchinger.
As you said, vol and weight are both positive quantities; try to use the fact that the remaining weight always decreases:
knap[position][vol][t]
where t = 0 when the remaining wt is positive and t = 1 when wt is negative.
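For comparison with the merging approach above, here is a minimal sketch of the direct two-constraint 0/1 knapsack DP, indexed by both remaining capacities (the item values and capacities are made-up illustrative numbers):

def knapsack_2d(items, cap_v, cap_w):
    # 0/1 knapsack with two constraints (e.g. volume and weight).
    # items: list of (value, vol, wt). O(n * cap_v * cap_w) time,
    # O(cap_v * cap_w) memory.
    best = [[0] * (cap_w + 1) for _ in range(cap_v + 1)]
    for value, vol, wt in items:
        # Iterate capacities downwards so each item is used at most once.
        for v in range(cap_v, vol - 1, -1):
            for w in range(cap_w, wt - 1, -1):
                best[v][w] = max(best[v][w], best[v - vol][w - wt] + value)
    return best[cap_v][cap_w]

items = [(60, 1, 2), (100, 2, 3), (120, 3, 4)]
print(knapsack_2d(items, 6, 9))  # 280: all three items fit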