This is an algorithmic question I thought of myself, but I couldn't come up with an easy solution.
The problem is inspired by merging two famous problems, minimum segment coverage and the knapsack problem, and the description is as follows:
Given n segments [l_i, r_i], where all l_i, r_i are in [1, M]; n and M are known.
Each segment has a value v_i. What is the maximum total value you can get if you choose any number of non-overlapping segments? (Touching is allowed.)
I have a strong feeling that my approach is over-complicated, but the solution currently in my head uses dynamic programming, the way we solve knapsack:
1. Sort the segments by r_i in ascending order.
2. Define DP(i) := the maximum value we can get using segments 1..i, where the indices are the sorted indices after step 1.
3. DP(i) = max(DP(j) + v_i, DP(i-1)), where j is the largest index with r_j <= l_i, which can be found by binary search.
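A sketch of this DP in Python (segments given as (l, r, v) triples; the dp array is 1-indexed so that dp[0] = 0 represents the empty prefix):

```python
from bisect import bisect_right

def max_value(segments):
    # step 1: sort by right endpoint
    segs = sorted(segments, key=lambda s: s[1])
    rights = [r for _, r, _ in segs]
    dp = [0] * (len(segs) + 1)  # dp[i] = best value using the first i segments
    for i, (l, r, v) in enumerate(segs, 1):
        # j = number of segments with r_j <= l_i (touching is allowed)
        j = bisect_right(rights, l)
        dp[i] = max(dp[i - 1], dp[j] + v)
    return dp[len(segs)]
```

For example, max_value([(1, 2, 5), (2, 3, 5)]) returns 10, since touching segments may both be taken.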
I think this solution runs in O(N lg N). Now my questions are:
Is this solution correct?
Is there an easier, better-performing solution?
The segments can be represented by a graph called an interval graph. Since you don't want to take two overlapping segments, you are looking for a Maximum Weighted Independent Set in an interval graph. This problem is NP-hard on general graphs, but fortunately it can easily be solved on interval graphs. If you look at the GraphClasses website, you can see that the problem is solvable in linear time, even for chordal graphs (a larger class than interval graphs), and you have the reference to the original paper that proves it.
I came across this question while preparing for my final exam, and I could not find the recursive formula, although I have seen similar questions.
I would appreciate any help!
The problem is:
Suppose we are given a set L of n line segments in the plane, where the endpoints of each segment lie on the unit circle x^2 + y^2 = 1, and all 2n endpoints are distinct. Describe and analyze an algorithm to compute the largest subset of L in which every pair of segments intersects.
The solution needs to be a dynamic programming algorithm (based on a recursive formula).
I am assuming the question ("the largest subset of L...") refers to the subset's size, and not to a subset that cannot be extended. If the latter were meant, the problem would be trivial and a simple greedy algorithm would work.
Now to your question. Following Matt Timmermans' hint (can you prove it?), this can be viewed as the longest common subsequence problem, except that we don't know what the two input strings are, i.e., where the splitting point between the two sequence occurrences is.
The longest common subsequence problem can be solved in O(m*n) time and linear memory. By moving the splitting point along your 2n-length array you create 2n instances of the LCS problem, each of which can be solved in O(n^2) time, yielding a total time complexity of O(n^3).
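A sketch of that O(n^3) approach, assuming the 2n endpoints are labelled 0..2n-1 in circular order and each chord is given by its two endpoint labels (two chords cross exactly when their endpoints interleave as a_i < a_j < b_i < b_j):

```python
def lcs_length(X, Y):
    # standard O(|X| * |Y|) longest-common-subsequence DP
    dp = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(X)][len(Y)]

def largest_crossing_subset(chords):
    # chords: list of (a, b) endpoint labels in 0..2n-1
    m = 2 * len(chords)
    chords = [(min(a, b), max(a, b)) for a, b in chords]
    best = 0
    for s in range(1, m):  # splitting point between positions s-1 and s
        split = [c for c in chords if c[0] < s <= c[1]]
        X = sorted(split)                      # order of left endpoints
        Y = sorted(split, key=lambda c: c[1])  # order of right endpoints
        best = max(best, lcs_length(X, Y))     # common subsequences = pairwise-crossing sets
    return best
```

For three mutually crossing chords (0,3), (1,4), (2,5) this returns 3; for the disjoint pair (0,1), (2,3) it returns 1.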
Your problem is known as the maximum clique problem of a circle graph (with line segments corresponding to graph nodes, and segment intersections corresponding to graph edges), and was shown in 2010 to have a solution with O(n^2 log n) time complexity.
Please note that the maximum clique problem (the decision version) is NP-hard (NP-complete, to be exact) in the case of an arbitrary graph.
I ran into a question on a midterm exam. Can anyone clarify the answer?
Problem A: Given a Complete Weighted Graph G, find a Hamiltonian Tour with minimum weight.
Problem B: Given a Complete Weighted Graph G and Real Number R, does G have a Hamiltonian Tour with weight at most R?
Suppose there is a machine that solves B. How many times must we call B (each time supplying a graph G and a real number R) to solve problem A with that machine? Suppose the sum of the edge weights in G is at most M.
1) We cannot do this, because there are uncountably many states.
2) O(|E|) times
3) O(lg M) times
4) Because A is NP-hard, this cannot be done.
First algorithm
The answer is (3), O(lg M) times. You just have to perform a binary search for the minimum-weight Hamiltonian tour. Notice that if there is a Hamiltonian tour of weight L in the graph, there is no point in checking whether a tour of weight L' > L exists, since you are interested in the minimum-weight tour. So in each step of the algorithm you can eliminate half of the remaining possible tour weights. Consequently, you will call B O(lg M) times, where M stands for the total weight of all edges in the complete graph.
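A sketch of this search, assuming nonnegative integer edge weights so that the optimum lies in {0, ..., M} (B(R) stands for the hypothetical machine answering Problem B on the fixed graph G):

```python
def min_tour_weight(B, M):
    # invariant: the optimum lies in [lo, hi]
    lo, hi = 0, M
    while lo < hi:
        mid = (lo + hi) // 2
        if B(mid):          # a Hamiltonian tour of weight <= mid exists
            hi = mid
        else:
            lo = mid + 1
    return lo               # reached after O(lg M) calls to B
```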
Edit:
Second algorithm
I have a slight modification of the above algorithm which uses the machine O(|E|) times, since some people said that we cannot apply binary search on an uncountable set of possible values (and they are possibly right): take every possible subset of edges of the graph, and for each subset store the sum of the weights of its edges. Let's store these values for all subsets in an array called Val. The size of this array is 2^|E|. Sort Val in increasing order, then apply binary search for the minimum Hamiltonian tour, but this time call the machine that solves Problem B only with values from the Val array. Since every subset of edges is represented in the sorted array, the solution is guaranteed to be found. The total number of calls to the machine is O(lg(2^|E|)), which is O(|E|). So the correct choice is (2), O(|E|) times.
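A sketch of this variant (exponential preprocessing, but only O(|E|) calls to the machine; B(R) is again the hypothetical machine for Problem B):

```python
from itertools import combinations

def min_tour_weight_subsets(weights, B):
    # Val: sorted sums of every subset of edge weights; the optimal tour
    # weight must be one of these values, even for fractional weights.
    Val = sorted({sum(c) for r in range(len(weights) + 1)
                  for c in combinations(weights, r)})
    lo, hi = 0, len(Val) - 1
    while lo < hi:                  # O(lg 2^|E|) = O(|E|) calls to B
        mid = (lo + hi) // 2
        if B(Val[mid]):
            hi = mid
        else:
            lo = mid + 1
    return Val[lo]
```

With fractional weights such as [1.5, 2.5, 4.0] and a true optimum of 4.0, the search still lands exactly on 4.0.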
Note:
The first algorithm I proposed is probably incorrect, as some people noted that you cannot apply binary search on an uncountable set. Since we are talking about real numbers, we cannot apply binary search in the range [0, M].
I believe the choice that was meant to be the answer is (1): you can't do that.
The reason is that you can only do binary search on countable sets.
Note that the edges of the graph may even have negative weights, and besides, they may have fractional or even irrational weights. In that case, the search space for the answer is the set of all real values less than M.
However, you may get arbitrarily close to the answer of A in O(lg n) calls, but you cannot find the exact answer (n being the size of the search space).
Supposing that in the encoding of graphs the weights are encoded as binary strings representing nonnegative integers, and that Problem B can actually be solved algorithmically by entering a real number and performing calculations based on it, things are apparently as follows.
It is possible to first do binary search over the integral interval {0,...,M} to obtain the minimum weight of a Hamiltonian tour in O(log M) calls to the algorithm for Problem B. Once the optimum is known, we can eliminate single edges in G and use the resulting graph as input to the algorithm for Problem B to test whether or not the optimum changes. This process uses O(|E|) calls to the algorithm for Problem B to identify edges which occur in an optimal Hamiltonian tour. The overall running time of this approach is O((|E| + log M) * r(G)), where r(G) denotes the running time of the algorithm for Problem B on input G. I suppose that r is a polynomial, although the question does not explicitly state this; in total, the overall running time would be polynomially bounded in the encoding length of the input, as M can be computed in polynomial time (and hence is pseudopolynomially bounded in the encoding length of the input G).
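Under the same integrality assumption, both phases might be sketched as follows; solve_B(edges, R) is a hypothetical oracle for Problem B on the given edge set, and edges are (u, v, weight) triples:

```python
def recover_optimal_tour(edges, solve_B, M):
    # Phase 1: binary search for the optimum, O(log M) oracle calls.
    lo, hi = 0, M
    while lo < hi:
        mid = (lo + hi) // 2
        if solve_B(edges, mid):
            hi = mid
        else:
            lo = mid + 1
    opt = lo
    # Phase 2: delete each edge whose removal still leaves a tour of
    # weight opt; the surviving edges form exactly one optimal tour.
    # O(|E|) further oracle calls.
    kept = list(edges)
    for e in list(kept):
        trial = [f for f in kept if f != e]
        if solve_B(trial, opt):
            kept = trial
    return opt, kept
```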
That being said, the supposed answers can be commented as follows.
1) This answer is wrong, as the set of necessary states is finite.
2) Might be true, but does not follow from the algorithm discussed above.
3) Might be true, but does not follow from the algorithm discussed above.
4) This answer is wrong. Strictly speaking, the NP-hardness of Problem A does not rule out a polynomial-time algorithm; furthermore, the algorithm for Problem B is not stated to be polynomial, so even P=NP does not follow if Problem A can be solved by a polynomial number of calls to the algorithm for Problem B (which is the case by the algorithm sketched above).
For n stations, an n*n matrix A is given such that A[i][j] represents the time of a direct journey from station i to station j (i, j <= n).
A traveller between stations always seeks the least total time. Given two station numbers a and b, how do we calculate the minimum travel time between them?
Can this problem be solved without using graph theory, i.e. by matrix A alone?
You do need graph theory in order to solve it; more specifically, you need Dijkstra's algorithm. Representing the graph as a matrix is neither an advantage nor a disadvantage to that algorithm.
Note, though, that Dijkstra's algorithm requires all distances to be nonnegative. If for some reason you have negative "distances" in your matrix, you must use the slower Bellman-Ford algorithm instead.
(If you're really keen on using matrix operations and don't mind that it will be terribly slow, you could use the Floyd-Warshall algorithm, which is based on quite simple matrix operations, to compute the shortest paths between all pairs of stations (massive overkill), and then pick the pair you're interested in...)
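As a sketch, Dijkstra's algorithm can be run directly on the matrix (assuming nonnegative entries and 0-indexed stations):

```python
import heapq

def min_travel_time(A, a, b):
    n = len(A)
    dist = [float('inf')] * n
    dist[a] = 0
    pq = [(0, a)]                      # (distance, station) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if u == b:
            return d                   # settled: d is optimal for b
        if d > dist[u]:
            continue                   # stale heap entry
        for v in range(n):
            nd = d + A[u][v]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist[b]
```

For example, with A = [[0,1,4],[1,0,2],[4,2,0]], the fastest route from station 0 to station 2 goes via station 1 and takes 3.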
This looks strikingly similar to the traveling salesman problem, which is NP-hard.
Wiki link to TSP
I have a set of N points in a D-dimensional metric space. I want to select K of them in such a way that the smallest distance between any two points in the subset is the largest.
For instance, with N=4 and K=3 in 3D Euclidean space, the solution is the face of the tetrahedron having the longest short side.
Is there a classical way to achieve this? Can it be solved exactly in polynomial time?
I have googled as much as I could, but I have not yet figured out what this problem is called.
In my case N=50, K=10 and D=300 typically.
Clarification:
A brute-force approach would be to try every combination of K points among the N and determine the closest pair in every subset. The solution is given by the subset that yields the longest such pair.
Done the trivial way, that is an O(K^2) process, repeated N! / (K! (N-K)!) times.
Hmm, 10^2 * 50! / (10! 40!) = 1,027,227,817,000.
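The brute force just described, as a runnable reference (hypothetical helper name; math.dist works in any dimension D):

```python
import math
from itertools import combinations

def max_min_dispersion(points, K):
    best_val, best_sub = -1.0, None
    for sub in combinations(points, K):
        # closest pair within this K-subset
        m = min(math.dist(p, q) for p, q in combinations(sub, 2))
        if m > best_val:
            best_val, best_sub = m, sub
    return best_val, list(best_sub)
```

For the four points (0,0), (1,0), (0,1), (10,10) with K=3, the winning subset drops (0,0) and achieves a minimum pairwise distance of sqrt(2).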
I think you might find papers on unit disk graphs informative but discouraging. For instance, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.3113&rep=rep1&type=pdf states that the maximum independent set problem on unit disk graphs is NP-complete, even if the disk representation is known. A unit disk graph is the graph you get by placing points in the plane and forming links between every pair of points at most unit distance apart.
So I think that if you could solve your problem in polynomial time, you could run it on the points of a unit disk graph for different values of K until you found a value at which the smallest distance between two chosen points was just over one; this would be a maximum independent set on the unit disk graph, which would mean solving an NP-complete problem in polynomial time.
(Just about to jump on a bicycle so this is a bit rushed, but searching for papers on unit disk graphs might at least turn up some useful search terms)
Here is another attempt to relate the two problems, piece by piece:
For maximum independent set see http://en.wikipedia.org/wiki/Maximum_independent_set#Finding_maximum_independent_sets. A decision-problem version of this is "Are there K vertices in this graph such that no two are joined by an edge?" If you can solve this, you can certainly find a maximum independent set: find the largest K for which the answer is yes, and then find the K nodes by asking the question on versions of the graph with one or more nodes deleted.
I state without proof that finding the maximum independent set in a unit disk graph is NP-complete. Another reference for this is http://web.sau.edu/lilliskevinm/wirelessbib/ClarkColbournJohnson.pdf.
A decision version of your problem is "Do there exist K points with distance at least D between any two of them?" Again, you can solve this in polynomial time iff you can solve your original problem in polynomial time: play around until you find the largest D that gives the answer yes, and then delete points and see what happens.
A unit disk graph has an edge exactly when the distance between two points is 1 or less. So if you could solve the decision version of your original problem you could solve the decision version of the unit disk graph problem just by setting D = 1 and solving your problem.
So I think I have constructed a series of links showing that if you could solve your problem you could solve an NP-complete problem by turning it into your problem, which causes me to think that your problem is hard.
I am looking for an algorithm I could use to solve this, not the code. I wondered about using linear programming with relaxation, but maybe there are more efficient ways for solving this?
The problem
I have a set of intervals with weights. Intervals can overlap. I need to find the maximal total weight of a subset of pairwise disjoint intervals.
Example
Intervals with weights:
|--3--| |---1-----|
|----2--| |----5----|
Answer: 8
I have an exact O(n log n) DP algorithm in mind. Since this is homework, here is a clue:
Sort the intervals by right edge position as Saeed suggests, then number them from 1. Define f(i) to be the highest weight attainable by using only intervals that do not extend to the right of interval i's right edge.
EDIT: Clue 2: Calculate each f(i) in increasing order of i. Keep in mind that each interval will either be present or absent. To calculate the score for the "present" case, you'll need to hunt for the "rightmost" interval that is compatible with interval i, which will require a binary search through the solutions you've already computed.
That was a biggie, not sure I can give more clues without totally spelling it out ;)
If there are no weights it's easy: you can use a greedy algorithm, sorting the intervals by their end times and in each step taking the interval with the smallest possible end time.
But in your case I think it's NP-complete (I should think about it more). You can use a similar greedy heuristic by valuing each interval as weight/length and each time taking one of the possible intervals in that sorted order. You can also use simulated annealing: in each step, take the best interval by the above value with probability P (P near 1), or select another interval with probability 1-P. Repeat this in a loop n times to find a good answer.
Here's an idea:
Consider the following graph: create a node for each interval. If interval I1 and interval I2 do not overlap and I1 comes before I2, add a directed edge from node I1 to node I2. Note this graph is acyclic. Each node has a cost equal to the weight of the corresponding interval.
Now, the idea is to find the longest path in this graph, which can be found in polynomial time for acyclic graphs (using dynamic programming, for example). The problem is that the costs are in the nodes, not in the edges. Here is a trick: split each node v into v' and v''. All edges entering v now enter v', and all edges leaving v now leave v''. Then add an edge from v' to v'' with the node's cost, in this case the weight of the interval. All the other edges get cost 0.
If I'm not mistaken, the longest path in this graph will correspond to the set of disjoint intervals with maximum total weight.
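A compact sketch of this idea (intervals as (start, end, weight) triples; the v'/v'' cost-splitting is folded directly into a longest-path DP over the DAG, processing nodes in topological order, i.e., by right endpoint):

```python
def max_disjoint_weight(intervals):
    n = len(intervals)
    order = sorted(range(n), key=lambda i: intervals[i][1])  # topological order
    best = [0] * n    # best[i] = heaviest path ending at interval i
    for idx in order:
        s, e, w = intervals[idx]
        # best predecessor: any interval ending at or before our start
        pred = max((best[j] for j in range(n) if intervals[j][1] <= s),
                   default=0)
        best[idx] = pred + w
    return max(best, default=0)
```

On intervals matching the example above, e.g. (0,5,3), (3,10,2), (8,17,1), (11,19,5), this returns 8 (taking the weights 3 and 5).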
You could formulate this problem as a general IP (integer programming) problem with binary variables indicating whether an interval is selected or not. The objective function is then a weighted linear combination of the variables. You would then need appropriate constraints to enforce disjointness among the intervals. That should suffice given the homework tag.
Also, just because a problem can be formulated as an integer program (solving which is NP-hard in general), it does not mean that the problem class itself is NP-hard. So, as Ulrich points out, there may be a polynomially solvable formulation/algorithm, such as formulating and solving the problem as a linear program.
The correct solution (end to end) is explained here: http://tkramesh.wordpress.com/2011/02/03/dynamic-programming-1-weighted-interval-scheduling/