Minimum number of line intervals that cover a set of points on a line

Given n points on a line and m intervals on the same line, design a polynomial-time algorithm that returns the minimum number of intervals whose union covers all n points, and prove the correctness of the algorithm.
I have to write a program in Python to solve this problem.
Any ideas?
Thanks in advance.
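One standard approach (a minimal sketch of my own, not from the thread): sort the points, and for the leftmost point not yet covered, greedily pick an interval that contains it and extends furthest to the right; a simple exchange argument shows this choice is never worse than any other, which gives the correctness proof. The function name and the (left, right) tuple format below are my own choices.

    def min_interval_cover(points, intervals):
        # Greedy sketch: repeatedly cover the leftmost uncovered point with
        # the interval that contains it and reaches furthest to the right.
        # points: list of numbers; intervals: list of (left, right) pairs.
        # Returns a minimum-size list of covering intervals, or None if
        # some point is not covered by any interval.
        points = sorted(points)
        chosen = []
        k = 0
        while k < len(points):
            p = points[k]
            candidates = [(l, r) for (l, r) in intervals if l <= p <= r]
            if not candidates:
                return None
            best = max(candidates, key=lambda lr: lr[1])
            chosen.append(best)
            # Every point up to best[1] is now covered.
            while k < len(points) and points[k] <= best[1]:
                k += 1
        return chosen

As written this is an O(n*m) scan; sorting the intervals by left endpoint and sweeping with a pointer brings it down to O((n + m) log(n + m)).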

Related

Minimum number of flips to get adjacent 1's in a matrix

Given a binary matrix (values of 0 or 1), adjacent entries of 1 denote "hills". Also, given some number k, find the minimum number of 0's you need to "flip" to 1 in order to form a hill of at least size k.
Edit: For clarification, adjacent means left-right-up-down neighborhoods. Diagonals do not count as adjacent. For example,
[0 1
0 1]
is one hill of size 2,
[0 1
1 0]
defines 2 hills of size 1,
[0 1
1 1]
defines 1 hill of size 3, and
[1 1
1 1]
defines 1 hill of size 4.
Also for clarification, size is defined by the area formed by the adjacent blob of 1's.
My initial idea was to turn each existing hill into a node of a graph, with edge costs equal to the minimal number of flips needed to connect one hill to another, and then run a DFS (or similar algorithm) to find the minimum total cost.
This fails in cases where choosing some path reduces the cost for another edge, and solutions to combat this (that I can think of) are too close to a brute force solution.
Your problem is closely related to the rectilinear Steiner tree problem.
A Steiner tree connects a set of points together using line segments, minimising the total length of the line segments. The line segments can meet in arbitrary places, not necessarily at points in the set (so it is not the same thing as a minimum spanning tree). For example, given three points at the corners of an equilateral triangle, the Euclidean Steiner tree connects them with three segments that meet in the middle of the triangle.
A rectilinear Steiner tree is the same, except you minimise the total Manhattan distance instead of the total Euclidean distance.
In your problem, instead of joining your hills with line segments whose length is measured by Euclidean distance, you are joining your hills by adding pixels. The total number of 0s you need to flip to join two cells in your array is equal to the Manhattan distance between those two cells, minus 1.
The rectilinear Steiner tree problem is known to be NP-complete, even when restricted to points with integer coordinates. Your problem is essentially this problem, with two differences:
1. The "minus 1" part when measuring the Manhattan distance. I doubt that this subtle difference is enough to bring the problem into a lower complexity class, though I don't have a proof for you.
2. The coordinates of your integer points are bounded by the size of the matrix (as pointed out by Albert Hendriks in the comments). This does matter: it means that a pseudo-polynomial-time algorithm for the rectilinear Steiner tree problem would be a polynomial-time algorithm for your problem.
This means that your problem may or may not be NP-hard, depending on whether the rectilinear Steiner tree problem is weakly or strongly NP-complete. I wasn't able to find a definitive answer to this, and there isn't much information about the problem outside the academic literature. As far as I can tell, there is at least no known pseudo-polynomial-time algorithm.
Given that, your most likely options are some kind of backtracking search for an exact solution, or applying a heuristic to get a "good enough" solution. One possible heuristic as described by Wikipedia is to compute a rectilinear minimum spanning tree and then try to improve on the RMST using an iterative improvement method. The RMST itself gives a solution within a constant factor of 1.5 of the true optimum.
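As a rough illustration of the RMST part of that heuristic, here is a Python sketch (my own, not from the thread): it labels the existing hills with a BFS, then runs Prim's algorithm over the hills with edge weight equal to the closest Manhattan distance between two hills minus 1, giving an upper bound on the number of flips needed to join all hills into one. The size-k target and the iterative-improvement step are deliberately left out.

    from collections import deque

    def hill_components(matrix):
        # Collect the 4-connected components of 1's ("hills") as lists of cells.
        rows, cols = len(matrix), len(matrix[0])
        seen = [[False] * cols for _ in range(rows)]
        comps = []
        for i in range(rows):
            for j in range(cols):
                if matrix[i][j] == 1 and not seen[i][j]:
                    comp, queue = [], deque([(i, j)])
                    seen[i][j] = True
                    while queue:
                        a, b = queue.popleft()
                        comp.append((a, b))
                        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            na, nb = a + da, b + db
                            if (0 <= na < rows and 0 <= nb < cols
                                    and matrix[na][nb] == 1 and not seen[na][nb]):
                                seen[na][nb] = True
                                queue.append((na, nb))
                    comps.append(comp)
        return comps

    def rmst_flip_estimate(matrix):
        # Upper bound on the flips needed to join all hills: weight of a
        # minimum spanning tree over the hills, where an edge costs the
        # closest Manhattan distance between two hills minus 1.
        comps = hill_components(matrix)
        if len(comps) <= 1:
            return 0
        def cost(c1, c2):
            return min(abs(a - x) + abs(b - y) for a, b in c1 for x, y in c2) - 1
        in_tree, total = {0}, 0
        while len(in_tree) < len(comps):
            w, nxt = min((cost(comps[i], comps[j]), j)
                         for i in in_tree
                         for j in range(len(comps)) if j not in in_tree)
            total += w
            in_tree.add(nxt)
        return total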
A hill can be seen as one central cell plus four sequences (arms) of 1's.
The right arm is composed of r cells, the up arm has u cells, and so on.
A hill of size k satisfies k = 1 + r + l + u + d (one central cell plus the four arms), where each arm length v satisfies 0 <= v < k.
The problem is combinatorial: for each cell, all combinations of {r, l, u, d} that satisfy this relation should be tested.
When testing a combination in a cell, count the 1's that already exist along each arm; those cells don't need to be flipped. This also lets you skip some combinations early. A sketch of this enumeration follows below.
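A direct Python sketch of this enumeration (my own reading of the answer above; it only ever considers plus-shaped hills, so treat it as a heuristic for the general problem): for each cell and each split of k - 1 into four arm lengths that fit inside the matrix, it counts how many cells of the plus are still 0.

    def min_flips_plus_hill(matrix, k):
        # Enumerate plus-shaped hills of size k around every cell and return
        # the minimum number of 0's that would have to be flipped, or None
        # if no plus of size k fits in the matrix.
        rows, cols = len(matrix), len(matrix[0])
        best = None
        for i in range(rows):
            for j in range(cols):
                for r in range(min(k - 1, cols - 1 - j) + 1):
                    for l in range(min(k - 1 - r, j) + 1):
                        for u in range(min(k - 1 - r - l, i) + 1):
                            d = k - 1 - r - l - u
                            if d > rows - 1 - i:
                                continue
                            cells = [(i, j)]
                            cells += [(i, j + t) for t in range(1, r + 1)]
                            cells += [(i, j - t) for t in range(1, l + 1)]
                            cells += [(i - t, j) for t in range(1, u + 1)]
                            cells += [(i + t, j) for t in range(1, d + 1)]
                            flips = sum(1 for a, b in cells if matrix[a][b] == 0)
                            if best is None or flips < best:
                                best = flips
        return best

Existing 1's along an arm are simply not counted as flips, matching the remark above.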

Algorithm to compute the largest subset of L in which every pair of segments intersects

I came across this question while preparing for the final exam, and I could not find the recursive formula, although I have seen similar questions.
I will be thankful for any help!
The problem is:
Suppose we are given a set L of n line segments in the plane, where the endpoints of each segment lie on the unit circle x^2 + y^2 = 1, and all 2n endpoints are distinct. Describe and analyze an algorithm to compute the largest subset of L in which every pair of segments intersects.
The solution needs to be a dynamic-programming algorithm (based on a recursive formula).
I am assuming the question ("the largest subset of L...") asks for the subset of maximum size, and not merely for a subset that cannot be extended. If the latter is meant, the problem is trivial and a simple greedy algorithm works.
Now to your question. Following Matt Timmermans' hint (can you prove it?), this can be viewed as the longest common subsequence problem, except that we don't know what the two input strings are, i.e., where the splitting point between the two occurrences of the sequence is.
The longest common subsequence problem can be solved in O(m*n) time and linear memory. By moving the splitting point along your 2n-length array you create 2n instances of the LCS problem, each of which can be solved in O(n^2) time, which yields a total time complexity of O(n^3).
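A short Python sketch of that O(n^3) approach (my own; it assumes the input is given as the list of 2n chord labels in the order their endpoints appear around the circle, each label occurring exactly twice):

    def lcs_length(a, b):
        # Standard O(len(a) * len(b)) dynamic program with linear memory.
        prev = [0] * (len(b) + 1)
        for x in a:
            cur = [0]
            for j, y in enumerate(b, 1):
                cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
            prev = cur
        return prev[-1]

    def max_pairwise_intersecting(labels):
        # Try every splitting point of the 2n-length array and keep the best
        # LCS of the two parts.  Labels matched across the split are chords
        # that cross it with their endpoints in the same relative order on
        # both sides, which is exactly the pairwise-intersection condition.
        m = len(labels)
        return max(lcs_length(labels[:s], labels[s:]) for s in range(1, m))

    # Example: three mutually crossing chords.
    print(max_pairwise_intersecting([1, 2, 3, 1, 2, 3]))   # -> 3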
Your problem is known as the maximum clique problem on a circle graph (with line segments corresponding to graph nodes, and segment intersections corresponding to graph edges); it was shown in 2010 to be solvable in O(n^2 log n) time.
Please note that the maximum clique problem (the decision version) is NP-hard (NP-complete, to be exact) in the case of an arbitrary graph.

How to compute this greedy complexity

Can anyone help me compute this complexity?
What I think is:
Line 1 is O(n log n) for the sort.
Then, for every node (line 2, while (U != 0)), I have to compute two stars for the node.
Computing one star for all nodes costs O(|E|), but my problem is the second star.
Can anyone help me?

Algorithm to divide a set of symbols with constraints into minimum number of subsets

I have a set S={a,c,d,e,f,j,m,q,s,t} with a constraint C={am,cm,de,df,dm,ds,ef,em,eq,es,et,fj,fm,fs,jm,js}. xy in C means that x and y cannot be in the same subset. I would like an algorithm to split set S into subsets Sj such that:
1. The number of subsets Sj is minimized.
2. The difference between the sizes of the subsets is as large as possible.
For example, in this case both {{q,a,c,d,j,t},{m,s},{f},{e}} and {{a,c,e,j},{m,s,q,t},{d},{f}} satisfy 1, but the first is optimal (its subset sizes differ more, so it better satisfies 2).
Coming from a computer science background, I wonder whether Mathematicians have devised an algorithm for this problem.
As I understand, your task can be rewritten as: find the largest independent subset of vertices S' of graph G=(S, C); repeat the step for graph G'=G\S'.
It's well known (as tobias_k also pointed out in the comments) that the largest independent set problem is NP-hard (it's equivalent to the famous clique problem).
I think this is a very hard problem, and here is why: to find the minimum number of subsets you must compute the chromatic number of the graph, a problem that is generally solved by brute force.
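For the concrete instance above, the repeated largest-independent-set step from the first answer can simply be brute-forced, since there are only 10 symbols. A Python sketch of my own (the brute force is exponential in general, and the peeling heuristic is not guaranteed to reach the minimum number of subsets on every graph):

    from itertools import combinations

    S = set("acdefjmqst")
    C = {frozenset(p) for p in
         ["am", "cm", "de", "df", "dm", "ds", "ef", "em",
          "eq", "es", "et", "fj", "fm", "fs", "jm", "js"]}

    def is_independent(cells):
        # No pair inside `cells` may appear in the constraint set C.
        return all(frozenset(pair) not in C for pair in combinations(cells, 2))

    def largest_independent_subset(symbols):
        # Brute force: try subset sizes from largest to smallest and return
        # the first independent subset found (fine for 10 symbols).
        for r in range(len(symbols), 0, -1):
            for cand in combinations(sorted(symbols), r):
                if is_independent(cand):
                    return set(cand)
        return set()

    def peel_independent_sets(symbols):
        # Repeatedly remove a largest independent set, as described above.
        remaining, parts = set(symbols), []
        while remaining:
            part = largest_independent_subset(remaining)
            parts.append(part)
            remaining -= part
        return parts

    print(peel_independent_sets(S))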

Graph:: Deletion Contraction Complexity?

I am applying the classic deletion-contraction algorithm to a graph G with n vertices and m edges.
Z(G) = Z(G - e) - Z(G / e)
In Wikipedia,
http://en.wikipedia.org/wiki/Chromatic_polynomial#Deletion.E2.80.93contraction
they say that the complexity is O(1.6180^(n+m)).
My main question is: why do they include the number of vertices in the complexity, when it seems clear that the recursion only depends on the number of edges?
The closest reference for deletion-contraction is the Fibonacci sequence, whose computational complexity is worked out in Herbert S. Wilf's book Algorithms and Complexity
http://www.math.upenn.edu/~wilf/AlgComp3.html
pages 18-19.
All help is welcome.
Look at page 46 of the PDF version. Deletion and contraction each reduce the number of edges by 1, so a recurrence in edges only shows that Z(G) is O(2^m), which is worse than O(Fib(n + m)) for all but the sparsest graphs. The improvement in considering vertices as well as edges is that, when a self-loop is formed, we know immediately that the chromatic polynomial is zero.
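To see where the self-loop shortcut and the vertex count enter, here is a small Python sketch of deletion-contraction for the chromatic polynomial (my own illustration, not taken from the linked book): contraction merges two vertices, so both n and m shrink along that branch, and a self-loop immediately yields the zero polynomial.

    def chromatic_polynomial(vertices, edges):
        # vertices: a set of vertex names; edges: a list of (u, v) pairs
        # (a multigraph: contraction can create parallel edges and loops).
        # Returns the coefficients of the polynomial in k, lowest degree first.
        if any(u == v for u, v in edges):
            return [0]                         # a self-loop kills every colouring
        if not edges:
            return [0] * len(vertices) + [1]   # empty graph on n vertices: k^n
        (u, v), rest = edges[0], edges[1:]
        deleted = chromatic_polynomial(vertices, rest)              # Z(G - e)
        merged_vertices = vertices - {v}                            # contract e: v -> u
        merged_edges = [(u if a == v else a, u if b == v else b) for a, b in rest]
        contracted = chromatic_polynomial(merged_vertices, merged_edges)  # Z(G / e)
        size = max(len(deleted), len(contracted))
        deleted += [0] * (size - len(deleted))
        contracted += [0] * (size - len(contracted))
        return [d - c for d, c in zip(deleted, contracted)]         # Z(G - e) - Z(G / e)

    # A triangle: k(k - 1)(k - 2) = k^3 - 3k^2 + 2k
    print(chromatic_polynomial({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))   # -> [0, 2, -3, 1]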
