Given two sets A and B, each of size N, and a weighting assigning a real number to each of the N^2 pairs of the cross product A x B, we want to form a perfect matching of A and B such that the minimum weight in the matching is maximized.
As an example, suppose we are organizing a horse race with 10 jockeys and 10 horses, where each jockey has a different expected speed on each horse. We have to decide which jockey rides which horse so that the slowest jockey/horse pair of the match-up is as fast as possible.
Take
    i  j  k
a   9  1  2
b   4  3  1
c   7  3  5
Here the "max-min-matching" is { (a,i), (b,j), (c,k) } with a value of 3.
What is an algorithm to calculate this matching and what is its complexity?
This answer describes how to obtain an O(n^2 * sqrt(n) * log(n)) solution for this problem.
Naive slow algorithm:
First, note that a naive O(n^4 * sqrt(n)) approach is to repeatedly run a matching algorithm on the bipartite graph which models the problem, looking for the "highest" set of edges that cannot be removed (meaning: looking for the maximal edge that will be the minimal one in a matching).
The graph is G = (V, E), where V = A ∪ B and E = A × B.
The algorithm is:
sort the edges by weight, ascending
while the graph still has a perfect matching:
    remove the remaining edge with the smallest weight
return the weight of the last removed edge
Correctness explanation:
It is easy to see that the value is not smaller than the weight of the last removed edge, because just before removing it there is a matching that uses it and no "smaller" edge.
It is also not higher, because once this edge is removed there is no perfect matching left.
Complexity:
Running the matching algorithm O(n^2) times, where each run costs O(|E| * sqrt(|V|)) = O(n^2 * sqrt(n)), yields a total of O(n^4 * sqrt(n)).
We would like to get rid of the O(n^2) factor, since the matching algorithm itself probably cannot be avoided.
Optimizing:
Note that the algorithm is actually looking for where to "cut" the sorted list of edges. We are actually looking for the smallest edge that must remain in the list in order to obtain a matching.
One can apply binary search here, where each "comparison" checks whether a matching exists, and we are looking for the "highest" cut point that still yields a matching. This results in O(log(n^2)) = O(log n) iterations of the matching algorithm, giving a total of O(n^2 * sqrt(n) * log(n)).
High-level optimized algorithm:
//the compare OP
checkMatching(edges, i):
    edges' <- edges
    remove from edges' all edges with index j < i
    check whether the graph with edges' has a perfect matching
    return 1 if it does, 0 otherwise
//the algorithm:
find_max_min(vertices, edges):
    sort edges by weight, ascending
    binary search for the largest index i such that checkMatching(edges, i) yields 1
    (equivalently, one less than the smallest index that yields 0)
    remove from edges all edges with index j < i
    return a matching built from the remaining edges
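For concreteness, here is a minimal Python sketch of this binary-search approach. It uses Kuhn's simple augmenting-path matching (O(V*E) per check) rather than Hopcroft-Karp, so this particular sketch runs in O(n^3 * log n); swapping in Hopcroft-Karp gives the O(n^2 * sqrt(n) * log(n)) bound discussed above. The function names are just illustrative.

def max_min_matching(weights):
    """weights[a][b] is the weight of edge (a, b); returns the max-min value."""
    n = len(weights)
    vals = sorted({w for row in weights for w in row})

    def has_perfect_matching(threshold):
        # Keep only edges with weight >= threshold and try to match everyone.
        match_b = [-1] * n          # match_b[b] = vertex of A matched to b, or -1

        def try_augment(a, seen):
            for b in range(n):
                if weights[a][b] >= threshold and not seen[b]:
                    seen[b] = True
                    if match_b[b] == -1 or try_augment(match_b[b], seen):
                        match_b[b] = a
                        return True
            return False

        return all(try_augment(a, [False] * n) for a in range(n))

    # Binary search for the largest threshold that still admits a perfect matching.
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if has_perfect_matching(vals[mid]):
            lo = mid
        else:
            hi = mid - 1
    return vals[lo]

print(max_min_matching([[9, 1, 2], [4, 3, 1], [7, 3, 5]]))   # example above: 3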
This problem is a typical bipartite matching problem. You can have a look at the Hungarian method or the KM algorithm to solve it.
Related
Let's say that you are given n sorted arrays of numbers and you need to pick one number from each array such that the minimum distance between the n chosen elements is maximized.
Example:
arrays:
[0, 500]
[100, 350]
[200]
2<=n<=10 and every array could have ~10^3-10^4 elements.
In this example the optimal way to maximize the minimum distance is to pick the numbers 500, 350, 200 or 0, 200, 350; the minimum distance is 150, which is the maximum possible over every combination.
I am looking for an algorithm to solve this. I know that I could binary-search the maximum min distance, but I can't see how to decide whether there is a solution with min distance of at least d, which is what the binary search needs. I am thinking maybe dynamic programming could help, but I haven't managed to find a DP solution.
Of course, generating all combinations of n elements is not efficient. I have already tried backtracking, but it is slow since it tries every combination.
n ≤ 10 suggests that we can take an exponential dependence on n. Here's an O(2^n * m * n)-time algorithm where m is the total size of the arrays.
The dynamic programming approach I have in mind is, for each subset of
arrays, to calculate all of the pairs (maximum number, minimum distance) on
the efficient frontier, where we have to choose one number from each of
the arrays in the subset. By efficient frontier I mean that if we have
two pairs (a, b) ≠ (c, d) with a ≤ c and b ≥ d, then (c, d) is not on
the efficient frontier. We'll want to keep these frontiers sorted for
fast merges.
The base case with the empty subset is easy: there's one pair, (minimum
distance = ∞, maximum number = −∞).
For every nonempty subset of arrays in some order that extends the
inclusion order, we compute a frontier for each array in the subset,
representing the subset of solutions where that array contributes the
maximum number. Then we merge these frontiers. (Naively this costs us
another factor of log n, which maybe isn't worth the hassle to avoid
given that n ≤ 10, but we can avoid it by merging the arrays once at the
beginning to enable future merges to use bucketing.)
Constructing a new frontier from a subset of arrays plus one more array
also involves a merge. We initialize an iterator at the start of the
frontier (i.e., least maximum number) and an iterator at the start of
the array (i.e., least number). While neither iterator is past the end,
Emit a candidate pair (min(minimum distance, array number − maximum
number), array number).
If the min was less than or equal to minimum distance, increment the
frontier iterator. If the min was less than or equal to array number
− maximum number, increment the array iterator.
Cull the candidate pairs to leave only the efficient frontier. There is
an elegant way to do this in code that is more trouble to explain.
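Here is a rough Python sketch of this subset DP, assuming the input is a list of sorted lists of integers. For clarity it builds the candidate pairs with a plain double loop instead of the two-pointer merge described above, so it does not meet the stated bound, but it follows the same recurrence and the same frontier culling.

from itertools import combinations

def max_min_distance(arrays):
    INF = float("inf")
    n = len(arrays)
    # frontier[mask] = pairs (max_number, min_distance) on the efficient
    # frontier for the subset of arrays encoded by mask, sorted by max_number.
    frontier = {0: [(-INF, INF)]}
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            mask = sum(1 << i for i in subset)
            candidates = []
            for i in subset:                    # array i contributes the maximum
                for max_num, min_dist in frontier[mask ^ (1 << i)]:
                    for x in arrays[i]:
                        if x >= max_num:
                            candidates.append((x, min(min_dist, x - max_num)))
            # Cull to the efficient frontier: keep pairs with strictly
            # increasing min_distance as max_number grows.
            candidates.sort(key=lambda p: (p[0], -p[1]))
            front = []
            for max_num, min_dist in candidates:
                if not front or min_dist > front[-1][1]:
                    front.append((max_num, min_dist))
            frontier[mask] = front
    return max(d for _, d in frontier[(1 << n) - 1])

print(max_min_distance([[0, 500], [100, 350], [200]]))   # example above: 150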
I am going to give an algorithm that for a given distance d, will output whether it is possible to make a selection where the distance between any pair of chosen numbers is at least d. Then, you can binary-search the maximum d for which the algorithm outputs "YES", in order to find the answer to your problem.
Assume the minimum distance d is given. Here is the algorithm:
for every permutation p of size n do:
    last := -infinity
    ok := true
    for p_i in p do:
        x := the smallest element greater than or equal to last+d in the p_i-th array
             (can be found efficiently with binary search)
        if no such x was found; then
            ok = false
            break
        end
        last = x
    done
    if ok; then
        return "YES"
    end
done
return "NO"
So, we brute-force the order of arrays. Then, for every possible order, we use a greedy method to choose elements from each array, following the order. For example, take the example you gave:
arrays:
[0, 500]
[100, 350]
[200]
and assume d = 150. For the permutation 1 3 2, we first take 0 from the 1st array, then we find the smallest element in the 3rd array that is greater than or equal to 0+150 (it is 200), then we find the smallest element in the 2nd array which is greater than or equal to 200+150 (it is 350). Since we could find an element from every array, the algorithm outputs "YES". But for d = 200 for instance, the algorithm would output "NO" because none of the possible orderings would result in a successful selection.
The complexity of the above algorithm is O(n! * n * log(m)), where m is the maximum number of elements in an array. I believe it would be sufficient, since n is very small. (For m = 10^4, 10! * 10 * 13 ≈ 5*10^8, which can be computed in under a second on a modern CPU.)
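Here is a hypothetical Python rendering of this check together with the outer binary search on d. It assumes the values are integers and each array is sorted, as in the question; the names are just for illustration.

from bisect import bisect_left
from itertools import permutations

def feasible(arrays, d):
    # Is there a selection with all pairwise distances >= d?
    for order in permutations(range(len(arrays))):
        last, ok = float("-inf"), True
        for i in order:
            arr = arrays[i]
            j = bisect_left(arr, last + d)     # smallest element >= last + d
            if j == len(arr):
                ok = False
                break
            last = arr[j]
        if ok:
            return True
    return False

def max_min_distance(arrays):
    values = [v for arr in arrays for v in arr]
    lo, hi = 0, max(values) - min(values)
    while lo < hi:                             # largest d for which feasible() holds
        mid = (lo + hi + 1) // 2
        if feasible(arrays, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_min_distance([[0, 500], [100, 350], [200]]))   # example above: 150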
Let's look at an example with optimal choices marked x (horizontal arrays A, B, C, D):
A x
B b x b
C x c
D d x
Our recurrence based on range could be: let f(low, excluded) represent the maximum closest distance between two chosen elements (from arrays 1 to n) of the subset without elements in excluded, where low is the lowest chosen element. Then:
(1)
f(low, excluded) when |excluded| = n-1:
max(low)
for low in the only permitted array
(2)
f(low, excluded):
max(
min(
a - low,
f(a, excluded')
)
)
for a ≥ low, a not in excluded'
where excluded' = excluded ∪ {low's array}
We can limit a. For one thing the maximum we can achieve is
(3)
m = (highest - low) / (n - |excluded| - 1)
which means a need not go higher than low + m.
Secondly, we can store results for all f(a, excluded'), keyed by excluded' (we have 2^10 possible keys), each in a decorated binary tree ordered by a. The decoration will be the highest result achievable in the right subtree, meaning we can find the max for all f(v, excluded'), v ≥ a in logarithmic time.
The latter establishes a dominance relationship, and clearly we are interested in both a larger a and a larger f(a, excluded') so as to maximise the min function in (2). Picking an a in the middle, we can use a binary search. If we have:
a - low < max(v, excluded'), v ≥ a
where max(v, excluded') is the lookup
for a in the decorated tree
then we look to the right, since max(v, excluded') indicates there's a better answer on the right, where a - low is also larger.
And if we have:
a - low ≥ max(v, excluded'), v ≥ a
then we record this candidate and look to the left since, to the right, the answer is fixed at max(v, excluded'), given that a - low cannot decrease there.
In order to conduct the binary search on the range, [low, low + m] (see (3)), rather than merge and label all the arrays at the outset, we can keep them separate and compare the closest candidates to mid out of each array we are currently permitted to choose a from. (The trees have the mixed results, keyed by subset.) (The flow of this part is not completely clear to me.)
Worst case with this method, given that n = C is constant seems to be
O(C * array_length * 2^C * C * log(array_length) * log(C * array_length))
C * array_length is the iteration on low
Each low can be paired with 2^C inclusions
C * log(array_length) is the separated binary-search
And log(C * array_length) is the tree lookup
Simplifying:
= O(array_length * log^2(array_length))
although in practice, there could be many dead-end branches that exit early where a full selection wouldn't be possible.
In case it wasn't clear, the iteration is on a fixed lowest element in the selection. In other words, we want the best f(low, excluded) over all different lows (and excludeds). For bottom-up, we would iterate from the highest value down so that our results for a are already stored as we iterate.
The answer (to how many adjacency matrices can represent a given graph with N vertices) is N!, and I don't understand why.
My view:
Assuming it is an undirected graph:
Each row in the adjacency matrix has dimension N, irrespective of the number of edges. Hence, the number of permutations possible for the first row = N!.
Total permutations for the second row = (N-1)!, since one cell has already been taken care of in the first row.
Similarly, for the third row = (N-2)!
.
.
.
For Nth row=1
Total permutations = N! + (N-1)! + ... + 1!
I am confused about whether considering an undirected or a directed graph will yield a different result or not. How will the answer change if we consider the graph to be directed?
I'll take a shot at it, but please ask questions if it's unclear. For it to be N!, we are assuming an undirected graph.
A graph with N vertices is represented by an NxN matrix (2D array) with each value being either 0 (edge does not exist) or 1 (edge exists). Here we are not considering other weights on edges.
Then we consider all of the possible assignments of vertices to rows. If there are N different choices of vertex for the first row, there are N-1 choices for the second row (one vertex is already used), and so on.
N * (N-1) * (N-2) * ... * 1 = N!
Let S be a set of n intervals over the natural numbers that might overlap, and let N be a list of n numbers.
I want to find the smallest subset (let's call it P) of S such that for each number
in our list N, there exists at least one interval in P that contains it. The intervals in P are allowed to overlap.
Trivial example:
S = {[1..4], [2..7], [3..5], [8..15], [9..13]}
N = [1, 4, 5]
// so P = {[1..4], [2..7]}
I think a dynamic-programming algorithm might not always work, so if anybody knows of a solution to this problem (or a similar one that it can be converted into), that would be great. I am trying to come up with an O(n^2) solution.
Here is one greedy approach
P = {}
for each q in N:              // O(n)
    if q is covered by P      // O(n)
        continue
    for each i in S           // O(n)
        if q in i:            // O(n)
            P.add(i)
            break
But that is O(n^4).. Any help with creating a greedy approach that is O(n^2) would be great!
Thanks!
Update: I've been hammering away at this problem and I think I have an O(n^2) solution!!
Let me know if you think I'm right!!!
N = MergeSort(N)
lower, upper = +infinity, -infinity     // nothing is covered yet
P = empty set
for each q in N do
    if (q >= lower and q <= upper) = False
        max_interval = null
        for each r in S do
            if q in r then
                if max_interval = null or r.rightEndPoint > max_interval.rightEndPoint
                    max_interval = r
        P.append(max_interval)
        lower = max_interval.leftEndPoint
        upper = max_interval.rightEndPoint
        S.remove(max_interval)
I think this should work!! I'm trying to find a counterexample; but yeah!!
This problem is similar to the set cover problem, which is NP-complete (i.e., believed to have no polynomial-time solution). What makes it different is that intervals always cover adjacent elements (not an arbitrary subset of N), which opens the way to faster solutions.
http://en.wikipedia.org/wiki/Set_cover_problem
I think that the solution proposed by Mike is good enough. But I think I have a quite straightforward O(N^2) greedy algorithm. It starts like Mike's (moreover, I believe Mike's solution can also be improved in a similar way):
You sort your N numbers and place them, sorted, into an array ELEM; complexity O(N*lg N).
Using binary search, for each interval S[i] you identify the starting and ending index of the elements in ELEM that are covered by S[i]. Say you place this pair of indices into an array COVER; the difference between the two indices tells you how many elements the interval covers, and for simplicity let us place that count into an array COVER_COUNT; complexity O(N*lg N).
You introduce an index pointer p that indicates up to which element of ELEM your N is already covered. You set p = 0, meaning that all elements up to the 0-th (excluded) are initially covered (i.e., no elements); complexity O(1). Moreover, you introduce a boolean array IS_INCLUDED that reflects whether interval S[i] is already included in your coverage set; complexity O(N).
Then you start from the 0-th element of ELEM and look for the interval that contains ELEM[0] and has the greatest coverage COVER_COUNT[i]. Say it is the i-th interval. You mark it as included by setting IS_INCLUDED[i] to true, and you set p to end[i] + 1, where end[i] is the ending index in the COVER[i] pair (indeed, all elements up to end[i] are now covered). Then, knowing p, you update all entries of COVER_COUNT so that they reflect how many of the not-yet-covered elements each interval covers (this can easily be done in O(N) time). You perform the same step for ELEM[p] and continue until p >= ELEM.length. It can be observed that the overall complexity is O(N^2).
You finish in O(N^2), and IS_INCLUDED holds true exactly for the intervals of S included in the optimal cover set.
Let me know if this solution seems reasonable to you and if I calculated everything well.
P.S. Just wanted to add that the optimality of the solution found by the algorithm can be proved by induction and contradiction. By contradiction, it is easy to show that at least one optimal solution includes the longest interval among those covering element ELEM[0]. If so, by induction we can show that, for each next uncovered element, we can keep following the strategy of selecting the interval that covers the leftmost not-yet-covered element and covers the largest number of remaining elements.
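For reference, here is a minimal Python sketch of this kind of greedy, assuming the intervals are (start, end) pairs with inclusive endpoints. Instead of maintaining COVER_COUNT, it uses the equivalent rule of taking, among the intervals containing the leftmost uncovered point, the one reaching furthest right (which is exactly the one covering the most remaining points); the linear scan over S per uncovered point keeps it at O(n^2).

def smallest_cover(intervals, points):
    points = sorted(points)
    chosen = []
    covered_up_to = float("-inf")        # right endpoint of the last chosen interval
    for p in points:
        if p <= covered_up_to:
            continue                     # p is already covered
        best = None
        for (s, e) in intervals:         # O(n) scan per uncovered point
            if s <= p <= e and (best is None or e > best[1]):
                best = (s, e)
        if best is None:
            raise ValueError("no interval covers %d" % p)
        chosen.append(best)
        covered_up_to = best[1]
    return chosen

# Example from the question: one valid answer is [(1, 4), (2, 7)].
print(smallest_cover([(1, 4), (2, 7), (3, 5), (8, 15), (9, 13)], [1, 4, 5]))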
I am not sure, but maybe something like this:
1) For each interval, create a list of the elements from N that are contained in the interval; this takes O(n^2). Call it Q[i] for S[i].
2) Then sort S by the length of Q[i]; O(n*lg(n)).
3) Go through this array, removing the elements of Q[i] from N (O(n)) and from Q[i+1]...Q[n] (O(n^2)).
4) Repeat step 2 while N is not empty.
It's not O(n^2), it's O(n^3), but if you can use a hashmap, I think you can improve this.
This is a generalization of the 2Sum problem
Given an unsorted array of numbers, how do you find the pair of numbers whose sum is closest to an arbitrary target? Note that an exact match may not exist, so the O(n) hashtable solution doesn't apply here.
I can solve the problem in O(n*log(n)) time for a target of 0 as so:
Sort the numbers by absolute value.
Iterate across the sorted array, keeping track of the minimum absolute sum over adjacent values.
This works because the three cases of pairs (+/+, +/-, -/-) are all handled by the logic of absolute value. That is, the sum of pairs of the same sign is minimized when they are closest to 0, and the sum of pairs of different sign is minimized when the components of the pair are closest to each other. Both of these cases are represented when sorting by absolute value.
How can I generalize this to an arbitrary target sum?
Step 1: Sort the array in non-decreasing order. Complexity: O( n lg n )
Step 2: Scan inwards from both ends. Complexity: O( n )
On the sorted array A, let l point to the left-most (i.e. minimum) element and r point to the right-most (i.e. maximum) element.
while l < r:
    curSum = A[l] + A[r]
    diff = curSum - target
    if diff > 0:
        r = r - 1
    else:
        l = l + 1
If ever diff == 0, then you have a perfect match for 2Sum.
Otherwise, look for a change of sign in diff, i.e., a transition from positive to negative or from negative to positive. Whenever such a transition happens, the curSum just before or just after the change is the closest to the target.
Overall Complexity: O( n log n ) because of step 1.
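As a concrete sketch, here is the two-pointer scan in Python. Rather than watching for the sign change explicitly, it simply keeps the best pair seen so far, which amounts to the same thing.

def closest_pair_sum(nums, target):
    a = sorted(nums)                       # O(n log n)
    l, r = 0, len(a) - 1
    best = (a[l], a[r])
    while l < r:                           # O(n)
        cur = a[l] + a[r]
        if abs(cur - target) < abs(best[0] + best[1] - target):
            best = (a[l], a[r])
        if cur == target:
            break                          # exact match, cannot do better
        elif cur > target:
            r -= 1
        else:
            l += 1
    return best

print(closest_pair_sum([1, 4, 7, 13], 10))   # no exact pair; closest is (4, 7), sum 11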
If you are going to sort the numbers anyway, your algorithm already has a complexity of at least O(n*log(n)). So here is what you can do: for each number v you can perform a binary search to find the least number u in the array such that u + v is at least the target. Now check u + v and t + v, where t is the predecessor of u in the sorted array.
The complexity of this is n times the complexity of a binary search, i.e., O(n * log(n)), so your overall complexity remains O(n*log(n)). Also, this solution is much easier to implement than what you suggest.
As pointed out by amit in a comment to the question you can do the second phase with linear complexity, improving its speed. Still the overall computational complexity will remain the same and it is a little bit harder to implement the solution.
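A rough sketch of this variant, for comparison; bisect_left finds the least u with u + v >= target, and checking u's neighbours while skipping the element itself covers the boundary cases.

from bisect import bisect_left

def closest_pair_sum(nums, target):
    a = sorted(nums)
    best = (a[0], a[1])
    for i, v in enumerate(a):
        j = bisect_left(a, target - v)            # least u with u + v >= target
        for k in (j - 1, j, j + 1):               # check u and its neighbours
            if 0 <= k < len(a) and k != i:
                if abs(v + a[k] - target) < abs(best[0] + best[1] - target):
                    best = (v, a[k])
    return best

print(closest_pair_sum([1, 4, 7, 13], 10))        # closest is (4, 7), sum 11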
I am thinking about an algorithm for the following problem (found on CareerCup):
Given a polygon with N vertices and N edges, there is an integer (possibly negative) on every vertex and an operation from {*, +} on every edge. In each step we remove an edge E from the polygon and merge the two vertices V1, V2 linked by it into a new vertex with value V1 op(E) V2. The end case is two vertices joined by two edges; the result is the larger of the two values that can be obtained.
Return the maximum result value that can be obtained from a given polygon.
I think we can use just a greedy approach, i.e., for the polygon with k edges find the pair (p, q) of adjacent vertices whose collapse produces the maximum number: (p, q) = argmax { i op(e) j : i, j adjacent vertices joined by edge e }.
Then just apply the recursion on polygons:
1. Let CollapseMaxPair( P(k) ) be a function that takes a polygon with k edges and returns the 'collapsed' polygon with k-1 edges.
2. Then our recursion:
P = P(N);
repeat until two edges are left:
    P = CollapseMaxPair( P )
maxvalue = max( the two remaining values )
What do you think?
I have answered this question here: Google Interview : Find the maximum sum of a polygon and it was pointed out to me that that question is a duplicate of this one. Since no one has answered this question fully yet, I have decided to add this answer here as well.
As you have identified (tagged) correctly, this is indeed very similar to the matrix multiplication problem (in what order do I multiply matrices in order to do it quickly).
This can be solved in polynomial time using dynamic programming.
I'm going to instead solve a similar, more classic (and essentially equivalent) problem: given a formula with numbers, additions and multiplications, which way of parenthesizing it gives the maximal value? For example,
6+1 * 2 becomes (6+1)*2 which is more than 6+(1*2).
Let us denote our input a_1, ..., a_n (real numbers) and o(1), ..., o(n-1) (each either * or +). Our approach works as follows: we consider the subproblem F(i,j), which represents the maximal value (after parenthesizing) of the sub-formula a_i, ..., a_j. We will create a table of such subproblems and observe that F(1,n) is exactly the result we are looking for.
Define F(i,j):
- If i > j, return 0   // no sub-formula of negative length
- If i = j, return a_i   // the maximal value for one number is the number itself
- If i < j, return the maximal value, over all m between i (inclusive) and j (exclusive), of:
  F(i,m) o(m) F(m+1,j)   // check all places for a possible outermost parenthesis
This goes through all possible options. The proof of correctness is by induction on the size n = j - i and is pretty straightforward.
Let's go through the runtime analysis:
If we do not save the values of smaller subproblems, this runs pretty slowly; however, with memoization we can make this algorithm run in O(n^3).
We create an n*n table T in which the cell at index (i,j) contains F(i,j). Filling F(i,i), and F(i,j) for j smaller than i, is done in O(1) per cell since we can calculate these values directly. Then we go diagonal by diagonal: we fill F(i,i+1) for all i (which we can do quickly since we already know all the previous values needed by the recursive formula), and we repeat this for all n diagonals of the table. Filling each cell takes O(n), and each diagonal has O(n) cells, so we fill each diagonal in O(n^2), meaning we fill the whole table in O(n^3). After filling the table we obviously know F(1,n), which is the solution to your problem.
Now back to your problem:
If you translate the polygon into n different formulas (one starting at each vertex), run the algorithm on each of them, and take the best of the n results, you get exactly the value you want.
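Here is a small Python sketch of this interval DP, maximizing only, exactly as in the recurrence above, plus a wrapper that cuts the cycle at each possible starting vertex and takes the best result. The names are just for illustration, and edge_ops[k] is assumed to be the operator on the edge between vertex k and vertex k+1 (mod n).

from functools import lru_cache

def max_formula(a, ops):
    """Best value of a[0] ops[0] a[1] ... a[n-1] over all parenthesizations."""
    n = len(a)

    @lru_cache(maxsize=None)
    def F(i, j):
        if i == j:
            return a[i]
        best = float("-inf")
        for m in range(i, j):              # try every position of the outermost split
            left, right = F(i, m), F(m + 1, j)
            val = left + right if ops[m] == '+' else left * right
            best = max(best, val)
        return best

    return F(0, n - 1)

def max_polygon(values, edge_ops):
    """Try each starting vertex (i.e., each edge left unused) and keep the best."""
    n = len(values)
    best = float("-inf")
    for s in range(n):
        vals = [values[(s + k) % n] for k in range(n)]
        ops = [edge_ops[(s + k) % n] for k in range(n - 1)]
        best = max(best, max_formula(vals, ops))
    return best

print(max_formula([6, 1, 2], ['+', '*']))  # example above: (6 + 1) * 2 = 14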
Here's a case where your greedy algorithm fails:
Imagine your polygon is a square with vertices A, B, C, D (top left, top right, bottom right, bottom left). This gives us edges (A,B), (A,D), (B,C), and (C, D).
Let the weights be A=-1, B=-1, C=-1, and D=1,000,000.
A (-1) ------ B (-1)
| |
| |
| |
| |
D(1000000) ---C (-1)
Clearly, the best strategy is to collapse (A,B), and then (B,C), so that you may end up with D by itself. Your algorithm, however, will start with either (A,D) or (D,C), which will not be optimal.
A greedy algorithm that combines the min sums has a similar weakness, so we need to think of something else.
I'm starting to see how we want to try to get all positive numbers together on one side and all negatives on the other.
If we think about the initial polygon entirely as a state, then we can imagine all the possible child states to be the subsequent graphs where an edge is collapsed. This creates a tree-like structure. A BFS or DFS would eventually give us an optimal solution, but at the cost of traversing the entire tree in the worst case, which is probably not as efficient as you'd like.
What you are looking for is a greedy best-first approach to search down this tree that is provably optimal. Perhaps you could create an A*-like search through it, although I'm not sure what your admissible heuristic would be.
I don't think the greedy algorithm works. Let the vertices be A = 0, B = 1, C = 2, and the edges be AB = a - 5b, BC = b + c, CA = -20. The greedy algorithm selects BC to evaluate first, value 3; then AB, value -15. However, there is a better sequence: evaluate AB first, value -5; then evaluate BC, value -3. I don't know of a better algorithm, though.