I know various algorithms to compute the maximum weighted matching of weighted, undirected bipartite graphs (i.e. the assignment problem):
For instance: the Hungarian algorithm, Bellman-Ford, or even the Blossom algorithm (which works for general, i.e. non-bipartite, graphs).
However, how can I compute the maximum weighted matching if the edges of the bipartite graph are weighted and directed?
I would appreciate pointers to polynomial-time algorithms, or to prior transformations that make the graph undirected so that I could apply any of the aforementioned algorithms.
Edit: note that the matching should maximize the weight of the edges; that's why having directed edges makes a difference (A->B can have a totally different weight than B->A).
Admittedly, if I were maximizing cardinality, the directed edges wouldn't make a difference and I could apply any of the well-known algorithms for maximum-cardinality matching: Hopcroft–Karp, maximum network flow, and so on.
Edit 2: Since matching is a term normally applied to undirected graphs, let me clarify exactly what I mean by matching in this question: a set of directed edges that do not share start or end vertices. More formally, if U->V and U'->V' are part of the matching, then V /= U' and V' /= U.
dfb's comment is correct: for any two vertices A and B, you can discard the cheaper of the two edges AB and BA.
The proof is a one-liner:
Theorem: A maximum matching M never contains the cheaper edge of AB and BA for any two vertices A,B.
Proof: Let M be a maximum matching. Suppose AB is in M and is cheaper than BA. Define M' = M - {AB} + {BA}. M' is clearly still a matching, but it has a larger total weight. That contradicts the assumption that M was a maximum matching.
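As a concrete illustration, here is a minimal sketch (my own, not from the answer) of that reduction: collapse each antiparallel pair to its heavier edge, then hand the resulting undirected graph to an off-the-shelf maximum-weight matching routine (networkx is assumed here; the weights in the usage comment are made up).

    import networkx as nx

    def max_weight_directed_matching(directed_edges):
        """directed_edges: dict {(u, v): weight} of directed edges u -> v."""
        G = nx.Graph()
        for (u, v), w in directed_edges.items():
            # Keep only the heavier of u->v and v->u; by the theorem above the
            # cheaper one can never appear in a maximum matching.
            if G.has_edge(u, v):
                if w > G[u][v]["weight"]:
                    G[u][v].update(weight=w, direction=(u, v))
            else:
                G.add_edge(u, v, weight=w, direction=(u, v))
        matching = nx.max_weight_matching(G)   # undirected max-weight matching
        return [G[a][b]["direction"] for a, b in matching]

    # Example with made-up weights: B->A beats A->B, so B->A is the edge kept.
    # max_weight_directed_matching({("A", "B"): 3, ("B", "A"): 5, ("C", "D"): 2})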
I have a question that was asked in a past exam at my school, and I can't find an answer to it.
Is it possible, knowing the final matrix after running Johnson's algorithm on a graph, to tell whether it previously had negative cycles or not? Why?
Johnson's Algorithm
Johnson's algorithm computes shortest paths between all pairs of vertices in a graph. It can handle negative edge weights, as long as the graph contains no cycle of negative total weight.
The algorithm consists of the following steps (from Wikipedia; a sketch of the first three steps follows the list):
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u, v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
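For reference, here is a minimal sketch of the first three steps (the dict-of-dicts graph representation and the function name are my own assumptions, not from the quoted description):

    def johnson_reweight(graph):
        """graph: dict {u: {v: weight}}; every vertex must appear as a key."""
        nodes = list(graph)
        # Adding the virtual source q with zero-weight edges to every vertex is
        # equivalent to initialising h[v] = 0 for all v.
        h = {v: 0 for v in nodes}
        # Bellman-Ford from q: n - 1 relaxation rounds over the original edges.
        for _ in range(len(nodes) - 1):
            for u in nodes:
                for v, w in graph[u].items():
                    if h[u] + w < h[v]:
                        h[v] = h[u] + w
        # One extra pass: any further improvement means a negative cycle.
        for u in nodes:
            for v, w in graph[u].items():
                if h[u] + w < h[v]:
                    raise ValueError("negative-weight cycle detected")
        # Reweight: w'(u, v) = w(u, v) + h(u) - h(v), which is always >= 0.
        reweighted = {u: {v: w + h[u] - h[v] for v, w in graph[u].items()}
                      for u in graph}
        return reweighted, h

Running Dijkstra from every vertex on the reweighted graph then gives the all-pairs distances; a reweighted distance d'(u, v) converts back to the original one as d(u, v) = d'(u, v) - h(u) + h(v).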
If I understood your question correctly, it should have been phrased as follows:
Is it possible, knowing the final pairwise distance matrix after running Johnson's algorithm on a graph, to tell whether it originally had any negative-weight edges or not? Why?
As others commented here, we must first assume the graph has no negative-weight cycles, since otherwise Johnson's algorithm halts and returns False (due to the internal Bellman-Ford detection of negative-weight cycles).
The answer then is that if any negative-weight edge e = (u, v) exists in the graph, then the shortest weighted distance from u to v must be negative: in the worst case you can simply travel along the edge e itself, so that distance is at most w(u, v) < 0.
Therefore, at least one of the edges had negative weight in the original graph iff some value in the final pairwise distance matrix is < 0. (For the converse direction: a negative entry means some shortest path has negative total weight, which is only possible if at least one of its edges has negative weight.)
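A tiny sketch of that check (the dict-of-dicts layout and function name are my assumptions, not from the question):

    def had_negative_edge(dist):
        """dist[u][v]: final pairwise shortest distances from Johnson's algorithm."""
        return any(d < 0
                   for row in dist.values()
                   for d in row.values()
                   if d != float("inf"))   # ignore unreachable pairs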
If the question is supposed to be interpreted as:
Is it possible, knowing the updated non-negative edge weights after running the Johnson Algorithm on a graph, to know if it originally had any negative-weight edges or not? Why?
Then no, you can't tell.
Running the Johnson algorithm on a graph that has only non-negative edge weights will leave the weights unchanged. This is because all of the shortest distances q -> v will be 0. Therefore, given the edge weights after running Johnson, the initial weights could have been exactly the same.
Given an undirected, connected graph G = (V, E), a vertex v in V(G), and a weight function f: E -> R+ (the positive reals), I need to find an MST such that v's degree is minimal. I've already noticed that if all edge weights are distinct then the MST is unique, so I believe this has something to do with repeated weights on edges.
I thought about running Kruskal's algorithm, but when sorting the edges I always consider edges incident to v last within their weight class. For example, if (a,b), (c,d), (v,e) are the only edges of weight k, the possible permutations of these edges in the sorted edge array are {(a,b),(c,d),(v,e)} or {(c,d),(a,b),(v,e)}. I've run this variation over several graphs and it seems to work, but I couldn't prove it. Does anyone know how to prove the algorithm correct (meaning that v's degree is minimal), or can you give a counterexample where it fails?
First note that Kruskal's algorithm can be applied to any weighted graph, whether or not it is connected. In general it results in a minimum-weight spanning forest (MSF), with one MST for each connected component. To prove that your modification of Kruskal's algorithm succeeds in finding the MST for which v has minimal degree, it helps to prove the slightly stronger result that if you apply your algorithm to a possibly disconnected graph then it succeeds in finding the MSF where the degree of v is minimized.
The proof is by induction on the number, k, of distinct weights.
Basis Case (k = 1). Here weights can be ignored and we are trying to find a spanning forest in which the degree of v is minimized. Your algorithm can then be described as follows: pick edges for as long as possible according to the following two rules:
1) No selected edge forms a cycle with previously selected edges.
2) An edge involving v is selected only when every remaining edge that does not involve v would violate rule 1.
Let G' denote the graph obtained from G by removing v and all edges incident to it. It is easy to see that in this special case the algorithm works as follows. It starts by creating a spanning forest for G'. Then it takes those trees in the forest that are contained in v's connected component in the original graph G and connects each of them to v by a single edge. The components connected to v in the second stage can be connected to each other in no other way (if a connecting edge not involving v existed, it would have been selected first by rule 2), so the degree of v is minimal.
Inductive Case: Suppose that the result is true for k and G is a weighted graph with k+1 distinct weights and v is a specified vertex in G. Sort the distinct weights in increasing order (so that the (k+1)-st distinct weight, say w_{k+1}, is the largest). Let G' be the subgraph of G with the same vertex set but with all edges of weight w_{k+1} removed. Since the edges are sorted in order of increasing weight, the modified Kruskal's algorithm in effect starts by applying itself to G'. Thus, by the induction hypothesis, just before considering edges of weight w_{k+1} the algorithm has constructed an MSF F' of G' for which the degree d' of v in G' is minimized.
As a final step, modified Kruskal's applied to the overall graph G will merge certain of the trees in F' together by adding edges of weight w_{k+1}. One way to conceptualize the final step is to think of F' as a graph where two trees are connected exactly when there is an edge of weight w_{k+1} from some node in the first tree to some node in the second tree. We are (almost) back in the basis case with F'. Modified Kruskal's will add edges of weight w_{k+1} until it can't do so anymore -- and won't add an edge connecting to v unless there is no other way to connect the trees in F' that need to be connected to get a spanning forest for the original graph G.
The final degree of v in the resulting MSF is d = d' + d", where d" is the number of edges of weight w_{k+1} added at the final step. Neither d' nor d" can be made any smaller, hence d can't be made any smaller (since the degree of v in any spanning forest can be written as the sum of the number of edges of weight less than w_{k+1} incident to v and the number of edges of weight w_{k+1} incident to v).
QED.
There is still an element of hand-waving in this, especially with the final step -- but Stack Overflow isn't a peer-reviewed journal. Anyway, the overall logic should be clear enough.
One final remark -- it seems fairly clear that Prim's algorithm can be similarly modified for this problem. Have you looked into that?
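For what it's worth, here is a minimal sketch of the tie-breaking rule from the question (integer vertices 0..n-1 and a simple union-find are my own assumptions):

    def min_degree_v_msf(n, edges, v):
        """edges: list of (weight, a, b) triples; returns the selected forest edges."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        # Sort by weight; within a weight class, edges touching v come last.
        edges = sorted(edges, key=lambda e: (e[0], e[1] == v or e[2] == v))
        forest = []
        for w, a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb:                        # rule 1: never close a cycle
                parent[ra] = rb
                forest.append((a, b, w))
        return forest

The boolean in the sort key is what implements "consider edges on v last": within each weight class, False (edges avoiding v) sorts before True.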
I have a bipartite graph with two sets of vertices A and B. Edges have no weights. However, vertices in one of the sets (say set B) have positive weights assigned to them (w_b1, w_b2, ...).
I want to find a matching in this bipartite graph so as to maximize the sum of weights of vertices matched from set B.
After an extensive online search, this is what I have come up with: assign weight w_bi to all edges incident on vertex b_i and run the Hungarian algorithm.
Is there a more efficient way to look at this problem, since it's different from weighted maximum matching (here vertices have weights as opposed to edges)?
If my language is not clear, feel free to edit. Thank you.
If an improvement from O(V^3) to O(V E) and a simpler algorithm are worth it (it isn't an asymptotic improvement for the densest graphs), you could exploit the matroid structure of matchings as follows: instantiate Ford-Fulkerson by repeatedly choosing an augmenting path to an unmatched vertex in B whose weight is as large as possible.
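A minimal sketch of that idea (Kuhn-style augmenting paths; the data layout and names are my assumptions): process the B-vertices in decreasing weight order and try to augment for each one.

    def max_weight_b_matching(adj, weight):
        """adj[b]: list of A-vertices adjacent to b; weight[b]: weight of b."""
        match_a = {}                                  # A-vertex -> matched B-vertex

        def try_augment(b, seen):
            for a in adj[b]:
                if a in seen:
                    continue
                seen.add(a)
                # a is free, or its current partner can be re-routed elsewhere.
                if a not in match_a or try_augment(match_a[a], seen):
                    match_a[a] = b
                    return True
            return False

        total = 0
        for b in sorted(adj, key=lambda b: weight[b], reverse=True):
            if try_augment(b, set()):
                total += weight[b]
        return total, match_a

The greedy order is what makes this work: the sets of B-vertices that can be matched simultaneously form a transversal matroid, so always augmenting toward the heaviest remaining B-vertex maximizes the total weight.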
I'm trying to find an efficient method of detecting whether a given graph G has two different minimal spanning trees. I'm also trying to find a method to check whether it has 3 different minimal spanning trees. The naive solution that I've thought about is running Kruskal's algorithm once and finding the total weight of the minimal spanning tree, then removing an edge from the graph, running Kruskal's algorithm again and checking whether the weight of the new tree equals the weight of the original minimal spanning tree, and so on for each edge in the graph. The runtime is O(|V||E|log|V|), which is not good at all, and I think there's a better way to do it.
Any suggestion would be helpful,
thanks in advance
You can modify Kruskal's algorithm to do this.
First, sort the edges by weight. Then, for each weight in ascending order, filter out all irrelevant edges. The relevant edges form a graph on the connected components of the minimum-spanning-forest-so-far. You can count the number of spanning trees in this graph. Take the product over all weights and you've counted the total number of minimum spanning trees in the graph.
You recover the same running time as Kruskal's algorithm if you only care about the one-tree, two-trees, and three-or-more-trees cases. To count spanning trees in general you wind up doing a determinant calculation (Kirchhoff's matrix-tree theorem), so you likely end up with an O(MM(n)) worst case overall, where MM(n) is the cost of n-by-n matrix multiplication.
Suppose you have an MST T0 of a graph. Now, if we can get another MST T1, it must have at least one edge E different from the original MST. Throw away E from T1; now the graph is separated into two components. However, in T0 these two components must be connected, so there will be another edge across these two components that has exactly the same weight as E (otherwise we could substitute the heavier of the two with the lighter one and get a smaller spanning tree). This means that substituting this other edge for E gives you another MST.
What this implies is that if there is more than one MST, we can always change just a single edge of an MST and get another MST. So, checking each edge in turn, try to substitute it with another edge of the same weight; if you get another spanning tree, it is another MST, and this gives you a faster algorithm.
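One concrete way to run that check (my own sketch, using networkx): build one MST, then look for a non-tree edge whose weight equals the heaviest edge on the tree path between its endpoints; swapping the two yields a second MST.

    import networkx as nx

    def has_second_mst(G):
        """G: weighted nx.Graph with 'weight' edge attributes."""
        T = nx.minimum_spanning_tree(G)
        for u, v, w in G.edges(data="weight"):
            if T.has_edge(u, v):
                continue
            path = nx.shortest_path(T, u, v)              # unique tree path u .. v
            heaviest = max(T[a][b]["weight"] for a, b in zip(path, path[1:]))
            if w == heaviest:                             # swap gives a different MST
                return True
        return False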
Suppose G is a graph with n vertices and m edges; that the weight of any edge e is W(e); and that P is a minimal-weight spanning tree on G, weighing Cost(W,P).
Let δ = minimal positive difference between any two edge weights. (If all the edge weights are the same, then δ is indeterminate; but in this case, any ST is an MST so it doesn't matter.) Take ε such that δ > n·ε > 0.
Create a new weight function U() with U(e)=W(e)+ε when e is in P, else U(e)=W(e). Compute Q, an MST of G under U. If Cost(U,Q) < Cost(U,P) then Q≠P. But Cost(W,Q) = Cost(W,P) by construction of δ and ε. Hence P and Q are distinct MSTs of G under W. If Cost(U,Q) ≥ Cost(U,P) then Q=P and distinct MSTs of G under W do not exist.
The method above determines if there are at least two distinct MSTs, in time O(h(n,m)) if O(h(n,m)) bounds the time to find an MST of G.
I don't know if a similar method can treat whether three (or more) distinct MSTs exist; simple extensions of it fall to simple counterexamples.
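Here is a minimal sketch of the perturbation test above (networkx and exact rational arithmetic via Fraction are my own choices, and it assumes G is connected):

    from fractions import Fraction
    import networkx as nx

    def has_two_msts(G):
        H = nx.Graph()
        for u, v, w in G.edges(data="weight"):
            H.add_edge(u, v, weight=Fraction(w))           # exact arithmetic for the epsilon test
        P = nx.minimum_spanning_tree(H)
        weights = sorted({d["weight"] for _, _, d in H.edges(data=True)})
        if len(weights) == 1:
            # All weights equal: every spanning tree is an MST, so there are at
            # least two iff the (connected) graph contains a cycle.
            return H.number_of_edges() > H.number_of_nodes() - 1
        delta = min(b - a for a, b in zip(weights, weights[1:]))
        eps = delta / (2 * H.number_of_nodes())            # delta > n*eps > 0
        for u, v in P.edges():
            H[u][v]["weight"] += eps                       # penalise the edges of P
        Q = nx.minimum_spanning_tree(H)
        # A second MST exists iff the perturbed MST avoids some edge of P.
        return any(not P.has_edge(u, v) for u, v in Q.edges())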
Let G = (U ∪ V, E) be a weighted, directed bipartite graph (i.e. U and V are the two vertex sets of the bipartite graph, and E contains directed, weighted edges going from U to V or from V to U). Here is an example:
In this case:
U = {A,B,C}
V = {D,E,F}
E = {(A->E,7), (B->D,1), (C->E,3), (F->A,9)}
Definition: DirectionalMatching (I made up this term just to make things clearer): set of directed edges that may share the start or end vertices. That is, if U->V and U'->V' both belong to a DirectionalMatching then V /= U' and V' /= U but it may be that U = U' or V = V'.
My question: How to efficiently find a DirectionalMatching, as defined above, for a bipartite directional weighted graph which maximizes the sum of the weights of its edges?
By efficiently, I mean polynomial complexity or better; I already know how to implement a naive brute-force approach.
In the example above the maximum weighted DirectionalMatching is: {F->A,C->E,B->D}, with a value of 13.
Formally demonstrating the equivalence of this problem to any other well known problem in graph theory would also be valuable.
Thanks!
Note 1: This question is based on Maximum weighted bipartite matching _with_ directed edges but with the extra relaxation that it is allowed for edges in the matching to share the origin or destination. Since that relaxation makes a big difference, I created an independent question.
Note 2: This is a maximum weight matching. Cardinality (how many edges are present) and the number of vertices covered by the matching are irrelevant for a correct result; only the maximum weight matters.
Note 3: During my research to solve the problem I found this paper; I think it would be helpful to others trying to find a solution: Alternating cycles and paths in edge-coloured multigraphs: a survey.
Note 4: In case it helps, you can also think of the graph as its equivalent 2-edge-coloured undirected bipartite multigraph. The problem formulation would then become: find the set of edges without colour-alternating paths or cycles which has maximum weight sum.
Note 5: I suspect that the problem might be NP-hard, but I am not that experienced with reductions, so I haven't managed to prove it yet.
Yet another example:
Imagine you had
4 vertices: {u1, u2} {v1, v2}
4 edges: {u1->v1, u1->v2, u2->v1, v2->u2}
Then, regardless of their weights, u1->v2 and v2->u2 cannot be in the same DirectionalMatching, and neither can v2->u2 and u2->v1. However, u1->v1 and u1->v2 can, and so can u1->v1 and u2->v1.
Define a new undirected graph G' from G as follows.
G' has a node (A, B) with weight w for each directed edge (A, B) with weight w in G
G' has an undirected edge ((A, B),(B, C)) if (A, B) and (B, C) are both directed edges in G
http://en.wikipedia.org/wiki/Line_graph#Line_digraphs
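A small sketch of this construction (the dict-of-edges input format and the names are my assumptions):

    import networkx as nx

    def build_conflict_graph(edges):
        """edges: dict {(a, b): weight} of directed edges a -> b in G."""
        Gp = nx.Graph()
        for e, w in edges.items():
            Gp.add_node(e, weight=w)              # node (A, B) carries the edge's weight
        for (a, b) in edges:
            for (c, d) in edges:
                if (a, b) != (c, d) and b == c:   # (A, B) and (B, C) conflict
                    Gp.add_edge((a, b), (c, d))
        return Gp

    # For the example above, build_conflict_graph({("A","E"): 7, ("B","D"): 1,
    # ("C","E"): 3, ("F","A"): 9}) has a single conflict edge, ("F","A")-("A","E").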
Now find a maximum-weight independent vertex set in G'.
http://en.wikipedia.org/wiki/Vertex_independent_set
Edit: the approach after this point only works if all of the edge weights are the same - when the edge weights have different values it's a more difficult problem (google "maximum weight independent vertex set" for possible algorithms)
Typically this would be an NP-hard problem. However, G' is a bipartite graph -- it contains only even cycles. Finding the maximum-weight independent vertex set in a bipartite graph is not NP-hard.
The algorithm you will run on G' is as follows.
Find the connected components of G', say H_1, H_2, ..., H_k
For each H_i do a 2-coloring (say red and blue) of the nodes. The cookbook approach here is to do a depth-first search on H_i alternating colors. A simple approach would be to color each vertex in H_i based on whether the corresponding edge in G goes from U to V (red) or from V to U (blue).
The two options for which nodes to select from H_i are either all the red nodes or all the blue nodes. Choose the colored node set with higher weight. For example, the red node set has weight equal to H_i.nodes.where(node => node.color == red).sum(node => node.w). Call the higher-weight node set N_i.
Your maximum-weight independent vertex set is now union(N_1, N_2, ..., N_k).
Since each vertex in G' corresponds to one of the directed edges in G, you have your maximum-weight DirectionalMatching.
This problem can be solved in polynomial time using the Hungarian Algorithm. The "proof" by Vor above is wrong.
The method of structuring the problem for the above example is as follows:
    D   E   F
A   #   7   9
B   1   #   #
C   #   3   #
where "#" means negative infinity. You then resolve the matrix using the Hungarian algorithm to determine the maximum matching. You can multiply the numbers by -1 if you want to find a minimum matching.