Looking for algorithms: Minimum cut to produce bipartite graph

Given an undirected weighted graph (or a single connected component of a larger disconnected graph) which will typically contain numerous odd and even cycles, I am searching for algorithms that remove the smallest possible number of edges necessary to produce one or more bipartite subgraphs. Are there standard algorithms for this in the literature, as there are for minimum cut and the like?
The problem I am trying to solve looks like this in the real world:
Presentations of about an hour each are given to students on different subjects in one or two time blocks. Students sign up for one presentation of their choice, or two, or three (the third choice is an alternative in case one of the others isn't going to be presented). The choices must all be different. If a presentation gets fewer than three sign-ups, it will not be given. If it gets 18 or more, it will be given twice, once in each block. I have to schedule the presentations such that the maximum number of sign-ups is satisfied.
Scheduling is trivial in the following cases:
Sign-ups for only one presentation can always be satisfied if the presentation is given (i.e. sign-ups >= 3);
Sign-ups for two given presentations are always satisfiable if at least one of them is given twice.
First, all sign-ups are aggregated to determine which presentations will be given once and which twice. If a student has signed up for a presentation with too few other sign-ups, their alternative presentation is chosen instead, provided it will be given.
At the end of the day, I am left with an undirected weighted graph whose vertices are the presentations that are given only once. An edge connects two such presentations if at least one student has signed up for exactly that combination, and its weight is the number of students with that combination (thus avoiding parallel edges).
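To make that construction concrete, here is a minimal sketch of building the weighted graph from the remaining sign-ups, assuming each relevant sign-up has already been reduced to the pair of once-given presentations it involves (the names are illustrative, not from my actual code):

from collections import Counter

def build_conflict_graph(signup_pairs):
    # signup_pairs: iterable of (presentation_a, presentation_b) pairs, one per
    # student whose two chosen presentations are each given only once.
    # Returns a dict mapping a frozenset{a, b} edge to its weight (the number
    # of students with that exact combination), so parallel edges never arise.
    weights = Counter(frozenset(pair) for pair in signup_pairs)
    return dict(weights)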
If the number of vertices, or presentations, is around 20 or less, I have come up with a brute force solution which finishes in acceptable time. However, each additional vertex will double the runtime of that solution. After 28 or so, it rapidly becomes unmanageable.
This year we had 37 presentations, thirty of which were only given once and thus ended up in the graph. What I am trying right now for larger graphs is the following:
1. Find all connected components and solve each component individually;
2. For each component, remove leaf nodes and bridge edges recursively;
3. Generate a spanning tree (I am using Kruskal's algorithm, which works very well), saving the removed edges;
4. Generate the fundamental cycle set by adding one removed edge back into the tree at a time and stripping off the rest of the tree;
5. Using the Gibbs-Welch algorithm, generate the complete set of elementary cycles, starting with the fundamental set obtained in step 4;
6. Count the number of odd and even cycles to which each edge belongs;
7. Create a priority queue of edges (ordering discussed below) and remove edges successively from the component until it becomes bipartite.
I cannot find an ordering of the priority queue for which I can prove that the result would be as acceptable as a solution obtained using the brute force method (it is probably NP-hard). However, I am trying something along these lines:
a. If the edge belongs only to odd cycles, remove it first;
b. If the edge belongs to more odd than even cycles, remove it before any other edges which belong to more even cycles than odd;
c. Edges with the smallest weight should be removed first.
If an edge belongs to both an odd and an even cycle, removing it would leave a larger odd cycle behind, which is why I order them this way. Obviously, the more odd cycles an edge belongs to, the higher its priority, but only if fewer even cycles are affected.
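To illustrate step 7, here is a minimal sketch of the removal loop, assuming the component is stored as an adjacency dict of sets adj, the edge weights are in a dict weight keyed by (u, v) tuples, and the per-edge cycle counts from step 6 are already available in odd_count and even_count (all names are hypothetical). A static sort stands in for the priority queue, and the bipartiteness test is an ordinary BFS 2-coloring:

from collections import deque

def is_bipartite(adj):
    # BFS 2-coloring; True exactly when the graph in adj contains no odd cycle.
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def remove_until_bipartite(adj, weight, odd_count, even_count):
    # Static ordering standing in for the priority queue:
    # a. edges in only odd cycles first, b. then more odd than even cycles,
    # c. ties broken by smallest weight.
    order = sorted(weight, key=lambda e: (even_count[e] > 0,
                                          even_count[e] - odd_count[e],
                                          weight[e]))
    removed = []
    for u, v in order:
        if is_bipartite(adj):
            break
        adj[u].discard(v)
        adj[v].discard(u)
        removed.append((u, v))
    return removed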
There are additional criteria which exist but need to be considered outside of the graph problem; for example, removing an edge effectively removes one of the sign-ups for one of the presentations, so an eye has to be kept on not letting the number of sign-ups get too small.
(EDIT: there is also the possibility of splitting a presentation that has almost enough sign-ups for two blocks, e.g. 15-16 instead of 18, into both blocks anyway. But this means whoever is giving the presentation would have to do it twice, so it is a trade-off.)
Thanks in advance for any suggestions!

This problem is equivalent to the NP-hard weighted max cut problem, which asks for a partition of the vertices into two parts such that the total weight of the edges going between the parts is maximized.
I think the easiest way to solve a problem of the size you have would be to formulate it as a quadratic integer program and then apply an off-the-shelf solver. The formulation looks like this:
maximize (1/2) · Σ_{i,j} w_ij (1 − y_i y_j)
subject to
y_i ∈ {−1, +1} for all i
where w_ij is the weight of the undirected edge ij if that edge is present and zero otherwise (so absent edges simply contribute nothing to the sum). Each term contributes w_ij exactly when y_i and y_j take opposite signs, i.e. when edge ij crosses the cut.
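Just to illustrate the objective (this is not a substitute for an exact solver), here is a minimal sketch of a simple 1-flip local-search heuristic for weighted max cut, assuming the graph is a dict mapping (u, v) tuples to positive weights; the names are mine:

import random

def local_search_max_cut(edges, seed=0):
    # Greedy 1-flip local search for weighted max cut (a heuristic, not exact).
    # edges: dict mapping (u, v) tuples to positive weights.
    # Returns (cut_weight, side) where side[v] is -1 or +1.
    rng = random.Random(seed)
    vertices = {v for e in edges for v in e}
    side = {v: rng.choice((-1, 1)) for v in vertices}   # random starting partition

    # Incidence lists so each flip only looks at the edges touching that vertex.
    incident = {v: [] for v in vertices}
    for (u, v), w in edges.items():
        incident[u].append((v, w))
        incident[v].append((u, w))

    improved = True
    while improved:
        improved = False
        for v in vertices:
            # Flipping v gains w for every same-side neighbor and loses w for
            # every opposite-side neighbor.
            gain = sum(w if side[u] == side[v] else -w for u, w in incident[v])
            if gain > 0:
                side[v] = -side[v]
                improved = True

    value = sum(w for (u, v), w in edges.items() if side[u] != side[v])
    return value, side

A flip-based local search like this only finds a local optimum; for a provably best answer you would hand the quadratic integer program above to an exact solver or wrap it in branch and bound.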

Related

Fixing Karger's min cut algorithm with union-find data structure

I was trying to implement Karger's min cut algorithm in the same way it is explained here, but I don't like the fact that at each step of the while loop we can pick an edge with its two endpoints already in the same supernode. More specifically, this part:
// If two corners belong to same subset,
// then no point considering this edge
if (subset1 == subset2)
continue;
Is there a quick fix for avoiding this problem?
It might help to back up and think about why there’s a union-find structure here at all and why it’s worth improving on the continue statement.
Conceptually, each contraction performed changes the graph in the following way:
1. The nodes contracted get replaced with a single node.
2. The edges incident to either node get replaced with an edge to the new joint node.
3. The edges running between the two earlier nodes get removed.
The question, then, is how to actually do this to the graph. The code you’ve found does this lazily. It doesn’t actually change the graph when the contraction is done. Instead, it uses the union-find structure to record which nodes are now equivalent to one another. When it samples a random edge, it then has to check whether that edge is one of the ones that would have been deleted in step (3). If so, it skips it and moves on. This has the effect that early contractions are really fast (when few contractions have been done, the chance of picking an edge whose endpoints already lie in the same supernode is very low), but later contractions might be a lot slower (once many contractions have been done, a large fraction of the remaining edges may effectively have been deleted).
Here’s a simple modification you can use to speed this step up. Whenever you pick an edge to contract and find that its endpoints are already connected, discard that edge, and remove it from the list of edges so that it never gets picked again. You can do this by swapping that edge to the end of the list of edges, then removing the last element of the list. This has the effect that every edge processed will never be seen again, so across all iterations of the algorithm every edge will be processed at most once. That gives a runtime of one randomized contraction phase as O(m + nα(n)), where m is the number of edges and n is the number of nodes. The factor of α(n) comes from the use of the union-find structure.
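Here is a minimal sketch of one randomized contraction phase using that trick, assuming the graph is given as a list of (u, v) edges over nodes 0..n-1 (the function and variable names are mine, not from the code you linked):

import random

def find(parent, x):
    # Union-find root lookup with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def contraction_phase(n, edges, rng=random):
    # One run of Karger's randomized contraction; returns the crossing edges found.
    # Edges whose endpoints are already merged are swapped to the end of the list
    # and dropped, so each edge is examined at most once over the whole phase.
    edges = list(edges)                 # local copy we are allowed to shrink
    parent = list(range(n))
    supernodes = n
    while supernodes > 2 and edges:
        i = rng.randrange(len(edges))
        u, v = edges[i]
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            # Would-be self-loop: discard it permanently instead of just `continue`.
            edges[i] = edges[-1]
            edges.pop()
            continue
        parent[ru] = rv                 # contract the two supernodes
        supernodes -= 1
        edges[i] = edges[-1]            # a contracted edge is never needed again either
        edges.pop()
    # The cut consists of the remaining edges whose endpoints lie in different supernodes.
    return [(u, v) for u, v in edges if find(parent, u) != find(parent, v)]

As with any single contraction phase, you would repeat this many times (on the order of n² log n repetitions for a high-probability guarantee) and keep the smallest cut seen.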
If you truly want to remove all semblance of that continue statement, an alternative approach would be to directly simulate the contraction. After each contraction, iterate over all m edges and adjust each one by seeing whether it needs to remain unchanged, point to the new contracted node, or be removed altogether. This will take time O(m) per contraction for a net cost of O(mn) for the overall min cut calculation.
There are ways to speed things up beyond this. Karger’s original paper suggests generating a random permutation of the edges and using binary search over that array with a clever use of BFS or DFS to find the cut produced in time O(m), which is slightly faster than the O(m + nα(n)) approach for large graphs. The basic idea is the following:
Probe the middle element of the list of edges.
Run a BFS on the graph formed by using only the edges up to that point in the permutation and see if there are exactly two connected components.
If so, great! Those two CCs are the ones you want.
If there is only one CC, discard the back half of the array of edges and try again.
If there are more than two CCs, contract each CC into a single node and update a global table indicating which CC each node belongs to. Then discard the first half of the array and try again.
The cost of each BFS is O(m), where m is the number of edges in the graph, and this gives the recurrence T(m) = T(m/2) + O(m) because at each stage we’re throwing away half of the edges. That solves to O(m) total time, though as you can see, it’s a much trickier way of coding this algorithm up!
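If you do not need the full O(m) bookkeeping, a simpler way to exploit the same permutation idea is to binary-search for the shortest prefix of a random edge permutation that leaves exactly two connected components, rerunning a union-find pass per probe. That costs O(m log m) rather than O(m), but it is short to write. Here is a sketch, assuming a connected graph on nodes 0..n-1 (names are mine):

import random

def components_of_prefix(n, edges, k):
    # Union-find over only the first k edges of the permutation.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = n
    for u, v in edges[:k]:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps, [find(v) for v in range(n)]

def permutation_cut(n, edges, rng=random):
    # One contraction run, phrased as a binary search for the shortest prefix
    # of a random edge permutation that leaves exactly two components.
    edges = list(edges)
    rng.shuffle(edges)
    lo, hi = 0, len(edges)           # invariant: the answer lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        comps, _ = components_of_prefix(n, edges, mid)
        if comps <= 2:
            hi = mid                 # this prefix merges enough; try a shorter one
        else:
            lo = mid + 1
    _, label = components_of_prefix(n, edges, lo)
    return [(u, v) for u, v in edges if label[u] != label[v]]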
To summarize:
With a very small modification to the provided code, you can keep the continue statement in but still have a very fast implementation of the randomized contraction algorithm.
To eliminate that continue without sacrificing the runtime of the algorithm, you need to do some major surgery and change approaches to something only marginally asymptotically faster than keeping the continue in.
Hope this helps!

Solving a TSP-related task

I have a problem similar to the basic TSP but not quite the same.
I have a starting position for a player character, and he has to pick up n objects in the shortest time possible. He doesn't need to return to the original position and the order in which he picks up the objects does not matter.
In other words, the problem is to find the minimum-weight (distance) Hamiltonian path with a given (fixed) start vertex.
What I currently have is an algorithm like this:
best_total_weight_so_far = Inf
foreach possible end vertex:
    add a dummy vertex with 0-weight edges to the start and end vertices
    current_solution = solve TSP for this graph
    remove the dummy vertex
    total_weight = Weight(current_solution)
    if total_weight < best_total_weight_so_far:
        best_solution = current_solution
        best_total_weight_so_far = total_weight
However this algorithm seems to be somewhat time-consuming, since it has to solve the TSP n-1 times. Is there a better approach to solving the original problem?
It is a rather minor variation of TSP and clearly NP-hard. Any heuristic algorithm (and you really shouldn't try to do anything better than a heuristic for a game, IMHO) for TSP should be easily modifiable for your situation. Even nearest neighbor probably wouldn't be bad -- in fact for your situation it would probably be better than when used in TSP, since in nearest neighbor the return edge is often the worst. Perhaps you can use NN + 2-Opt to eliminate edge crossings.
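For what it's worth, here is a minimal sketch of nearest neighbor for the fixed-start path version (no return leg), assuming a symmetric distance function dist(a, b); the names are illustrative only. A 2-opt pass over the resulting path (reversing segments whenever that shortens it) usually cleans up the obvious crossings.

def nearest_neighbor_path(start, points, dist):
    # Greedy fixed-start path: always walk to the closest unvisited point.
    # points: iterable of hashable locations (start may or may not be included).
    # dist:   callable giving the distance between two locations.
    # Returns (path, total_length); path begins at start and visits every point once.
    remaining = set(points) - {start}
    path, total = [start], 0.0
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        total += dist(current, nxt)
        path.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return path, total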
On edit: Your problem can easily be reduced to the TSP problem for directed graphs. Double all of the existing edges so that each is replaced by a pair of arrows. The cost of each arrow is simply the cost of the corresponding edge, except for the arrows that go into the start node. Make those cost 0 (no cost in returning at the end of the day). If you have code that solves the TSP for directed graphs, you could thus use it in your case as well.
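A sketch of that reduction, assuming the original symmetric distances are in a matrix d and the start node has index 0 (the asymmetric TSP solver itself is assumed to exist elsewhere and is not shown):

def asymmetric_costs_for_path_tsp(d, start=0):
    # Build the directed cost matrix: copy the symmetric costs, but make every
    # arrow *into* the start node free, so the forced return leg costs nothing.
    n = len(d)
    cost = [[d[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        cost[i][start] = 0          # returning to the start at the end is free
    return cost

Feed cost to any directed-TSP solver; the tour it returns, read from the start node with the final return arc dropped, is the minimum-length pickup path.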
At the risk of it getting slow (20 points should be fine), you can use the good old exact TSP algorithms in the way John describes. 20 points is really easy for TSP - instances with thousands of points are routinely solved and instances with tens of thousands of points have been solved.
For example, use linear programming and branch & bound.
Make an LP problem with one variable per edge (there are more edges now because it's directed), the variables will be between 0 and 1 where 0 means "don't take this edge in the solution", 1 means "take it" and fractional values sort of mean "take it .. a bit" (whatever that means).
The costs are obviously the distances, except for returning to the start. See John's answer.
Then you need constraints: for each node, the sum of its incoming edges is 1 and the sum of its outgoing edges is 1. Also, the sum of a pair of arrows that was previously one undirected edge must be at most 1.
With only those constraints, the solution will typically consist of disconnected triangles, since that is the cheapest way to connect the nodes such that each has both an incoming and an outgoing edge without using both arrows of the same original edge. So the sub-tours must be eliminated. The simplest way to do that (probably strong enough for 20 points) is to decompose the solution into connected components, and then for each connected component require that the sum of edges entering it is at least 1 (it can be more than 1), and likewise for the edges leaving it. Solve the LP again and repeat until there is only one component.
There are more cuts you can do, such as the obvious Gomory cuts, but also fancy special TSP cuts (comb cuts, blossom cuts, crown cuts... there are whole books about this), but you won't need any of this for 20 points.
What this gives you is, sometimes, the solution directly. Usually, to begin with, it will contain fractional edges. In that case it still gives you a good underestimate of how long the tour will be, and you can use that in the framework of branch & bound to determine the actual best tour. The idea there is to pick an edge that was fractional in the result and fix it to either 0 or 1 (this often turns edges that were previously 0 or 1 fractional, so you have to keep all "chosen" edges fixed in the whole sub-tree in order to guarantee termination). Now you have two sub-problems; solve each recursively. Whenever the estimate from the LP solution becomes longer than the best path you have found so far, you can prune that sub-tree (since it's an underestimate, all integral solutions in that part of the tree can only be worse). You can initialize the "best solution so far" with a heuristic solution, but for 20 points it doesn't really matter; the techniques described here are already enough to solve 100-point problems.

Algorithm to identify "fuzzily-connected" subgraphs

I have a problem which looks like the connected-subgraphs problem from a mile up, but is quite distinct in that it does not fall under the strict definitions.
I face a graph with a few million nodes and links (manual analysis is not possible). Among those millions of nodes, there are known to be 2 or 3 "sets".
Each of the "sets" comprises hundreds of thousands of nodes and tens of thousands of subgraphs, not strongly connected. Each of those sets should theoretically not be linked to the other sets... but there are (my guess) about a dozen erroneous links that end up connecting those sets.
The problem is to find those sets and the erroneous links, or at least get a human-manageable list of erroneous links candidates that can be verified manually.
My current "best idea" is to randomly pick two nodes, find the shortest path between them, then mark the links on that shortest path. Rinse & repeat millions of times, and the erroneous links eventually end up as the most marked ones, as they are "chokepoints" between the sets.
However, this is quite slow, and when one set is much larger than the others and has internal chokepoints, it ends up dominating the "most marked" list, making it meaningless.
Are there better algorithms/approaches for that?
edit: a refinement of the path marking is to mark edges proportionally to the length of the path, which helps with the "internal chokepoints of a large set" issue, but does not entirely eliminate it, as some sets can have distant "outliers" while other sets have lots of tightly connected nodes (short internal distances)
My idea is an ant colony algorithm. I was inspired by your approach of choosing two random nodes, but thought it would be useful to do something more than just compute the shortest path.
Start n ants at n random nodes. You will need to adjust n by trial and error. Ants leave pheromone on the edges they travel. Pheromone evaporates over time. At each step an ant chooses one of the adjacent edges to travel with probability proportional to the pheromone: the more pheromone an edge has, the more likely an ant is to choose it.
In the beginning the ants move totally randomly, since there is no pheromone and all edges have the same probability. However, over time the most popular edges, the bridges between two "fuzzily-connected" components, will have more and more pheromone on them.
So, you throw n ants, simulate for m turns and return edges with the highest amount of pheromone on them. You can visualize this process to clearly see what is going on.
Update: I realized that the claim above, that the bridges between two "fuzzily-connected" components will accumulate the most pheromone, is wrong. I implemented it, and it turns out that most of the time the bridges do not necessarily attract ants:
There were n = 1000 ants and m = 1000 steps. Initially every edge had 1 unit of pheromone, and it was increased by 1 whenever an ant traveled over it. There was no evaporation, though I don't think it would have improved the situation. The bridge had 49,845 units of pheromone, but there were three other edges with more than 100k.
As suggested by Peter de Rivaz in the comment, I tried (source code) repeating min-cut between 2 random nodes and it is much better:
Graphs generated with python-igraph library.
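For reference, here is a minimal sketch of that repeated random-pair min-cut approach (not the linked source code), using networkx, which is assumed to be available. On a multi-million-node graph you would want a faster max-flow routine, but the structure of the idea is the same: edges that separate the big "sets" should appear in many of the cuts, so the most frequently seen edges are good candidates for manual review.

import random
from collections import Counter

import networkx as nx

def candidate_bad_links(G, trials=1000, seed=0):
    # Repeated s-t minimum edge cuts between random node pairs; count how often
    # each edge shows up and return the most frequent ones for a human to check.
    rng = random.Random(seed)
    nodes = list(G.nodes)
    counts = Counter()
    for _ in range(trials):
        s, t = rng.sample(nodes, 2)
        if not nx.has_path(G, s, t):
            continue                      # pair from different components: nothing to cut
        for u, v in nx.minimum_edge_cut(G, s, t):
            counts[frozenset((u, v))] += 1
    return counts.most_common(20)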

Many to One Matching Algorithm

My apologies if this is a duplicate. I lack the Computer Science knowledge to know what to properly search for.
I need to find a matching algorithm. I've got a series of rooms and a series of contents-for-rooms. The contents have a minimum size of room in which they would fit, so some would happily fit in any room while others would only fit in one or two rooms. I'll also have a maximum size for some of the rooms, but I assume this only affects how I determine whether a room is suitable.
Assuming (though this won't be guaranteed in my actual use) that a solution exists, how do I find an optimal allocation such that each room is used only once and none of the contents is left without a room?
Your problem appears to be a maximum bipartite matching problem. You can think of it as an undirected graph G(V,E) where the vertices V are the rooms and contents and the edges E are the possible connections between rooms and contents:
The graph is bipartite. If we split the vertices into two sets, rooms and contents, there are no internal edges in each set.
An edge exists in the graph between contents(i) and room(j) if the room is big enough to hold the contents.
A maximum matching produces the maximum number of pairings between vertices in the two sets (i.e. rooms and contents), ensuring that each vertex is used at most once. The matching is said to be "perfect" if all vertices are matched. There are a number of algorithms that can be used for such problems; potentially the fastest is the Hopcroft-Karp method.
You could also consider a further optimisation of your problem, in which you try to minimise the total wasted space in the rooms. In this case a "weight" would be associated with each edge defined above, based on the difference between the areas of the contents and the room.
You would then seek a maximum weight maximum matching.
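As an illustration of the unweighted case (this is the classic augmenting-path approach, not Hopcroft-Karp, which is faster but longer to write), here is a minimal sketch assuming fits[c] lists the rooms that content item c can go into; all names are hypothetical:

def maximum_bipartite_matching(fits):
    # Classic augmenting-path matching (Kuhn's algorithm).
    # fits: dict mapping each content item to the rooms it fits into.
    # Returns a dict content -> room for the matched items.
    room_of = {}        # content -> room
    content_of = {}     # room -> content

    def try_assign(c, seen):
        for r in fits[c]:
            if r in seen:
                continue
            seen.add(r)
            # Use room r if it is free, or if its current occupant can be rehoused.
            if r not in content_of or try_assign(content_of[r], seen):
                room_of[c] = r
                content_of[r] = c
                return True
        return False

    for c in fits:
        try_assign(c, set())
    return room_of

If every content item appears in the returned dict, the matching covers everything; otherwise the unmatched items tell you exactly where the room sizes are too tight.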
You could solve this as an http://en.wikipedia.org/wiki/Assignment_problem. You don't have matching numbers of things on the two sides, but you can make up dummy things for whichever side runs short. If you make the cost of the made-up things the same for every possible match, then the minimum-cost answer for the problem with made-up things is also the minimum-cost answer for the problem without them (which produces only a partial match), because the contribution of the made-up things to the cost is the same no matter how they are assigned.
(of course there may be a faster way to solve your specific problem - for instance if you only have one thing to match on one of the sides, just try it in every possible location).
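A small sketch of that padding trick, assuming SciPy is available and the wasted-space costs are already in a rectangular list-of-lists cost with contents as rows and rooms as columns (use an impossibly large cost for rooms that are too small); all names are illustrative:

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_contents_to_rooms(cost, pad_cost=0.0):
    # Hungarian assignment with dummy rows/columns so the matrix is square.
    # Dummy entries all get the same pad_cost, so they cannot bias which real
    # pairs get chosen, exactly as described above.
    cost = np.asarray(cost, dtype=float)
    n_contents, n_rooms = cost.shape
    size = max(n_contents, n_rooms)
    padded = np.full((size, size), pad_cost)
    padded[:n_contents, :n_rooms] = cost
    rows, cols = linear_sum_assignment(padded)
    # Keep only the real (non-dummy) pairs.
    return [(i, j) for i, j in zip(rows, cols) if i < n_contents and j < n_rooms]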

graph algorithm to detect even cycles

I have an undirected graph. One edge in that graph is special. I want to find all other edges that are part of an even cycle containing that first edge.
I don't need to enumerate all the cycles (that would be inherently NP, I think); I just need to know, for each edge, whether it satisfies the condition above.
A brute force search works of course but is too slow, and I'm struggling to come up with anything better. Any help appreciated.
I think we have an answer (I must credit my colleague with the idea). Essentially his idea is to do a flood fill through the space of even cycles. This works because if you have a large even cycle formed by merging two smaller cycles, then the smaller cycles must both have been even or both odd. Similarly, merging an odd and an even cycle always forms a larger odd cycle.
This is only a practical option, though, because I can imagine pathological cases consisting of alternating even and odd cycles. In such a case we would never find two adjacent even cycles, and so the algorithm would be slow. But I'm confident that such cases don't arise in real chemistry. At least in chemistry as it's currently known; 30 years ago we'd never heard of fullerenes.
If your graph has a small node degree, you might consider using a different graph representation:
Consider three atoms u, v, w and two chemical bonds e=(u,v) and k=(v,w). A typical way of representing such data is to store u, v, w as nodes and e, k as edges of a graph.
However, one may instead represent e and k as nodes of a new graph, with an edge f=(e,k) whenever two bonds share an atom; f then represents a 2-step link from u to w, i.e. f=(u,v,w). Running any cycle-finding algorithm on such a graph will return all even cycles of the original graph.
Of course, this is efficient only if the original graph has a small node degree. When a user performs an edit, you can easily update the alternative representation accordingly.
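A minimal sketch of building that alternative representation, assuming the molecule is given as a list of bonds, each a hashable pair of atoms (the names are illustrative):

from collections import defaultdict
from itertools import combinations

def two_step_graph(bonds):
    # Nodes of the new graph are the original bonds; two bonds are adjacent
    # whenever they share an atom, i.e. they form a 2-step link u-v-w.
    bonds_at_atom = defaultdict(list)
    for bond in bonds:
        for atom in bond:
            bonds_at_atom[atom].append(bond)
    new_edges = set()
    for incident in bonds_at_atom.values():
        for e, k in combinations(incident, 2):
            new_edges.add((e, k))
    return new_edges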
