Let G be a directed weighted graph with nodes colored black or white, and all weights non-negative. No other information is specified: there is no designated start or terminal vertex.
I need to find a path (not necessarily simple) of minimal weight which alternates colors at least n times. My first thought is to run Kosaraju's algorithm to get the component graph, then find a minimal path between the components. You could then select components with in-degree zero, since paths starting there will have at least as many color alternations as paths that start at components with positive in-degree. However, that also means you may end up with an unnecessarily long path.
I've thought about modifying the graph somehow, perhaps by making copies of the graph that black-to-white or white-to-black edges point into, or by copying or deleting edges, but nothing I'm brainstorming seems to work.
The comments mention using Dijkstra's algorithm, and in fact there is a way to make this work. If we create a new "root" vertex and connect it to every other vertex with a directed zero-weight edge, we can run a modified Dijkstra's algorithm from the root outwards, terminating once a path's inversion count reaches n. It is important to note that the implementation must allow revisiting each vertex, so the key of each entry in our priority queue will not be merely node_id, but a tuple (node_id, inversion_count), representing that vertex on its ith visit (i.e. with i inversions accumulated so far). In doing so, we implicitly make n copies of each vertex, one per potential visit.

Visually, we are effectively making n copies of our graph and redirecting the edges between each (black_vertex, white_vertex) pair to connect the ith inversion graph to the (i+1)th. We run the algorithm until we reach a path with n inversions. Alternatively, we can connect each vertex of the nth inversion graph to a "sink" vertex and run any conventional shortest-path algorithm on this expanded graph, unmodified. This runs in O(n(E + Vlog(nV))) time. You could optimize this quite heavily, and also consider using A* instead, with smallest_inversion_weight * (n - inversion_count) as a heuristic.
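For concreteness, here is a minimal Python sketch of that layered-state Dijkstra, assuming the graph is given as an adjacency dict adj[u] = [(v, weight), ...] and a color dict (these names are just for illustration); seeding every vertex with distance 0 plays the role of the zero-weight edges from the virtual root:

    import heapq

    def min_weight_alternating_path(adj, color, n):
        # State = (vertex, inversions so far). Seeding every vertex at cost 0
        # stands in for the zero-weight edges from the virtual root.
        dist = {(v, 0): 0 for v in adj}
        pq = [(0, v, 0) for v in adj]
        heapq.heapify(pq)
        while pq:
            d, u, k = heapq.heappop(pq)
            if k >= n:
                return d              # non-negative weights: first pop with n inversions is optimal
            if d > dist.get((u, k), float('inf')):
                continue              # stale queue entry
            for v, w in adj[u]:
                nk = min(n, k + (color[u] != color[v]))   # cap: we only need to reach n
                if d + w < dist.get((v, nk), float('inf')):
                    dist[(v, nk)] = d + w
                    heapq.heappush(pq, (d + w, v, nk))
        return None                   # no path with n color alternations exists

Because all weights are non-negative, the first state popped with n inversions is optimal, which is what the early return relies on.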
Furthermore, another idea hit me regarding using knowledge of the inversion requirement to speed up the search, but I was unable to find a way to implement it without exceeding O(V^2) time. The idea is that you can use an addition chain (as in binary exponentiation) to decompose the shortest n-inversion path into two smaller paths, and rinse and repeat in a divide-and-conquer fashion. The issue is that you would need to construct tables of the shortest i-inversion path between every pair of vertices, which is O(V^2) entries per i, and O(V^2 logn) overall. To construct each table, for every entry in the preceding table you'd need to try appending V other paths, so it'd be O(V^3 logn) time overall. Maybe someone else will see a way to merge these two ideas into an O((logn)(E + Vlog(Vlogn))) time algorithm or something.
I just want to be clear that EMST stands for Euclidean Minimum Spanning Tree.
Essentially, I have been given a file with 100k 4D vertices (one vertex on each line). The goal is to visit every vertex in the file while minimizing the total distance traveled. The distance from one point to another is simply the Euclidean distance (the distance if you draw a straight line between the two points).
I already know that this is pretty much the Traveling Salesman Problem, which is NP-hard, so I am looking to approximate the solution.
The first approximation algorithm that came to my mind is finding the MST of a graph constructed from the file... But that would take O(N^2) time just to construct all the edges, given that it's a complete graph (I can go from any point to any other). And given that my input is N = 10^5, my algorithm will have a huge running time, which is too slow...
Any ideas on how I can plan on approximating the solution? Thank you very much..
I know it's quadratic-time, but I think you should consider Prim with an implicit graph. The structure of the algorithm is:

    for each vertex v
        mindist[v] := infinity
        visited[v] := false
    choose a root vertex r
    mindist[r] := 0
    repeat |V| times
        let w be the minimizer of mindist[w] such that not visited[w]
        visited[w] := true
        for each vertex v
            if not visited[v] and distance(w, v) < mindist[v]:
                mindist[v] := distance(w, v)
                parent[v] := w
Since the storage used is linear, it will likely stay resident in cache, and there are no fancy data structures, so this algorithm should run pretty fast.
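For what it's worth, here is a rough NumPy translation of that pseudocode for the 4D point set, assuming the points sit in an (n, 4) array called pts (the name is just for illustration); the inner loop is vectorized, which is usually what makes the quadratic algorithm tolerable in practice:

    import numpy as np

    def prim_implicit(pts):
        """O(n^2) Prim on the complete Euclidean graph over pts, an (n, 4) array."""
        n = len(pts)
        mindist = np.full(n, np.inf)
        parent = np.full(n, -1)
        visited = np.zeros(n, dtype=bool)
        mindist[0] = 0.0                                  # root vertex r = 0
        for _ in range(n):
            w = np.argmin(np.where(visited, np.inf, mindist))
            visited[w] = True
            d = np.linalg.norm(pts - pts[w], axis=1)      # distances from w to every point
            closer = ~visited & (d < mindist)
            mindist[closer] = d[closer]
            parent[closer] = w
        return parent, mindist                            # edge (v, parent[v]) has weight mindist[v]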
I am going to assume that you actually want an EMST as your title suggests, and the TSP is just a means to that end, and not the actual goal itself. The two have very different restrictions (the TSP being far more restrictive), and thus very different optimal solutions.
Overview
The idea is that we want to run a modified Kruskal's algorithm, which will make use of a k-d tree to find the closest pairs without evaluating every potential edge. We can find the shortest edge leaving each connected component, take the shortest of these overall, and connect our connected components via that edge. As you'll see, this connects at least half of our connected components each iteration, so it takes at most logn iterations to complete.
Nearest Neighbor Search
For constructing an EMST, you'll want to use a data structure for querying for nearest neighbors in 4D space. You could extend octrees to work in a higher dimension, but I'd personally go with a k-d tree. You can construct a k-d tree in O(nlogn) time using the median of medians algorithm to find the median at each level, and you can insert / remove from a balanced k-d tree in O(logn) time.
Once you've built a k-d tree, you'll want to query for the nearest neighbor to each point. We'll then construct the edge between these two vertices. Many of these edges will be duplicated, since for some vertices A and B, A's nearest neighbor may be B and B's nearest neighbor may be A. We'll handle this by storing which connected component each vertex belongs to: after two vertices are joined by an edge, the duplicate edge will clearly connect two vertices of the same connected component, and so we'll discard it. To accomplish this, we'll use a disjoint-set (just like in many implementations of Kruskal's algorithm) to assign a connected component to each vertex. This will also prevent us from creating cycles in our graph, which would introduce unnecessary edges into the MST.
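If SciPy is available, the first round of nearest-neighbor edges can be gathered without hand-rolling the tree; a sketch (the variable names are made up, and the dedup just normalizes each index pair):

    import numpy as np
    from scipy.spatial import cKDTree

    pts = np.random.rand(100_000, 4)                # stand-in for the 100k 4D points
    tree = cKDTree(pts)                             # built in O(n log n)
    dists, idxs = tree.query(pts, k=2)              # column 0 of each result is the point itself
    candidate_edges = {(min(u, v), max(u, v)): d    # normalize pairs so A-B and B-A collapse
                       for u, (v, d) in enumerate(zip(idxs[:, 1], dists[:, 1]))}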
Merging
However, as we construct each edge, we'll want to insert it into a min-heap priority queue before checking which edges to keep and which edges connect already-connected vertices. This does not affect the outcome of this first iteration, but later on we will need to handle edges in order of increasing distance. Then dequeue all the edges, check for unnecessary / redundant edges via the disjoint-set, insert valid edges into the MST, and merge the respective disjoint sets. All of this of course introduces an nlogn factor for constructing and dequeuing elements from the min-heap (we could also just sort them in a plain array, if we wished).
After this first iteration of adding edges, we'll have connected at least half of the MST, maybe more. This is because we added one edge per vertex, and each edge can be duplicated at most once, so we've added as few as vertices / 2 edges, but as many as vertices - 1. Now at least 1/2 of our MST has been built. We'll continue the process as described in the following paragraphs until we've added vertices - 1 edges in total.
Generalizing NN-Search
To continue, we'll want to construct lists of the vertices in each connected component, so that we can iterate over them by groups. This can be done in nearly linear time, as searching (also merging) a disjoint-set takes O(α(n)) time (α being the inverse Ackermann function) and we repeat exactly n times. Once we have our lists of vertices per connected component, the rest is fairly straightforward. We'll take our existing k-d tree and remove all the vertices in our current connected component. We'll then query for the nearest neighbor to each vertex in our connected component, and add these edges to our min-heap. We'll then add these vertices back into the k-d tree and repeat on the next connected component. Since we insert/remove a total of n elements, this amounts to an average-case O(nlogn) time complexity.
Now that we have a queue of the shortest potential edges connecting our connected components, we'll dequeue these in order, and just as before insert valid edges and merge the disjoint sets. For the same reasons as before, this is guaranteed to connect at least half of our components, maybe even all of them. We'll repeat this process until we have connected all vertices into a single connected component, which will be our MST. Note that because we halve the number of disconnected components each iteration, it'll take at most O(logn) iterations to connect every vertex in our MST (most likely far less).
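Putting the pieces together, here is a condensed sketch of the whole loop (function and variable names are invented). It is essentially a Borůvka-style merge over a k-d tree; since SciPy's cKDTree doesn't support deletions, this version queries k nearest neighbors and skips hits inside the same component (widening k if a round finds nothing), so it approximates the exact per-component search described above and may return a slightly heavier tree in adversarial cases:

    import numpy as np
    from scipy.spatial import cKDTree

    def find(parent, x):                            # disjoint-set find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def approx_emst(pts, k=8):
        n = len(pts)
        tree = cKDTree(pts)
        parent = list(range(n))
        edges, components = [], n
        while components > 1:
            k = min(k, n)
            dists, idxs = tree.query(pts, k=k)
            best = {}                               # component root -> (dist, u, v)
            for u in range(n):
                ru = find(parent, u)
                for d, v in zip(dists[u], idxs[u]):
                    if find(parent, int(v)) != ru:  # nearest of the k neighbors outside u's component
                        if ru not in best or d < best[ru][0]:
                            best[ru] = (d, u, int(v))
                        break
            if not best:                            # every point's k neighbors were internal
                k = min(2 * k, n)
                continue
            for d, u, v in sorted(best.values()):   # cheapest candidate edges first
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:
                    parent[ru] = rv
                    edges.append((u, v, d))
                    components -= 1
        return edges                                # n - 1 edges of the (approximate) EMST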
Remarks
Overall, this will take O(nlog^2(n)) time. There will likely be far fewer than log(n) iterations, however, so expect a speedup there in practice. Also note that an R-tree might be a good alternative to the k-d tree; I don't know how they compare in practice, however.
Given an undirected, unweighted graph with any type of connectivity: it can contain one or several components, with or without isolated nodes; each node can have zero or more connections; cycles are allowed (but no loops from a node to itself).
I need to find the maximum number of vertex pairs, assuming each vertex can be used only once. For example, if the graph has nodes 1, 2, 3 and node 3 is connected to nodes 1 and 2, the answer is one (either 1-3 or 2-3).
I am thinking about the following approach:
1. Remove all single (isolated) nodes.
2. Find an edge connecting a node with the minimal number of edges to a node with the maximal number of edges (if there are several, take any of them), count this pair, and remove both nodes from the graph.
3. Repeat step 2 while the graph still has connected nodes.
My questions are:
Does it provide the maximal number of pairs in every case? I am worried about some extreme cases, like cycles connected by one or several paths, etc.
Is there any faster and correct algorithm?
I can use java or python, but pseudocode or just algo description is perfectly fine.
Your approach is not guaranteed to provide the maximum number of vertex pairs even in the case of a cycle-free graph. For example, in the following graph your approach is going to select the edge (B,C). After that unfortunate choice, there are no more vertex pairs to choose from, and therefore you'll end up with a solution of size 1. Clearly, the optimal solution contains two vertex pairs, and hence your approach is not optimal.
The problem you're trying to solve is the Maximum Matching Problem (not to be confused with the Maximal Matching Problem which is trivial to solve):
Find the largest subset of edges S such that no vertex is incident to more than one edge in S.
The Blossom Algorithm solves this problem in O(EV^2).
The way the algorithm works is not straightforward and it introduces nontrivial notions (like a contracted matching, forest expansions and blossoms) to establish the optimal matching. If you just want to use the algorithm without fully understanding its intricacies you can find ready-to-use implementations of it online (such as this Python implementation).
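If Python is an option, NetworkX's max_weight_matching (with maxcardinality=True) will give you a maximum matching without implementing the blossom machinery yourself; a small usage sketch with an invented edge list:

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([(1, 3), (2, 3), (3, 4), (4, 5)])          # toy example graph
    matching = nx.max_weight_matching(G, maxcardinality=True)   # maximum matching
    print(len(matching), matching)                              # e.g. 2 pairs: {(2, 3), (4, 5)}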
The problem: you need to find the minimum spanning tree of a graph (i.e. a set S of edges in said graph such that the edges in S, together with all of the graph's vertices, form a tree; additionally, among all such sets, the sum of the costs of the edges in S has to be minimal). But there's a catch. You are given an initial set of fixed edges K such that K must be included in S.
In other words, find some MST of a graph with a starting set of fixed edges included.
My approach: standard Kruskal's algorithm, but before anything else join all vertices as indicated by the set of fixed edges. That is, if K = {(1,2), (4,5)}, I apply Kruskal's algorithm, but instead of having each node in its own individual set initially, nodes 1 and 2 start in the same set and nodes 4 and 5 start in the same set.
The question: does this work? Is there a proof that this always yields the correct result? If not, could anyone provide a counter-example?
P.S. the problem only requires finding ONE MST. Not interested in all of them.
Yes, it will work as long as your initial set of edges doesn't form a cycle.
Keep in mind that the resulting tree might not be minimal in weight since the edges you fixed might not be part of any MST in the graph. But you will get the lightest spanning tree which satisfies the constraint that those fixed edges are part of the tree.
How to implement it:
To implement this, you can simply change the edge weights of the edges you need to fix. Just pick the lowest edge weight appearing in your graph, say min_w, subtract 1 from it, and assign this new weight, i.e. (min_w - 1), to the edges you need to fix. Then run Kruskal on this graph.
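A short sketch of that reweighting trick in Python (edges given as (w, u, v) tuples and the fixed set as (u, v) endpoint pairs; the union-find is deliberately bare-bones):

    def kruskal_with_fixed(n, edges, fixed):
        """edges: list of (w, u, v); fixed: set of (u, v) pairs that must appear in the tree."""
        min_w = min(w for w, _, _ in edges)
        reweighted = [((min_w - 1) if (u, v) in fixed or (v, u) in fixed else w, u, v)
                      for w, u, v in edges]
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree = []
        for w, u, v in sorted(reweighted):          # fixed edges sort first, so they are taken first
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
        return tree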
Why it works:
Clearly Kruskal will pick all the edges you need (since these are the lightest now) before picking any other edge in the graph. When Kruskal finishes, the resulting set of edges is an MST in G' (the graph where you changed some weights). Note that since you only changed the values of your fixed set of edges, the algorithm would never have made a different choice on the other edges (the ones which aren't part of your fixed set). If you think of the edges Kruskal considers as a sorted list, then changing the values of the edges you need to fix moves these edges to the front of the list, but it doesn't change the order of the other edges in the list with respect to each other.
Note: As you may notice, giving the lightest weight to your edges is basically the same thing as you suggest. But I think it is a bit easier to reason about why it works. Go with whatever you prefer.
I wouldn't recommend Prim, since this algorithm expands the spanning tree gradually from the current connected component (usually starting from a single node). The case where you join larger components (because your fixed edges might not all be in a single component) would need to be handled separately; it might not be hard, but you would have to take care of it. OTOH with Kruskal you don't have to adapt anything, but simply manipulate your graph a bit before running the regular algorithm.
If I understood the question properly, Prim's algorithm would be more suitable for this, as it is possible to initialize the connected components to be exactly the edges which are required to occur in the resulting spanning tree (plus the remaining isolated nodes). The desired edges are not permitted to contain a cycle, otherwise there is no spanning tree including them.
That being said, apparently Kruskal's algorithm can also be used, as it is explicitly stated that it can be used to find an edge that connects two forests in a cost-minimal way.
Roughly speaking, as the forests of a given graph form a Matroid, the greedy approach yields the desired result (namely a weight-minimal tree) regardless of the independent set you start with.
I have a directed graph with all non-negative edges except the edge(s) that leave the source (S). There are no edges from any other vertices to the source. To find the shortest distance from the source (S) to a vertex (T) in the graph, can I use Dijkstra's shortest path algorithm even though the edges leaving the source are negative?
Assuming only source-adjacent edges can have negative weights and there is no path back to the source from any of the source-adjacent nodes (as mentioned in the comment), you can just add a constant C to all edges leaving the source to make them all non-negative. Then subtract C from the final result.
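A sketch of that shift-and-subtract idea on top of a standard heap-based Dijkstra (adjacency dict adj[u] = [(v, w), ...] is an assumed representation; it relies on the stated conditions, i.e. only edges leaving the source s can be negative and nothing leads back to s):

    import heapq

    def dijkstra_negative_source_edges(adj, s, t):
        # Every s -> t path uses exactly one edge leaving s (nothing re-enters s),
        # so adding the same constant C to just those edges shifts all path lengths equally.
        C = max(0, -min((w for _, w in adj[s]), default=0))
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d if t == s else d - C        # undo the shift
            if d > dist.get(u, float('inf')):
                continue                             # stale entry
            for v, w in adj.get(u, []):
                nw = w + C if u == s else w          # only source-adjacent edges are shifted
                if d + nw < dist.get(v, float('inf')):
                    dist[v] = d + nw
                    heapq.heappush(pq, (d + nw, v))
        return None                                  # t is unreachable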
On a more general note, Dijkstra can be used to solve shortest-path in any graph with negative edge weights (but no negative cycles) after applying Johnson's reweighting algorithm (which is essentially Bellman-Ford, but needs to be performed only once).
Yes, you can use Dijkstra on that type of directed graph.
If you use an already-written Dijkstra implementation that cannot handle negative values, a good trick is to find the most negative starting edge and add its absolute value to all starting edges, so that there are no negative weights at all. You subtract that number again after finishing.
If you code it yourself (which is actually pretty easy, and I recommend it), you hardly need to change anything: just start with the lowest value (as usual for Dijkstra) and allow that lowest value to be negative. It will work in your case.
The reason you generally can't use Dijkstra's algorithm for (directed) graphs with negative links is that Dijkstra's algorithm is greedy. It assumes that once you pick a vertex with minimum distance, there is no way it can later be reached by a shorter path.
In your particular graph, the very first step traverses all possible negative edges, and Dijkstra's assumption actually holds from then on. Regardless of the fact that the vertices directly connected to the start now have negative values, once you identify which has the minimum distance, it can never again be reached with a smaller distance (since all edges you would traverse from this point on have non-negative weight).
If you think about the condition Dijkstra's algorithm places on the edges in order to work, it is only that path costs never decrease as a path is extended after initialisation.
Thus, it actually doesn't matter that the first step is negative: from those points onwards the path cost is non-decreasing, and thus the correct output will be found (provided there is no way to get back to the start vertex).
I have a graph with the following attributes:
Undirected
Not weighted
Each vertex has a minimum of 2 and maximum of 6 edges connected to it.
Vertex count will be < 100
Graph is static and no vertices/edges can be added/removed or edited.
I'm looking for paths between a random subset of the vertices (at least 2). The paths should be simple paths that only go through any vertex once.
My end goal is to have a set of routes so that you can start at one of the subset vertices and reach any of the other subset vertices. It's not necessary to pass through all the subset nodes when following a route.
All of the algorithms I've found (Dijkstra, depth-first search, etc.) seem to deal with paths between two vertices and with shortest paths.
Is there a known algorithm that will give me all the paths (I suppose these are subgraphs) that connect these subset of vertices?
edit:
I've created a (warning! programmer art) animated gif to illustrate what I'm trying to achieve: http://imgur.com/mGVlX.gif
There are two stages: pre-process and runtime.
pre-process
I have a graph and a subset of the vertices (blue nodes)
I generate all the possible routes that connect all the blue nodes
runtime
I can start at any blue node, select any of the generated routes, and travel along it to reach my destination blue node.
So my task is more about creating all of the subgraphs (routes) that connect all blue nodes, rather than creating a path from A->B.
There are so many ways to approach this, and in order not to confuse things, here's a separate answer addressing the description of your core problem:
Finding ALL possible subgraphs that connect your blue vertices is probably overkill if you're only going to use one at a time anyway. I would rather use an algorithm that finds a single one, but at random (so not a shortest-path algorithm or the like, since that will always return the same one).
If you want to save one of these subgraphs, you simply have to save the seed you used for the random number generator and you'll be able to produce the same subgraph again.
Also, if you really want to find a bunch of subgraphs, a randomized algorithm is still a good choice since you can run it several times with different seeds.
The only real downside is that you will never know if you've found every single one of the possible subgraphs, but it doesn't really sound like that's a requirement for your application.
So, on to the algorithm: depending on the properties of your graph(s), the optimal algorithm might vary, but you could always start off with a simple random walk, starting from one blue node and walking to another blue one (while making sure you're not walking in your own old footsteps). Then choose a random node on that path and start walking to the next blue one from there, and so on.
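A rough sketch of that idea (adjacency dict, blues being the chosen subset; the next starting point is picked at random from everything collected so far, which is a slight simplification of "a random node on that path", and no attempt is made at efficiency):

    import random

    def random_simple_path(adj, start, target):
        """Randomized DFS: a random simple path from start to target, or None."""
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            if node == target:
                return path
            nxt = [v for v in adj[node] if v not in path]   # never retrace your own footsteps
            random.shuffle(nxt)
            stack.extend((v, path + [v]) for v in nxt)
        return None

    def random_connecting_subgraph(adj, blues):
        blues = list(blues)
        random.shuffle(blues)
        nodes, edges = {blues[0]}, set()
        for target in blues[1:]:
            if target in nodes:
                continue
            start = random.choice(list(nodes))              # restart from somewhere already reached
            path = random_simple_path(adj, start, target)
            if path is None:
                continue                                    # target unreachable from this start
            nodes.update(path)
            edges.update(zip(path, path[1:]))
        return nodes, edges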
For certain graphs, this has very bad worst-case complexity but might suffice for your case. There are of course more intelligent ways to find random paths, but I'd start out easy and see if it's good enough. As they say, premature optimization is evil ;)
A simple breadth-first search will give you the shortest paths from one source vertex to all other vertices. So you can perform a BFS starting from each vertex in the subset you're interested in, to get the distances to all other vertices.
Note that in some places, BFS will be described as giving the path between a pair of vertices, but this is not necessary: You can keep running it until it has visited all nodes in the graph.
This algorithm is similar to Johnson's algorithm, but greatly simplified thanks to the fact that your graph is unweighted.
Time complexity: Since there is a constant number of edges per vertex, each BFS will take O(n), and the total will take O(kn), where n is the number of vertices and k is the size of the subset. As a comparison, the Floyd-Warshall algorithm will take O(n^3).
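A sketch of that approach in Python (adjacency dict; one BFS per subset vertex, each returning a predecessor map from which any shortest path can be reconstructed):

    from collections import deque

    def bfs_tree(adj, source):
        """Predecessor map for shortest (fewest-edge) paths from source."""
        prev = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        return prev

    def subset_shortest_paths(adj, subset):
        trees = {s: bfs_tree(adj, s) for s in subset}   # one O(n) BFS per subset vertex
        def path(s, t):                                 # reconstruct s -> t (t must be reachable from s)
            p, node = [], t
            while node is not None:
                p.append(node)
                node = trees[s][node]
            return p[::-1]
        return path

    # usage: path = subset_shortest_paths(adj, blue_nodes); path(a, b)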
What you're searching for is (if I understand it correctly) not really all paths, but rather all spanning trees. Read the wikipedia article about spanning trees here to determine if those are what you're looking for. If it is, there is a paper you would probably want to read:
Gabow, Harold N.; Myers, Eugene W. (1978). "Finding All Spanning Trees of Directed and Undirected Graphs". SIAM J. Comput. 7 (280).