Suppose you have been given a simple undirected graph with a maximum degree of d. You are given d + 1 colors, numbered 0 through d, and you want to return a valid assignment of colors such that no two adjacent vertices share the same color. As the title suggests, the graph is given in adjacency-list representation. The algorithm should run in O(V + E) time.
I think the correct way to approach this is a greedy coloring algorithm. However, I am stuck on the part where, for each vertex, I try to find the first available color that hasn't been used by any of its neighbors. I don't know how to do this in O(number of neighbors) time per vertex, which is what the overall O(V + E) bound requires.
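One standard way to get the per-vertex cost down to O(number of neighbors) is to observe that a vertex of degree deg(v) has at most deg(v) forbidden colors, so you only ever need to scan colors 0 through deg(v). A minimal Java sketch of that idea (the List<List<Integer>> adjacency-list representation is my own assumption):

    import java.util.*;

    public class GreedyColoring {
        /**
         * Colors the vertices of a simple undirected graph with at most
         * maxDegree + 1 colors in O(V + E) total time.
         * graph.get(v) is the adjacency list of vertex v.
         */
        public static int[] color(List<List<Integer>> graph) {
            int n = graph.size();
            int[] color = new int[n];
            Arrays.fill(color, -1);               // -1 means "not yet colored"

            for (int v = 0; v < n; v++) {
                int deg = graph.get(v).size();
                // A vertex of degree deg has at most deg forbidden colors,
                // so some color in 0..deg is always free.
                boolean[] used = new boolean[deg + 1];
                for (int u : graph.get(v)) {
                    int c = color[u];
                    if (c != -1 && c <= deg) {
                        used[c] = true;           // this neighbor blocks color c
                    }
                }
                int c = 0;
                while (used[c]) {
                    c++;                          // first free color, O(deg) scan
                }
                color[v] = c;
            }
            return color;
        }
    }

The boolean array is sized per vertex, so the total work is proportional to the sum of the degrees, i.e. O(V + E), and no color larger than the maximum degree is ever assigned.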
I am trying to understand the O(N^4) explanation for the assignment problem by reading the TopCoder article [1]. Specifically, I am unable to comprehend how the default O(N^5) procedure can be converted to O(N^4). Please read below for details:
Assume, for the assignment problem, a complete bipartite graph with N vertices in each of the two sets, and an edge cost from vertex i to vertex j denoted by cost[i][j], which is, say, a non-negative integer. We are trying to minimize the overall sum of weights of the perfect matching (assuming there is one).
Quoting the O(N^4) algorithm from [1]:
Step 0)
A. For each vertex from left Set (workers) find the minimal outgoing edge and subtract its weight from all weights connected with this vertex. This will introduce 0-weight edges (at least one).
B. Apply the same procedure for the vertices in the right Set (jobs).
Step 1)
A. Find the maximum matching using only 0-weight edges (for this purpose you can use max-flow algorithm, augmenting path algorithm, etc.).
B. If it is perfect, then the problem is solved. Otherwise find the minimum vertex cover V (for the subgraph with 0-weight edges only), the best way to do this is to use Konig’s graph theorem.
Step 2)
Let delta = min(cost[i][j]) for i not belonging to vertex cover, and j not belonging to vertex cover.
Then, modify the cost matrix as follows:
cost[i][j] = cost[i][j] - delta for i not belonging to vertex cover, and j not belonging to vertex cover.
cost[i][j] = cost[i][j] + delta for i belonging to vertex cover, and j belonging to vertex cover.
cost[i][j] = cost[i][j] otherwise
Step 3)
Repeat Step 1) until problem is solved
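To make Steps 0 and 2 concrete, here is a minimal Java sketch of the cost-matrix reduction and the delta update. This is only my illustration of the quoted steps; the 0-edge matching and the König vertex cover are assumed to be computed elsewhere, and rowCovered/colCovered describe that cover.

    import java.util.Arrays;

    public class HungarianSteps {

        /** Step 0: subtract row minima, then column minima, creating 0-weight edges. */
        static void reduce(int[][] cost) {
            int n = cost.length;
            for (int i = 0; i < n; i++) {                 // Step 0A: rows (workers)
                int min = Arrays.stream(cost[i]).min().getAsInt();
                for (int j = 0; j < n; j++) cost[i][j] -= min;
            }
            for (int j = 0; j < n; j++) {                 // Step 0B: columns (jobs)
                int min = Integer.MAX_VALUE;
                for (int i = 0; i < n; i++) min = Math.min(min, cost[i][j]);
                for (int i = 0; i < n; i++) cost[i][j] -= min;
            }
        }

        /** Step 2: given a minimum vertex cover of the 0-edge subgraph
         *  (rowCovered / colCovered), shift the costs by delta. */
        static void applyDelta(int[][] cost, boolean[] rowCovered, boolean[] colCovered) {
            int n = cost.length;
            int delta = Integer.MAX_VALUE;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (!rowCovered[i] && !colCovered[j])
                        delta = Math.min(delta, cost[i][j]);   // min over doubly uncovered cells

            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    if (!rowCovered[i] && !colCovered[j]) cost[i][j] -= delta;
                    else if (rowCovered[i] && colCovered[j]) cost[i][j] += delta;
                    // cells covered exactly once are left unchanged
                }
        }
    }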
I understand that the above algorithm is O(N^5) if implemented as-is: computing a maximum matching in a bipartite graph takes O(N^3) (using, say, breadth-first search for augmenting paths), and there are O(N^2) iterations of the overall algorithm, since each iteration turns at least one more edge into a 0-weight edge.
But, [1] also mentions:
finding the maximum matching in step 1 on each iteration will cause the algorithm to become O(n^5). In order to avoid this, on each step we can just modify the matching from the previous step, which only takes O(n^2) operations, making the overall algorithm O(n^4)
This is the part I do not understand. How do we modify the matching from the previous iteration in only O(N^2) operations, instead of the usual O(N^3) for computing a maximum bipartite matching from scratch?
I looked at [2] as well, but that solution, too, only has a comment that says:
// to make O(n^4), start from previous solution
I fail to understand how to convert the O(N^5) algorithm to O(N^4), and what exactly "starting from the previous solution" means.
Can someone explain, using the algorithm referenced in [1], how to modify it to run in O(N^4)? Pseudo-code would be most welcome.
[1] https://www.topcoder.com/community/data-science/data-science-tutorials/assignment-problem-and-hungarian-algorithm/
[2] http://algs4.cs.princeton.edu/65reductions/Hungarian.java
Thanks in Advance.
Given N points in a map of edges Map<Point, List<Edge>>, is it possible to get the polygons formed by these edges in O(N log N)?
What I know is that you have to walk all the vertices and get the edges that have that vertex as a starting point. These are edges of a Voronoi diagram, and each vertex has at most 3 edges containing it. So, in the map, the key is a vertex, and the value is a list of the edges where that vertex is the start node.
For example:
Points: a,b,c,d,e,f,g
Edges: [a,b], [a,c], [a,d], [b,c], [d,e], [e,g], [g,f]
My idea is to iterate the map counterclockwise until I get back to the initial vertex. That gives a polygon; I put it in a list of polygons and keep looking for others. The problem is that I do not want to exceed O(N log N) complexity.
Thanks!
You can loop through the edges and compute the distance from the midpoint of each edge to all sites. Then sort the distances in ascending order; for inner Voronoi polygons pick the first and the second, and for outer polygons pick the first. Basically, an edge separates/divides 2 polygons.
It's something like O(m log n).
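A rough Java sketch of that idea, with Point, Edge, and the full sort over all sites being my own simplifications (a real implementation would use a nearest-neighbor structure such as a k-d tree instead of sorting every site for every edge, to get closer to the stated bound):

    import java.util.*;

    public class VoronoiCellEdges {

        record Point(double x, double y) {}
        record Edge(Point a, Point b) {}

        static double dist2(Point p, Point q) {
            double dx = p.x() - q.x(), dy = p.y() - q.y();
            return dx * dx + dy * dy;   // squared distance is enough for comparisons
        }

        /** For each site, collect the Voronoi edges that border its cell:
         *  an edge belongs to the cell(s) of the site(s) closest to its midpoint. */
        static Map<Point, List<Edge>> edgesPerCell(List<Point> sites, List<Edge> edges) {
            Map<Point, List<Edge>> cells = new HashMap<>();
            for (Edge e : edges) {
                Point mid = new Point((e.a().x() + e.b().x()) / 2,
                                      (e.a().y() + e.b().y()) / 2);
                // Sort the sites by distance from the edge midpoint.
                List<Point> byDist = new ArrayList<>(sites);
                byDist.sort((p, q) -> Double.compare(dist2(p, mid), dist2(q, mid)));
                // The two nearest sites are the cells this edge separates
                // (an edge on the outer boundary really only needs the first one).
                cells.computeIfAbsent(byDist.get(0), k -> new ArrayList<>()).add(e);
                if (byDist.size() > 1) {
                    cells.computeIfAbsent(byDist.get(1), k -> new ArrayList<>()).add(e);
                }
            }
            return cells;
        }
    }

Grouping each edge under its one or two nearest sites gives, per site, the set of edges bounding its cell; ordering those edges into an actual polygon is a separate step.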
If I did find a polynomial solution to this problem I would not post it here, because I am fairly certain this is at least NP-hard. I think your best bet is to do a DFS. You might find this link useful: Finding all cycles in undirected graphs.
You might be able to use the solution below if you can formulate your graph as a directed graph. There are 2^E possible directed graphs (because each edge can be oriented in 2 directions). You could pick a random orientation and use the solution below to find all of the cycles in that directed graph. You could do this multiple times for different random orientations, keeping track of all the cycles found, until you've reached a satisfactory error bound.
You can efficiently create a directed graph with a little bit of state (maybe store a + or - with each edge to note its direction?). Once you have done this in O(n) the first time, you can randomly flip x << E directions to get a new graph in what is essentially constant time.
Since you can create subsequent directed graphs in constant time, you just need to choose how many times to run the cycle-finding algorithm so that the overall procedure stays polynomial and efficient.
UPDATE - The below only works for directed graphs
Off the top of my head, it seems better to think of this as a graph problem. Your map of vertices to edges is a graph representation. Your problem reduces to finding all of the cycles in the graph, because each cycle will be a polygon. I think Tarjan's strongly connected components algorithm will be of use here, as it runs in O(V + E).
You can find more information on the algorithm here: https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
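For reference, a compact Java version of Tarjan's algorithm on a directed graph given as adjacency lists (my own choice of representation); it returns the strongly connected components in O(V + E), and pulling the individual cycles out of each component would be a separate step:

    import java.util.*;

    public class TarjanSCC {
        private final List<List<Integer>> graph;   // directed adjacency lists
        private final int[] index, lowLink;
        private final boolean[] onStack;
        private final Deque<Integer> stack = new ArrayDeque<>();
        private final List<List<Integer>> components = new ArrayList<>();
        private int counter = 0;

        public TarjanSCC(List<List<Integer>> graph) {
            int n = graph.size();
            this.graph = graph;
            index = new int[n];
            lowLink = new int[n];
            onStack = new boolean[n];
            Arrays.fill(index, -1);                 // -1 means "not yet visited"
            for (int v = 0; v < n; v++) {
                if (index[v] == -1) strongConnect(v);
            }
        }

        private void strongConnect(int v) {
            index[v] = lowLink[v] = counter++;
            stack.push(v);
            onStack[v] = true;
            for (int w : graph.get(v)) {
                if (index[w] == -1) {               // tree edge: recurse
                    strongConnect(w);
                    lowLink[v] = Math.min(lowLink[v], lowLink[w]);
                } else if (onStack[w]) {            // edge back into the current stack
                    lowLink[v] = Math.min(lowLink[v], index[w]);
                }
            }
            if (lowLink[v] == index[v]) {           // v is the root of a component
                List<Integer> component = new ArrayList<>();
                int w;
                do {
                    w = stack.pop();
                    onStack[w] = false;
                    component.add(w);
                } while (w != v);
                components.add(component);
            }
        }

        public List<List<Integer>> components() { return components; }
    }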
Say we have an n x n chess board (a matrix, in other words) and each square has a weight. A piece can move horizontally or vertically, but it can't move diagonally. The cost of each move equals the difference of the weights of the two squares. I want an algorithm that finds the minimum cost for a single chess piece to move from square (1,1) to square (n,n), with a worst-case polynomial time complexity.
Could Dijkstra's algorithm be used to solve this? Would my algorithm below be able to solve the problem? Dijkstra's already runs in polynomial time, but what gives it that time complexity?
Pseudocode:
We have an empty set S, an integer V, and an unweighted graph as input. We then fill in an adjacency matrix holding the cost of each edge (leaving out the infinite-weight entries). While not every vertex has been picked, we find a vertex; if its square value is less than the square we are currently on, we move to that square, update V with the difference between the two squares, and add the visited vertex to S. We repeat this process until there are no more vertices.
Thanks.
Since you are trying to find a minimum-cost path, you can use Dijkstra's algorithm for this. Dijkstra's runs in O(|E| + |V| log |V|) in the worst case, where E is the number of edges and V is the number of vertices in the graph, so it satisfies your polynomial time complexity requirement.
However, if your algorithm considers only costs associated with the beginning and end square of a move, and not the intermediate nodes, then you must connect all possible beginning and end squares together so that you can take "short-cuts" around the intermediate nodes.
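A minimal Java sketch of Dijkstra's algorithm run directly on the board, assuming the cost of a move is the absolute difference of the two squares' weights and using 0-based indices for the corners:

    import java.util.*;

    public class GridDijkstra {

        /** Minimum total cost to move from (0,0) to (n-1,n-1) when a single
         *  horizontal/vertical step from square a to square b costs |w[a] - w[b]|. */
        static long minCost(int[][] w) {
            int n = w.length;
            long[][] dist = new long[n][n];
            for (long[] row : dist) Arrays.fill(row, Long.MAX_VALUE);
            dist[0][0] = 0;

            // priority queue of {distance, row, col}, ordered by distance
            PriorityQueue<long[]> pq = new PriorityQueue<>((x, y) -> Long.compare(x[0], y[0]));
            pq.add(new long[] {0, 0, 0});
            int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

            while (!pq.isEmpty()) {
                long[] cur = pq.poll();
                long d = cur[0];
                int r = (int) cur[1], c = (int) cur[2];
                if (d > dist[r][c]) continue;          // stale queue entry
                for (int[] m : moves) {
                    int nr = r + m[0], nc = c + m[1];
                    if (nr < 0 || nr >= n || nc < 0 || nc >= n) continue;
                    long nd = d + Math.abs(w[r][c] - w[nr][nc]);
                    if (nd < dist[nr][nc]) {
                        dist[nr][nc] = nd;
                        pq.add(new long[] {nd, nr, nc});
                    }
                }
            }
            return dist[n - 1][n - 1];
        }
    }

With a binary-heap priority queue this is about O(n^2 log n) for the n^2 squares and roughly 4n^2 edges, which is comfortably polynomial.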
I created a program that adds edges between vertices. The goal is to add as many edges as possible without crossing them (i.e., keeping the graph planar). What is the complexity?
Attempt: Since I used depth-first search, I think it is O(n + m), where n is the number of nodes and m is the number of edges.
Also, if we plot the number of edges as a function of n, what is it going to look like?
Your first question is impossible to answer, since you have not described the algorithm.
For your second question, any maximal planar graph with v ≥ 3 vertices has exactly 3v - 6 edges, so the plot is a straight line in the number of vertices. For example, K4 has 4 vertices and 3·4 - 6 = 6 edges.