find total components in graph - algorithm

I have a graph with N nodes and M edges. It is a single component.
Now I have to delete a single node from the graph; deleting that node might split the graph into 1, 2, or more components. I need the count of resulting components for each deleted node.
Note that only a single node is deleted at any point in time.
I need to do this for all the nodes of the graph in linear time.
Is this possible in linear time?
I am able to do this in O(n^2) by running a DFS for each node.

Read some online resources about "Articulation Points" (cut vertices). I hope that will give you your answer.
https://www.geeksforgeeks.org/articulation-points-or-cut-vertices-in-a-graph/
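For the original question (how many components remain after deleting each single node), the same DFS low-link machinery used for articulation points answers all N queries in one O(N + M) pass. Below is a rough Python sketch of that idea, not taken from the linked article: it assumes the graph is connected and given as an edge list over nodes 0..n-1, and all names are illustrative.

import sys
from collections import defaultdict

def components_after_removal(n, edges):
    # For a connected undirected graph with nodes 0..n-1, return count[],
    # where count[v] = number of connected components after deleting v.
    # One DFS over the whole graph, so O(n + m) overall.
    # (For very deep graphs an iterative DFS would be safer than recursion.)
    sys.setrecursionlimit(max(10_000, 2 * n + 10))
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    disc = [0] * n          # discovery time, 0 = not yet visited
    low = [0] * n           # lowest discovery time reachable via back edges
    count = [0] * n
    timer = [1]
    root = 0

    def dfs(v, parent):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        for w in adj[v]:
            if w == parent:
                continue
            if disc[w]:                       # back edge
                low[v] = min(low[v], disc[w])
            else:                             # tree edge
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if v == root:
                    count[v] += 1             # each child of the root splits off
                elif low[w] >= disc[v]:
                    count[v] += 1             # w's subtree cannot bypass v

    dfs(root, -1)
    for v in range(n):
        if v != root:
            count[v] += 1                     # the part containing v's parent
    return count

# path 0-1-2-3: deleting an inner node leaves 2 components
print(components_after_removal(4, [(0, 1), (1, 2), (2, 3)]))   # [1, 2, 2, 1]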

Related

Finding the closest marked node in a graph

In a graph with a bunch of normal nodes and a few special marked nodes, is there a common algorithm to find the closest marked node from a given starting position in the graph?
Or is the best way to do a BFS to find the marked nodes and then run Dijkstra's algorithm from each of the discovered marked nodes to see which one is the closest?
This depends on the graph, and your definition of "closest".
If you compute "closest" ignoring edge weights, or your graph has no edge weights, a simple breadth-first search (BFS) will suffice. The first node reached via BFS is, by definition of BFS, the closest (or, if there are several closest nodes, tied for closeness). If you keep track of the number of expanded BFS levels, you can locate all closest nodes by reaching the end of the level instead of stopping as soon as you find the first marked node.
If you have edge weights, and need to use them in your computation, use Dijkstra instead. If the edges can have negative weights, you will need to use Bellman-Ford instead (which will also detect negative cycles, in whose presence "closest" is no longer well defined).
As mentioned by SaiBot, if the start node is always the same, and you will perform several queries with changing "marked" nodes, there are faster ways to do things. In particular, you can store in each node the "parent" found in a first full traversal, and the node's distance to the start node. When adding a new batch of k marked nodes, you would immediately know the closest to the start by looking at this distance for each marked node.
The fastest way would be to perform Dijkstra right away from your starting position (starting node). When "closeness" is defined as the number of edges that have to be traversed, you can just assign a weight of 1 to each edge. In case precomputation is allowed there will be faster ways to do it.
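To make the unweighted case above concrete, here is a minimal BFS sketch (Python; the adjacency dict adj, the set marked, and the function name are illustrative assumptions, not part of the original question):

from collections import deque

def nearest_marked(adj, start, marked):
    # adj: dict node -> iterable of neighbours (unweighted graph).
    # Returns (node, distance) for a closest marked node reachable from start,
    # or None if no marked node is reachable. Plain BFS, O(V + E).
    if start in marked:
        return start, 0
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w in dist:
                continue
            dist[w] = dist[v] + 1
            if w in marked:        # the first marked node assigned a distance is a closest one
                return w, dist[w]
            queue.append(w)
    return None

adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
print(nearest_marked(adj, 0, {3, 4}))   # (3, 2) -- 3 and 4 are tied at distance 2

Because BFS assigns distances in non-decreasing order, stopping at the first marked node it labels is enough for the single-closest case; collecting all nodes tied for that distance means finishing the current level first, as described above.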

Check if a changing undirected graph has at least one circle

I have an undirected graph which initially has no edges. Now in every step an edge is added or deleted, and one has to check whether the graph has at least one circle. Probably the easiest check for that is
number of connected components + number of edges > number of nodes
(a forest always satisfies components + edges = nodes, and any extra edge creates a circle).
As the "steps" I mentioned above are executed millions of times, this check has to be really fast. So I wonder what would be a quick way to check the condition depending on the fact that in each step only one edge changes.
Any suggestions?
If you are keen, you can try to implement a fully dynamic graph connectivity data structure like described in "Poly-logarithmic deterministic fully-dynamic graph algorithms I: connectivity and minimum spanning tree" by Jacob Holm, Kristian de Lichtenberg, Mikkel Thorup.
When adding an edge, you check whether the two endpoints are connected. If not, the number of connected components decreases by one. After deleting an edge, check if the two endpoints are still connected. If not, the number of connected components increases by one. The amortized runtime of edge insertion and deletion would be O(log^2 n), but I can imagine the constant factor is quite high.
There are newer results with better bounds. There is also an experimental evaluation of some of the dynamic connectivity algorithms that considers implementation details as well. There is also a Javascript implementation, though I have no idea how good it is.
I guess in practice you can have it much easier by maintaining a spanning forest. You get edge additions and non-tree edge deletions (almost) for free. For tree edge deletions you could just use "brute force" in the form of BFS or DFS to check whether the endpoints are still connected. Especially if the number of nodes is bounded, that may work well enough in practice: BFS and DFS are both O(n^2) for dense graphs at worst, and you can charge some of that work to the operations where you got lucky and didn't have a lot to do.
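A stripped-down sketch of that brute-force idea in Python (no spanning forest at all: every update falls back to a BFS connectivity check, so each update is O(V + E) in the worst case rather than poly-logarithmic; class and method names are illustrative). It also exposes the edges-versus-nodes test from the question:

from collections import deque

class DynamicGraph:
    # Undirected graph supporting edge insertion/deletion while tracking the
    # number of connected components with brute-force BFS checks.

    def __init__(self, nodes):
        self.adj = {v: set() for v in nodes}
        self.components = len(self.adj)
        self.num_edges = 0

    def _connected(self, u, v):
        seen, queue = {u}, deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return True
            for y in self.adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False

    def add_edge(self, u, v):               # assumes (u, v) is not present yet
        if not self._connected(u, v):
            self.components -= 1            # two components merge
        self.adj[u].add(v)
        self.adj[v].add(u)
        self.num_edges += 1

    def remove_edge(self, u, v):            # assumes (u, v) is present
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        self.num_edges -= 1
        if not self._connected(u, v):
            self.components += 1            # the deleted edge was a bridge

    def has_circle(self):
        # a graph is a forest exactly when edges + components == nodes
        return self.num_edges + self.components > len(self.adj)

g = DynamicGraph(range(4))
g.add_edge(0, 1); g.add_edge(1, 2); g.add_edge(2, 0)
print(g.has_circle())    # True: 0-1-2 is a circle
g.remove_edge(2, 0)
print(g.has_circle())    # False

With a spanning forest on top of this, the BFS would only be needed when a tree edge is deleted, as described above.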
I suggest you label all the nodes. Use integers, that's easiest.
At any point, your graph will be divided into a number of disjoint subgraphs. Initially, each node is in its own subgraph.
Maintain the condition that each subgraph has a unique label, and all the nodes in the subgraph carry that label. Initially, just give each node a unique label. If your problem includes adding nodes, you might want to maintain a variable to hold the next available label.
If and only if a new edge would connect two nodes with identical labels, then the edge would create a cycle.
Whenever you add an edge, you will connect two previously disjoint subgraphs. You must relabel one of the subgraphs to match the other, which will require visiting all the nodes of one subgraph. This is the highest computational burden in this scheme.
If you don't mind allocating more space, you should also maintain a list of labels in use, associated with a count of the nodes carrying that label. This will allow you to choose the smaller subgraph when relabeling.
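A compact sketch of this labelling scheme in Python (essentially union-find with "relabel the smaller side"; names are illustrative, and edge deletions are deliberately left out because they would force labels to be recomputed):

class LabelledGraph:
    # Cycle detection for edge additions only: every node carries the label of
    # its subgraph, and an edge between two equally-labelled nodes closes a circle.

    def __init__(self, nodes):
        self.label = {v: v for v in nodes}        # each node starts in its own subgraph
        self.members = {v: {v} for v in nodes}    # label -> nodes carrying that label

    def add_edge(self, u, v):
        # Returns True if adding (u, v) would create a circle.
        lu, lv = self.label[u], self.label[v]
        if lu == lv:
            return True                           # already in the same subgraph
        if len(self.members[lu]) < len(self.members[lv]):
            lu, lv = lv, lu                       # relabel the smaller subgraph
        for x in self.members[lv]:
            self.label[x] = lu
        self.members[lu] |= self.members.pop(lv)
        return False

g = LabelledGraph(range(4))
print(g.add_edge(0, 1))   # False
print(g.add_edge(1, 2))   # False
print(g.add_edge(2, 0))   # True -- closes the circle 0-1-2

Always relabelling the smaller side means each node is relabelled at most O(log n) times over all insertions, which keeps the total relabelling cost at O(n log n).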
If you know which two nodes are being connected by the new edge, you could use some sort of path finding algorithm to detect an alternative path between the two nodes. In other words, if a path exists which connects the two nodes of your new edge before you add the new edge, adding the new edge will create a circle.
Your problem then reduces to finding the paths between two given nodes.

Why are there two listed time complexities for breadth-first search?

The Wikipedia article on breadth-first search lists two time complexities for breadth-first search over a graph: O(|E|) and O(b^d). Later on the page, though, it only lists O(|E|).
Why are there two different runtimes? Which one is correct?
Both time complexities are correct, but they are used in different circumstances to measure the running time relative to two different quantities. This has to do with the fact that the name "breadth-first search" typically refers to two different but related algorithms used in different contexts.
In one context, BFS is an algorithm that, given a graph and a start node in the graph, visits every node reachable from the start node by first visiting all nodes at distance 0, then distance 1, then distance 2, etc., until all nodes are visited. This visits every node in the graph and, in the process, explores each node and each edge at most once (in the directed case) or twice (in the undirected case). By using a queue to keep track of which nodes to explore next and using appropriate bookkeeping, the runtime will be O(|E| + |V|) (and with further optimizations, it will be O(|E|)).
In a different context, BFS is a search algorithm used to find the shortest path from some start node in a graph to a destination node in the graph. In this case, the algorithm stops running as soon as it discovers the destination node. This means that the runtime depends on how far away the destination node is from the source node. That distance in turn depends on the structure of the graph. If the graph is densely connected, the node can't be that far away, and if the graph is sparse, the node might be extremely distant. In this context, it's common to add in a parameter called the "branching factor" b, which describes the average or maximum number of edges adjacent to any node. This means that
There is one node at distance 0 from the start node.
There are at most b nodes at distance 1 from the start node.
There are at most b^2 nodes at distance 2 from the start node.
...
There are at most b^k nodes at distance k from the start node.
If we assume that the destination node is at distance d from the start node, then BFS will visit at most b^0 + b^1 + ... + b^d = O(b^d) nodes during its search, spending O(b) time on each of them. Accordingly, the total runtime will be O(b^d).
In summary:
The runtime of O(|E|) is typically used when discussing the algorithm as a way to explore the entire graph.
The runtime of O(b^d) is typically used when discussing the algorithm as a way to find one specific node in the graph.
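As a sketch of that second flavour in Python: the search expands the graph level by level and stops as soon as the level containing the goal has been generated, so the work is bounded by the number of nodes within distance d of the start (adj, start and goal are illustrative names):

def bfs_levels_until(adj, start, goal):
    # Expand level by level (distance 0, then 1, then 2, ...) and stop as soon
    # as the level containing `goal` is generated. With branching factor b and
    # the goal at distance d, at most 1 + b + ... + b^d = O(b^d) nodes are touched.
    frontier, seen, d = {start}, {start}, 0
    while frontier:
        if goal in frontier:
            return d                          # goal sits at distance d
        next_frontier = set()
        for v in frontier:
            for w in adj.get(v, ()):
                if w not in seen:
                    seen.add(w)
                    next_frontier.add(w)
        frontier, d = next_frontier, d + 1
    return None                               # goal not reachable from start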
Hope this helps!

Most efficient way to visit nodes of a DAG in order

I have a large (100,000+ nodes) Directed Acyclic Graph (DAG) and would like to run a "visitor" type function on each node in order, where order is defined by the arrows in the graph. i.e. all parents of a node are guaranteed to be visited before the node itself.
If two nodes do not refer to each other directly or indirectly, then I don't care which order they are visited in.
What's the most efficient algorithm to do this?
You would have to perform a topological sort on the nodes, and visit the nodes in the resulting order.
The complexity of such an algorithm is O(|V|+|E|), which is quite good. You want to traverse all nodes, so to be faster than that you would have to avoid even looking at all edges, which would be dangerous, because a single edge could change the order completely.
There are some answers here:
Good graph traversal algorithm
and here:
http://en.wikipedia.org/wiki/Topological_sorting
In general, after visiting a node, you should visit its related nodes, but only the nodes that are not already visited. In order to keep track of the visited nodes, you need to keep the IDs of the nodes in a set (or map), or you can mark the node as visited (somehow).
If you care about the topological order, you must first get hold of a count of the un-traversed links ("remaining links") into each node, keyed by the id of the referenced node (typically: map(node-ID -> link-count)). If you haven't got that, you might need to build it using an approach similar to the one above. Then, start by visiting a node whose remaining incoming link count is zero. For each link from that node, reduce the remaining link count of the related node, adding the related node to the set of nodes-to-visit (or just visiting it) if the count reaches zero.
As mentioned in the other answers, this problem can be solved by Topological Sorting.
A very simple algorithm for that (not the most efficient):
Keep an array (or map) indegree[] where indegree[node] = number of incoming edges of node.
while there is at least one node n with indegree[n] == 0:
    for each node n in nodes where indegree[n] == 0:
        visit(n)
        indegree[n] = -1                   # mark n as visited
        for each node x adjacent to n:     # i.e. each edge n -> x
            indegree[x] = indegree[x] - 1  # one less unvisited parent of x
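For reference, a runnable version of that pseudocode in Python (Kahn's algorithm), assuming the DAG is given as a dict mapping each node to its list of successors; names are illustrative:

from collections import deque

def visit_in_order(adj, visit):
    # adj: dict node -> list of successor nodes (every node appears as a key).
    # Calls visit(node) so that all parents of a node are visited before the
    # node itself. O(|V| + |E|).
    indegree = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indegree[w] += 1

    queue = deque(v for v in adj if indegree[v] == 0)   # sources of the DAG
    visited = 0
    while queue:
        v = queue.popleft()
        visit(v)
        visited += 1
        for w in adj[v]:
            indegree[w] -= 1
            if indegree[w] == 0:          # all parents of w have now been visited
                queue.append(w)

    if visited != len(adj):
        raise ValueError("graph has a cycle, so it is not a DAG")

# example DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3
visit_in_order({0: [1, 2], 1: [3], 2: [3], 3: []}, print)   # prints 0 1 2 3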
You can traverse a DAG in linear time (without an explicit topological sort) by just running your DFS from every node with zero indegree, because those are the valid starting points. This works because the graph has no cycles: such zero-indegree nodes must exist, and starting from all of them traverses the whole graph.

How to detect if breaking an edge will make a graph disjoint?

I have a graph that starts off with a single, root node. Nodes are added one by one to the graph. At node creation time, they have to be linked either to the root node, or to another node, by a single edge. Edges can also be created and deleted (one by one, between any two nodes). Nodes can be deleted one at a time. Node and edge creation, deletion operations can happen in any arbitrary order.
OK, so here's my question: When an edge is deleted, is it possible to determine, in constant time (i.e. with an O(1) algorithm), whether doing this will divide the graph into two disjoint subgraphs? If it will, then which side of the edge will the root node belong to?
I'm willing to maintain, within reasonable limits, any additional data structure that can facilitate the derivation of this information.
Maybe it is not possible to do it in O(1), if so any pointers to literature will be appreciated.
Edit: The graph is a directed graph.
Edit 2: OK, maybe I can restrict the case to deletion of edges from the root node. [Edit 3: actually, no.] Also, no edge points into the root node.
To speed things up a little over the obvious O(|V|+|E|) solution, you could keep a spanning tree which is fairly easy to update as the graph is changed.
If an edge not in the spanning tree is deleted, then the graph doesn't become disconnected, and there is nothing to do. If an edge in the spanning tree is deleted, then you must try to find a new path between those two vertices (if you find one, use it to update the spanning tree; otherwise the graph is disconnected).
So, best case O(1), worst-case O(|V|+|E|), but fairly simple to implement anyway.
Is this a directed graph? The below assumes undirected.
What you are looking for is whether the given edge is a Bridge in the graph. I believe this can be found using a traversal looking for cycles containing that edge and would be O(|V| + |E|).
O(1) is too much to ask.
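If it helps, here is a rough Python sketch of the standard one-pass bridge-finding DFS (Tarjan-style low-link values). It finds every bridge in O(|V| + |E|), so a later "would deleting this edge disconnect the graph?" question becomes a set lookup, at least until the graph changes again. It assumes the undirected view of the graph with nodes 0..n-1; names are illustrative:

import sys
from collections import defaultdict

def find_bridges(n, edges):
    # Returns the set of bridges of an undirected graph, stored with the smaller
    # endpoint first so lookups are order-independent. O(|V| + |E|).
    # (For very deep graphs an iterative DFS would be safer than recursion.)
    sys.setrecursionlimit(max(10_000, 2 * n + 10))
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    disc = [0] * n            # discovery time, 0 = not yet visited
    low = [0] * n             # lowest discovery time reachable via back edges
    bridges = set()
    timer = [1]

    def dfs(v, parent):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        for w in adj[v]:
            if w == parent:
                continue
            if disc[w]:                          # back edge
                low[v] = min(low[v], disc[w])
            else:                                # tree edge
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if low[w] > disc[v]:             # w's subtree cannot bypass edge (v, w)
                    bridges.add((min(v, w), max(v, w)))

    for v in range(n):
        if not disc[v]:
            dfs(v, -1)
    return bridges

# triangle 0-1-2 plus a pendant edge 2-3: only (2, 3) is a bridge
print(find_bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))   # {(2, 3)}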
You might find that looking to maintain 2-edge connected components in dynamic graphs could be useful to you.
Eppstein et al have a paper on this: http://www.ics.uci.edu/~eppstein/pubs/EppGalIta-TR-93-20.pdf
which can maintain 2-edge connected components, in a graph of n nodes where edge insertions and deletions are allowed. It has O(sqrt(n)) time per update and O(log n) time per query.
So any time you delete, you can query in O(log n) to determine if the number of 2-edge connected components has changed. I suppose it can also tell you which component a specific node is in.
This paper is more general and applies to other graph problems, not only 2 edge connected components.
I suggest you look for bridges and dynamic 2-edge connectivity to get you started.
Hope that helps.
As Moron said just before, you are actually looking for a bridge in your graph.
A bridge is an edge with the described property, and it generally starts and ends at cut vertices; a cut vertex is exactly what a bridge is, but in vertex (node) form.
So the only way (though it rather bends the initial data-structure hypothesis) I can think of to get O(1) complexity for this is to first check every node in your graph for being a cut vertex, and then simply check in constant time whether the edge you want to delete is attached to one of those vertices.
Finding if a node in a graph is a Cut Vertex takes O(m+n) where m = # edges and n= # nodes.
Cheers
