Let G = (V, E) be a directed graph with two weights, we and re, on each edge e. There are two special vertices in this graph: c, the chemical plant, and a, the amphitheater. The task is to create an evacuation plan from a to v for every vertex v in V in the event that the chemical plant catches fire. The time it takes the fire to spread along an edge e is re. The time to travel along an edge e is we.
I have to write an algorithm to find the shortest safe path from a to v for each v in V: a path is safe if it does not include any vertex that is on fire at the time that vertex is traversed.
My attempt:
i. I first computed the time at which each vertex catches fire, assuming the chemical plant catches fire at time 0.
ii. Then I modified Dijkstra's algorithm so that it does not use any vertex that is already on fire on the paths it computes.
My Algorithm:
DijkstraShortestPath:
1. graph[][] <-- read the graph
2. c <-- chemical plant
3. a <-- amphitheater
4. for i <-- 0 to V:
       queuePriority.add(i);
       distance[i] = Integer.MAX_VALUE;
       prev[i] = -1;
       visitedVertex[i] = false;
5. Q <-- queue(c), level[] <-- infinity, level[c] <-- 0
6. while Q is not empty:
       u <-- Q.pop()
       for all edges from u to v in G.adjacentEdges(u):
           if level[v] == infinity:
               level[v] <-- level[u] + re
               Q.enqueue(v)
7. distance[a] = 0;
   while (!queuePriority.isEmpty()) {
       int u = minDistance();
       if (u == -1) {
           break;
       }
       queuePriority.remove(u);
       visitedVertex[u] = true;
       for (int i = 0; i < nodeNumber; i++) {
           if (graph[u][i] != 0 && graph[u][i] % re == 0 && graph[u][i] + distance[u] < distance[i]) {
               distance[i] = graph[u][i] + distance[u];
               prev[i] = u;
           }
       }
   }
   printShortestPath(destination);
   return distance[destination];
Does my algorithm solve the problem? If not, how can I modify my algorithm to solve the problem?
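To make steps i and ii concrete, here is a rough sketch of what I have in mind, written in Python rather than my actual code (a sketch only; adj_r and adj_w are assumed adjacency lists of (neighbor, re) and (neighbor, we) pairs, and whether arriving exactly when a vertex catches fire still counts as safe depends on how the problem statement is read):

import heapq

def fire_times(n, adj_r, c):
    # Step i: Dijkstra over the fire-spread weights re, starting at the chemical plant c.
    fire = [float('inf')] * n
    fire[c] = 0
    pq = [(0, c)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > fire[u]:
            continue
        for v, r in adj_r[u]:
            if t + r < fire[v]:
                fire[v] = t + r
                heapq.heappush(pq, (t + r, v))
    return fire

def safe_shortest_paths(n, adj_w, fire, a):
    # Step ii: Dijkstra over the travel weights we from a, relaxing an edge only if
    # we would reach v strictly before v catches fire.
    dist = [float('inf')] * n
    dist[a] = 0
    pq = [(0, a)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj_w[u]:
            if d + w < fire[v] and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist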
From the Algorithm Design Manual, 2nd edition, question 5-22:
Design a linear-time algorithm to eliminate each vertex v of degree 2 from a graph by replacing edges (u,v) and (v,w) by an edge (u,w). We also seek to eliminate multiple copies of edges by replacing them with a single edge. Note that removing multiple copies of an edge may create a new vertex of degree 2, which has to be removed, and that removing a vertex of degree 2 may create multiple edges, which also must be removed.
Because the question appears in the section on undirected graphs,
assume that our graph is undirected.
Here is an algorithm for removing vertices of degree two as desired, similar to the one given here. The implementation relies on Skiena's graph, queue, and edgenode structs. g->edges[v] is a pointer to the head of v's adjacency list. g->M[u][v] returns the boolean value in row u and column v of g's adjacency matrix.
The problem: according to my analysis, it does not work in linear time, no matter whether we use adjacency lists or adjacency matrices to represent our graph.
process_vertex(graph *g, int v, queue *Q) {
    int u, w;
    if (g->degree[v] != 2) { return; }
    u = pop_first_edge(g, v);    // O(n) for AL, O(n) for AM
    w = pop_first_edge(g, v);    // O(n) for AL, O(n) for AM
    if (!edge_exists(g, u, w)) { // O(n) for AL, O(1) for AM
        insert_edge(g, u, w);
    }
    if (g->degree[u] == 2) { enqueue(Q, u); }
    if (g->degree[w] == 2) { enqueue(Q, w); }
}
remove_degree_twos(graph *g) {
    queue Q;                     // assuming Skiena-style init/empty-test helpers
    init_queue(&Q);
    for (int v = 1; v <= g->nvertices; ++v) {
        if (g->degree[v] == 2) { enqueue(&Q, v); }
    }
    while (!empty_queue(&Q)) {
        process_vertex(g, dequeue(&Q), &Q);
    }
}
There are two required functions that have not yet been implemented:
// removes the first edge in v's adjacency list
// and updates degrees appropriately
// returns the vertex to which that edge points
int pop_first_edge(graph *g, int v);
// determines whether edge (u,v) already exists
// in graph g
bool edge_exists(graph *g, int u, int v);
If g is represented with adjacency lists, then the required functions can be implemented as follows:
// O(n)
int pop_first_edge(graph *g, int v) {
    int u = -1;                  // the vertex to which the first edge out of v points
    edgenode *p = g->edges[v];
    edgenode **p1;               // link in u's list that may need rewriting
    edgenode *p2;
    if (p != NULL) {
        u = p->y;
        g->edges[v] = p->next;
        --(g->degree[v]);
        // delete v from u's adjacency list
        p1 = &g->edges[u];
        p2 = g->edges[u];
        while (p2 != NULL) {
            if (p2->y == v) {
                *p1 = p2->next;
                --(g->degree[u]);
                break;
            }
            p1 = &p2->next;
            p2 = p2->next;
        }
    }
    return u;
}
// O(n)
bool edge_exists(graph *g, int u, int w) {
    edgenode *p = g->edges[u];
    while (p != NULL) {
        if (p->y == w) {
            return true;
        }
        p = p->next;
    }
    return false;
}
If g is represented with adjacency matrices, then we have:
// O(n)
int pop_first_edge(graph *g, int v) {
    int u = -1;
    for (int j = 1; j <= g->nvertices; ++j) {
        if (g->M[v][j]) {
            u = j;
            break;
        }
    }
    if (u > 0) {
        g->M[v][u] = false;
        g->M[u][v] = false;
        --(g->degree[v]);
        --(g->degree[u]);
        return u;
    } else {
        return -1;
    }
}
// O(1)
bool edge_exists(graph *g, int u, int w) {
    return g->M[u][w];
}
No matter whether we use adjacency lists or adjacency matrices, the runtime of process_vertex is O(n), where n is the number of vertices in the graph. Because O(n) vertices may be processed, the total runtime is O(n^2).
How can this be done in linear time?
Assume we have a graph G = (V, E), where V = {1, ..., n} is the set of vertices and E is the set of edges, i.e. a subset of {(x, y) : x, y in V}.
Usually the graph is given as a list of edges; let's assume we receive it that way. Now:
1. make the set of edges distinct (O(m), where m = |E|), treating the edges (x,y) and (y,x) as equal
2. create an array holding the degree of each vertex in G (O(m) again)
3. for each vertex v of degree 2, record its two incident edges (O(m); one pass over the edges is sufficient)
4. finally, iterate over the vertices of degree 2 and replace the two recorded edges with a single edge, getting rid of the degree-2 vertex (this is an O(n) operation)
5. repeat step 1 and return the edges
Here's the code, written in Python:
def remove_2_degree_vertices(n, edges):
    adj_matrix = [[0] * n for i in range(n)]
    # 1 O(m)
    edges = get_distinct(adj_matrix, edges)
    # 2 O(m)
    degrees = calculate_degrees(n, edges)
    # 3 O(m)
    adj_lists = get_neighbours(degrees, edges)
    # 4 O(n + m)
    to_remove, to_add_candidates = process(n, adj_lists)
    edges.extend(to_add_candidates)
    result = [(e0, e1) for e0, e1 in edges if to_remove[e0][e1] == 0]
    # 5 O(m)
    adj_matrix = [[0] * n for i in range(n)]
    result = get_distinct(adj_matrix, result)
    return result

def get_distinct(adj_matrix, edges):
    result = []
    for e0, e1 in edges:
        if adj_matrix[e0][e1] == 0:
            result.append((e0, e1))
            adj_matrix[e0][e1] = adj_matrix[e1][e0] = 1
    return result

def calculate_degrees(n, edges):
    result = [0] * n
    for e0, e1 in edges:
        result[e0] += 1
        result[e1] += 1
    return result

def get_neighbours(degrees, edges):
    result = {}
    for e0, e1 in edges:
        if degrees[e0] == 2:
            add_neighbour(result, e0, e1)
        if degrees[e1] == 2:
            add_neighbour(result, e1, e0)
    return result

def add_neighbour(neighbours, key, value):
    if key not in neighbours:
        neighbours[key] = set()
    neighbours[key].add(value)

def process(n, adj_lists):
    to_remove = [[0 for i in range(n)] for j in range(n)]
    to_add_candidates = []
    if len(adj_lists) == 0:
        return to_remove, to_add_candidates
    for key in adj_lists:
        neighbours = list(adj_lists[key])
        if len(neighbours) == 1:
            to_remove[key][neighbours[0]] = to_remove[neighbours[0]][key] = 1
        else:  # len(neighbours) == 2
            remove_edge(adj_lists, to_remove, key, neighbours[0], neighbours[1])
            remove_edge(adj_lists, to_remove, key, neighbours[1], neighbours[0])
            to_add_candidates.append((neighbours[0], neighbours[1]))
    return to_remove, to_add_candidates

def remove_edge(adj_lists, to_remove, key, n0, n1):
    to_remove[key][n0] = to_remove[n0][key] = 1
    if n0 in adj_lists:
        adj_lists[n0].remove(key)
        adj_lists[n0].add(n1)
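As a quick sanity check, collapsing the path 0-1-2-3, whose two middle vertices have degree 2, should leave a single edge between the endpoints:

print(remove_2_degree_vertices(4, [(0, 1), (1, 2), (2, 3)]))  # should print [(0, 3)]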
Given an undirected graph, how can I find all the bridges? I've only found Tarjan's algorithm, which seems rather complicated.
It seems there should be multiple linear-time solutions, but I can't find anything.
Tarjan's algorithm was the first linear-time bridge-finding algorithm for undirected graphs. However, a simpler algorithm exists, and you can have a look at its implementation here.
private int bridges; // number of bridges
private int cnt; // counter
private int[] pre; // pre[v] = order in which dfs examines v
private int[] low; // low[v] = lowest preorder of any vertex connected to v
public Bridge(Graph G) {
low = new int[G.V()];
pre = new int[G.V()];
for (int v = 0; v < G.V(); v++) low[v] = -1;
for (int v = 0; v < G.V(); v++) pre[v] = -1;
for (int v = 0; v < G.V(); v++)
if (pre[v] == -1)
dfs(G, v, v);
}
public int components() { return bridges + 1; }
private void dfs(Graph G, int u, int v) {
pre[v] = cnt++;
low[v] = pre[v];
for (int w : G.adj(v)) {
if (pre[w] == -1) {
dfs(G, v, w);
low[v] = Math.min(low[v], low[w]);
if (low[w] == pre[w]) {
StdOut.println(v + "-" + w + " is a bridge");
bridges++;
}
}
// update low number - ignore reverse of edge leading to v
else if (w != u)
low[v] = Math.min(low[v], pre[w]);
}
}
The algorithm does the job by maintaining two arrays, pre and low. pre holds the preorder numbering of the vertices, so pre[0] = 2 means that vertex 0 was the third vertex discovered by the dfs. low[u] holds the smallest preorder number of any vertex reachable from u's DFS subtree using at most one back edge.
The algorithm reports a bridge whenever, for a tree edge u--v (with u the parent, i.e. u comes first in the preorder numbering), low[v] == pre[v]. This means no vertex in v's subtree has a back edge reaching above v, so if we remove the edge u--v, v's subtree cannot reach any vertex that comes before v; removing the edge therefore splits the graph into two separate pieces. For example, in the path a--b--c we get low[c] = pre[c], so b--c is reported as a bridge, while in the triangle a--b--c--a the back edge c--a gives low[c] = pre[a] < pre[c], so nothing is reported.
For a more elaborate explanation you can also have a look at this answer.
I've got a collection of orders.
[a, b]
[a, b, c]
[a, b, c, d]
[a, b, c, d]
[b, c]
[c, d]
Where a, b, c and d are SKUs, and there are big boxes full of them. There are thousands of orders and hundreds of possible SKUs.
Now imagine that when packing these orders one after another, you must put away the box for every SKU that the previous order used but the next one doesn't (and similarly take out a box for every SKU the next order adds).
How do you sort the orders so there is a minimum number of box changes? Or, in more programmy terms: how do you minimize the cumulative Hamming distance / maximize the intersection between adjacent items in a collection?
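To make that concrete, the cost of moving from one order to the next is just the size of the symmetric difference between the two SKU sets, e.g. in Python:

def box_changes(prev_order, next_order):
    # boxes to put away plus boxes to take out between two consecutive orders
    return len(set(prev_order) ^ set(next_order))

print(box_changes(['a', 'b', 'c', 'd'], ['b', 'c']))  # 2: put away a and d
print(box_changes(['b', 'c'], ['c', 'd']))            # 2: put away b, take out d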
I really have no clue where to start. Is there already some algorithm for this? Is there a decent approximation?
Indeed #irrelephant is correct. This is an undirected Hamiltonian path problem. Model it as a complete undirected graph where the nodes are SKU sets and the weight of each edge is the Hamming distance between the respective sets. Then finding a packing order is equivalent to finding a path that touches each node exactly once. This is a Hamiltonian path (HP). You want the minimum-weight HP.
The bad news is that finding a minimum-weight HP is NP-hard, which means an optimal solution will need exponential time in general.
The good news is that there are reasonable approximation algorithms. The obvious greedy algorithm gives an answer no worse than two times the optimal HP. It is:
create the graph of Hamming distances
sort the edges by weight in increasing order: e0, e1, ...
set C = emptyset
for e in sequence e0, e1, ...
if C union {e} does not cause a cycle nor a vertex with degree more than 2 in C
set C = C union {e}
return C
Note the if statement test can be implemented in nearly constant time with the classical disjoint set union-find algorithm and incident edge counters in vertices.
So the run time here can be O(n^2 log n) for n sku sets assuming that computing a Hamming distance is constant time.
If graphs are not in your vocabulary, think of a triangular table with one entry for each pair of sku sets. The entries in the table are Hamming distances. You want to sort the table entries and then add sku set pairs in sorted order one by one to your plan, skipping pairs that would cause a "fork" or a "loop." A fork would be a set of pairs like (a,b), (b,c), (b,d). A loop would be (a,b), (b,c), (c, a).
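If it helps, here is a compact Python sketch of that greedy selection (an illustration of the idea only, not the full C program given in the separate response; tie-breaking may produce a different ordering with the same total cost):

from itertools import combinations

def greedy_path(sets):
    # Greedy approximation: add cheapest pairs first, rejecting forks and loops.
    n = len(sets)
    dist = lambda i, j: len(sets[i] ^ sets[j])           # Hamming distance of two sets
    pairs = sorted(combinations(range(n), 2), key=lambda p: dist(*p))
    parent = list(range(n))                              # union-find to detect loops
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    degree = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in pairs:
        if degree[u] == 2 or degree[v] == 2:             # would create a fork
            continue
        ru, rv = find(u), find(v)
        if ru == rv:                                     # would create a loop
            continue
        parent[ru] = rv
        degree[u] += 1
        degree[v] += 1
        adj[u].append(v)
        adj[v].append(u)
    start = next(i for i in range(n) if degree[i] <= 1)  # an endpoint of the path
    order, prev = [start], -1
    while len(order) < n:
        nxt = next(w for w in adj[order[-1]] if w != prev)
        prev = order[-1]
        order.append(nxt)
    return order

orders = [frozenset(o) for o in (['a','b'], ['a','b','c'], ['a','b','c','d'],
                                 ['a','b','c','d'], ['b','c'], ['c','d'])]
print([sorted(orders[i]) for i in greedy_path(orders)])  # one low-cost packing order for the example above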
There are more complex polynomial time algorithms that get to a 3/2 approximation.
I like this problem so much I couldn't resist coding up the algorithm suggested above. The code is a little long, so I'm putting it in a separate response.
It comes up with this sequence on the example.
Step 1: c d
Step 2: b c
Step 3: a b c
Step 4: a b c d
Step 5: a b c d
Step 6: a b
Note this algorithm ignores initial setup and final teardown costs. It only considers inter-setup distances. Here the Hamming distances are 2 + 1 + 1 + 0 + 2 = 6. This is the same total distance as the order given in the question.
#include <stdio.h>
#include <stdlib.h>
// With these data types we can have up to 64k items and 64k sets of items,
// But then the table of pairs is about 20Gb!
typedef unsigned short ITEM, INDEX;
// A sku set in the problem.
struct set {
INDEX n_elts;
ITEM *elts;
};
// A pair of sku sets and associated info.
struct pair {
INDEX i, j; // Indices of sets.
ITEM dist; // Hamming distance between sets.
INDEX rank, parent; // Disjoint set union/find fields.
};
// For a given set, the adjacent ones along the path under construction.
struct adjacent {
unsigned char n; // 0, 1, or 2.
INDEX elts[2]; // Indices of n adjacent sets.
};
// Some tracing functions for fun.
void print_pair(struct pair *pairs, int i)
{
struct pair *p = pairs + i;
printf("%d:(%d,%d#%d)[%d->%d]\n", i, p->i, p->j, p->dist, p->rank, p->parent);
}
void print_adjacent(struct adjacent *adjs, int i)
{
struct adjacent *a = adjs + i;
switch (a->n) {
case 0: printf("%d:o", i); break;
case 1: printf("%d:o->%d\n", i, a->elts[0]); break;
default: printf("%d:%d<-o->%d\n", i, a->elts[0], a->elts[1]); break;
}
}
// Compute the Hamming distance between two sets. Assumes elements are sorted.
// Works a bit like merging.
ITEM hamming_distance(struct set *a, struct set *b)
{
int ia = 0, ib = 0;
ITEM d = 0;
while (ia < a->n_elts && ib < b->n_elts) {
if (a->elts[ia] < b->elts[ib]) {
++d;
++ia;
}
else if (a->elts[ia] > b->elts[ib]) {
++d;
++ib;
}
else {
++ia;
++ib;
}
}
return d + (a->n_elts - ia) + (b->n_elts - ib);
}
// Classic disjoint set find operation.
INDEX find(struct pair *pairs, INDEX x)
{
if (pairs[x].parent != x)
pairs[x].parent = find(pairs, pairs[x].parent);
return pairs[x].parent;
}
// Classic disjoint set union. Assumes x and y are canonical.
void do_union(struct pair *pairs, INDEX x, INDEX y)
{
if (x == y) return;
if (pairs[x].rank < pairs[y].rank)
pairs[x].parent = y;
else if (pairs[x].rank > pairs[y].rank)
pairs[y].parent = x;
else {
pairs[y].parent = x;
pairs[x].rank++;
}
}
// Sort predicate to sort pairs by Hamming distance.
int by_dist(const void *va, const void *vb)
{
const struct pair *a = va, *b = vb;
return a->dist < b->dist ? -1 : a->dist > b->dist ? +1 : 0;
}
// Return a plan with greedily found least Hamming distance sum.
// Just an array of indices into the given table of sets.
// TODO: Deal with calloc/malloc failure!
INDEX *make_plan(struct set *sets, INDEX n_sets)
{
// Allocate enough space for all the pairs taking care for overflow.
// This grows as the square of n_sets!
size_t n_pairs = (n_sets & 1) ? n_sets / 2 * n_sets : n_sets / 2 * (n_sets - 1);
struct pair *pairs = calloc(n_pairs, sizeof(struct pair));
// Initialize the pairs.
int ip = 0;
for (int j = 1; j < n_sets; j++) {
for (int i = 0; i < j; i++) {
struct pair *p = pairs + ip++;
p->i = i;
p->j = j;
p->dist = hamming_distance(sets + i, sets + j);
}
}
// Sort by Hamming distance.
qsort(pairs, n_pairs, sizeof pairs[0], by_dist);
// Initialize the disjoint sets.
for (int i = 0; i < n_pairs; i++) {
struct pair *p = pairs + i;
p->rank = 0;
p->parent = i;
}
// Greedily add pairs to the Hamiltonian path so long as they don't cause a non-path!
ip = 0;
struct adjacent *adjs = calloc(n_sets, sizeof(struct adjacent));
for (int i = 0; i < n_pairs; i++) {
struct pair *p = pairs + i;
struct adjacent *ai = adjs + p->i, *aj = adjs + p->j;
// Continue if we'd get a vertex with degree 3 by adding this edge.
if (ai->n == 2 || aj->n == 2) continue;
// Find (possibly) disjoint sets of pair's elements.
INDEX i_set = find(pairs, p->i);
INDEX j_set = find(pairs, p->j);
// Continue if we'd form a cycle by adding this edge.
if (i_set == j_set) continue;
// Otherwise add this edge.
do_union(pairs, i_set, j_set);
ai->elts[ai->n++] = p->j;
aj->elts[aj->n++] = p->i;
// Done after we've added enough pairs to touch all sets in a path.
if (++ip == n_sets - 1) break;
}
// Find a set with only one adjacency, the path start.
int p = -1;
for (int i = 0; i < n_sets; ++i)
if (adjs[i].n == 1) {
p = i;
break;
}
// A plan will be an ordering of sets.
INDEX *plan = malloc(n_sets * sizeof(INDEX));
// Walk along the path to get the ordering.
for (int i = 0; i < n_sets; i++) {
plan[i] = p;
struct adjacent *a = adjs + p;
// This logic figures out which adjacency takes us forward.
p = a->elts[ a->n > 1 && a->elts[1] != plan[i-1] ];
}
// Done with intermediate data structures.
free(pairs);
free(adjs);
return plan;
}
// A tiny test case. Much more testing needed!
#define ARRAY_SIZE(A) (sizeof A / sizeof A[0])
#define SET(Elts) { ARRAY_SIZE(Elts), Elts }
// Items must be in ascending order for Hamming distance calculation.
ITEM a1[] = { 'a', 'b' };
ITEM a2[] = { 'a', 'b', 'c' };
ITEM a3[] = { 'a', 'b', 'c', 'd' };
ITEM a4[] = { 'a', 'b', 'c', 'd' };
ITEM a5[] = { 'b', 'c' };
ITEM a6[] = { 'c', 'd' };
// Out of order to see how we do.
struct set sets[] = { SET(a3), SET(a6), SET(a1), SET(a4), SET(a5), SET(a2) };
int main(void)
{
int n_sets = ARRAY_SIZE(sets);
INDEX *plan = make_plan(sets, n_sets);
for (int i = 0; i < n_sets; i++) {
struct set *s = sets + plan[i];
printf("Step %d: ", i+1);
for (int j = 0; j < s->n_elts; j++) printf("%c ", (char)s->elts[j]);
printf("\n");
}
return 0;
}
I have a programming task (not homework) where I have to find the bridges in a graph. I worked on it a bit myself, but could not come up with anything satisfactory. So I googled it; I did find something, but I am unable to understand the algorithm as it is presented. Could someone please take a look at this code and give me an explanation?
public Bridge(Graph G) {
low = new int[G.V()];
pre = new int[G.V()];
for (int v = 0; v < G.V(); v++) low[v] = -1;
for (int v = 0; v < G.V(); v++) pre[v] = -1;
for (int v = 0; v < G.V(); v++)
if (pre[v] == -1)
dfs(G, v, v);
}
public int components() { return bridges + 1; }
private void dfs(Graph G, int u, int v) {
pre[v] = cnt++;
low[v] = pre[v];
for (int w : G.adj(v)) {
if (pre[w] == -1) {
dfs(G, v, w);
low[v] = Math.min(low[v], low[w]);
if (low[w] == pre[w]) {
StdOut.println(v + "-" + w + " is a bridge");
bridges++;
}
}
// update low number - ignore reverse of edge leading to v
else if (w != u)
low[v] = Math.min(low[v], pre[w]);
}
}
Def: A bridge is an edge which, when removed, disconnects the graph (or increases the number of connected components by 1).
One observation regarding bridges in a graph: no edge that belongs to a cycle can be a bridge. So in a graph such as A--B--C--A, removing any of the edges A--B, B--C and C--A will not disconnect the graph. But in an undirected graph the edge A--B also implies B--A, and A--B could still be a bridge even though it lies on the trivial "loop" A--B--A. So we should only consider cycles formed by a genuine back edge; this is where the parent information passed in the function argument helps: it keeps the algorithm from using loops such as A--B--A.
Now, to identify the back edge (i.e. the cycle A--B--C--A), we use the low and pre arrays. The pre array plays the role of the visited array in a plain dfs, but instead of just flagging a vertex as visited, we number each vertex according to its position in the dfs tree. The low array helps to detect cycles: low[v] is the lowest-numbered (by pre) vertex that v can reach.
Let's work through the graph A--B--C--D--B (the trailing B is the same vertex B, so B--C--D--B forms a cycle).
Starting at A
dfs:    ^                   ^                   ^                   ^                   ^
pre:    0 -1 -1 -1 -1    0--1 -1 -1  1    0--1--2 -1  1    0--1--2--3  1    0--1--2--3--1
graph:  A--B--C--D--B    A--B--C--D--B    A--B--C--D--B    A--B--C--D--B    A--B--C--D--B
low:    0 -1 -1 -1 -1    0--1 -1 -1  1    0--1--2 -1  1    0--1--2--3  1    0--1--2--3->1
At this point, you've encountered a cycle/loop in the graph. In your code, if (pre[w] == -1) will be false this time, so you'll enter the else part. The check there asks whether B is the parent vertex of D. It is not, so D absorbs B's pre value into its low. Continuing the example,
dfs:             ^
pre:    0--1--2--3
graph:  A--B--C--D
low:    0--1--2--1
This low value of D propagates back to C through the code low[v] = Math.min(low[v], low[w]);.
dfs:          ^             ^             ^
pre:    0--1--2--3--1    0--1--2--3--1    0--1--2--3--1
graph:  A--B--C--D--B    A--B--C--D--B    A--B--C--D--B
low:    0--1--1--1--1    0--1--1--1--1    0--1--1--1--1
Now that the cycle/loop is identified, we note that vertex A is not part of it, so A--B is printed as a bridge. The condition low[B] == pre[B] means that the tree edge into B is a bridge: the lowest-numbered vertex reachable from B is B itself.
Hope this explanation helps.
Not a new answer, but I needed this in Python. Here's a translation of the algorithm for an undirected NetworkX Graph object G:
import networkx as nx

def bridge_dfs(G, u, v, cnt, low, pre, bridges):
    cnt[0] += 1
    pre[v] = cnt[0]
    low[v] = pre[v]
    for w in nx.neighbors(G, v):
        if pre[w] == -1:
            bridge_dfs(G, v, w, cnt, low, pre, bridges)
            low[v] = min(low[v], low[w])
            if low[w] == pre[w]:
                bridges.append((v, w))
        elif w != u:
            low[v] = min(low[v], pre[w])

def get_bridges(G):
    bridges = []
    cnt = [0]  # single-element list so the counter survives the recursive calls
    low = {n: -1 for n in G.nodes()}
    pre = low.copy()
    for n in G.nodes():
        if pre[n] == -1:  # start a new DFS only in still-unvisited components
            bridge_dfs(G, n, n, cnt, low, pre, bridges)
    return bridges  # <- list of (node, node) tuples for all bridges in G
Be careful of Python's recursion depth limiter for large graphs...
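A quick usage example, assuming get_bridges as defined above:

import networkx as nx

G = nx.Graph()
G.add_edges_from([('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'B')])
print(get_bridges(G))  # B, C and D form a cycle, so the only bridge reported should be A-B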
Not a new answer, but I needed this for the JVM/Kotlin. Here's a translation that relies upon com.google.common.graph.Graph.
/**
* [T] The type of key held in the [graph].
*/
private class BridgeComputer<T>(private val graph: ImmutableGraph<T>) {
/**
* Counter.
*/
private var count = 0
/**
* `low[v]` = Lowest preorder of any vertex connected to `v`.
*/
private val low: MutableMap<T, Int> =
graph.nodes().map { it to -1 }.toMap(mutableMapOf())
/**
* `pre[v]` = Order in which [depthFirstSearch] examines `v`.
*/
private val pre: MutableMap<T, Int> =
graph.nodes().map { it to -1 }.toMap(mutableMapOf())
private val foundBridges = mutableSetOf<Pair<T, T>>()
init {
graph.nodes().forEach { v ->
// DO NOT PRE-FILTER!
if (pre[v] == -1) {
depthFirstSearch(v, v)
}
}
}
private fun depthFirstSearch(u: T, v: T) {
pre[v] = count++
low[v] = checkNotNull(pre[v]) { "pre[v]" }
graph.adjacentNodes(v).forEach { w ->
if (pre[w] == -1) {
depthFirstSearch(v, w)
low[v] =
Math.min(checkNotNull(low[v]) { "low[v]" }, checkNotNull(low[w]) { "low[w]" })
if (low[w] == pre[w]) {
println("$v - $w is a bridge")
foundBridges += (v to w)
}
} else if (w != u) {
low[v] =
Math.min(checkNotNull(low[v]) { "low[v]" }, checkNotNull(pre[w]) { "pre[w]" })
}
}
}
/**
* Holds the computed bridges.
*/
fun bridges() = ImmutableSet.copyOf(foundBridges)!!
}
Hopefully this makes someone's life easier.
Let's say you are given an edge (c, d) and you have to find out whether it is a bridge.
There are several methods to solve this problem, but let's concentrate on one.
Starting from c, do a BFS, but never traverse the edge c-d itself.
Keep track of visited vertices with a boolean array.
At the end, if d was visited, then even without the edge c-d we can still reach d from the source c, hence c-d is not a bridge; otherwise it is. Note that this costs one BFS, i.e. O(V + E), per queried edge, so testing every edge this way is slower than the DFS-based algorithms above.
Here is a short implementation of the above:
int isBridge(int V, ArrayList<ArrayList<Integer>> adj,int c,int d)
{
Queue<Integer> q = new LinkedList<>();
boolean visited[] = new boolean[V];
ArrayList<Integer> ls = new ArrayList<>();
q.add(c);
while(!q.isEmpty()) {
Integer v = q.remove();
if(visited[v])
continue;
visited[v] = true;
ls.add(v);
for(Integer e: adj.get(v)) {
if(visited[e] || (c == v && d == e))
continue;
q.add(e);
}
}
if(visited[d] == true)
return 0;
return 1;
}