Algorithm that will grab all nodes attached to a particular node

I am creating an algorithm that is based on directed graphs. I would like a function that will grab all the nodes that are attached to a particular node.
public List<Node> GetNodesInRange(Graph graph, int Range, Node selected)
{
    var result = new List<Node>();
    result.Add(selected);
    if (Range > 0)
    {
        foreach (Node neighbour in GetNeighbours(graph, selected))
        {
            result.AddRange(GetNodesInRange(graph, Range - 1, neighbour));
        }
    }
    return result;
}

private List<Node> GetNeighbours(Graph graph, Node selected)
{
    foreach (Node node in graph.node)
    {
        if (node == selected)
        {
            GetNodesInRange(node, Range - 1, /* don't know what to do here */);
            // and confused all the way down

It depends on which kind of implementation you are using for your graph:
edge list: you search for all edges that have the specified vertex as their first or second endpoint
adjacency list: the list attached to a node is already the list of nodes adjacent to it
adjacency matrix: you take the column (or row) of the vertex that you chose
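To make the three options concrete, here is a small Java sketch of a neighbour lookup for each representation (the integer vertex ids, data structures, and method names are illustrative assumptions, not the asker's types):

import java.util.*;

class Neighbours {
    // Edge list: scan every edge that has v as an endpoint and collect the other endpoint
    // (for a directed graph you would only match e[0] and collect e[1]).
    static List<Integer> fromEdgeList(List<int[]> edges, int v) {
        List<Integer> result = new ArrayList<>();
        for (int[] e : edges) {
            if (e[0] == v) result.add(e[1]);
            else if (e[1] == v) result.add(e[0]);
        }
        return result;
    }

    // Adjacency list: the stored list already is the answer.
    static List<Integer> fromAdjacencyList(Map<Integer, List<Integer>> adj, int v) {
        return adj.getOrDefault(v, Collections.emptyList());
    }

    // Adjacency matrix: read the row (or column) of v.
    static List<Integer> fromAdjacencyMatrix(int[][] m, int v) {
        List<Integer> result = new ArrayList<>();
        for (int u = 0; u < m[v].length; u++) {
            if (m[v][u] != 0) result.add(u);
        }
        return result;
    }
}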

You are calling GetNodesInRange inside GetNeighbours and GetNeighbours inside GetNodesInRange, and that mutual recursion is what is creating the problem.
Look at the answer by Jack.
And if you post what your Graph, Node, and Edge classes look like, we will be able to offer more help.
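For illustration, here is a minimal Java sketch of the same range search with the neighbour lookup reduced to a plain read of an adjacency list, so the two methods no longer call each other (the map-of-lists representation and the names are assumptions of the sketch, not the asker's classes):

import java.util.*;

class RangeSearch {
    // Collect every node reachable from 'selected' in at most 'range' hops.
    // The neighbour lookup is a simple read of the adjacency list; it never
    // calls back into this method.
    static Set<Integer> nodesInRange(Map<Integer, List<Integer>> adj, int range, int selected) {
        Set<Integer> result = new HashSet<>();
        result.add(selected);
        if (range > 0) {
            for (int neighbour : adj.getOrDefault(selected, Collections.emptyList())) {
                result.addAll(nodesInRange(adj, range - 1, neighbour));
            }
        }
        return result;
    }
}

Using a Set instead of a List also avoids reporting the same node several times when it can be reached along more than one path.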

Related

Traversal directed graph with cycles

I wrote a script to construct a directed graph using networkx in Python, and I want to get all possible paths from start to end, including cycles.
For example, there is a directed graph:
I want to get these paths:
A->B->D
A->B->C->D
A->B->C->B->D
A->B->C->B->C->B->D
...
As far as I know, there are many algorithms to find shortest paths or paths without cycles between 2 nodes, but I want to find paths with cycles.
Is there any algorithm to achieve this?
Thanks a lot.
As noted, there is an infinite number of such paths.
However, you can still generate all of them in a lazy way by maintaining, for k = 1, 2, ..., all nodes v (together with the path used to reach v) that you can reach from the start node in k steps; whenever v is your target node, remember it.
When you have to return the next path, (i) pop the first target node off the list, and (ii) generate the next candidates for all non-target nodes on the list. If there is no target node on the list, repeat (ii) until you find one.
The method works assuming the path always exists. If you don't find a path in n-1 steps, where n is the number of nodes, simply report that no path exists.
Here's the pseudo code for an algorithm that generates paths from shortest to longest assuming unit weights:
class Node {
    Vertex vertex   // the graph vertex this search state visits
    int steps       // number of edges from the start
    Node prev       // previous search state on this path

    Node(Vertex vertex, int steps = 0, Node prev = null) {
        this.vertex = vertex
        this.steps = steps
        this.prev = prev
    }
}
class PathGenerator {
    Queue<Node> nodes
    Vertex start, target

    PathGenerator(Vertex start, Vertex target) {
        this.start = start
        this.target = target
        nodes = new Queue<>()
        nodes.add(new Node(start))   // steps = 0, prev = null
    }

    Node nextPath(int n) {
        current_length = -1
        do {
            node = nodes.poll()
            current_length = node.steps
            // expand to all vertices you can reach from this one
            for each u in node.vertex.neighbors()
                nodes.add(new Node(u, node.steps + 1, node))
            // if this state visits the target, return the path ending at it
            if (node.vertex == target)
                return node
        } while (current_length < n)
        throw new Exception("no path of length <= n exists")
    }
}
Beware that the queue nodes can grow exponentially in the worst case (think of what happens if you run it on a complete graph).

Algorithm to find lowest common ancestor in directed acyclic graph?

Imagine a directed acyclic graph as follows, where:
"A" is the root (there is always exactly one root)
each node knows its parent(s)
the node names are arbitrary - nothing can be inferred from them
we know from another source that the nodes were added to the tree in the order A to G (e.g. they are commits in a version control system)
What algorithm could I use to determine the lowest common ancestor (LCA) of two arbitrary nodes, for example, the common ancestor of:
B and E is B
D and F is B
Note:
There is not necessarily a single path to a given node from the root (e.g. "G" has two paths), so you can't simply traverse paths from root to the two nodes and look for the last equal element
I've found LCA algorithms for trees, especially binary trees, but they do not apply here because a node can have multiple parents (i.e. this is not a tree)
Den Roman's link (Archived version) seems promising, but it seemed a little bit complicated to me, so I tried another approach. Here is a simple algorithm I used:
Let's say you want to compute LCA(x, y), with x and y two nodes.
Each node must have two values, color and count, initialized to white and 0 respectively.
Color all ancestors of x as blue (can be done using BFS)
Color all blue ancestors of y as red (BFS again)
For each red node in the graph, increment its parents' count by one
Each red node having a count value set to 0 is a solution.
There can be more than one solution, depending on your graph. For instance, consider this graph:
LCA(4,5) possible solutions are 1 and 2.
Note that it still works if you want to find the LCA of 3 or more nodes; you just need to add a different color for each of them.
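To make the coloring recipe concrete, here is a hedged Java sketch; it assumes each node knows its parents, stored as a map from node id to parent ids, and all names are illustrative:

import java.util.*;

class DagLca {
    static Set<Integer> lca(Map<Integer, List<Integer>> parents, int x, int y) {
        Set<Integer> blue = ancestorsOf(parents, x);      // step 1: ancestors of x
        Set<Integer> red = ancestorsOf(parents, y);       // step 2: blue ancestors of y,
        red.retainAll(blue);                              //         i.e. the common ancestors

        // step 3: count, for every red node, how many red nodes it is a parent of
        Map<Integer, Integer> count = new HashMap<>();
        for (int r : red) {
            for (int p : parents.getOrDefault(r, Collections.emptyList())) {
                if (red.contains(p)) count.merge(p, 1, Integer::sum);
            }
        }
        // step 4: red nodes that are parents of no other red node are the answers
        Set<Integer> result = new HashSet<>();
        for (int r : red) {
            if (count.getOrDefault(r, 0) == 0) result.add(r);
        }
        return result;
    }

    // BFS up the parent links; a node counts as an ancestor of itself here
    private static Set<Integer> ancestorsOf(Map<Integer, List<Integer>> parents, int start) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        seen.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int p : parents.getOrDefault(v, Collections.emptyList())) {
                if (seen.add(p)) queue.add(p);
            }
        }
        return seen;
    }
}

Treating a node as an ancestor of itself is what makes LCA(B, E) = B in the example above.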
I was looking for a solution to the same problem and I found a solution in the following paper:
http://dx.doi.org/10.1016/j.ipl.2010.02.014
In short, you are not looking for the lowest common ancestor, but for the lowest SINGLE common ancestor, which they define in this paper.
I know it's an old question with a pretty good discussion, but since I had a similar problem to solve, I came across JGraphT's Lowest Common Ancestor algorithms and thought they might be of help:
NativeLcaFinder
TarjanLowestCommonAncestor
Just some wild thinking: what about using both input nodes as roots and doing two BFS traversals simultaneously, step by step? At the step where their BLACK sets (the sets of visited nodes) first overlap, the algorithm stops and the overlapping nodes are their LCA(s). In this way, any other common ancestor will have a longer distance than what we have discovered.
Assume that you want to find the ancestors of x and y in a graph.
Maintain an array of vectors, parents, storing the parents of each node.
First do a BFS (keep storing the parents of each vertex) and find all the ancestors of x (find the parents of x and, using parents, find all the ancestors of x) and store them in a vector. Also store the depth of each of them in the vector.
Find the ancestors of y using the same method and store them in another vector. Now you have two vectors storing the ancestors of x and y respectively, along with their depths.
The LCA would be the common ancestor with the greatest depth, where depth is defined as the longest distance from the root (the vertex with in-degree 0). Now we can sort the vectors in decreasing order of depth and find the LCA. Using this method, we can even find multiple LCAs (if there are any).
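The one subtle step is the depth itself: "longest distance from the root" on a DAG is a longest-path computation, which can be done in linear time over a topological order. A rough Java sketch, where children maps a node to its out-neighbours (the representation and names are assumptions of the sketch):

import java.util.*;

class DagDepth {
    // Longest distance from a root (in-degree 0) to each node, via Kahn's topological order.
    static Map<Integer, Integer> longestDepths(Map<Integer, List<Integer>> children) {
        Map<Integer, Integer> inDegree = new HashMap<>();
        for (int v : children.keySet()) inDegree.putIfAbsent(v, 0);
        for (List<Integer> cs : children.values()) {
            for (int c : cs) inDegree.merge(c, 1, Integer::sum);
        }
        Deque<Integer> queue = new ArrayDeque<>();
        Map<Integer, Integer> depth = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : inDegree.entrySet()) {
            if (e.getValue() == 0) {
                queue.add(e.getKey());
                depth.put(e.getKey(), 0);
            }
        }
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int c : children.getOrDefault(v, Collections.emptyList())) {
                depth.merge(c, depth.get(v) + 1, Integer::max);   // keep the longest path
                if (inDegree.merge(c, -1, Integer::sum) == 0) queue.add(c);
            }
        }
        return depth;
    }
}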
This link (Archived version) describes how it is done in Mercurial - the basic idea is to find all parents for the specified nodes, group them per distance from the root, then do a search on those groups.
If the graph has cycles then 'ancestor' is loosely defined. Perhaps you mean the ancestor on the tree output of a DFS or BFS? Or perhaps by 'ancestor' you mean the node in the digraph that minimizes the number of hops from E and B?
If you're not worried about complexity, then you could compute an A* (or Dijkstra's shortest path) from every node to both E and B. For the nodes that can reach both E and B, you can find the node that minimizes PathLengthToE + PathLengthToB.
EDIT:
Now that you've clarified a few things, I think I understand what you're looking for.
If you can only go "up" the tree, then I suggest you perform a BFS from E and also a BFS from B. Every node in your graph will have two variables associated with it: hops from B and hops from E. Let both B and E have copies of the list of graph nodes. B's list is sorted by hops from B while E's list is sorted by hops from E.
For each element in B's list, attempt to find it in E's list. Place matches in a third list, sorted by hops from B + hops from E. After you've exhausted B's list, your third sorted list should contain the LCA at its head. This allows for one solution, multiple solutions (chosen arbitrarily among them by their BFS ordering for B), or no solution.
I also needed exactly the same thing: to find the LCA in a DAG (directed acyclic graph). The LCA problem is related to RMQ (the Range Minimum Query problem).
It is possible to reduce LCA to RMQ and find the desired LCA of two arbitrary nodes of a directed acyclic graph.
I found THIS TUTORIAL detailed and good. I am also planning to implement this.
I am proposing an O(|V| + |E|) time complexity solution, and I think this approach is correct; otherwise, please correct me.
Given a directed acyclic graph, we need to find the LCA of two vertices v and w.
Step 1: Find the shortest distance of every vertex from the root vertex using BFS (http://en.wikipedia.org/wiki/Breadth-first_search), which takes O(|V| + |E|) time, and also record the parent(s) of each vertex.
Step 2: Find the common ancestors of both vertices by following parents until we reach the root vertex. Time complexity: 2|V|, i.e. O(|V|).
Step 3: The LCA will be the common ancestor that has the maximum shortest distance.
So, this is an O(|V| + |E|) time complexity algorithm.
Please correct me if I am wrong; any other suggestions are welcome.
package FB;

import java.util.*;

public class commomAnsectorForGraph {

    public static void main(String[] args) {
        commomAnsectorForGraph com = new commomAnsectorForGraph();
        graphNode g = new graphNode('g');
        graphNode d = new graphNode('d');
        graphNode f = new graphNode('f');
        graphNode c = new graphNode('c');
        graphNode e = new graphNode('e');
        graphNode a = new graphNode('a');
        graphNode b = new graphNode('b');

        // build the edges: g -> {d, f}, d -> {c}, c -> {b}, b -> {a}, f -> {e}, e -> {b}
        List<graphNode> gc = new ArrayList<>();
        gc.add(d);
        gc.add(f);
        g.children = gc;

        List<graphNode> dc = new ArrayList<>();
        dc.add(c);
        d.children = dc;

        List<graphNode> cc = new ArrayList<>();
        cc.add(b);
        c.children = cc;

        List<graphNode> bc = new ArrayList<>();
        bc.add(a);
        b.children = bc;

        List<graphNode> fc = new ArrayList<>();
        fc.add(e);
        f.children = fc;

        List<graphNode> ec = new ArrayList<>();
        ec.add(b);
        e.children = ec;

        List<graphNode> ac = new ArrayList<>();
        a.children = ac;

        graphNode gn = com.findAncestor(g, c, d);
        System.out.println(gn.value);
    }

    public graphNode findAncestor(graphNode root, graphNode a, graphNode b) {
        if (root == null) return null;
        if (root.value == a.value || root.value == b.value) return root;
        List<graphNode> list = root.children;
        int count = 0;
        List<graphNode> temp = new ArrayList<>();
        for (graphNode node : list) {
            graphNode res = findAncestor(node, a, b);
            temp.add(res);
            if (res != null) {
                count++;
            }
        }
        // both targets were found under different children, so root is their ancestor
        if (count == 2) return root;
        for (graphNode t : temp) {
            if (t != null) return t;
        }
        return null;
    }
}

class graphNode {
    char value;
    graphNode parent;
    List<graphNode> children;

    public graphNode(char value) {
        this.value = value;
    }
}
Everyone, please try this one in Java.
static String recentCommonAncestor(String[] commitHashes, String[][] ancestors, String strID, String strID1)
{
    HashSet<String> setOfAncestorsLower = new HashSet<String>();
    HashSet<String> setOfAncestorsUpper = new HashSet<String>();
    String[] arrPair = {strID, strID1};
    Arrays.sort(arrPair);
    // commitHashes is assumed to be sorted in descending order, so search with a reversed comparator
    Comparator<String> comp = new Comparator<String>() {
        @Override
        public int compare(String s1, String s2) {
            return s2.compareTo(s1);
        }
    };
    int indexUpper = Arrays.binarySearch(commitHashes, arrPair[0], comp);
    int indexLower = Arrays.binarySearch(commitHashes, arrPair[1], comp);
    setOfAncestorsLower.addAll(Arrays.asList(ancestors[indexLower]));
    setOfAncestorsUpper.addAll(Arrays.asList(ancestors[indexUpper]));
    HashSet<String>[] sets = new HashSet[] {setOfAncestorsLower, setOfAncestorsUpper};
    for (int i = indexLower + 1; i < commitHashes.length; i++)
    {
        for (int j = 0; j < 2; j++)
        {
            if (sets[j].contains(commitHashes[i]))
            {
                if (i > indexUpper)
                    if (sets[1 - j].contains(commitHashes[i]))
                        return commitHashes[i];
                sets[j].addAll(Arrays.asList(ancestors[i]));
            }
        }
    }
    return null;
}
The idea is very simple. We suppose that commitHashes is ordered in descending sequence.
We find the lower and upper indexes of the two strings (the hash values themselves don't matter).
It is clear that (given the descending order) the common ancestor can only appear after the upper index (the lower value among the hashes).
Then we start enumerating the commit hashes and build the chains of descendant-to-parent links. For this purpose we have two hash sets, setOfAncestorsLower and setOfAncestorsUpper, initialised with the parents of the lower and upper commit hashes. If the next commit hash belongs to either of the chains (hash sets),
then, if the current index is past the upper index and the hash is also contained in the other set (chain), we return the current hash as the result. If not, we add its parents (ancestors[i]) to the hash set that traces the set of ancestors in which the current element is contained. That is basically all.

Finding all the shortest paths between two nodes in unweighted undirected graph

I need help finding all the shortest paths between two nodes in an unweighted undirected graph.
I am able to find one of the shortest paths using BFS, but so far I am lost as to how I could find and print out all of them.
Any idea of the algorithm / pseudocode I could use?
As a caveat, remember that there can be exponentially many shortest paths between two nodes in a graph. Any algorithm for this will potentially take exponential time.
That said, there are a few relatively straightforward algorithms that can find all the paths. Here are two.
BFS + Reverse DFS
When running a breadth-first search over a graph, you can tag each node with its distance from the start node. The start node is at distance 0, and then, whenever a new node is discovered for the first time, its distance is one plus the distance of the node that discovered it. So begin by running a BFS over the graph, writing down the distances to each node.
Once you have this, you can find a shortest path from the source to the destination as follows. Start at the destination, which will be at some distance d from the start node. Now, look at all nodes with edges entering the destination node. A shortest path from the source to the destination must end by following an edge from a node at distance d-1 to the destination at distance d. So, starting at the destination node, walk backwards across some edge to any node you'd like at distance d-1. From there, walk to a node at distance d-2, a node at distance d-3, etc. until you're back at the start node at distance 0.
This procedure will give you one path back in reverse order, and you can flip it at the end to get the overall path.
You can then find all the paths from the source to the destination by running a depth-first search from the end node back to the start node, at each point trying all possible ways to walk backwards from the current node to a previous node whose distance is exactly one less than the current node's distance.
(I personally think this is the easiest and cleanest way to find all possible paths, but that's just my opinion.)
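A compact Java sketch of this BFS plus reverse-DFS idea (the adjacency-map representation and the names are assumptions of the sketch, not a reference implementation):

import java.util.*;

class AllShortestPaths {
    // adj holds undirected adjacency lists; returns every shortest path from src to dst.
    static List<List<Integer>> all(Map<Integer, List<Integer>> adj, int src, int dst) {
        // 1. BFS: distance from src to every reachable node
        Map<Integer, Integer> dist = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        dist.put(src, 0);
        queue.add(src);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int u : adj.getOrDefault(v, Collections.emptyList())) {
                if (!dist.containsKey(u)) {
                    dist.put(u, dist.get(v) + 1);
                    queue.add(u);
                }
            }
        }
        // 2. Reverse DFS: from dst, only step to neighbours exactly one unit closer to src
        List<List<Integer>> paths = new ArrayList<>();
        if (dist.containsKey(dst)) walkBack(adj, dist, dst, new ArrayDeque<>(), paths);
        return paths;
    }

    private static void walkBack(Map<Integer, List<Integer>> adj, Map<Integer, Integer> dist,
                                 int v, Deque<Integer> suffix, List<List<Integer>> paths) {
        suffix.push(v);                          // the path is built back to front
        if (dist.get(v) == 0) {
            paths.add(new ArrayList<>(suffix));  // reached src; the copy reads src ... dst
        } else {
            for (int u : adj.getOrDefault(v, Collections.emptyList())) {
                if (dist.containsKey(u) && dist.get(u) == dist.get(v) - 1) {
                    walkBack(adj, dist, u, suffix, paths);
                }
            }
        }
        suffix.pop();
    }
}

The dist map built by the BFS is what guarantees that the backward walk only ever follows edges that lie on some shortest path.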
BFS With Multiple Parents
This next algorithm is a modification to BFS that you can use as a preprocessing step to speed up generation of all possible paths. Remember that as BFS runs, it proceeds outwards in "layers," getting a single shortest path to all nodes at distance 0, then distance 1, then distance 2, etc. The motivating idea behind BFS is that any node at distance k + 1 from the start node must be connected by an edge to some node at distance k from the start node. BFS discovers this node at distance k + 1 by finding some path of length k to a node at distance k, then extending it by some edge.
If your goal is to find all shortest paths, then you can modify BFS by extending every path to a node at distance k to all the nodes at distance k + 1 that they connect to, rather than picking a single edge. To do this, modify BFS in the following way: whenever you process an edge by adding its endpoint in the processing queue, don't immediately mark that node as being done. Instead, insert that node into the queue annotated with which edge you followed to get to it. This will potentially let you insert the same node into the queue multiple times if there are multiple nodes that link to it. When you remove a node from the queue, then you mark it as being done and never insert it into the queue again. Similarly, rather than storing a single parent pointer, you'll store multiple parent pointers, one for each node that linked into that node.
If you do this modified BFS, you will end up with a DAG where every node will either be the start node and have no outgoing edges, or will be at distance k + 1 from the start node and will have a pointer to each node of distance k that it is connected to. From there, you can reconstruct all shortest paths from some node to the start node by listing of all possible paths from your node of choice back to the start node within the DAG. This can be done recursively:
There is only one path from the start node to itself, namely the empty path.
For any other node, the paths can be found by following each outgoing edge, then recursively extending those paths to yield a path back to the start node.
This approach takes more time and space than the one listed above because many of the paths found this way will not be moving in the direction of the destination node. However, it only requires a modification to BFS, rather than a BFS followed by a reverse search.
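As a rough Java sketch of the modified BFS (this version records the extra parent pointers directly, guarded by an explicit distance check, instead of re-inserting nodes into the queue; the result is the same shortest-path DAG, and the names are illustrative):

import java.util.*;

class ShortestPathDag {
    // For each node reachable from src, record every neighbour that lies one step closer
    // to src; the resulting map is the DAG of all shortest paths back to src.
    static Map<Integer, List<Integer>> parentDag(Map<Integer, List<Integer>> adj, int src) {
        Map<Integer, Integer> dist = new HashMap<>();
        Map<Integer, List<Integer>> parents = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        dist.put(src, 0);
        parents.put(src, new ArrayList<>());              // the start node has no parents
        queue.add(src);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int u : adj.getOrDefault(v, Collections.emptyList())) {
                if (!dist.containsKey(u)) {               // first time u is reached
                    dist.put(u, dist.get(v) + 1);
                    queue.add(u);
                }
                if (dist.get(u) == dist.get(v) + 1) {     // v is one step closer to src than u
                    parents.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
                }
            }
        }
        return parents;
    }
}

All shortest paths can then be enumerated by walking this parent map recursively from the destination back to the start node, exactly as described above.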
Hope this helps!
@templatetypedef is correct, but he forgot to mention the distance check that must be done before any parent links are added to a node. This means that we keep the distance from the source in each node and increment the distance by one for children. We must skip this increment and the parent addition in case the child was already visited and has a lower distance.
public void addParent(Node n) {
    // refuse the parent if its level (distance from the source) is equal to ours
    if (n.level == level) {
        return;
    }
    parents.add(n);
    level = n.level + 1;
}
The full Java implementation can be found at the following link:
http://ideone.com/UluCBb
I encountered a similar problem while solving this: https://oj.leetcode.com/problems/word-ladder-ii/
The way I tried to deal with it is to first find the shortest distance using BFS; let's say the shortest distance is d. Now apply DFS, and in the DFS recursive calls don't go beyond recursion depth d.
However, this might end up exploring all paths, as mentioned by @templatetypedef.
First, find the distance-to-start of all nodes using breadth-first search.
(if there are a lot of nodes, you can use A* and stop when the top of the queue has distance-to-start > distance-to-start(end-node); this will give you all nodes that belong to some shortest path)
Then just backtrack from the end-node. Anytime a node is connected to two (or more) nodes with a lower distance-to-start, you branch off into two (or more) paths.
@templatetypedef, your answer was very good, thank you a lot for that one (!!), but it missed one point:
If you have a graph like this:
A-B-C-E-F
| |
D------
Now let's imagine I want this path:
A -> E.
It will expand like this:
A -> B -> D -> C -> F -> E.
The problem there is that you will have F as a parent of E, but
A->B->D->F->E is longer than
A->B->C->E. You will have to take care of tracking the distances of the parents you are so happily adding.
Step 1: Traverse the graph from the source by BFS and assign each node the minimal distance from the source
Step 2: The distance assigned to the target node is the shortest length
Step 3: From the source, do a DFS along all paths where the minimal distance increases by exactly one at each step, until the target node is reached or the shortest length is reached. Print the path whenever the target node is reached.
A transformation sequence from word beginWord to word endWord using a dictionary wordList is a sequence of words beginWord -> s1 -> s2 -> ... -> sk such that:
Every adjacent pair of words differs by a single letter.
Every si for 1 <= i <= k is in wordList. Note that beginWord does not need to be in wordList.
sk == endWord
Given two words, beginWord and endWord, and a dictionary wordList, return all the shortest transformation sequences from beginWord to endWord, or an empty list if no such sequence exists. Each sequence should be returned as a list of the words [beginWord, s1, s2, ..., sk].
Example 1:
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]
Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]
Explanation: There are 2 shortest transformation sequences:
"hit" -> "hot" -> "dot" -> "dog" -> "cog"
"hit" -> "hot" -> "lot" -> "log" -> "cog"
Example 2:
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]
Output: []
Explanation: The endWord "cog" is not in wordList, therefore there is no valid transformation sequence.
https://leetcode.com/problems/word-ladder-ii
import java.util.*;

class Solution {
    public List<List<String>> findLadders(String beginWord, String endWord, List<String> wordList) {
        List<List<String>> result = new ArrayList<>();
        if (wordList == null) {
            return result;
        }
        Set<String> dicts = new HashSet<>(wordList);
        if (!dicts.contains(endWord)) {
            return result;
        }
        Set<String> start = new HashSet<>();
        Set<String> end = new HashSet<>();
        Map<String, List<String>> map = new HashMap<>();
        start.add(beginWord);
        end.add(endWord);
        // BFS from both ends to build the word -> next-words map of the shortest-path DAG
        bfs(map, start, end, dicts, false);
        List<String> subList = new ArrayList<>();
        subList.add(beginWord);
        // DFS over the map to enumerate every shortest transformation sequence
        dfs(map, result, subList, beginWord, endWord);
        return result;
    }

    private void bfs(Map<String, List<String>> map, Set<String> start, Set<String> end, Set<String> dicts, boolean reverse) {
        // Processed all the words in start
        if (start.size() == 0) {
            return;
        }
        dicts.removeAll(start);
        Set<String> tmp = new HashSet<>();
        boolean finish = false;
        for (String str : start) {
            char[] chars = str.toCharArray();
            for (int i = 0; i < chars.length; i++) {
                char old = chars[i];
                for (char n = 'a'; n <= 'z'; n++) {
                    if (old == n) {
                        continue;
                    }
                    chars[i] = n;
                    String candidate = new String(chars);
                    if (!dicts.contains(candidate)) {
                        continue;
                    }
                    if (end.contains(candidate)) {
                        finish = true;
                    } else {
                        tmp.add(candidate);
                    }
                    String key = reverse ? candidate : str;
                    String value = reverse ? str : candidate;
                    if (!map.containsKey(key)) {
                        map.put(key, new ArrayList<>());
                    }
                    map.get(key).add(value);
                }
                // restore after processing
                chars[i] = old;
            }
        }
        if (!finish) {
            // Switch start and end if the frontier grown from start is bigger
            if (tmp.size() > end.size()) {
                bfs(map, end, tmp, dicts, !reverse);
            } else {
                bfs(map, tmp, end, dicts, reverse);
            }
        }
    }

    private void dfs(Map<String, List<String>> map,
                     List<List<String>> result, List<String> subList,
                     String beginWord, String endWord) {
        if (beginWord.equals(endWord)) {
            result.add(new ArrayList<>(subList));
            return;
        }
        if (!map.containsKey(beginWord)) {
            return;
        }
        for (String word : map.get(beginWord)) {
            subList.add(word);
            dfs(map, result, subList, word, endWord);
            subList.remove(subList.size() - 1);
        }
    }
}

How to create distinct set from other sets?

While solving problems on Techgig.com, I got stuck on one of them. The problem is like this:
A company organizes two trips for their employees in a year. They want to know whether all the employees can be sent on the trips or not. The condition is that no employee can go on both trips. Also, to determine which employees can go together, the constraint is that employees who have worked together in the past won't be in the same group.
Examples of the problem:
Suppose the work history is given as {(1,2),(2,3),(3,4)}; then it is possible to accommodate all four employees in two trips (one trip consisting of employees 1 & 3 and the other having employees 2 & 4). Neither of the two employees in the same trip has worked together in the past. Suppose the work history is given as {(1,2),(1,3),(2,3)}; then there is no way to have two trips satisfying the company rule and accommodating all the employees.
Can anyone tell me how to proceed on this problem?
I am using this code for DFS and coloring the vertices.
static boolean DFS(int rootNode) {
    Stack<Integer> s = new Stack<Integer>();
    s.push(rootNode);
    state[rootNode] = true;
    color[rootNode] = 1;
    while (!s.isEmpty()) {
        int u = s.peek();
        for (int child = 0; child < numofemployees; child++) {
            if (adjmatrix[u][child] == 1) {
                if (!state[child]) {
                    state[child] = true;
                    s.push(child);
                    color[child] = color[u] == 1 ? 2 : 1;
                    break;
                } else {
                    s.pop();
                    if (color[u] == color[child])
                        return false;
                }
            }
        }
    }
    return true;
}
This problem is functionally equivalent to testing if an undirected graph is bipartite. A bipartite graph is a graph for which all of the nodes can be distributed among two sets, and within each set, no node is adjacent to another node.
To solve the problem, take the following steps.
Using the adjacency pairs, construct an undirected graph. This is pretty straightforward: each number represents a node, and for each pair you are given, form a connection between those nodes.
Test the newly generated graph for bipartiteness. This can be achieved in linear time, as described here.
If the graph is bipartite and you've generated the two node sets, the answer to the problem is yes, and each node set, along with its nodes (employees), corresponds to one of the two trips.
Excerpt on how to test for bipartiteness:
It is possible to test whether a graph is bipartite, and to return
either a two-coloring (if it is bipartite) or an odd cycle (if it is
not) in linear time, using depth-first search. The main idea is to
assign to each vertex the color that differs from the color of its
parent in the depth-first search tree, assigning colors in a preorder
traversal of the depth-first-search tree. This will necessarily
provide a two-coloring of the spanning tree consisting of the edges
connecting vertices to their parents, but it may not properly color
some of the non-tree edges. In a depth-first search tree, one of the
two endpoints of every non-tree edge is an ancestor of the other
endpoint, and when the depth first search discovers an edge of this
type it should check that these two vertices have different colors. If
they do not, then the path in the tree from ancestor to descendant,
together with the miscolored edge, form an odd cycle, which is
returned from the algorithm together with the result that the graph is
not bipartite. However, if the algorithm terminates without detecting
an odd cycle of this type, then every edge must be properly colored,
and the algorithm returns the coloring together with the result that
the graph is bipartite.
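The excerpt describes a DFS coloring; the same check is just as simple with BFS. Here is a minimal sketch over the adjacency-matrix representation the question uses (the method name and the 1/2 color encoding are illustrative):

import java.util.*;

class BipartiteCheck {
    // Returns true if the undirected graph given by the adjacency matrix is 2-colorable.
    static boolean isBipartite(int[][] adj) {
        int n = adj.length;
        int[] color = new int[n];                  // 0 = uncolored, 1 and 2 = the two trips
        for (int start = 0; start < n; start++) {  // color every connected component
            if (color[start] != 0) continue;
            color[start] = 1;
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(start);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int v = 0; v < n; v++) {
                    if (adj[u][v] != 1) continue;
                    if (color[v] == 0) {
                        color[v] = 3 - color[u];   // opposite color of u
                        queue.add(v);
                    } else if (color[v] == color[u]) {
                        return false;              // same-colored neighbours: odd cycle
                    }
                }
            }
        }
        return true;
    }
}

Note that the outer loop restarts the coloring in every connected component, since the input graph need not be connected.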
I even used a recursive solution, but it also passes the same number of cases. Am I missing any special-case handling?
Below is the recursive solution to the problem:
static void dfs(int v, int curr) {
    state[v] = true;
    color[v] = curr;
    for (int i = 0; i < numofemployees; i++) {
        if (adjmatrix[v][i] == 1) {
            if (color[i] == curr) {
                bipartite = false;
                return;
            }
            if (!state[i])
                dfs(i, curr == 1 ? 2 : 1);
        }
    }
}
I am calling this function from main() as dfs(0, 1), where 0 is the starting vertex and 1 is one of the colors.

Random contraction algorithm for finding Min Cuts in a graph

Okay so here's my algorithm for finding a Cut in a graph (I'm not talking about a min cut here)
Say we're given an adjacency list of an undirected graph.
Choose any vertex of the graph (let this be denoted by pivot).
Choose any other vertex of the graph (randomly); denote this by x.
If the two vertices have an edge between them, then remove that edge from the graph and dump all the vertices that x is connected to onto pivot (if not, go back to step 2).
If any other vertices were connected to x, then change the adjacency list so that x is replaced by pivot, i.e. they are now connected to pivot.
If the number of vertices is greater than 2, go back to step 2.
If it is equal to 2, just count the number of vertices present in the adjacency list of either of the 2 points. This will give the cut.
My question is, is this algorithm correct?
That is a nice explanation of Karger's Min-Cut Algorithm for undirected graphs.
I think there might be one detail you missed, or perhaps I just misread your description.
You want to remove all self-loops.
For instance, after you remove a vertex and run through your algorithm, Vertex A may now have an edge that goes from Vertex A to Vertex A. This is called a self-loop, and self-loops are generated frequently in the process of contracting two vertices. As a first step, you can simply check the whole graph for self-loops, though there are some more sophisticated approaches.
Does that make sense?
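For illustration, here is a minimal Java sketch of one contraction step that keeps parallel edges but drops the self-loops the merge creates (the adjacency-map multigraph representation is an assumption of the sketch, not the poster's code):

import java.util.*;

class Contraction {
    // Merge vertex x into pivot: move x's edges onto pivot, repoint the other endpoints,
    // and drop the pivot-x edges, which would otherwise become self-loops.
    static void contract(Map<Integer, List<Integer>> adj, int pivot, int x) {
        for (int neighbour : adj.remove(x)) {
            if (neighbour == pivot) continue;                       // would become a self-loop: drop it
            adj.get(pivot).add(neighbour);                          // dump x's edge onto pivot
            adj.get(neighbour).replaceAll(v -> v == x ? pivot : v); // repoint the other endpoint
        }
        adj.get(pivot).removeIf(v -> v == x);                       // pivot's side of the pivot-x edges
    }
}

Repeating this until only two vertices remain, the length of either remaining adjacency list is the size of the cut found, as in the final step of the algorithm above.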
I'll only change your randomization.
After choosing the first vertex, choose another one from its adjacency list. Now you are sure that the two vertices have an edge between them. The next step is finding that vertex in the adjacency list.
Agreed that you should definitely remove self-loops.
Another point I want to add: after you randomly choose the first vertex, you don't have to keep randomly choosing nodes until you find one that is connected to the first node; you can simply choose from the ones that are connected to the first vertex, because you know how many nodes the first chosen one connects to. So it is a second random selection within a smaller range, which is effectively just choosing a random edge (determined by its two nodes/vertices). I have some C# code implementing Karger's algorithm that you can play around with. It's not the most efficient code (in particular, a more efficient data structure could be used); when I tested it on a 200-node graph, 10000 iterations took about 30 seconds to run.
using System;
using System.Collections.Generic;
using System.Linq;

namespace MinCut
{
    internal struct Graph
    {
        public int N { get; private set; }
        public readonly List<int> Connections;

        public Graph(int n) : this()
        {
            N = n;
            Connections = new List<int>();
        }

        public override bool Equals(object obj)
        {
            return Equals((Graph)obj);
        }

        public override int GetHashCode()
        {
            return base.GetHashCode();
        }

        private bool Equals(Graph g)
        {
            return N == g.N;
        }
    }

    internal sealed class GraphContraction
    {
        public static void Run(IList<Graph> graphs, int i)
        {
            var liveGraphs = graphs.Count;
            if (i >= liveGraphs)
            {
                throw new Exception("Wrong random index generation; index cannot be larger than the number of nodes");
            }
            var leftV = graphs[i];
            var r = new Random();
            var index = r.Next(0, leftV.Connections.Count);
            var rightV = graphs.Where(x => x.N == leftV.Connections[index]).Single();
            foreach (var v in graphs.Where(x => !x.Equals(leftV) && x.Connections.Contains(leftV.N)))
            {
                v.Connections.RemoveAll(x => x == leftV.N);
            }
            foreach (var c in leftV.Connections)
            {
                if (c != rightV.N)
                {
                    rightV.Connections.Add(c);
                    int c1 = c;
                    graphs.Where(x => x.N == c1).First().Connections.Add(rightV.N);
                }
            }
            graphs.Remove(leftV);
        }
    }
}
