Find inclusion graph from ordered set - algorithm

Let's say I have a set of elements and an order relation between them (partial, not total).
To keep it simple, let's say it's 1D segments ordered by inclusion.
From the raw list of segments, how can I build the graph of direct inclusions (assuming my set allows it): from the black segments, how can I rebuild the red graph?
I have an O(n^3) solution in C#, which is perfectly ugly, and I wonder if there is anything better [pseudo-code]:
interface INode
{
    bool Includes(INode other);
    List<INode> Childs { get; set; }
}

class Graph
{
    public INode Root { get; set; }
}

class GraphBuilder
{
    public static Graph Build(IList<INode> nodes)
    {
        Graph result = new Graph();
        foreach (var segment in nodes)
        {
            segment.Childs = new List<INode>();
            bool isRoot = true;
            foreach (var segment2 in nodes)
            {
                if (segment2.Includes(segment))
                {
                    isRoot = false;
                }
                if (segment.Includes(segment2))
                {
                    // segment2 is a direct child unless some third segment
                    // sits strictly between the two
                    bool isDirectChild = true;
                    foreach (var segment3 in nodes)
                    {
                        if (segment.Includes(segment3) && segment3.Includes(segment2))
                        {
                            isDirectChild = false;
                            break;
                        }
                    }
                    if (isDirectChild)
                        segment.Childs.Add(segment2);
                }
            }
            if (isRoot)
            {
                result.Root = segment;
            }
        }
        return result;
    }
}

First do a topological sort of the DAG, using an efficient algorithm such as Kahn's algorithm, in time O(V+E).
For each element, construct just the arrow from itself to the least (in the topological order) thing it is less than in the original DAG, i.e. its most specific container. Figuring these out also takes time O(V+E).
That's your red graph, in time O(V+E).
Note that just reading the DAG takes time O(V+E), so this is, up to a constant, the best you could possibly do.
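To illustrate, here is a rough C# sketch of that recipe (all names and the representation are mine, not from the question's code). It assumes the full inclusion DAG is already given as adjacency lists, contains[v] listing every segment included in v directly or transitively, and that the segments nest into a tree, as the question guarantees:

// Sketch only: builds the "red" direct-inclusion tree from the full DAG.
// Assumes using System.Collections.Generic.
static List<int>[] DirectChildren(List<int>[] contains)
{
    int n = contains.Length;
    // Kahn's algorithm: topological order, containers before their contents.
    var inDegree = new int[n];
    foreach (var list in contains)
        foreach (int c in list)
            inDegree[c]++;
    var queue = new Queue<int>();
    for (int v = 0; v < n; v++)
        if (inDegree[v] == 0)
            queue.Enqueue(v); // roots
    var topoPos = new int[n];
    int next = 0;
    while (queue.Count > 0)
    {
        int v = queue.Dequeue();
        topoPos[v] = next++;
        foreach (int c in contains[v])
            if (--inDegree[c] == 0)
                queue.Enqueue(c);
    }
    // For each node, keep only the edge from its most specific container,
    // i.e. the container that comes latest in the topological order.
    var parent = new int[n];
    for (int v = 0; v < n; v++)
        parent[v] = -1;
    for (int v = 0; v < n; v++)
        foreach (int c in contains[v])
            if (parent[c] == -1 || topoPos[v] > topoPos[parent[c]])
                parent[c] = v;
    var children = new List<int>[n];
    for (int v = 0; v < n; v++)
        children[v] = new List<int>();
    for (int c = 0; c < n; c++)
        if (parent[c] != -1)
            children[parent[c]].Add(c);
    return children;
}

Each node ends up with a single incoming edge from its most specific container, which is exactly the red graph.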

Related

Unvisited neighbours in graph path

I am trying to find the distance between two nodes in an undirected graph, but when the search moves to a different path, the count is not calculated properly.
I am not sure what the best approach is for:
1) counting the path length while excluding unnecessary paths;
2) storing the path (I am considering LinkedList, ArrayList, etc.; what is the best choice for this situation?).
Any help would be appreciated.
Here is code that solves this problem (path and destination are fields visible to Measure):
bool Measure(Node node)
{
    path.Add(node);
    node.IsVisited = true;
    if (node == destination)
        return true; // keep the destination on the path
    foreach (var neighbor in node.Neighbors.Where(n => !n.IsVisited))
    {
        if (Measure(neighbor))
            return true; // a branch reached the destination; keep the path intact
    }
    path.RemoveAt(path.Count - 1); // dead end: backtrack
    return false;
}
You can use any dynamic-length structure such as List or LinkedList for storing the path; List is recommended for simplicity.
Usage:
path = new List<Node>();
Measure(firstNode);
Console.WriteLine(path.Count);
This works if there is a path between the two nodes; otherwise the path is empty.
class Node
{
    public string Name { get; set; }
    public bool IsVisited { get; set; }
    public List<Node> Neighbors { get; set; } = new List<Node>();
}

Randomised Path on graph - set length, no crossing, no dead ends

I am working on a game with an 8-wide, 5-high grid. I have a 'snake' feature which needs to enter the grid and "walk" around for a set distance (20, for example). There are certain restrictions on the movement of the snake:
It needs to cover the predetermined number of blocks (20).
It cannot cross itself or double back (no dead ends).
Currently I am using a randomised depth-first search, but I have found that it occasionally goes back over itself (crosses its own path), and I am not sure this is the best way to go about it.
Options considered: I have looked at using A*, but am struggling to figure out a good way to apply it without a predetermined goal and under the conditions above. I have also considered adding a heuristic to favour blocks that are not on the outside of the grid, but I am not sure either of these will solve the issue at hand.
Any help is appreciated, and I can add more detail or code if necessary:
public List<GridNode> RandomizedDepthFirst(int distance, GridNode startNode)
{
    Stack<GridNode> frontier = new Stack<GridNode>();
    frontier.Push(startNode);
    List<GridNode> visited = new List<GridNode>();
    visited.Add(startNode);
    while (frontier.Count > 0 && visited.Count < distance)
    {
        GridNode current = frontier.Pop();
        if (current.nodeState != GridNode.NodeState.VISITED)
        {
            current.nodeState = GridNode.NodeState.VISITED;
            GridNode[] vals = current.FindNeighbours().ToArray();
            List<GridNode> neighbours = new List<GridNode>();
            foreach (GridNode g in vals.OrderBy(x => XMLReader.NextInt(0, 0)))
            {
                neighbours.Add(g);
            }
            foreach (GridNode g in neighbours)
            {
                frontier.Push(g);
            }
            if (!visited.Contains(current))
            {
                visited.Add(current);
            }
        }
    }
    return visited;
}
An easy way to account for backtracking is to use a recursive DFS.
Consider the following graph (the one built in the main method below).
Here is a Java implementation of a DFS that removes nodes from the path when backtracking (note the comments):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Stack;

public class Graph {

    //all graph nodes
    private Node[] nodes;

    public Graph(int numberOfNodes) {
        nodes = new Node[numberOfNodes];
        //construct nodes
        for (int i = 0; i < numberOfNodes; i++) {
            nodes[i] = new Node(i);
        }
    }

    // add edge from a to b
    public Graph addEdge(int from, int to) {
        nodes[from].addNeighbor(nodes[to]);
        //unless unidirectional: if a is connected to b
        //then b should be connected to a
        nodes[to].addNeighbor(nodes[from]);
        return this; //makes it convenient to add multiple edges
    }

    //returns a list of path size of pathLength.
    //if path not found: returns an empty list
    public List<Node> dfs(int pathLength, int startNode) {
        List<Node> path = new ArrayList<>(); //a list to hold all nodes in path
        Stack<Node> frontier = new Stack<>();
        frontier.push(nodes[startNode]);
        dfs(pathLength, frontier, path);
        return path;
    }

    private boolean dfs(int pathLength, Stack<Node> frontier, List<Node> path) {
        if (frontier.size() < 1) {
            return false; //stack is empty, no path found
        }
        Node current = frontier.pop();
        current.setVisited(true);
        path.add(current);
        if (path.size() == pathLength) {
            return true; //path size of pathLength found
        }
        System.out.println("testing node " + current); //for testing
        Collections.shuffle(current.getNeighbors()); //shuffle list of neighbours
        for (Node node : current.getNeighbors()) {
            if (!node.isVisited()) {
                frontier.push(node);
                if (dfs(pathLength, frontier, path)) { //if solution found
                    return true; //return true, continue otherwise
                }
            }
        }
        //if all neighbours were tested and no solution found, current node
        //is not part of the path
        path.remove(current); //remove it
        current.setVisited(false); //this accounts for loops: you may get to this
                                   //node again from another edge
        return false;
    }

    public static void main(String[] args) {
        Graph graph = new Graph(9); //make graph
        graph.addEdge(0, 4) //add edges
             .addEdge(0, 1)
             .addEdge(1, 2)
             .addEdge(1, 4)
             .addEdge(4, 3)
             .addEdge(2, 3)
             .addEdge(2, 5)
             .addEdge(3, 5)
             .addEdge(1, 6)
             .addEdge(6, 7)
             .addEdge(7, 8);
        //print path with length of 6, starting with node 1
        System.out.println(graph.dfs(6, 1));
    }
}

class Node {

    private int id;
    private boolean isVisited;
    private List<Node> neighbors;

    Node(int id) {
        this.id = id;
        isVisited = false;
        neighbors = new ArrayList<>();
    }

    List<Node> getNeighbors() {
        return neighbors;
    }

    void addNeighbor(Node node) {
        neighbors.add(node);
    }

    boolean isVisited() {
        return isVisited;
    }

    void setVisited(boolean isVisited) {
        this.isVisited = isVisited;
    }

    @Override
    public String toString() { return String.valueOf(id); } //convenience
}
Output:
testing node 1
testing node 6
testing node 7
testing node 8
testing node 2
testing node 5
testing node 3
testing node 4
[1, 2, 5, 3, 4, 0]
Note that nodes 6, 7, 8, which lead to a dead end, are tested but not included in the final path.
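The same backtracking idea can be carried back to the C# grid from the question. Here is a sketch under assumptions of mine (plain coordinates instead of GridNode, System.Linq assumed); it is an illustration, not the question's original code:

using System;
using System.Collections.Generic;
using System.Linq;

class SnakePath
{
    const int Width = 8, Height = 5;
    static readonly Random Rng = new Random();

    // Returns true and fills 'path' with 'remaining' cells when a
    // self-avoiding walk of that length exists from (x, y).
    public static bool Walk(int x, int y, int remaining, bool[,] visited, List<(int, int)> path)
    {
        visited[x, y] = true;
        path.Add((x, y));
        if (remaining == 1)
            return true; // reached the requested length
        // Shuffle the four orthogonal moves for randomness.
        var moves = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) }.OrderBy(_ => Rng.Next());
        foreach (var (dx, dy) in moves)
        {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < Width && ny >= 0 && ny < Height && !visited[nx, ny]
                && Walk(nx, ny, remaining - 1, visited, path))
                return true;
        }
        // Dead end: un-mark and drop the cell so other branches can use it.
        visited[x, y] = false;
        path.RemoveAt(path.Count - 1);
        return false;
    }
}

Usage: var path = new List<(int, int)>(); SnakePath.Walk(0, 0, 20, new bool[8, 5], path); — a 20-cell self-avoiding path from a corner clearly exists on a 40-cell grid (snake along the rows), so the search terminates quickly.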

Algorithm to find if two sets of sets of numbers are isomorphic or not (under permutation)

Given two systems, each consisting of a set of sets of numbers, I would like to know whether they are isomorphic under a permutation of the numbers.
For example,
{{1,2,3,4,5},{2,4,5,6,7},{2,3,4,6,7}} is a system of 3 sets of 5 numbers.
{{1,2,3,4,6},{2,3,5,6,7},{2,3,4,8,9}} is another system of 3 sets of 5 numbers. I want to check whether these systems are isomorphic.
They are not: the first system uses the numbers {1,2,3,4,5,6,7}, while the second uses {1,2,3,4,5,6,7,8,9}.
Here is another example.
{{1,2,3}, {1,2,4}, {3,4,5}} and {{1,2,4}, {1,3,5}, {2,3,5}}. Those two systems of 3 sets of 3 numbers are isomorphic.
If I use the permutation (5 3 1 2 4), where 1 becomes 5, 2 becomes 3, etc., the first set becomes {5,3,1}, the second becomes {5,3,2}, and the third becomes {1,2,4}. So the system transformed by this permutation is {{5,3,1},{5,3,2},{1,2,4}}, which can equivalently be rewritten as {{1,2,4},{1,3,5},{2,3,5}} since I am not interested in order. This is the second system, so the answer is yes.
Currently, on the first example, I apply all 9! permutations of {1,2,3,...,9} to the first system and check whether I can obtain the second one. It gives me an answer, but very slowly.
Is there a clever algorithm?
(I only want the answer, yes or no; I am not interested in the permutation that transforms the first system into the second.)
As pointed out in the comments, this might correspond to graph-theoretic problems that are still under investigation regarding the complexity and the algorithms that can be employed to tackle them.
However, the complexity always refers to some input size. And here, it is not clear what your input size is. As an example: I think that the most appropriate algorithm might depend on whether you are going to scale up...
the number of numbers (1...9 in your example) or
the number of sets in each set (3, in your example) or
the size of the sets in the sets (5, in your example)
Using your current approach, scaling up the number of numbers would not be feasible, because you can't enumerate all permutations for numbers much larger than 9 due to the factorial running time. But if your intention was to check the isomorphism of systems containing 1000 sets, an algorithm that was polynomial in the number of sets (if such an algorithm existed) might still be slower in practice.
Here, I'd like to sketch an approach that I tried. I did not perform a detailed complexity analysis (which might be pointless if no polynomial-time solution exists at all; proving or disproving that can't be the subject of an answer here).
The basic idea is as follows:
Initially, you compute the valid "domains" for each input number: the possible values that each number may be mapped to by the permutation. If the given numbers are 1, 2 and 3, then the domains initially could be
1 -> { 1, 2, 3 }
2 -> { 1, 2, 3 }
3 -> { 1, 2, 3 }
But for the given sets, one can already derive some information that allows reducing the domains. For example: Any number that appears n times in the first sets must be mapped to a number that appears n times in the second sets.
Imagine that the given sets are
{{1,2},{1,3}}
{{3,1},{3,2}}
Then the domains would only be
1 -> { 3 }
2 -> { 1, 2 }
3 -> { 1, 2 }
because the 1 appears twice in the first sets, and the only value that appears twice in the second sets is the 3.
After the initial domains are computed, one can perform a backtracking of the possible assignments (permutations) of the numbers. The backtracking can roughly be done as
for (each number n that has no permutation value assigned) {
    assign a permutation value (from the current domain of n) to n
    update the domains of all other numbers
    if the domains are no longer valid, then backtrack
    if the solution was found, then return it
}
(The idea is somehow "inspired" by the Arc Consistency 3 Algorithm, although technically, the problems are not directly related)
During the backtracking, one can employ different pruning criteria, i.e. various tricks to quickly check whether a certain assignment (a partial permutation) and the domains it implies are "valid" or not.
The obvious (necessary) criterion for an assignment to be valid is that none of the domains may be empty. More generally, no domain may appear more often than the number of elements it contains. When you find out that the domains are
1 -> { 4 }
2 -> { 2,3 }
3 -> { 2,3 }
4 -> { 2,3 }
then there can no longer be a valid solution, and the algorithm may track back.
Of course, backtracking tends to have exponential complexity in the input size. But it might be that there simply exists no efficient algorithm for this problem. In that case, the pruning employed during the backtracking can at least reduce the running time in many cases (or for small input sizes in general) compared to a brute-force exhaustive search.
Here is an implementation of my experiments, in Java. It is not particularly elegant, but shows that the approach basically works: it quickly finds a solution if one exists, and (for the given input sizes) does not take long to detect that there is none.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
public class SetSetIsomorphisms
{
    public static void main(String[] args)
    {
        Map<Integer, Integer> p = new LinkedHashMap<Integer, Integer>();
        p.put(0, 3);
        p.put(1, 4);
        p.put(2, 8);
        p.put(3, 2);
        p.put(4, 1);
        p.put(5, 5);
        p.put(6, 0);
        p.put(7, 9);
        p.put(8, 7);
        p.put(9, 6);

        Set<Set<Integer>> sets0 = new LinkedHashSet<Set<Integer>>();
        sets0.add(new LinkedHashSet<Integer>(Arrays.asList(1,2,3,4,5)));
        sets0.add(new LinkedHashSet<Integer>(Arrays.asList(2,4,5,6,7)));
        sets0.add(new LinkedHashSet<Integer>(Arrays.asList(0,8,3,9,7)));

        Set<Set<Integer>> sets1 = new LinkedHashSet<Set<Integer>>();
        for (Set<Integer> set0 : sets0)
        {
            sets1.add(applyMapping(set0, p));
        }
        // Uncomment these lines for a case where NO permutation is found
        //sets1.remove(sets1.iterator().next());
        //sets1.add(new LinkedHashSet<Integer>(Arrays.asList(4,8,2,3,5)));

        System.out.println("Initially valid? " +
            areIsomorphic(sets0, sets1, p));

        boolean areIsomorphic = areIsomorphic(sets0, sets1);
        System.out.println("Result: " + areIsomorphic);
    }
    private static <T> boolean areIsomorphic(
        Set<Set<T>> sets0, Set<Set<T>> sets1)
    {
        System.out.println("sets0:");
        for (Set<T> set0 : sets0)
        {
            System.out.println(" " + set0);
        }
        System.out.println("sets1:");
        for (Set<T> set1 : sets1)
        {
            System.out.println(" " + set1);
        }
        Set<T> all0 = flatten(sets0);
        Set<T> all1 = flatten(sets1);
        System.out.println("All elements");
        System.out.println(" " + all0);
        System.out.println(" " + all1);
        if (all0.size() != all1.size())
        {
            System.out.println("Different number of elements");
            return false;
        }
        Map<T, Set<T>> domains = computeInitialDomains(sets0, sets1);
        System.out.println("Domains initially:");
        print(domains, "");

        Map<T, T> assignment = new LinkedHashMap<T, T>();
        return compute(assignment, domains, sets0, sets1, "");
    }
    private static <T> Map<T, Set<T>> computeInitialDomains(
        Set<Set<T>> sets0, Set<Set<T>> sets1)
    {
        Set<T> all0 = flatten(sets0);
        Set<T> all1 = flatten(sets1);
        Map<T, Set<T>> domains = new LinkedHashMap<T, Set<T>>();
        for (T e0 : all0)
        {
            Set<T> domain0 = new LinkedHashSet<T>();
            for (T e1 : all1)
            {
                if (isFeasible(e0, sets0, e1, sets1))
                {
                    domain0.add(e1);
                }
            }
            domains.put(e0, domain0);
        }
        return domains;
    }

    private static <T> boolean isFeasible(
        T e0, Set<Set<T>> sets0,
        T e1, Set<Set<T>> sets1)
    {
        int c0 = countContaining(sets0, e0);
        int c1 = countContaining(sets1, e1);
        return c0 == c1;
    }

    private static <T> int countContaining(Set<Set<T>> sets, T value)
    {
        int count = 0;
        for (Set<T> set : sets)
        {
            if (set.contains(value))
            {
                count++;
            }
        }
        return count;
    }
    private static <T> boolean compute(
        Map<T, T> assignment, Map<T, Set<T>> domains,
        Set<Set<T>> sets0, Set<Set<T>> sets1, String indent)
    {
        if (!validCounts(domains.values()))
        {
            System.out.println(indent + "There are too many domains "
                + "with too few elements");
            print(domains, indent);
            return false;
        }
        if (assignment.keySet().equals(domains.keySet()))
        {
            System.out.println(indent + "Found assignment: " + assignment);
            return true;
        }
        List<Entry<T, Set<T>>> entryList =
            new ArrayList<Map.Entry<T, Set<T>>>(domains.entrySet());
        Collections.sort(entryList, new Comparator<Map.Entry<T, Set<T>>>()
        {
            @Override
            public int compare(Entry<T, Set<T>> e0, Entry<T, Set<T>> e1)
            {
                return Integer.compare(
                    e0.getValue().size(),
                    e1.getValue().size());
            }
        });
        for (Entry<T, Set<T>> entry : entryList)
        {
            T key = entry.getKey();
            if (assignment.containsKey(key))
            {
                continue;
            }
            Set<T> domain = entry.getValue();
            for (T value : domain)
            {
                Map<T, Set<T>> newDomains = copy(domains);
                removeFromOthers(newDomains, key, value);
                assignment.put(key, value);
                newDomains.get(key).clear();
                newDomains.get(key).add(value);
                System.out.println(indent + "Using " + assignment);
                Set<Set<T>> setsContainingKey =
                    computeSetsContainingValue(sets0, key);
                Set<Set<T>> setsContainingValue =
                    computeSetsContainingValue(sets1, value);
                Set<T> keyElements = flatten(setsContainingKey);
                Set<T> valueElements = flatten(setsContainingValue);
                for (T otherKey : keyElements)
                {
                    Set<T> otherValues = newDomains.get(otherKey);
                    otherValues.retainAll(valueElements);
                }
                System.out.println(indent + "Domains when " + assignment);
                print(newDomains, indent);
                boolean done = compute(assignment, newDomains,
                    sets0, sets1, indent + " ");
                if (done)
                {
                    return true;
                }
                assignment.remove(key);
            }
        }
        return false;
    }
    private static boolean validCounts(
        Collection<? extends Collection<?>> collections)
    {
        Map<Collection<?>, Integer> counts =
            new LinkedHashMap<Collection<?>, Integer>();
        for (Collection<?> c : collections)
        {
            Integer count = counts.get(c);
            if (count == null)
            {
                count = 0;
            }
            counts.put(c, count + 1);
        }
        for (Entry<Collection<?>, Integer> entry : counts.entrySet())
        {
            Collection<?> c = entry.getKey();
            Integer count = entry.getValue();
            if (count > c.size())
            {
                return false;
            }
        }
        return true;
    }
    private static <K, V> Map<K, Set<V>> copy(Map<K, Set<V>> map)
    {
        Map<K, Set<V>> copy = new LinkedHashMap<K, Set<V>>();
        for (Entry<K, Set<V>> entry : map.entrySet())
        {
            K k = entry.getKey();
            Set<V> values = entry.getValue();
            copy.put(k, new LinkedHashSet<V>(values));
        }
        return copy;
    }

    private static <T> Set<Set<T>> computeSetsContainingValue(
        Iterable<? extends Set<T>> sets, T value)
    {
        Set<Set<T>> containing = new LinkedHashSet<Set<T>>();
        for (Set<T> set : sets)
        {
            if (set.contains(value))
            {
                containing.add(set);
            }
        }
        return containing;
    }

    private static <T> void removeFromOthers(
        Map<T, Set<T>> map, T key, T value)
    {
        for (Entry<T, Set<T>> entry : map.entrySet())
        {
            if (!entry.getKey().equals(key))
            {
                Set<T> values = entry.getValue();
                values.remove(value);
            }
        }
    }
    private static <T> Set<T> flatten(
        Iterable<? extends Collection<? extends T>> collections)
    {
        Set<T> set = new LinkedHashSet<T>();
        for (Collection<? extends T> c : collections)
        {
            set.addAll(c);
        }
        return set;
    }

    private static <T> Set<T> applyMapping(
        Set<T> set, Map<T, T> map)
    {
        Set<T> result = new LinkedHashSet<T>();
        for (T e : set)
        {
            result.add(map.get(e));
        }
        return result;
    }

    private static <T> boolean areIsomorphic(
        Set<Set<T>> sets0, Set<Set<T>> sets1, Map<T, T> p)
    {
        for (Set<T> set0 : sets0)
        {
            Set<T> set1 = applyMapping(set0, p);
            if (!sets1.contains(set1))
            {
                return false;
            }
        }
        return true;
    }

    private static void print(Map<?, ?> map, String indent)
    {
        for (Entry<?, ?> entry : map.entrySet())
        {
            System.out.println(indent + entry.getKey() + ": " + entry.getValue());
        }
    }
}
I believe your problem is equivalent to the Graph Isomorphism problem (GI). Your set of sets can be modelled as a bipartite graph, with nodes on the left representing the base values of your sets (e.g., 1, 2, 3, ..., 7) and nodes on the right representing the sets themselves (e.g., {1,2,3,4,6} or {2,3,5,6,7}). Draw an edge connecting a node on the left with a node on the right if the number is an element of the set; in my example, 1 is connected only to {1,2,3,4,6}, while 2 is connected to both {1,2,3,4,6} and {2,3,5,6,7}. In general, each number is connected to all sets which contain it, and each set is connected to all numbers contained in it.
Any bipartite graph can be realized in this manner. Conversely, GI can be reduced to solving GI on bipartite graphs. (Any graph can be made bipartite by replacing each edge with two new edges and a new vertex; isomorphism of the resulting bipartite graphs is equivalent to isomorphism of the original graphs.)
GI is in NP, but it is not known whether it is NP-complete. In practice, GI can be solved quickly for hundreds of vertices with, e.g., NAUTY.
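Before reaching for a full GI solver, the bipartite view already yields a cheap necessary condition: under any isomorphism, the multiset of membership counts (the degrees of the number-nodes) must be equal on both sides; this is the same observation that drives the initial domains in the answer above. A small C# sketch (names mine, System and System.Linq assumed); note it can only rule isomorphism out, never confirm it:

// Sketch only: quick rejection test based on degree multisets.
static bool DegreeMultisetsMatch(List<HashSet<int>> sets0, List<HashSet<int>> sets1)
{
    Func<List<HashSet<int>>, List<int>> degrees = sets =>
        sets.SelectMany(s => s)   // every membership occurrence
            .GroupBy(x => x)      // group by number
            .Select(g => g.Count()) // in how many sets each number appears
            .OrderBy(c => c)      // compare as a multiset
            .ToList();
    return degrees(sets0).SequenceEqual(degrees(sets1));
}

This immediately rejects the first example above, where the sorted count multisets are [1,2,2,2,2,3,3] and [1,1,1,1,1,2,2,3,3]. The same trick applies to the multiset of set sizes on the right-hand side of the bipartite graph.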

How to connect circular doubly-linked lists

Suppose I am given two items A and B, each a node of a circular doubly-linked list. I want to implement a function which connects the two lists.
This task is simple. However, I want to handle the case where A and B are members of the same linked list; in that case the function should just do nothing. Is it possible to implement this in O(1)? Do I need to check whether A and B are from the same list first? Or can I somehow magically swap/mix the pointers?
IMO it is not possible, but I'm unable to prove it.
Thanks.
You can. Being curious myself, I sketched an implementation in Java. Assuming a linked list as follows
public class CLinkedList {

    class Node {
        Node prev, next;
        int val;

        public Node(int v) {
            val = v;
        }
    }

    Node s;

    public CLinkedList(Node node) {
        s = node;
    }

    void traverse() {
        if (s == null)
            return;
        Node n = s;
        do {
            System.out.println(n.val);
            n = n.next;
        } while (n != s);
    }
    ...
}
a merging method would look like
void join(CLinkedList list) {
    Node prev = list.s.prev;
    Node sprev = s.prev;
    prev.next = s;
    sprev.next = list.s;
    s.prev = prev;
    list.s.prev = sprev;
}
which works just fine when the lists are different.
If they're not, all this does is split the original list into two perfectly valid, separate linked lists. All you have to do then is join them again.
Edit: The join method joins (lol) two lists if they are different or (contrary to its name) splits the list if the nodes belong to the same list. Applying join twice thus has no effect, indeed. But you can make use of this property in other ways. The method below works fine:
public void merge(CLinkedList list) {
    CLinkedList nList = new CLinkedList(s.next);
    join(nList);
    nList.join(list);
    join(nList);
}
public static void main(String[] args) {
    CLinkedList list = new CLinkedList(new int[] {1, 2, 3});
    CLinkedList nlist = new CLinkedList(list.s.next);
    list.merge(nlist);
    list.traverse();
}
Still O(1) :) Keeping the small disclaimer - not the best quality code, but you get the picture.

How do I find all paths in a sequence of edges in a fast way?

Let E be a given directed edge set. Suppose it is known that the edges in E form a directed tree T in which every node (except the root) has in-degree 1. The problem is how to traverse the edge set E efficiently in order to find all the root-to-leaf paths in T.
For example, given the directed edge set E = {(1,2),(1,5),(5,6),(1,4),(2,3)}, which generates such a tree, is there any fast method to traverse the edge set E in order to find all the paths as follows:
Path1 = {(1,2),(2,3)}
Path2 = {(1,4)}
Path3 = {(1,5),(5,6)}
By the way, if the number of edges in E is |E|, is there a complexity bound for finding all the paths?
I have not worked on this kind of problem before, so I just tried out a simple solution. Check this out:
public class PathFinder
{
    private static Dictionary<string, Path> pathsDictionary = new Dictionary<string, Path>();
    private static List<Path> newPaths = new List<Path>();

    public static Dictionary<string, Path> GetBestPaths(List<Edge> edgesInTree)
    {
        foreach (var e in edgesInTree)
        {
            SetNewPathsToAdd(e);
            UpdatePaths();
        }
        return pathsDictionary;
    }

    private static void SetNewPathsToAdd(Edge currentEdge)
    {
        newPaths.Clear();
        newPaths.Add(new Path(new List<Edge> { currentEdge }));
        if (!pathsDictionary.ContainsKey(currentEdge.PathKey()))
        {
            // extend existing paths that end where the current edge starts
            var pathKeys = pathsDictionary.Keys.Where(c => c.Split(",".ToCharArray())[1] == currentEdge.StartPoint.ToString()).ToList();
            pathKeys.ForEach(key => { var newPath = new Path(pathsDictionary[key].ConnectedEdges); newPath.ConnectedEdges.Add(currentEdge); newPaths.Add(newPath); });
            // prepend to existing paths that start where the current edge ends
            pathKeys = pathsDictionary.Keys.Where(c => c.Split(",".ToCharArray())[0] == currentEdge.EndPoint.ToString()).ToList();
            pathKeys.ForEach(key => { var newPath = new Path(pathsDictionary[key].ConnectedEdges); newPath.ConnectedEdges.Insert(0, currentEdge); newPaths.Add(newPath); });
        }
    }

    private static void UpdatePaths()
    {
        Path oldPath = null;
        foreach (Path newPath in newPaths)
        {
            if (!pathsDictionary.ContainsKey(newPath.PathKey()))
                pathsDictionary.Add(newPath.PathKey(), newPath);
            else
            {
                oldPath = pathsDictionary[newPath.PathKey()];
                if (newPath.PathWeights < oldPath.PathWeights)
                    pathsDictionary[newPath.PathKey()] = newPath;
            }
        }
    }
}

public static class Extensions
{
    public static bool IsNullOrEmpty(this IEnumerable<object> collection) { return collection == null || !collection.Any(); }
    public static string PathKey(this ILine line) { return string.Format("{0},{1}", line.StartPoint, line.EndPoint); }
}

public interface ILine
{
    int StartPoint { get; }
    int EndPoint { get; }
}

public class Edge : ILine
{
    public int StartPoint { get; set; }
    public int EndPoint { get; set; }

    public Edge(int startPoint, int endPoint)
    {
        this.EndPoint = endPoint;
        this.StartPoint = startPoint;
    }
}

public class Path : ILine
{
    private List<Edge> connectedEdges = new List<Edge>();

    public Path(List<Edge> edges) { this.connectedEdges = edges; }

    public int StartPoint { get { return this.IsValid ? this.connectedEdges.First().StartPoint : 0; } }
    public int EndPoint { get { return this.IsValid ? this.connectedEdges.Last().EndPoint : 0; } }
    public bool IsValid { get { return this.EdgeCount > 0; } }
    public int EdgeCount { get { return this.connectedEdges.Count; } }
    // For now, as no weight logic is defined
    public int PathWeights { get { return this.EdgeCount; } }
    public List<Edge> ConnectedEdges { get { return this.connectedEdges; } }
}
I think DFS (depth-first search) should suit your requirements; have a look at Depth First Search on Wikipedia. You can tailor it to print the paths in the format that you require. As for the complexity: since every node in your tree has in-degree one, the number of edges is bounded by |E| = O(|V|). Since DFS operates with a complexity of O(|V|+|E|), your overall complexity comes out to be O(|V|).
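A sketch of what that tailored DFS might look like in C# (the adjacency map is assumed to be built from E beforehand, with the root being the unique node of in-degree 0; all names here are mine):

// Sketch only: DFS that prints each root-to-leaf path as its list of edges.
static void PrintPaths(int node, Dictionary<int, List<int>> children, List<(int, int)> prefix)
{
    if (!children.ContainsKey(node) || children[node].Count == 0)
    {
        // Leaf: the accumulated prefix is one complete root-to-leaf path.
        Console.WriteLine("{" + string.Join(",", prefix.Select(e => $"({e.Item1},{e.Item2})")) + "}");
        return;
    }
    foreach (int child in children[node])
    {
        prefix.Add((node, child));         // extend the path by one edge
        PrintPaths(child, children, prefix);
        prefix.RemoveAt(prefix.Count - 1); // backtrack
    }
}

For E = {(1,2),(1,5),(5,6),(1,4),(2,3)} and root 1, this prints exactly Path1-Path3 from the question, pushing and popping each edge once, i.e. O(|E|) work plus the size of the output.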
I did this question as part of an assignment. As correctly pointed out above, you must visit each edge at least once, hence the complexity bound is O(V+E); but for a tree E = O(V), so the complexity is O(V). I will give you a glimpse, since the details are a bit involved:
Label each path with a unique ID, allotting the IDs incrementally as 0, 1, 2, ... The ID of a path is the sum of the weights of the edges on that path. So use DFS to allocate weights to the edges: you may begin by using 0 for edges until you encounter your first path, and then keep adding 1, and so on. You will also have to argue correctness and allocate the weights properly. DFS will do the trick.
