How to find if the distance between destination and source is less than 5?

I am trying to write a method that returns true if the distance between two nodes in a graph is less than 5. I tried to adapt the minimum-distances (BFS) algorithm, as shown below:
class Movie { // this is the node in the graph
    String name;
    List<Movie> movies;
}

private static boolean isgoodMovies(Movie origin, Movie destination) {
    Queue<Movie> nextToVisit = new LinkedList<>();
    Set<Movie> visited = new HashSet<>();
    HashMap<Movie, Integer> distances = new HashMap<>();
    nextToVisit.add(origin);
    distances.put(origin, 0);
    while (!nextToVisit.isEmpty()) {
        Movie visitedNode = nextToVisit.remove();
        if (visitedNode.equals(destination)) { break; }
        if (visited.contains(visitedNode)) { continue; }
        visited.add(visitedNode);
        for (Movie movie : visitedNode.movies) {
            if (!distances.containsKey(movie)) { // record only the first (shortest) distance
                nextToVisit.add(movie);
                distances.put(movie, distances.get(visitedNode) + 1);
            }
        }
    }
    return distances.containsKey(destination) && distances.get(destination) < 5;
}
By modifying the minimum-distances algorithm, I return the boolean based on the distance recorded for the destination node. I want to optimize it so that I do not use a HashMap or any collection, simply having a distance variable. Do you think it is possible?

You could use recursion here if the number of movies isn't huge (you can get a StackOverflowError if the number of method invocations exceeds the maximum stack depth). That way you don't need any collection except a HashSet to track visited nodes, as shown below:
private static boolean isGoodMovies(Movie origin, Movie destination) {
    Set<Movie> visited = new HashSet<>();
    return isGoodMovies(origin, destination, visited, 0);
}

private static boolean isGoodMovies(Movie current, Movie destination, Set<Movie> visited, int depth) {
    if (depth >= 5) {
        return false;
    }
    if (destination.equals(current)) {
        return true;
    }
    boolean isGood = false;
    for (Movie child : current.movies) {
        if (!visited.contains(child)) {
            visited.add(child);
            isGood |= isGoodMovies(child, destination, visited, depth + 1);
        }
    }
    return isGood;
}
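If recursion is a concern, another option (a minimal sketch, not from the original post, assuming the same Movie class) is an iterative BFS that processes the queue level by level, so a single depth counter replaces the distances map:
private static boolean isGoodMoviesIterative(Movie origin, Movie destination) {
    Set<Movie> visited = new HashSet<>();
    Queue<Movie> queue = new LinkedList<>();
    queue.add(origin);
    visited.add(origin);
    int depth = 0; // distance of every node currently in the queue
    while (!queue.isEmpty() && depth < 5) {
        int levelSize = queue.size(); // number of nodes at the current depth
        for (int i = 0; i < levelSize; i++) {
            Movie current = queue.remove();
            if (current.equals(destination)) {
                return true; // destination reached at a distance below 5
            }
            for (Movie next : current.movies) {
                if (visited.add(next)) { // add() returns false for already-seen nodes
                    queue.add(next);
                }
            }
        }
        depth++; // one whole level processed
    }
    return false;
}
The queue and the visited set are still needed for a correct BFS, but the per-node distance bookkeeping disappears.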

Related

Clone a directed graph - Leetcode question

I'm having some trouble understanding the bug in my code and why it's timing out.
The problem is to create a clone of a directed graph.
Here's a link to the question: https://www.educative.io/m/clone-directed-graph
My solution uses a queue twice. In the first pass I map every node in the graph to its corresponding node in the clone.
In the second pass I use the queue to iterate over the neighbours, look up their corresponding mapped clones, and add those to the neighbours list of the current node's clone.
Here's my code.
class Node {
public int data;
public List<Node> neighbors = new ArrayList<Node>();
public Node(int d) {data = d;}
}
class graph {
public static Node clone(Node root) {
//use a queue to search the graph
//use a hashmap to map each graph node to its clone node
Queue<Node> q = new LinkedList<>();
Map<Node, Node> map = new HashMap<>();
q.add(root);
map.put(root, new Node(root.data));
while(!q.isEmpty()) {
Node current = q.remove();
for(Node temp : current.neighbors) {
Node cloneTemp = new Node(temp.data);
if(!map.containsKey(temp)) {
q.add(temp);
map.put(temp, cloneTemp);
}
}
}
q.add(root);
while(!q.isEmpty()) {
Node current = q.remove();
Node currentClone = map.get(current);
for(Node temp : current.neighbors) {
Node mapNode = map.get(temp);
if(!currentClone.neighbors.contains(mapNode)) {
currentClone.neighbors.add(mapNode);
q.add(temp);
}
}
}
return map.get(root);
}
}
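For comparison (a minimal sketch, not from the original post, assuming the same Node class), the clone can also be built in a single BFS pass: each neighbour's clone is created the first time the neighbour is seen and the edge is wired immediately, which avoids the second loop and the contains() check on the neighbour lists:
public static Node cloneSinglePass(Node root) {
    if (root == null) {
        return null;
    }
    Queue<Node> q = new LinkedList<>();
    Map<Node, Node> map = new HashMap<>(); // original node -> its clone
    q.add(root);
    map.put(root, new Node(root.data));
    while (!q.isEmpty()) {
        Node current = q.remove();
        Node currentClone = map.get(current);
        for (Node neighbor : current.neighbors) {
            Node neighborClone = map.get(neighbor);
            if (neighborClone == null) { // first time this neighbour is seen
                neighborClone = new Node(neighbor.data);
                map.put(neighbor, neighborClone);
                q.add(neighbor);
            }
            currentClone.neighbors.add(neighborClone); // copy the edge into the clone
        }
    }
    return map.get(root);
}
Every node is enqueued once and every edge is copied once, so this runs in O(V + E).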

Randomised Path on graph - set length, no crossing, no dead ends

I am working on a game with an 8-wide, 5-high grid. I have a 'snake' feature which needs to enter the grid and "walk" around for a set distance (20 for example). There are certain restrictions on the movement of the snake:
It needs to cover the predetermined number of blocks (20)
It cannot go over itself or double back (no dead ends)
Currently I am using a randomised depth-first search; however, I have found that it occasionally goes back over itself (crosses its own path), and I am not sure this is the best way to go about it.
Options considered: I have looked at using A*, but am struggling to figure out a good way to do it without a predetermined goal and the conditions above. I have also considered adding a heuristic to favour blocks that are not on the outside of the grid - but am not sure either of these will solve the issue at hand.
Any help is appreciated and I can add more detail or code if necessary:
public List<GridNode> RandomizedDepthFirst(int distance, GridNode startNode)
{
Stack<GridNode> frontier = new Stack<GridNode>();
frontier.Push(startNode);
List<GridNode> visited = new List<GridNode>();
visited.Add(startNode);
while (frontier.Count > 0 && visited.Count < distance)
{
GridNode current = frontier.Pop();
if (current.nodeState != GridNode.NodeState.VISITED)
{
current.nodeState = GridNode.NodeState.VISITED;
GridNode[] vals = current.FindNeighbours().ToArray();
List<GridNode> neighbours = new List<GridNode>();
foreach (GridNode g in vals.OrderBy(x => XMLReader.NextInt(0,0)))
{
neighbours.Add(g);
}
foreach (GridNode g in neighbours)
{
frontier.Push(g);
}
if (!visited.Contains(current))
{
visited.Add(current);
}
}
}
return visited;
}
An easy way to account for backtracking is to use a recursive DFS.
Consider the following graph (its edges are the ones added in main() below):
Here is a Java implementation of a DFS that removes nodes from the path when backtracking (note the comments):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Stack;
public class Graph {
//all graph nodes
private Node[] nodes;
public Graph(int numberOfNodes) {
nodes = new Node[numberOfNodes];
//construct nodes
for (int i = 0; i < numberOfNodes; i++) {
nodes[i] = new Node(i);
}
}
// add edge from a to b
public Graph addEdge(int from, int to) {
nodes[from].addNeighbor(nodes[to]);
//unless unidirectional: //if a is connected to b
//than b should be connected to a
nodes[to].addNeighbor(nodes[from]);
return this; //makes it convenient to add multiple edges
}
//returns a list of path size of pathLength.
//if path not found : returns an empty list
public List<Node> dfs(int pathLength, int startNode) {
List<Node> path = new ArrayList<>(); //a list to hold all nodes in path
Stack<Node> frontier = new Stack<>();
frontier.push(nodes[startNode]);
dfs(pathLength, frontier, path);
return path;
}
private boolean dfs(int pathLength, Stack<Node> frontier, List<Node> path) {
if(frontier.size() < 1) {
return false; //stack is empty, no path found
}
Node current = frontier.pop();
current.setVisited(true);
path.add(current);
if(path.size() == pathLength) {
return true; //path size of pathLength found
}
System.out.println("testing node "+ current); //for testing
Collections.shuffle(current.getNeighbors()); //shuffle list of neighbours
for(Node node : current.getNeighbors()) {
if(! node.isVisited()) {
frontier.push(node);
if(dfs(pathLength, frontier, path)) { //if solution found
return true; //return true. continue otherwise
}
}
}
//if all neighbours tested and no solution found, current node
//is not part of the path
path.remove(current); // remove it
current.setVisited(false); //this accounts for loops: you may get to this node
//from another edge
return false;
}
public static void main(String[] args){
Graph graph = new Graph(9); //make graph
graph.addEdge(0, 4) //add edges
.addEdge(0, 1)
.addEdge(1, 2)
.addEdge(1, 4)
.addEdge(4, 3)
.addEdge(2, 3)
.addEdge(2, 5)
.addEdge(3, 5)
.addEdge(1, 6)
.addEdge(6, 7)
.addEdge(7, 8);
//print path with length of 6, starting with node 1
System.out.println( graph.dfs(6,1));
}
}
class Node {
private int id;
private boolean isVisited;
private List<Node>neighbors;
Node(int id){
this.id = id;
isVisited = false;
neighbors = new ArrayList<>();
}
List<Node> getNeighbors(){
return neighbors;
}
void addNeighbor(Node node) {
neighbors.add(node);
}
boolean isVisited() {
return isVisited;
}
void setVisited(boolean isVisited) {
this.isVisited = isVisited;
}
@Override
public String toString() {return String.valueOf(id);} //convenience
}
Output:
testing node 1
testing node 6
testing node 7
testing node 8
testing node 2
testing node 5
testing node 3
testing node 4
[1, 2, 5, 3, 4, 0]
Note that nodes 6, 7 and 8, which form a dead end, are tested but not included in the final path.
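To apply this to the 8-wide, 5-high grid from the question, each cell can be mapped to a node index and connected to its horizontal and vertical neighbours; a minimal sketch (assuming row-major cell numbering and reusing the Graph class above):
int width = 8, height = 5;
Graph grid = new Graph(width * height);
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int id = row * width + col; // row-major index of this cell
        if (col + 1 < width) {
            grid.addEdge(id, id + 1); // right neighbour
        }
        if (row + 1 < height) {
            grid.addEdge(id, id + width); // neighbour below
        }
    }
}
// random non-crossing path over 20 cells, entering at the top-left cell
System.out.println(grid.dfs(20, 0));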

Algorithm to find if two sets of sets of numbers are isomorphic or not (under permutation)

Given two systems, each consisting of a set of sets of numbers, I would like to know whether they are isomorphic under permutation.
For example
{{1,2,3,4,5},{2,4,5,6,7},{2,3,4,6,7}} is a system of 3 sets of 5 numbers.
{{1,2,3,4,6},{2,3,5,6,7},{2,3,4,8,9}} is another system of 3 sets of 5 numbers. I want to check if these systems are isomorphic.
They are not. The first system uses the numbers {1,2,3,4,5,6,7}, the second one uses the numbers {1,2,3,4,5,6,7,8,9}.
Here is another example.
{{1,2,3}, {1,2,4}, {3,4,5}} and {{1,2,4}, {1,3,5}, {2,3,5}}. These two systems of 3 sets of 3 numbers are isomorphic.
If I use the permutation (5 3 1 2 4), where 1 becomes 5, 2 becomes 3, etc., the first set becomes {5,3,1}, the second becomes {5,3,2}, and the third one becomes {1,2,4}. So the system transformed by this permutation is {{5,3,1},{5,3,2},{1,2,4}}, which can equivalently be rewritten as {{1,2,4},{1,3,5},{2,3,5}} since I am not interested in order. This is the second system, so the answer is yes.
Currently, on the first example, I apply all 9! permutations of {1,2,3,...,9}
to the first system and check if I can get the second one. It gives me an answer, but very slowly.
Is there a clever algorithm?
(I only want the answer, yes or no. I am not interested in getting a permutation that transform the first system to the second one.)
As pointed out in the comments, this might correspond to graph-theoretic problems that are still under investigation regarding the complexity and the algorithms that can be employed to tackle them.
However, the complexity always refers to some input size. And here, it is not clear what your input size is. As an example: I think that the most appropriate algorithm might depend on whether you are going to scale up...
the number of numbers (1...9 in your example) or
the number of sets in each set (3, in your example) or
the size of the sets in the sets (5, in your example)
Using your current approach, scaling up the number of numbers would not be feasible, because you cannot compute all permutations for numbers much larger than 9 due to the factorial running time. But if your intention was to check the isomorphism of systems containing 1000 sets, an algorithm that was polynomial in the number of sets (if such an algorithm existed) might still be slower in practice.
Here, I'd like to sketch an approach that I tried. I did not perform a detailed complexity analysis (which might be pointless if there exists no polynomial-time solution at all - and to prove or disprove that can't be the subject of an answer here).
The basic idea is as follows:
Initially, you compute the valid "domains" for each input number. These are possible values that each number may be mapped to, based on the permutation. If the given numbers are 1,2 and 3, then the domains initially could be
1 -> { 1, 2, 3 }
2 -> { 1, 2, 3 }
3 -> { 1, 2, 3 }
But for the given sets, one can already derive some information that allows reducing the domains. For example: Any number that appears n times in the first sets must be mapped to a number that appears n times in the second sets.
Imagine that the given sets are
{{1,2},{1,3}}
{{3,1},{3,2}}
Then the domains would only be
1 -> { 3 }
2 -> { 1, 2 }
3 -> { 1, 2 }
because the 1 appears twice in the first sets, and the only value that appears twice in the second sets is the 3.
After the initial domains are computed, one can perform a backtracking of the possible assignments (permutations) of the numbers. The backtracking can roughly be done as
for (each number n that has no permutation value assigned) {
assign a permutation value (from the current domain of n) to n
update the domains of all other numbers
if the domains are no longer valid, then backtrack
if the solution was found, then return it
}
(The idea is somehow "inspired" by the Arc Consistency 3 Algorithm, although technically, the problems are not directly related)
During the backtracking, one can employ different pruning criteria. That is, one can think of various tricks in order to quickly check whether a certain assignment (a partial permutation) and the domains that are implied by this assignment are "valid" or not.
The obvious (necessary) criterion for an assignment to be valid is that none of the domains may be empty. More generally: Each domain may not appear more often than the number of elements that it contains. When you find out that the domains are
1 -> { 4 }
2 -> { 2,3 }
3 -> { 2,3 }
4 -> { 2,3 }
then there can no longer be a valid solution, and the algorithm may backtrack.
Of course, backtracking tends to have exponential complexity in the input size. But it might be that there simply exists no efficient algorithm for this problem. In that case, the pruning that may be employed during the backtracking may at least help to reduce the running time for certain cases (or for small input sizes in general) compared to a brute-force exhaustive search.
Here is an implementation of my experiments, in Java. This is not particularly elegant, but shows that it basically works: It quickly finds a solution if there exists one, and (for the given input sizes) does not take long to detect when there is no solution.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
public class SetSetIsomorphisms
{
public static void main(String[] args)
{
Map<Integer, Integer> p = new LinkedHashMap<Integer, Integer>();
p.put(0, 3);
p.put(1, 4);
p.put(2, 8);
p.put(3, 2);
p.put(4, 1);
p.put(5, 5);
p.put(6, 0);
p.put(7, 9);
p.put(8, 7);
p.put(9, 6);
Set<Set<Integer>> sets0 = new LinkedHashSet<Set<Integer>>();
sets0.add(new LinkedHashSet<Integer>(Arrays.asList(1,2,3,4,5)));
sets0.add(new LinkedHashSet<Integer>(Arrays.asList(2,4,5,6,7)));
sets0.add(new LinkedHashSet<Integer>(Arrays.asList(0,8,3,9,7)));
Set<Set<Integer>> sets1 = new LinkedHashSet<Set<Integer>>();
for (Set<Integer> set0 : sets0)
{
sets1.add(applyMapping(set0, p));
}
// Uncomment these lines for a case where NO permutation is found
//sets1.remove(sets1.iterator().next());
//sets1.add(new LinkedHashSet<Integer>(Arrays.asList(4,8,2,3,5)));
System.out.println("Initially valid? "+
areIsomorphic(sets0, sets1, p));
boolean areIsomorphic = areIsomorphic(sets0, sets1);
System.out.println("Result: "+areIsomorphic);
}
private static <T> boolean areIsomorphic(
Set<Set<T>> sets0, Set<Set<T>> sets1)
{
System.out.println("sets0:");
for (Set<T> set0 : sets0)
{
System.out.println(" "+set0);
}
System.out.println("sets1:");
for (Set<T> set1 : sets1)
{
System.out.println(" "+set1);
}
Set<T> all0 = flatten(sets0);
Set<T> all1 = flatten(sets1);
System.out.println("All elements");
System.out.println(" "+all0);
System.out.println(" "+all1);
if (all0.size() != all1.size())
{
System.out.println("Different number of elements");
return false;
}
Map<T, Set<T>> domains = computeInitialDomains(sets0, sets1);
System.out.println("Domains initially:");
print(domains, "");
Map<T, T> assignment = new LinkedHashMap<T, T>();
return compute(assignment, domains, sets0, sets1, "");
}
private static <T> Map<T, Set<T>> computeInitialDomains(
Set<Set<T>> sets0, Set<Set<T>> sets1)
{
Set<T> all0 = flatten(sets0);
Set<T> all1 = flatten(sets1);
Map<T, Set<T>> domains = new LinkedHashMap<T, Set<T>>();
for (T e0 : all0)
{
Set<T> domain0 = new LinkedHashSet<T>();
for (T e1 : all1)
{
if (isFeasible(e0, sets0, e1, sets1))
{
domain0.add(e1);
}
}
domains.put(e0, domain0);
}
return domains;
}
private static <T> boolean isFeasible(
T e0, Set<Set<T>> sets0,
T e1, Set<Set<T>> sets1)
{
int c0 = countContaining(sets0, e0);
int c1 = countContaining(sets1, e1);
return c0 == c1;
}
private static <T> int countContaining(Set<Set<T>> sets, T value)
{
int count = 0;
for (Set<T> set : sets)
{
if (set.contains(value))
{
count++;
}
}
return count;
}
private static <T> boolean compute(
Map<T, T> assignment, Map<T, Set<T>> domains,
Set<Set<T>> sets0, Set<Set<T>> sets1, String indent)
{
if (!validCounts(domains.values()))
{
System.out.println(indent+"There are too many domains "
+ "with too few elements");
print(domains, indent);
return false;
}
if (assignment.keySet().equals(domains.keySet()))
{
System.out.println(indent+"Found assignment: "+assignment);
return true;
}
List<Entry<T, Set<T>>> entryList =
new ArrayList<Map.Entry<T,Set<T>>>(domains.entrySet());
Collections.sort(entryList, new Comparator<Map.Entry<T,Set<T>>>()
{
@Override
public int compare(Entry<T, Set<T>> e0, Entry<T, Set<T>> e1)
{
return Integer.compare(
e0.getValue().size(),
e1.getValue().size());
}
});
for (Entry<T, Set<T>> entry : entryList)
{
T key = entry.getKey();
if (assignment.containsKey(key))
{
continue;
}
Set<T> domain = entry.getValue();
for (T value : domain)
{
Map<T, Set<T>> newDomains = copy(domains);
removeFromOthers(newDomains, key, value);
assignment.put(key, value);
newDomains.get(key).clear();
newDomains.get(key).add(value);
System.out.println(indent+"Using "+assignment);
Set<Set<T>> setsContainingKey =
computeSetsContainingValue(sets0, key);
Set<Set<T>> setsContainingValue =
computeSetsContainingValue(sets1, value);
Set<T> keyElements = flatten(setsContainingKey);
Set<T> valueElements = flatten(setsContainingValue);
for (T otherKey : keyElements)
{
Set<T> otherValues = newDomains.get(otherKey);
otherValues.retainAll(valueElements);
}
System.out.println(indent+"Domains when "+assignment);
print(newDomains, indent);
boolean done = compute(assignment, newDomains,
sets0, sets1, indent+" ");
if (done)
{
return true;
}
assignment.remove(key);
}
}
return false;
}
private static boolean validCounts(
Collection<? extends Collection<?>> collections)
{
Map<Collection<?>, Integer> counts =
new LinkedHashMap<Collection<?>, Integer>();
for (Collection<?> c : collections)
{
Integer count = counts.get(c);
if (count == null)
{
count = 0;
}
counts.put(c, count+1);
}
for (Entry<Collection<?>, Integer> entry : counts.entrySet())
{
Collection<?> c = entry.getKey();
Integer count = entry.getValue();
if (count > c.size())
{
return false;
}
}
return true;
}
private static <K, V> Map<K, Set<V>> copy(Map<K, Set<V>> map)
{
Map<K, Set<V>> copy = new LinkedHashMap<K, Set<V>>();
for (Entry<K, Set<V>> entry : map.entrySet())
{
K k = entry.getKey();
Set<V> values = entry.getValue();
copy.put(k, new LinkedHashSet<V>(values));
}
return copy;
}
private static <T> Set<Set<T>> computeSetsContainingValue(
Iterable<? extends Set<T>> sets, T value)
{
Set<Set<T>> containing = new LinkedHashSet<Set<T>>();
for (Set<T> set : sets)
{
if (set.contains(value))
{
containing.add(set);
}
}
return containing;
}
private static <T> void removeFromOthers(
Map<T, Set<T>> map, T key, T value)
{
for (Entry<T, Set<T>> entry : map.entrySet())
{
if (!entry.getKey().equals(key))
{
Set<T> values = entry.getValue();
values.remove(value);
}
}
}
private static <T> Set<T> flatten(
Iterable<? extends Collection<? extends T>> collections)
{
Set<T> set = new LinkedHashSet<T>();
for (Collection<? extends T> c : collections)
{
set.addAll(c);
}
return set;
}
private static <T> Set<T> applyMapping(
Set<T> set, Map<T, T> map)
{
Set<T> result = new LinkedHashSet<T>();
for (T e : set)
{
result.add(map.get(e));
}
return result;
}
private static <T> boolean areIsomorphic(
Set<Set<T>> sets0, Set<Set<T>> sets1, Map<T, T> p)
{
for (Set<T> set0 : sets0)
{
Set<T> set1 = applyMapping(set0, p);
if (!sets1.contains(set1))
{
return false;
}
}
return true;
}
private static void print(Map<?, ?> map, String indent)
{
for (Entry<?, ?> entry : map.entrySet())
{
System.out.println(indent+entry.getKey()+": "+entry.getValue());
}
}
}
I believe your problem is equivalent to the Graph Isomorphism problem (GI). Your set of sets can be modelled as a (bipartite) graph, with nodes on the left representing the base values of your sets (e.g., 1, 2, 3, ... 7), while nodes on the right represent sets (e.g., {1,2,3,4,6} or {2,3,5,6,7}). Draw an edge connecting a node on the left with a node on the right if the number is an element of the set; in my example, 1 is connected only to {1,2,3,4,6} while 2 is connected to both {1,2,3,4,6} and to {2,3,5,6,7}. 1 is connected to all sets which contain it; {1,2,3,4,6} is connected to all numbers contained in it.
Any bipartite graph can be realized in this manner. Conversely, GI can be reduced to solving GI on bipartite graphs. (Any graph can be made into a bipartite graph by replacing each edge with two new edges and a new vertex. Isomorphism in the resulting bipartite graphs is equivalent to isomorphism in the original graphs.)
GI is in NP, but it is not known whether it is NP complete. In practice, GI can be solved quickly for hundreds of vertices with e.g., NAUTY.
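To illustrate that reduction (a minimal sketch, not from the original answer; the "e:"/"s:" vertex labels are just an arbitrary naming scheme), the bipartite graph can be built directly from a system of sets, with one vertex per element, one vertex per set, and an edge for every membership:
import java.util.*;

class BipartiteBuilder {
    // Returns an adjacency map with element vertices ("e:3") and set vertices ("s:0"),
    // connecting each element to every set that contains it.
    static Map<String, Set<String>> build(List<Set<Integer>> system) {
        Map<String, Set<String>> adjacency = new LinkedHashMap<>();
        for (int i = 0; i < system.size(); i++) {
            String setVertex = "s:" + i;
            adjacency.computeIfAbsent(setVertex, k -> new LinkedHashSet<>());
            for (Integer element : system.get(i)) {
                String elementVertex = "e:" + element;
                adjacency.computeIfAbsent(elementVertex, k -> new LinkedHashSet<>()).add(setVertex);
                adjacency.get(setVertex).add(elementVertex);
            }
        }
        return adjacency;
    }
}
The two systems are then isomorphic exactly when the two bipartite graphs are isomorphic by a mapping that sends element vertices to element vertices and set vertices to set vertices, which is the kind of input a graph-isomorphism tool can work on.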

Is it possible to design a tree where nodes have infinitely many children?

How can I design a tree with lots (an infinite number) of branches?
Which data structure should we use to store the child nodes?
You can't actually store infinitely many children, since that won't fit into memory. However, you can store unboundedly many children - that is, you can make trees where each node can have any number of children with no fixed upper bound.
There are a few standard ways to do this. You could have each tree node store a list of all of its children (perhaps as a dynamic array or a linked list), which is often done with tries. For example, in C++, you might have something like this:
struct Node {
/* ... Data for the node goes here ... */
std::vector<Node*> children;
};
Alternatively, you could use the left-child/right-sibling representation, which represents a multiway tree as a binary tree. This is often used in priority queues like binomial heaps. For example:
struct Node {
/* ... data for the node ... */
Node* firstChild;
Node* nextSibling;
};
Hope this helps!
Yes! You can create a structure where children are materialized on demand (i.e. "lazy children"). In this case, the number of children can easily be functionally infinite.
Haskell is great for creating "functionally infinite" data structures, but since I don't know a whit of Haskell, here's a Python example instead:
class InfiniteTreeNode:
''' abstract base class for a tree node that has effectively infinite children '''
def __init__(self, data):
self.data = data
def getChild(self, n):
raise NotImplementedError
class PrimeSumNode(InfiniteTreeNode):
def getChild(self, n):
prime = getNthPrime(n) # hypothetical function to get the nth prime number
return PrimeSumNode(self.data + prime)
prime_root = PrimeSumNode(0)
print(prime_root.getChild(3).getChild(4).data) # would print 18: the 4th prime is 7 and the 5th prime is 11
Now, if you were to do a search of PrimeSumNode down to a depth of 2, you could find all the numbers that are sums of two primes (and if you can prove that this contains all even integers, you can win a big mathematical prize!).
Something like this
class Node {
public String name;
Node n[];
}
Add nodes like so
public Node[] add_subnode(Node n[]) {
for (int i=0; i<n.length; i++) {
n[i] = new Node();
p("\n Enter name: ");
n[i].name = sc.next();
p("\n How many children for "+n[i].name+"?");
int children = sc.nextInt();
if (children > 0) {
Node x[] = new Node[children];
n[i].n = add_subnode(x);
}
}
return n;
}
Full working code:
class People {
private Scanner sc;
public People(Scanner sc) {
this.sc = sc;
}
public void main_thing() {
Node head = new Node();
head.name = "Head";
p("\n How many nodes do you want to add to Head: ");
int nodes = sc.nextInt();
head.n = new Node[nodes];
Node[] n = add_subnode(head.n);
print_nodes(head.n);
}
public Node[] add_subnode(Node n[]) {
for (int i=0; i<n.length; i++) {
n[i] = new Node();
p("\n Enter name: ");
n[i].name = sc.next();
p("\n How many children for "+n[i].name+"?");
int children = sc.nextInt();
if (children > 0) {
Node x[] = new Node[children];
n[i].n = add_subnode(x);
}
}
return n;
}
public void print_nodes(Node n[]) {
if (n!=null && n.length > 0) {
for (int i=0; i<n.length; i++) {
p("\n "+n[i].name);
print_nodes(n[i].n);
}
}
}
public static void p(String msg) {
System.out.print(msg);
}
}
class Node {
public String name;
Node n[];
}
I recommend using a Node class with a left child Node, a right child Node, and a parent Node.
public class Node<T>
{
Node<T> parent;
Node<T> leftChild;
Node<T> rightChild;
T value;
Node(T val)
{
value = val;
leftChild = null; // children start as null; set child.parent = this when attaching a child
rightChild = null;
}
You can set grand father and uncle and sibling like this.
Node<T> grandParent()
{
if(this.parent != null && this.parent.parent != null)
{
return this.parent.parent;
}
else
return null;
}
Node<T> uncle()
{
if(this.grandParent() != null)
{
if(this.parent == this.grandParent().rightChild)
{
return this.grandParent().leftChild;
}
else
{
return this.grandParent().rightChild;
}
}
else
return null;
}
Node<T> sibling()
{
if(this.parent != null)
{
if(this == this.parent.rightChild)
{
return this.parent.leftChild;
}
else
{
return this.parent.rightChild;
}
}
else
return null;
}
And it is impossible to have infinitely many children unless you have infinite memory.
Good luck! Hope this helps.

How do I find all paths in a sequence of edges in a fast way?

Let E be a given directed edge set. Suppose it is known that the edges in E form a directed tree T in which every node (except the root node) has in-degree exactly 1. The problem is how to efficiently traverse the edge set E in order to find all the root-to-leaf paths in T.
For example, given the directed edge set E={(1,2),(1,5),(5,6),(1,4),(2,3)}, we know that such a set E generates a directed tree T in which every node has in-degree 1 (except the root node). Is there any fast method to traverse the edge set E in order to find all the paths as follows:
Path1 = {(1,2),(2,3)}
Path2 = {(1,4)}
Path3 = {(1,5),(5,6)}
By the way, if the number of edges in E is |E|, is there a complexity bound for finding all the paths?
I have not worked on this kind of problem before, so I just tried out a simple solution. Check this out.
public class PathFinder
{
private static Dictionary<string, Path> pathsDictionary = new Dictionary<string, Path>();
private static List<Path> newPaths = new List<Path>();
public static Dictionary<string, Path> GetBestPaths(List<Edge> edgesInTree)
{
foreach (var e in edgesInTree)
{
SetNewPathsToAdd(e);
UpdatePaths();
}
return pathsDictionary;
}
private static void SetNewPathsToAdd(Edge currentEdge)
{
newPaths.Clear();
newPaths.Add(new Path(new List<Edge> { currentEdge }));
if (!pathsDictionary.ContainsKey(currentEdge.PathKey()))
{
var pathKeys = pathsDictionary.Keys.Where(c => c.Split(",".ToCharArray())[1] == currentEdge.StartPoint.ToString()).ToList();
pathKeys.ForEach(key => { var newPath = new Path(pathsDictionary[key].ConnectedEdges); newPath.ConnectedEdges.Add(currentEdge); newPaths.Add(newPath); });
pathKeys = pathsDictionary.Keys.Where(c => c.Split(",".ToCharArray())[0] == currentEdge.EndPoint.ToString()).ToList();
pathKeys.ForEach(key => { var newPath = new Path(pathsDictionary[key].ConnectedEdges); newPath.ConnectedEdges.Insert(0, currentEdge); newPaths.Add(newPath); });
}
}
private static void UpdatePaths()
{
Path oldPath = null;
foreach (Path newPath in newPaths)
{
if (!pathsDictionary.ContainsKey(newPath.PathKey()))
pathsDictionary.Add(newPath.PathKey(), newPath);
else
{
oldPath = pathsDictionary[newPath.PathKey()];
if (newPath.PathWeights < oldPath.PathWeights)
pathsDictionary[newPath.PathKey()] = newPath;
}
}
}
}
public static class Extensions
{
public static bool IsNullOrEmpty(this IEnumerable<object> collection) { return collection == null || !collection.Any(); }
public static string PathKey(this ILine line) { return string.Format("{0},{1}", line.StartPoint, line.EndPoint); }
}
public interface ILine
{
int StartPoint { get; }
int EndPoint { get; }
}
public class Edge :ILine
{
public int StartPoint { get; set; }
public int EndPoint { get; set; }
public Edge(int startPoint, int endPoint)
{
this.EndPoint = endPoint;
this.StartPoint = startPoint;
}
}
public class Path :ILine
{
private List<Edge> connectedEdges = new List<Edge>();
public Path(List<Edge> edges) { this.connectedEdges = edges; }
public int StartPoint { get { return this.IsValid ? this.connectedEdges.First().StartPoint : 0; } }
public int EndPoint { get { return this.IsValid ? this.connectedEdges.Last().EndPoint : 0; } }
public bool IsValid { get { return this.EdgeCount > 0; } }
public int EdgeCount { get { return this.connectedEdges.Count; } }
// For now as no weights logics are defined
public int PathWeights { get { return this.EdgeCount; } }
public List<Edge> ConnectedEdges { get { return this.connectedEdges; } }
}
I think DFS (depth-first search) should suit your requirements. Have a look at it here - Depth First Search - Wikipedia. You can tailor it to print the paths in the format that you require. As regards the complexity, since every node in your tree has in-degree one, the number of edges of your tree is bounded as |E| = O(|V|). Since DFS runs in O(|V|+|E|), your overall complexity comes out to be O(|V|).
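A minimal sketch of that idea (not from the original answer; the adjacency construction, root detection, and output format are just one way to do it):
import java.util.*;

public class TreePaths {
    public static void main(String[] args) {
        // E = {(1,2),(1,5),(5,6),(1,4),(2,3)} from the question
        int[][] edges = {{1, 2}, {1, 5}, {5, 6}, {1, 4}, {2, 3}};
        Map<Integer, List<Integer>> children = new HashMap<>();
        Set<Integer> hasParent = new HashSet<>();
        for (int[] e : edges) {
            children.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            hasParent.add(e[1]);
        }
        int root = edges[0][0]; // the root is the only source that never appears as a target
        for (int[] e : edges) {
            if (!hasParent.contains(e[0])) {
                root = e[0];
            }
        }
        dfs(root, children, new ArrayList<>()); // prints the three root-to-leaf paths
    }

    // collects the edges on the way down and prints the path at each leaf
    private static void dfs(int node, Map<Integer, List<Integer>> children, List<int[]> path) {
        List<Integer> next = children.get(node);
        if (next == null || next.isEmpty()) {
            StringBuilder sb = new StringBuilder();
            for (int[] e : path) {
                sb.append("(").append(e[0]).append(",").append(e[1]).append(")");
            }
            System.out.println(sb);
            return;
        }
        for (int child : next) {
            path.add(new int[]{node, child});
            dfs(child, children, path);
            path.remove(path.size() - 1);
        }
    }
}
Each edge is pushed and popped exactly once, so the traversal itself is O(|V|) for a tree; writing the paths out additionally costs the total length of all paths.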
I did this question as part of my assignment. The gentleman above has correctly pointed out to use a path ID. You must visit each edge at least once, hence the complexity bound is O(|V|+|E|), but for a tree |E| = O(|V|), therefore the complexity is O(|V|). I will give you a glimpse, since the details are a bit involved -
you will label each path with a unique ID, and the paths are allotted IDs in incremental values such as 0, 1, 2, .... The path ID of a path is the sum of the weights of the edges on the path. So, using DFS, allocate weights to the edges. You may begin by using 0 for edges until you encounter your first path, and then you keep adding 1 and so on. You will also have to argue the correctness and properly allocate the weights. DFS will do the trick.
