Understanding ordering within a graph when doing traversals - data-structures

I'm trying to understand depth first and breadth first traversals within the context of a graph. Most visual examples I've seen use trees to illustrate the difference between the two. The ordering of nodes within a tree is much more intuitive than in a graph (at least to me) and it makes perfect sense that nodes would be ordered top down, left to right from the root node.
When dealing with graphs, I see no such natural ordering. I've seen an example with various nodes labeled A through F, where the author explains traversals with nodes visited in the lexical order of their labels. This would seem to imply that the type of value represented by a node must be inherently comparable. Is this the case? Any clarification would be much appreciated!

Node values in graphs need not be comparable.
An intuitive/oversimplified way to think about BFS vs DFS is this:
In DFS, you choose a direction to move, then you go as far as you can in that direction until you hit a dead end. Then you backtrack as little as possible, until you find a different direction you can go in. Follow that to its end, then backtrack again, and so on.
In BFS, you sequentially take one step in every possible direction. Then you take two steps in every possible direction, and so on.
Consider the following simple graph (I've deliberately chosen labels that are not A, B, C... to avoid the implication that the ordering of labels matters):
Q --> X --> T
|     |
|     |
v     v
K --> W
A DFS starting at Q might proceed like this: Q to X to W (dead end), backtrack to X, go to T (dead end), backtrack to X and then to Q, go to K (dead end since W has already been visited).
A BFS starting at Q might proceed like this: Q, then X (one step away from Q), then K (one step away from Q), then W (two steps away from Q), then T (two steps away from Q).
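To make this concrete, here is a small Haskell sketch (my own illustration, not part of the original answer) that runs both traversals on the graph above. The adjacency lists deliberately put W before T under X so the outputs match the two walks just described; any other order would be an equally valid traversal.

import Data.List (foldl')

-- Adjacency function for the example graph.
edges :: Char -> [Char]
edges 'Q' = "XK"   -- Q -> X, Q -> K
edges 'X' = "WT"   -- X -> W, X -> T (order chosen to match the walk above)
edges 'K' = "W"    -- K -> W
edges _   = []

-- DFS: push on as far as possible, backtracking only when stuck.
dfs :: Char -> [Char]
dfs root = reverse (go [] root)
  where
    go seen v
      | v `elem` seen = seen
      | otherwise     = foldl' go (v : seen) (edges v)

-- BFS: visit everything one step away, then two steps away, and so on.
bfs :: Char -> [Char]
bfs root = go [root] [root]
  where
    go seen []       = reverse seen
    go seen (v : vs) =
      let new = [w | w <- edges v, w `notElem` seen]
      in  go (reverse new ++ seen) (vs ++ new)

Here dfs 'Q' yields "QXWTK" and bfs 'Q' yields "QXKWT", exactly the two walks above. Neither traversal ever compares node values; they only follow edges.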

Cycle detection that handles a series of directed edges? [duplicate]

I came upon wait-for graphs and I wonder, are there any efficient algorithms for detecting if adding an edge to a directed graph results in a cycle?
The graphs in question are mutable (they can have nodes and edges added or removed). And we're not interested in actually knowing an offending cycle, just knowing there is one is enough (to prevent adding an offending edge).
Of course it'd be possible to use an algorithm for computing strongly connected components (such as Tarjan's) to check if the new graph is acyclic or not, but running it again every time an edge is added seems quite inefficient.
If I understood your question correctly, then a new edge (u,v) is only inserted if there was no path from v to u before (i.e., if (u,v) does not create a cycle). Thus, your graph is always a DAG (directed acyclic graph). Using Tarjan's algorithm to detect strongly connected components (http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm) sounds like overkill in this case. Before inserting (u,v), all you have to check is whether there is a directed path from v to u, which can be done with a simple BFS/DFS.
So the simplest way of doing it is the following (n = |V|, m = |E|):
Inserting (u,v): Check whether there is a path from v to u (BFS/DFS). Time complexity: O(m)
Deleting edges: Simply remove them from the graph. Time complexity: O(1)
Although inserting (u,v) takes O(m) time in the worst case, it is probably pretty fast in your situation. When doing the BFS/DFS starting from v to check whether u is reachable, you only visit vertices that are reachable from v. I would guess that in your setting the graph is pretty sparse and that the number of vertices reachable from any given vertex is not that high.
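A sketch of that simple check in Haskell (my own illustration; the Digraph type and the names reaches/addEdge are made up, with the graph kept as an adjacency map):

import qualified Data.Map as M
import qualified Data.Set as S

type Digraph a = M.Map a [a]

-- Is `target` reachable from `start`? Plain DFS with an explicit stack.
reaches :: Ord a => Digraph a -> a -> a -> Bool
reaches g start target = go S.empty [start]
  where
    go _ [] = False
    go seen (v : vs)
      | v == target       = True
      | v `S.member` seen = go seen vs
      | otherwise         = go (S.insert v seen) (M.findWithDefault [] v g ++ vs)

-- Insert (u,v) only if it cannot close a cycle, i.e. u is not reachable from v.
addEdge :: Ord a => a -> a -> Digraph a -> Maybe (Digraph a)
addEdge u v g
  | reaches g v u = Nothing                         -- a v -> ... -> u path exists
  | otherwise     = Just (M.insertWith (++) u [v] g)

Deletion is then just removing v from u's adjacency list.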
However, if you want to improve the theoretical running time, here are some hints (mostly showing that this will not be very easy). Assume we aim for testing in O(1) time whether there exists a directed path from v to u. The keyword in this context is the transitive closure of a DAG (i.e., a graph that contains an edge (u, v) if and only if there is a directed path from u to v in the DAG). Unfortunately, maintaining the transitive closure in a dynamic setting seems to be not that simple. There are several papers considering this problem and all papers I found were STOC or FOCS papers, which indicates that they are very involved. The newest (and fastest) result I found is in the paper Dynamic Transitive Closure via Dynamic Matrix Inverse by Sankowski (http://dl.acm.org/citation.cfm?id=1033207).
Even if you are willing to understand one of those dynamic transitive closure algorithms (or even want to implement one), they will not give you any speed-up, for the following reason. These algorithms are designed for the situation where you have a lot of connectivity queries (which then can be performed in O(1) time) and only few changes in the graph. The goal then is to make these changes cheaper than recomputing the transitive closure. However, such an update is still slower than a single check for connectivity. Thus, if you need to do an update on every connectivity query, it is better to use the simple approach mentioned above.
So why do I mention this approach of maintaining the transitive closure if it does not fit your needs? Well, it shows that searching for an algorithm with only O(1) query time will probably not lead you to a solution faster than the simple one using BFS/DFS. What you could try is to get a query time that is faster than O(m) but worse than O(1), while updates are also faster than O(m). This is a very interesting problem, but it sounds to me like a very ambitious goal (so maybe do not spend too much time trying to achieve it...).
As Mark suggested, it is possible to use a data structure that stores connected nodes. The best choice is a boolean matrix |V|x|V|, where entry (a,b) is true iff there is a path from a to b. Values can be initialized with the Floyd–Warshall algorithm, which takes O(|V|^3).
Let T(i) be the set of vertices that have a path to vertex i, and F(j) the set of vertices reachable from vertex j. The first are the true entries in the i-th column, the second the true entries in the j-th row.
Adding an edge (i,j) is a simple operation. If i and j weren't connected before, then for each a in T(i) and each b in F(j), set matrix element (a,b) to true. But the operation isn't cheap: in the worst case it is O(|V|^2). That happens, for example, with a directed line, where adding an edge from the end vertex to the start vertex makes every vertex connected to every other vertex.
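Here is a sketch of that insertion update in Haskell (my own; it stores the closure as a set of connected pairs rather than a literal boolean matrix, but the update is the same):

import qualified Data.Set as S

type Closure a = S.Set (a, a)   -- (a, b) present iff there is a path from a to b

-- T(i) including i itself, and F(j) including j itself.
intoT, fromF :: Ord a => Closure a -> a -> [a]
intoT c i = i : [a | (a, b) <- S.toList c, b == i]
fromF c j = j : [b | (a, b) <- S.toList c, a == j]

-- Adding edge (i,j): everything that reaches i now reaches everything
-- reachable from j.
addToClosure :: Ord a => a -> a -> Closure a -> Closure a
addToClosure i j c =
  c `S.union` S.fromList [(a, b) | a <- intoT c i, b <- fromF c j]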
Removing an edge (i,j) is not so simple, but no more expensive in the worst case :-) If there is still a path from i to j after removing the edge, then nothing changes; that can be checked with Dijkstra (or any traversal), in less than O(|V|^2). The pairs (a,b) that are no longer connected are those with:
a in T(i) + i - T(j) (with T(j) the recalculated set),
b in F(j) + j.
Only T(j) changes when edge (i,j) is removed, so it has to be recalculated. That is done by any kind of graph traversal (BFS, DFS), going against the edge directions from vertex j, in less than O(|V|^2). Since clearing matrix elements is again O(|V|^2) in the worst case, this operation has the same worst-case complexity as adding an edge.
This is a problem which I recently faced in a slightly different situation (optimal ordering of interdependent compiler instructions).
While I can't improve on the O(n^2) theoretical bounds, after a fair amount of experimentation, and assuming heuristics suited to my case (for example, that the initial ordering wasn't created maliciously), the following was the best compromise algorithm in terms of performance.
(In my case I had an acceptable "right side failure": after the initial nodes and arcs were added, which was guaranteed to be possible, it was acceptable for the optimiser to occasionally reject the addition of further arcs where one could actually have been added. The algorithm doesn't need this approximation when carried to completion, but it admits it if you wish, limiting its runtime further.)
While a graph is topologically sorted, it is guaranteed to be cycle-free. In the first phase when I had a static bulk of nodes and arcs to add, I added the nodes and then topologically sorted them.
During the second phase, adding additional arcs, there are two situations when considering an arc from A to B. If A already lies to the left of B in the sort, an arc can simply be added and no cycle can be generated, as the list is still topologically sorted.
If B is to the left of A, we consider the sub-sequence between B and A and partition it into two disjoint sequences X, Y, where X is those nodes which can reach A (and Y the others). If A is not reachable from B, i.e. there are no direct arcs from B into X or to A, then the sequence can be reordered XABY before adding the A to B arc, showing it is still cycle-free and maintaining the topological sort. The efficiency over the naive algorithm here is that we only need to consider the subsequence between B and A, as our list is topologically sorted: A is not reachable from any node to the right of A. For my situation, where localised reorderings are the most frequent and important, this is an important gain.
As we don't reorder within the sequences X, A, B, Y, clearly any arcs which start and end within the same sequence are still ordered correctly, and the same holds within each flank, and for any "fly-over" arcs from the left flank to the right flank. Any arcs between the flanks and X, A, B, Y are also still ordered correctly, as our reordering is restricted to this local region. So we only need to consider arcs between our four sequences. Consider each possible "problematic" arc for our final ordering XABY in turn: YB, YA, YX, BA, BX, AX. Our initial order was B[XY]A, so AX and YB cannot occur. X reaches A but Y does not, therefore YX and YA do not occur, or else A could be reached from the source of the arc in Y (potentially via X), a contradiction. Our criterion for acceptability was that there are no arcs BX or BA. So there are no problematic arcs, and we are still topologically sorted.
Conversely, if A is reachable from B, then adding the arc A->B clearly creates a cycle: B -(X)-> A -> B. So our acceptability criterion (that A is not reachable from B) is not just sufficient but necessary.
This can be implemented reasonably efficiently if we can add a flag to each node. Consider the nodes [BXY] going right-to-left, starting from the node immediately to the left of A. If a node has a direct arc to A, set its flag. At an arbitrary such node, we need only consider its direct outgoing arcs: the nodes to its right are either after A (and so irrelevant), or have already been flagged if they can reach A. So a node's flag is set when a direct arc leads to A or to any flagged node. If B is not flagged at the end of the process, the reordering is acceptable, and the flagged nodes comprise X.
Though this always yields a correct ordering if carried to completion (as far as I can tell), as I mentioned in the introduction it is particularly efficient if your initial build is approximately correct (in the sense of accommodating likely additional arcs without reordering).
There also exists an effective approximation, if your context is such that "outrageous" arcs (those which would massively reorder) can be rejected: limit the A to B distance you are prepared to scan. If you have an initial list of the additional arcs you wish to add, they can be ordered by increasing distance in the initial ordering and processed until you run out of some scanning "credit", at which point you call your optimisation a day.
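For what it's worth, here is a compact Haskell sketch of the flag pass and the reorder described above (my own reconstruction, not the original implementation; the order is kept as a plain list and adj gives direct successors, so position lookups are linear where a real implementation would index them):

import Data.List (partition)
import qualified Data.Set as S

-- Try to add the arc a -> b while keeping `order` topologically sorted.
-- Assumes a and b both occur in `order`; the caller records the arc itself.
tryAddArc :: Ord n => (n -> [n]) -> [n] -> n -> n -> Maybe [n]
tryAddArc adj order a b
  | a == b    = Nothing          -- a self-loop is already a cycle
  | ia < ib   = Just order       -- a already lies left of b: nothing to reorder
  | otherwise =
      let (left, _b : middle)  = splitAt ib order
          (window, _a : right) = splitAt (ia - ib - 1) middle
          -- Right-to-left flag pass: a window node joins X when it has a
          -- direct arc to a or to an already flagged node (so it reaches a).
          flag x fs
            | a `elem` adj x || any (`S.member` fs) (adj x) = S.insert x fs
            | otherwise                                     = fs
          xSet     = foldr flag S.empty window
          blocked  = a `elem` adj b || any (`S.member` xSet) (adj b)
          (xs, ys) = partition (`S.member` xSet) window
      in if blocked
           then Nothing          -- B reaches A: the arc would close a cycle
           else Just (left ++ xs ++ [a] ++ [b] ++ ys ++ right)
  where
    ia = length (takeWhile (/= a) order)
    ib = length (takeWhile (/= b) order)

The foldr runs over the window right to left, which is exactly the scan order the flag argument above relies on.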
If the graph is directed, you would only have to check the parent nodes (navigate up until you reach the root) of the node where the new edge should start. If one of the parent nodes is equal to the end of the edge, adding the edge would create a cycle.
If all previous jobs are in topologically sorted order, and you add an edge that appears to break the sort and cannot be fixed, then you have a cycle.
https://stackoverflow.com/a/261621/831850
So if we have a sorted list of nodes:
1, 2, 3, ..., x, ..., z, ...
Such that each node is waiting for nodes to its left.
Say we want to add an edge from x->z. Well, that appears to break the sort. So we can move the node at x to position z+1, which will fix the sort iff none of the nodes in (x, z] have an edge to the node at x.
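As a sketch (mine, not the poster's), with the sorted order kept as a list and adj w giving the nodes w has edges to:

-- After proposing the edge x -> z (z currently to the right of x), try the
-- local fix: move x to just after z. Valid iff no node in (x, z] has an
-- edge to x. Assumes x occurs before z in `order`.
moveAfter :: Eq n => (n -> [n]) -> [n] -> n -> n -> Maybe [n]
moveAfter adj order x z =
  let (pre, _x : rest)     = break (== x) order
      (between, _z : post) = break (== z) rest
      window               = between ++ [z]       -- the nodes in (x, z]
  in if any (\w -> x `elem` adj w) window
       then Nothing   -- something in (x, z] points back at x: no simple fix
       else Just (pre ++ window ++ [x] ++ post)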

Haskell depth-first-search for a graph

For hours now I have been trying to implement a depth-first search in Haskell. My depthFirst function is given a starting node and a graph. This is what I have so far, plus the definition of the graph datatype.
data Graph a = G [a] (BRfun a)
with:
type BRfun a = a -> [a]
current attempt:
depthFirst :: Eq a => a -> Graph a -> [a]
depthFirst a (G [a] sucs) = [a]
So if only one element is in the nodes list, that's the only one I have to put in the final list (I think that should be the termination condition).
But now I am struggling to create a recursive algorithm that first reaches the deepest nodes.
I've had one drink too many and have a somewhat fuzzy idea of what I'm talking about, but here's a solution I came up with.
import Data.List (foldl')

depthFirst :: Eq a => a -> Graph a -> [a]
depthFirst root (G _nodes edges) = reverse $ go [] root
  where
    go seen x
      | x `elem` seen = seen
      | otherwise     = foldl' go (x : seen) (edges x)
I use foldl' from Data.List here because we want to traverse nodes left-to-right, which is somewhat challenging with foldr. And straight up using foldl without ' is usually not a good idea, since it builds up thunks unless forced (while forcing is exactly what foldl' does).
So, the general idea, as I outlined in my comment, is as follows. Go down the tree the first chance you get, maintaining the list of nodes you've seen along the way. If a node has no outgoing edges, cool, you're done here. If you've already seen a given node, bail, you don't need infinite recursion.
Fold starts from current node prepended to the list of already seen nodes (at the beginning, empty list). Then, from left to right, it visits every node directly reachable from current node. At every "next" node, it builds reverse depth-first order of a subtree plus already seen nodes. Already seen nodes are carried over to each "next" node (left-to-right order). If there are no nodes reachable from current node, it returns just current node prepended to list of all seen nodes.
List of seen nodes is reversed because prepending is O(1) while appending is O(n). Easier to reverse once and get complexity O(n) rather than append every time and get complexity of roughly O(n²) (complexities are from the top of my head, and I'm more than a bit tipsy, so apply salt liberally)
If elem x seen, function bails returning the list of all nodes seen so far. It makes sure we don't recurse into the nodes we've visited already, and hence avoids infinite recursion on cyclic graphs.
This is classical depth-first search. It could be optimized, and the potential for optimization is rather obvious (for one, elem x seen has O(n) worst-case complexity, while it could have been O(log n)). Feel free to improve on the code.
As a last bit of advice, the type of Graph doesn't guarantee that nodes are unique. A stricter implementation would look like this: data Graph a = G (Set a) (BRfun a), where Set is from Data.Set (or something similar). Given the stated definition with a list, it might be a good idea to relabel all nodes, e.g. nodes' = zip [1..] nodes or something like that.
For graph searches like DFS and BFS, you need to keep around a list of vertices that you've previously visited. This makes it possible to check if you've seen a vertex before, so that you don't visit a vertex twice (and this handles cycles too, although it can't actually detect for sure if cycles exist).
Here's my implementation. The visited list keeps track of which vertices have been visited. For each vertex we encounter, we check to see if it's been visited by traversing the list. When we "visit" a vertex (that is, in the else branch), we add the vertex to the list. The visited list is kept up-to-date by passing it around in the foldl.
In this approach, we can actually hijack the visited list for recording the depth-first order. Since we add vertices to the list when we first see them, the visited list is in reverse depth-first order. So we simply reverse it once the search has completed.
depthFirst source (G _ sucs) = reverse (search [] source)
  where
    search visited v =
      if v `elem` visited
        then visited   -- already seen v, so skip it
        else foldl search (v : visited) (sucs v)
I'd recommend walking through how the code executes on a small graph to get a sense for how it works and why it is correct. For example, try it on the graph defined as follows, from source 0.
edges = [[1,2,3],[4],[5],[4,6],[5],[1],[4]]
g = G [0,1,2,3,4,5,6] (edges!!)
Finally, note that this implementation is correct but highly inefficient, taking time O(nm) for a graph of n vertices and m edges, because we traverse the visited list once per edge. In a more efficient implementation, you would want to keep around two data structures, one for looking up whether or not a vertex has been visited (such as a hash set or binary search tree) and a second one for writing down the depth-first ordering.
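As a sketch of that improvement (mine; it assumes Ord a so the visited set can be a Data.Set, which is a balanced search tree, and it keeps a separate accumulator list for the ordering):

import qualified Data.Set as S

-- Same traversal as above, but visited lookups are O(log n) instead of O(n).
depthFirst' :: Ord a => a -> Graph a -> [a]
depthFirst' source (G _ sucs) = reverse (snd (search (S.empty, []) source))
  where
    search st@(seen, order) v
      | v `S.member` seen = st
      | otherwise         = foldl search (S.insert v seen, v : order) (sucs v)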

What is meant by the set of all possible configurations in a given graph G

I'm trying to understand Solved Exercise 2 in Chapter 3 of Algorithm Design by Kleinberg & Tardos,
but I'm not getting the idea of the answer.
In short, the question is:
We are given two robots located at node a and node b. The robots need to travel to node c and node d respectively. The problem is that the robots interfere if they get too close to each other: assume a threshold r <= 1, so that if they ever come within one node of each other they will have an interference problem and won't be able to transmit data to the base station.
The answer is quite long and it does not make any sense to me or I'm not getting its idea.
Anyway, I was thinking: can't we just perform DFS/BFS to find a path from node a to c, and from b to d, and then modify the DFS/BFS algorithm so that we keep checking at every movement whether the robots are getting close to each other?
Since it's required to solve this problem in polynomial time, I don't think this modification to either algorithm (BFS/DFS) would add much time.
The solution is "From the book"
This problem can be tricky to think about if we view things at the level of the underlying graph G: for a given configuration of the robots—that is, the current location of each one—it’s not clear what rule we should be using to decide how to move one of the robots next. So instead we apply an idea that can be very useful for situations in which we’re trying to perform this type of search. We observe that our problem looks a lot like a path-finding problem, not in the original graph G but in the space of all possible configurations.
Let us define the following (larger) graph H. The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G. We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.
Why the need for larger graph H?
What does he mean by: The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G.
And what does he mean by: We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.?
I do not have the book, but it seems from their answer that at each step they move one robot or the other. Assuming that, H consists of all possible pairs of nodes that are more than distance r apart. The nodes in H are adjacent if they can be reached by moving one robot or the other.
There are not enough details in your proposed algorithm to say anything about it.
Anyway I was thinking can't we just perform DFS/BFS to find a path from node a to c, & from b to d. then we modify the DFS/BFS Algorithm so that we keep checking at every movement if the robots are getting close to each other?
I don't think this would be possible. What you're proposing is to calculate the full path and afterwards check whether the given path could work. If not, how would you handle the situation so that when you rerun the algorithm, it won't find that pathological path again? You could exclude it from the set of possible options, but I don't think that'd be a good approach.
Suppose a path of length n, and suppose that the pathology resides in the first step of the given path. Suppose now that this happens every time you recalculate the path. You would have to recalculate the path a lot of times, just because the algorithm itself isn't aware of the restrictions needed to get to the right answer.
I think this is the point: the algorithm itself doesn't consider the problem's restrictions, and that is the main problem, because there's no easy way of correcting the given (wrong) solution.
What does he mean by: The node set of H is the set of all possible configurations of the robots; that is, H consists of all possible pairs of nodes in G.
What they mean by that is that each node in H represents each possible position of the two robots, which is the same as "all possible pairs of nodes in G".
E.g.: graph G has nodes A, B, C, D, E. H will have nodes AB, AC, AD, AE, BC, BD, BE, CD, CE, DE (consider AB = BA for further analysis).
Let the two robots be named r1 and r2, they start at nodes A and B (given info in the question), so the path will start in node AB in graph H. Next, the possibilities are:
r1 moves to a neighbor node from A
r2 moves to a neighbor node from B
(...repeat for each step until r1 and r2 each reach their destination).
All these possible positions of the two robots at the same time are the configurations the answer talks about.
And what does he mean by: We join two nodes of H by an edge if they represent configurations that could be consecutive in a schedule; that is, (u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.?
Let's look at the possibilities from what they state here:
(u,v) and (u′,v′) will be joined by an edge in H if one of the pairs u,u′ or v,v′ are equal, and the other pair corresponds to an edge in G.
The possibilities are:
(u,v) and (u,w), where (v,w) is an edge in E. In this case r2 moves to one of the neighbors of its current node.
(u,v) and (w,v), where (u,w) is an edge in E. In this case r1 moves to one of the neighbors of its current node.
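To make the construction concrete, here is a small Haskell sketch of searching H without ever materializing it (my own illustration; adj is G's adjacency function and safe stands for the distance-r constraint, both assumed given):

import qualified Data.Set as S

type Conf = (Int, Int)   -- current positions of r1 and r2

-- Neighbors in H: move exactly one robot along an edge of G, keeping only
-- configurations that respect the interference constraint.
confMoves :: (Int -> [Int]) -> (Conf -> Bool) -> Conf -> [Conf]
confMoves adj safe (u, v) =
  filter safe ([(u', v) | u' <- adj u] ++ [(u, v') | v' <- adj v])

-- BFS over H from the start configuration (a,b) to the goal (c,d).
schedulable :: (Int -> [Int]) -> (Conf -> Bool) -> Conf -> Conf -> Bool
schedulable adj safe start goal = go (S.singleton start) [start]
  where
    go _ [] = False
    go seen (x : xs)
      | x == goal = True
      | otherwise =
          let new = filter (`S.notMember` seen) (confMoves adj safe x)
          in  go (foldr S.insert seen new) (xs ++ new)

Because the BFS runs on H, a path found here is automatically a legal schedule: every step moves exactly one robot, and every intermediate configuration is safe.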
This solution was a bit tricky to me too at first. But after reading it several times and drawing some examples, when I finally bumped into your question, the way you separated each part of the problem then helped me to fully understand each part of the solution. So, a big thanks to you for this question!
Hope it's clearer now for anyone stuck with this problem!

How to prove that finding a successor n-1 times in the BST from the minimum node is O(n)?

How to prove that finding a successor n-1 times in the BST from the minimum node is O(n)?
The question says that we can create sorted order by:
1) Letting the node be the minimum node of the BST.
2) From that node, repeatedly calling find-successor.
I was told that the result is O(n) but I do not understand and do not know how to prove it.
Shouldn't it be O(n*log n) instead? Step 1 is O(log n), and step 2 is also O(log n) but is called n-1 times. Therefore, it should be O(n*log n).
Please clarify my doubt. Thank you! :)
You are correct that any individual operation might take O(log n) time, so if you perform those operations n times, you should get a runtime of O(n log n). This bound is correct, but it's not tight. The actual runtime is Θ(n).
One way to see this is to look at any individual edge in the tree. How many times will you visit each edge if you start at the leftmost node and repeatedly perform a successor query? If you look closely at how the operations work, you'll discover that every edge is visited exactly twice: once downward and once upward. Since all the work done is done traversing up and down edges, this means that the total amount of work done is proportional to twice the number of edges. In any tree, the number of edges is the number of nodes minus one, and so the total work done is Θ(n).
To formalize this as a proof, try showing that you never descend down the same edge twice and that when you ascend up an edge, you never descend down that edge again. Once you've done this, the conclusion that the runtime is Θ(n) follows from the above logic.
Hope this helps!
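If you want to experiment with this, here is a Haskell sketch (mine, for illustration) that uses a zipper in place of parent pointers; successor performs exactly the go-right-then-leftmost walk or the climb-until-we-arrive-from-a-left-child walk described above:

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- A zipper: the focused subtree plus the trail of crumbs back to the root.
data Crumb a = L a (Tree a)   -- went left; parent value and right sibling
             | R a (Tree a)   -- went right; parent value and left sibling
type Zipper a = (Tree a, [Crumb a])

-- Focus on the leftmost (minimum) node.
leftmost :: Zipper a -> Zipper a
leftmost (Node l@(Node _ _ _) v r, bs) = leftmost (l, L v r : bs)
leftmost z = z

-- In-order successor: step right then go all the way left, or climb until
-- we arrive at a parent from its left child.
successor :: Zipper a -> Maybe (Zipper a)
successor (Node l v r@(Node _ _ _), bs) = Just (leftmost (r, R v l : bs))
successor (t, bs) = climb bs t
  where
    climb (L pv pr : rest) c = Just (Node c pv pr, rest)   -- came from the left
    climb (R pv pl : rest) c = climb rest (Node pl pv c)   -- keep climbing
    climb []               _ = Nothing                     -- we were at the maximum

-- Sorted order: start at the minimum, then take successors until Nothing.
toSorted :: Tree a -> [a]
toSorted Leaf = []
toSorted t = go (leftmost (t, []))
  where
    go z@(Node _ v _, _) = v : maybe [] go (successor z)
    go _                 = []   -- unreachable: the focus is always a Node

Instrumenting leftmost and climb to count edge traversals shows each edge crossed exactly once downward and once upward over the whole run, which is the 2(n-1) total from the argument above.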
I wanted to post this as a comment on templatetypedef's answer, but it's too long.
His answer is right in that the easiest way to see that this is linear is because every edge is visited exactly twice, and the number of edges in a tree is always one less than the number of nodes (because every node has one parent, except the root!).
The issue is that the way he phrases the formal proof uses words that seem to imply contradiction as the way to go. In general, mathematicians frown on using contradiction because it often produces proofs with superfluous content. For instance:
Proof that 2 + 2 != 5:
Assume for contradiction that 2 + 2 = 5 (<- Remove this line)
Well 2 + 2 = 4
And 4 != 5
Contradiction! (<- Remove this line)
Contradiction tends to be verbose, and sometimes it can even obfuscate the idea behind the proof! There are times when contradiction seems pretty much necessary, but it's relatively rare and that's a separate discussion.
In this case, I don't see a proof by contradiction being any easier than a direct proof. On the other hand, regardless of proof technique, this proof is pretty ugly to do formally. Here's an attempt:
1) The succ(n) algorithm traverses one of two paths
In the first case every edge is visited on the simple path from a node to the leftmost node of its right subtree
In the other case, the node n has no right child, in which case we go up its ancestors p_1, p_2, p_3, ..., p_k, such that p_(k-1) is the first ancestor which is the left child of its parent. All of those edges are visited on that simple path.
We want to show that an arbitrary edge is traversed in precisely two succ() calls, once for the first case of succ() and once for the second case of succ(). Well, this is true for every edge other than the rightmost branch, but you can handle those edge cases separately. Alternatively we could prove the simpler argument where we return to the root after visiting the last element
This is two-fold because for a given edge e we have to find the n1 and n2 such that succ(n1) traverses e and succ(n2) also traverses e, as well as prove that every other succ() generates a path which does not include e.
2) First we actually prove that for each type of path that succ() visits, no two paths overlap (i.e. if succ(n) and succ(n') both traverse paths of the same type, those paths share no edges)
In the first case, the simple path is precisely defined as follows. Start at node n and go one edge to the right to r. Then traverse the leftmost branch of the subtree rooted at r. Now consider any other such path that starts at some other node n' (note, we don't assume that n != n'). It must go right one node to r'. Then it traverses the leftmost branch of the subtree rooted at r'. If the paths overlap, then pick one of the edges that overlap. If it's (n,r) = (n',r'), then we have n = n' and so it's the same path. If it's some e = e' in both leftmost branches, then you can show, again, that n = n' (you can trace the leftmost branches and show that every edge is the same, then finally reach the conclusion that r = r' => n = n', because in a tree the parent is unique; you'll see this tracing argument below). Thus we know that for any n and n', if their paths overlap, they are actually the same node! The contrapositive says this: if they are different nodes, then their paths don't overlap. That's exactly what we want (and the contrapositive is always equivalent to the original statement).
In the second case we define the simple path starting at node n and going up the ancestors p_1, p_2, ..., p_k = g, until we reach the first node p_k such that p_(k-1) is to the left of p_k. Consider some other path of the same type that starts at node n', where n != n'. Similarly it visits p_1', p_2', ..., p_k' = g'. Each such path is a chain of right-child edges followed by a single left-child edge. If the two paths shared an edge, then tracing right children downward from that edge would yield the same sequence of nodes in both paths (each node has a unique right child), ending at the same starting node, so n = n'. Since n != n', none of the edges can be the same, and hence succ(n) and succ(n') do not traverse any of the same edges.
3) Now we just need to show that at least one path of each type exists for a given edge. Well take any such edge e = (c,p) (note here I am ignoring the special edges on the rightmost branch which are technically only visited once and I am also ignoring the special edges on the leftmost branch which are technically visited once by find_min() and then once by succ() calls)
If it's from a left child c to its parent p then succ(c) will cover the second type of path. To find the other path, keep going up p's ancestors p_1, p_2, ..., p_k such that p_(k-1) is to the right of p_k. succ(p_k) will traverse a path containing e by definition (since e is on the leftmost branch of the subtree of p_(k-1) which is p_k's right child).
A similar argument holds for symmetric case when c is the right child of p
To summarize the proof we've shown that succ() generates two types of path. For each type of path, all of the paths of those types do not overlap. Furthermore, for any edge we have at least one of each of those types of paths. Since we call succ() on every node we can finally conclude that each edge is traversed twice (and hence the algorithm is Theta(n)).
Despite how long this proof was, it isn't actually complete (even ignoring the points when I explicitly said I was skipping details!). There are cases where I said something exists without proving it exists. You can figure out those details if you want and it is actually really satisfying to get it completely right (in my opinion at least. Maybe when you're a genius you'll find it tedious, heh)
Hope this helped. Let me know if you want me to clarify some steps

What algorithm can I apply to this DAG?

I have a DAG representing a list of properties. These properties are such that if a>b, then a has a directed edge to b. It is transitive as well, so that if a>b and b>c, then a has a directed edge to c.
However, the directed edge from a to c is superfluous, because a has a directed edge to b and b has a directed edge to c. How can I prune all these superfluous edges? I was thinking of using a minimum spanning tree algorithm, but I'm not really sure what the appropriate algorithm to apply in this situation is.
I suppose I could do a depth first search from each node and all its outgoing edges and compare if it can reach certain nodes without using certain edges, but this seems horribly inefficient and slow.
After the algorithm is complete, the output would be a linear list of all the nodes in an order that is consistent with the graph. So if a has three directed edges, to b, c, and d, and b and c each have a directed edge to d, the output could be either abcd or acbd.
This is called the transitive reduction problem. Formally speaking, you are looking for a minimal (fewest edges) directed graph, the transitive closure of which is equal to the transitive closure of the input graph. (The diagram on the above Wikipedia link makes it clear.)
Apparently there exists an efficient algorithm for solving this problem that takes the same time as for producing a transitive closure (i.e. the more common inverse problem of adding transitive links instead of removing them), however the link to the 1972 paper by Aho, Garey, and Ullman costs $25 to download, and some quick googling didn't turn up any nice descriptions.
EDIT: Scott Cotton's graphlib contains a Java implementation! This Java library looks to be very well organised.
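If you just want something simple and don't mind roughly quadratic behavior, here is a Haskell sketch (mine, not from the paper or the library). It uses the fact that in a DAG an edge (u,v) is redundant exactly when v is reachable from some other successor of u:

import qualified Data.Map as M
import qualified Data.Set as S

type DAG a = M.Map a [a]   -- adjacency map; assumed acyclic

-- Every vertex reachable from x by a path of one or more edges.
reachableFrom :: Ord a => DAG a -> a -> S.Set a
reachableFrom g x = go S.empty (M.findWithDefault [] x g)
  where
    go seen []       = seen
    go seen (v : vs)
      | v `S.member` seen = go seen vs
      | otherwise         = go (S.insert v seen) (M.findWithDefault [] v g ++ vs)

-- Keep (u,v) only if v is not reachable through some other successor of u.
transitiveReduction :: Ord a => DAG a -> DAG a
transitiveReduction g = M.mapWithKey keep g
  where
    keep _u vs =
      [v | v <- vs, not (any (\w -> w /= v && v `S.member` reachableFrom g w) vs)]

A topological sort of the reduced graph then gives the linear list asked for in the question.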
Actually, after looking around a little more, I think a topological sort is what I'm really after here.
If there are already n nodes with directed edges:
1) Starting from any point M, loop over all its child edges, select the biggest child (call it N), and remove the other edges; the complexity should be O(n). If no N exists (no child edges), go to step 3.
2) Start from N and repeat step 1.
3) Start from point M, select the smallest parent node (call it T), and remove the other edges.
4) Start from T and repeat step 3.
Actually it's just an ordering algorithm, and the total complexity should be O(0.5*n^2).
One problem is that if we want to loop over a node's parent nodes, then we need more memory to record edges so that we can trace back from child to parent. This can be improved in step 3, where we choose one node from the remaining nodes bigger than M; this means we need to keep a list of nodes to know which nodes are left.
