Set node direction on graphviz

Suppose this code using neato:
graph sample {
layout=neato
overlap=false
splines=true
tailclip=false
headclip=false
A -- I
A -- J
A -- B
A -- H
A -- E
A -- K
B -- D
B -- C
B -- L
C -- M
C -- N
C -- O
D -- P
D -- Q
E -- R
F -- A
G -- F
H -- J
}
This gives us this diagram:
What I need is to place a node X always fixed in a position south of its parent node. I.e., if I add another relation A -- X, X should always be placed south of A. I don't really care where everything else ends up.
I've looked into the pos attribute, but it doesn't seem to be the solution, since X is not really in a fixed position, but in a position relative to its parent.
I've also looked at tailport and headport, but they only define where the edge leaves or enters a node; they don't really affect where the node itself is placed.
Update
An additional image to make things clearer:
I don't require neato, but I don't want the graph to look like a TB or LR dot tree; I don't want it to be linearly ordered. circo, fdp, sfdp, and twopi are alright too.

The neato program supports multiple modes, one of which can probably give you what you want. In particular, if you set mode=ipsep, you can specify dot-like constraints that are honored during the layout. For example, I take your graph and use the graph attributes
mode=ipsep
diredgeconstraints=true
levelsgap=0.5
The first turns on ipsep mode, the second tells the model to support directed edges as in dot, and the last specifies how strong the level separation should be. I then set the default edge dir attribute to none
edge[dir=none]
and add the edge
A -- X [dir=1]
The dir=1 indicates that this one edge should induce a directional constraint. If I then run neato, I get the appended picture.
The Graphviz attribute documentation http://www.graphviz.org/content/attrs provides more information about these attributes.

In response to the updated constraints, one solution is to pin A and X, and then lay out the graph around them:
graph sample {
overlap=false;
splines=true;
tailclip=false;
headclip=false;
A [pin=true,pos="0,.2"]
X [pin=true,pos="0,.1"]
A -- I
A -- J
A -- B
A -- H
A -- E
A -- K
B -- D
B -- C
B -- L
C -- M
C -- N
C -- O
D -- P
D -- Q
E -- R
F -- A
G -- F
H -- J
A -- X
}
I tried layout with both neato and fdp, and it seems to produce a graph like what you want. Naturally, if you want to impose such a constraint on arbitrary pairs of nodes in the same graph, this solution may not work.
--- Earlier answer ---
If you're committed to using neato, I'm not certain there is a way to solve the problem without modifying the graph in a post-processing step. If neato is just a convenient default, then you should be able to solve your problem by using dot as your layout engine instead, and using "rankdir=TB" (top to bottom), plus a couple of additional kludges if X needs to be due south.
In the event that you only need the constraint to apply for a single node X, then putting X and A together in a cluster should do the job:
graph sample {
rankdir=TB
layout=dot
overlap=false
// .. as before
A -- X
subgraph clusterone {
style=invis
A
X
}
}
If you need a strictly-south constraint to apply to arbitrary children of A, then that kind of clustering, combined with the approach described in:
How to force all nodes in the same column in graphviz?
might do the trick. The attribute clusterrank=local might also be useful in that case, but I'm not certain. Hope this helps.

Related

How to cluster similar items together in a 2D graph

Well, I am not looking for how to draw items on a 2D graph; it's just a pictorial representation of what the expected output needs to be.
I have a list like
a=[]
b=['c','d','e']
c=['a','b','d']
d=['a']
e=['b','a']
l=['g','r','p']
g=['r']
r=['g']
p=['l']
Now from the above it is clear that b points to c, d, e.
a, b, c, d, e are closely linked, while l, g, r, p are linked.
Can anyone tell me an algorithm (keeping a 2D picture in mind) for how these similar items can be represented together?
The above is just an example.
The list will be created dynamically.
Have you come across Graphviz? It has algorithms for various forms of graph layout, which I imagine would do a nice job of laying out your small example above. It also includes some simple GUIs that let you experiment with the different layouts it supports.
Edit: in response to some clarifications:
If you need to find dense subgraphs within your graph, even if it is fully connected, then you are looking for algorithms that find communities in networks. An example of a recently developed algorithm that does this efficiently on large graphs (2 million+ nodes, representing a social network) can be found in this paper.
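If you eventually need to compute such communities in code, here is a hedged sketch using the third-party networkx package (my choice for illustration, not something the answer prescribes), applied to the adjacency lists from the question:
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build an undirected graph from the question's adjacency lists.
G = nx.Graph()
G.add_edges_from([
    ("b", "c"), ("b", "d"), ("b", "e"),
    ("c", "a"), ("c", "d"),
    ("d", "a"), ("e", "a"),
    ("l", "g"), ("l", "r"), ("l", "p"),
    ("g", "r"),
])

# Greedy modularity maximisation; for this example it should recover
# roughly the two groups {a, b, c, d, e} and {g, l, p, r}.
for community in greedy_modularity_communities(G):
    print(sorted(community))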
Just to extend Alex's answer, here is an example of graphviz use for your graph:
graph.dot:
digraph G
{
b -> c;
b -> d;
b -> e;
c -> a;
c -> b;
c -> d;
d -> a;
e -> b;
e -> a;
l -> g;
l -> r;
l -> p;
g -> r;
r -> g;
p -> l;
}
Output of Graphviz:
If you just want to know what the clusters in your graph are, without drawing it, just use this algorithm.
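If the clusters here are simply the disconnected groups, as in this example, a plain connected-components pass is enough. A hedged Python sketch over the question's adjacency lists:
# The question's (directed) adjacency lists.
adj = {"a": [], "b": ["c", "d", "e"], "c": ["a", "b", "d"], "d": ["a"],
       "e": ["b", "a"], "l": ["g", "r", "p"], "g": ["r"], "r": ["g"], "p": ["l"]}

# Symmetrise the edges, then collect connected components with a DFS.
undirected = {v: set() for v in adj}
for v, nbrs in adj.items():
    for u in nbrs:
        undirected[v].add(u)
        undirected[u].add(v)

seen, clusters = set(), []
for start in undirected:
    if start in seen:
        continue
    stack, component = [start], []
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            component.append(v)
            stack.extend(undirected[v] - seen)
    clusters.append(component)

print(clusters)  # two clusters: one with a..e, one with g, l, p, r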

How can I give a graph's nodes fixed positions in graphviz, and how can I make the edges not overlap?

I have seen some similar questions here, but the answers don't solve my problem.
I want to draw a graph. I wrote some code like this:
digraph {
{rank = same a b c d e f }
a -> b -> c -> d -> e -> f
a -> f
b -> d -> f
b -> f
}
but the result is that some of the edges overlap each other.
So my question is: how can I fix the edges so that they don't overlap?
I would also like to know how I can give a node a fixed position. There is no problem with this graph, but sometimes I want a graph with the sequence
a b c d e f
and when I create some edges, the sequence changes, like:
a->e b c d f
You can use the attribute pos of a node or edge to specify coordinates; with neato or fdp you can append ! to the value (e.g. pos="1,2!") to pin the node there. To see where dot places your nodes and edges, you can simply run dot myinputfile.dot without any output parameter. This will print the dot file with added coordinates (among other additions).
Based on this you can force some or all nodes to sit at certain coordinates, for example by editing those positions and re-rendering with neato -n, which reuses the node positions given in the input.
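If you want to capture those computed coordinates programmatically, here is a hedged Python sketch (assuming the Graphviz binaries are on your PATH) that pipes a graph through dot and reads back the annotated source:
import subprocess

src = """digraph {
    { rank = same a b c d e f }
    a -> b -> c -> d -> e -> f
    a -> f
}"""

# With no -T flag, dot emits the input graph again, annotated with the
# pos (and bb, width, height) attributes it computed during layout.
result = subprocess.run(["dot"], input=src, capture_output=True,
                        text=True, check=True)
print(result.stdout)  # copy the pos="..." values, pin them, re-render with neato -n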

Why does Graphviz no longer minimise edge lengths when subgraphs are introduced

I have this Graphviz graph:
digraph
{
rankdir="LR";
overlap = true;
Node[shape=record, height="0.4", width="0.4"];
Edge[dir=none];
A B C D E F G H I
A -> B -> C
D -> E -> F
G -> H -> I
Edge[constraint=false]
A -> D -> G
subgraph clusterX
{
A
B
}
subgraph clusterY
{
E
H
F
I
}
}
which produces this output:
I would have expected the length of the edge between A and D to be minimised so that the nodes would be arranged as:
A B C
D E F
G H I
rather than
D E F
G H I
A B C
This works as expected if I remove the subgraph definitions.
Why does Graphviz place A B C at the bottom when the subgraphs are introduced?
This is not really about minimizing edge lengths, especially since in the example the flat edges (A -> D -> G) are defined with the attribute constraint=false.
While this is not a complete answer, I think it can be found somewhere within the following two points:
The order of appearance of nodes in the graph is important.
Changing rankdir to LR introduces unpredictable (or at least difficult to predict) behaviour, and probably still a bug or two (search for rankdir).
I'll try to explain as well as I can, as far as I understand graphviz, but you may want to go ahead and read right away this reply by Emden R. Gansner on the graphviz mailing list as well as the following answer by Stephen North - they ought to know, so I will cite some of it...
Why is the order of appearance of nodes important? By default, in a top-down graph, first mentioned nodes will appear on the left of the following nodes unless edges and constraints result in a better layout.
Therefore, without clusters and rankdir=LR, the graphs appears like this (no surprises):
A D G
B E H
C F I
So far, so good. But what happens when rankdir=LR is applied?
ERG wrote:
Dot handles rankdir=LR by a normal TB layout and then rotating the
layout counterclockwise by 90 degrees (and then, of course, handling
node rotation, edge direction, etc.). Thus, subgraph one is
positioned to the left of subgraph two in the TB layout as you would
expect, and then ends up lower than it after rotation. If you want
subgraph one to be on top, list it second in the graph.
So if that were correct, without clusters, the nodes should appear like this:
G H I
D E F
A B C
In reality, they do appear like this:
A B C
D E F
G H I
Why? Stephen North replied:
At some point we decided that top-to-bottom should be the default,
even if the graph is rotated, so there's code that flips the flat
edges internally.
So, the graph is laid out TB, rotated counterclockwise, and the flat edges flipped:
A D G G H I A B C
B E H --> D E F --> D E F
C F I A B C G H I
While this works quite well for simple graphs, it seems that when clusters are involved, things are a little different. Usually edges are also flipped within clusters (as in clusterY), but there are cases where the flat-edge flipping does not work as one would think. Your example is one of those cases.
Why do I place the error or limitation in the flipping of those edges? Because the same graphs usually display correctly when using rankdir=TB.
Fortunately, workarounds are often easy - for example, you may use the order of appearance of the nodes to influence the layout:
digraph
{
rankdir="LR";
node[shape=record, height="0.4", width="0.4"];
edge[dir=none];
E; // E is first node to appear
A -> B -> C;
D -> E -> F;
G -> H -> I;
edge[constraint=false]
A -> D -> G;
subgraph clusterX { A; B; }
subgraph clusterY { E; F; H; I; }
}

Problems with a simple dependency algorithm

In my webapp, we have many fields that sum up other fields, and those fields sum up more fields. I know that this is a directed acyclic graph.
When the page loads, I calculate values for all of the fields. What I'm really trying to do is to convert my DAG into a one-dimensional list which would contain an efficient order to calculate the fields in.
For example:
A = B + D, D = B + C, B = C + E
Efficient calculation order: E -> C -> B -> D -> A
Right now my algorithm just does simple inserts into a list iteratively, but I've run into some situations where that starts to break. I'm thinking what's needed instead is to work out all the dependencies into a tree structure, and from there convert that into the one-dimensional form. Is there a simple algorithm for converting such a tree into an efficient ordering?
Are you looking for topological sort? This imposes an ordering (a sequence or list) on a DAG. It's used by, for example, spreadsheets to figure out the order in which dependent cells must be recalculated.
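For illustration, here is a minimal sketch using Python's standard-library graphlib (Python 3.9+); the dict layout is just an assumed way of storing the example's dependencies:
from graphlib import TopologicalSorter

# Map each field to the fields it depends on.
dependencies = {
    "A": ["B", "D"],  # A = B + D
    "D": ["B", "C"],  # D = B + C
    "B": ["C", "E"],  # B = C + E
}

# static_order() yields every field after all of its dependencies.
print(list(TopologicalSorter(dependencies).static_order()))
# One valid result: ['C', 'E', 'B', 'D', 'A']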
What you want is a depth-first search.
function ExamineField(Field F)
{
if (F.already_in_list)
return
foreach C child of F
{
call ExamineField(C)
}
AddToList(F)
}
Then just call ExamineField() on each field in turn, and the list will be populated in an optimal ordering according to your spec.
Note that if the fields are cyclic (that is, you have something like A = B + C, B = A + D) then the algorithm must be modified so that it doesn't go into an endless loop; a runnable sketch with such a guard follows the example trace below.
For your example, the calls would go:
ExamineField(A)
  ExamineField(B)
    ExamineField(C)
      AddToList(C)
    ExamineField(E)
      AddToList(E)
    AddToList(B)
  ExamineField(D)
    ExamineField(B)   (already in list, nothing happens)
    ExamineField(C)   (already in list, nothing happens)
    AddToList(D)
  AddToList(A)
ExamineField(B)   (already in list, nothing happens)
ExamineField(C)   (already in list, nothing happens)
ExamineField(D)   (already in list, nothing happens)
ExamineField(E)   (already in list, nothing happens)
And the list would end up C, E, B, D, A.
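For completeness, here is a hedged, runnable Python version of the same depth-first search, including the cycle guard mentioned above (the children mapping is an assumed stand-in for "the fields F sums"):
def build_order(fields, children):
    """Return the fields in calculation order; raise if dependencies are cyclic."""
    ordered, done, visiting = [], set(), set()

    def examine(f):                      # mirrors ExamineField(F)
        if f in done:                    # "already in list, nothing happens"
            return
        if f in visiting:                # re-entered before finishing: a cycle
            raise ValueError("cyclic dependency involving " + repr(f))
        visiting.add(f)
        for c in children.get(f, []):    # foreach C child of F
            examine(c)
        visiting.remove(f)
        done.add(f)
        ordered.append(f)                # AddToList(F)

    for f in fields:                     # call ExamineField() on each field in turn
        examine(f)
    return ordered

# The example: A = B + D, D = B + C, B = C + E
print(build_order("ABCDE", {"A": "BD", "D": "BC", "B": "CE"}))
# -> ['C', 'E', 'B', 'D', 'A']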

Seeking algorithm to invert (reverse? mirror? turn inside-out) a DAG

I'm looking for an algorithm to "invert" (reverse? turn inside-out?) a
DAG:
A* # I can't ascii-art the arrows, so just
/ \ # pretend the slashes are all pointing
B C # "down" (south-east or south-west)
/ / \ # e.g.
G E D # A -> (B -> G, C -> (E -> F, D -> F))
\ /
F
The representation I'm using is immutable and truly a DAG (there are no "parent" pointers). I'd like to traverse the graph in some fashion
while building a "mirror image" graph with equivalent nodes, but with
the direction of relations between nodes inverted.
F*
/ \
G* E D # F -> (E -> C -> A, D -> C -> A), G -> B -> A
\ \ / #
B C # Again, arrows point "down"
\ / #
A #
So the input is a set of "roots" (here, {A}). The output should be a
set of "roots" in the result graph: {G, F}. (By root I mean a node
with no incoming references. A leaf is a node with no outgoing
references.)
The roots of the input become the leaves of the output and vice versa. The transformation should be an inverse of itself.
(For the curious, I'd like to add a feature to a library I'm using to
represent XML for structural querying by which I can map each node in
the first tree to its "mirror image" in the second tree (and back
again) to provide more navigational flexibility for my query rules.)
Traverse the graph building a set of reversed edges and a list of leaf nodes.
Perform a topological sort of the reversed edges using the leaf (which are now root) nodes to start with.
Construct the reversed graph based on the reversed edges starting from the end of the sorted list. As the nodes are constructed in reverse topological order, you are guaranteed to have constructed the children of a given node before constructing the node, so creating an immutable representation is possible.
This is either O(N) if you use structures for your intermediate representation which track all links in both directions associated with a node, or O(N log N) if you use sorting to find all the links of a node. For small graphs, or in languages which don't suffer from stack overflows, you can just construct the graph lazily rather than explicitly performing the topological sort. So it depends a little on what you're implementing it all in. (A code sketch follows the worked example below.)
A -> (B -> G, C -> (E -> F, D -> F))
original roots: [ A ]
original links: [ AB, BG, AC, CE, EF, CD, DF ]
reversed links: [ BA, GB, CA, EC, FE, DC, FD ]
reversed roots: [ G, F ]
reversed links: [ BA, CA, DC, EC, FE, FD, GB ] (in order of source)
topologically sorted: [ G, B, F, E, D, C, A ]
construction order : A, C->A, D->C, E->C, F->(D,E), B->A, G->B
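As a hedged Python sketch of these steps (node names from the worked example; graphlib's TopologicalSorter stands in for the hand-rolled sort, and nested tuples stand in for your immutable nodes):
from graphlib import TopologicalSorter

# Original DAG: node -> children, i.e. A -> (B -> G, C -> (E -> F, D -> F)).
graph = {"A": ["B", "C"], "B": ["G"], "C": ["E", "D"],
         "E": ["F"], "D": ["F"], "G": [], "F": []}

# Step 1: reverse every edge; the original leaves become the new roots.
rev = {n: [] for n in graph}
for parent, children in graph.items():
    for child in children:
        rev[child].append(parent)
new_roots = [n for n in graph if not graph[n]]  # ['G', 'F']

# Steps 2-3: construct immutable nodes children-first. Passing rev as
# "node -> what must exist before it" yields the reverse topological
# order described above, so every child is built before its parent.
built = {}
for n in TopologicalSorter(rev).static_order():
    built[n] = (n, tuple(built[c] for c in rev[n]))

mirrored = [built[r] for r in new_roots]  # the G- and F-rooted mirrors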
Just do a depth-first search marking where you have already been, and each time you traverse an arrow you add the reverse to your result DAG. Add the leaves as roots.
My intuitive suggestion would be to perform a Depth First traversal of your graph, and construct your mirrored graph simultaneously.
When traversing each node, create a new node in the mirrored graph, and create an edge between it and its predecessor in the new graph.
If at any point you reach a node which has no children, mark it as a root.
I solved this with a simple graph traversal. Keep in mind topological sorting will only be useful for directed acyclic graphs.
I used an adjacency list, but you can do a similar thing with an adjacency matrix.
In Python it looks like this:
# Basic Graph Structure
g = {}
g[vertex] = [v1, v2, v3] # Each vertex contains a list of its edges
To find all the edges for v, you then traverse the list g[v] and that will give you all (v, u) edges.
To build the reversed graph make a new dictionary and build it something like this:
rev = {}  # avoid the name "reversed", which shadows the Python builtin
for v in g:
    for e in g[v]:
        if e not in rev:
            rev[e] = []
        rev[e].append(v)  # the edge (v, e) becomes the edge (e, v)
# Note: vertices with no incoming edges in rev (the original roots) get no
# key here; pre-seed rev with every vertex of g if you need them present.
This is very memory-intensive for large graphs (doubling your memory usage), but it is a very easy way to work with them and quite quick. There may be more clever solutions out there involving building a generator and using a DFS algorithm of some sort, but I have not put a lot of thought into it.
Depth-first search might be able to generate what you're after: note your path through the graph, and each time you traverse an edge, add the reverse edge to the resulting DAG (leaves become roots).
