Hi, I want to visualize an infection algorithm on a graph. The graph code and the changes are written in Python, but there is no need for the library or visualization tool (VT) to interface with my code directly. It would be possible, and more sensible, for the code to run first and write its results to a file; the VT would then read the graph structure and the changes at each time step, so the end user can simply step forward and backward in time.
An abstract example of the interface file:
a-b
a-c
b-c
all blue
----
1:a=red
2:b=red,c=red
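For what it's worth, a minimal sketch of the producer side could look like this, assuming the simulation itself supplies the edge list and the nodes newly infected at each step (the file name and variable names are only illustrative):

    # Toy producer: write the interface file in the format sketched above.
    edges = [("a", "b"), ("a", "c"), ("b", "c")]
    steps = [["a"], ["b", "c"]]  # nodes newly infected at time steps 1, 2, ...

    with open("infection.txt", "w") as f:
        for u, v in edges:
            f.write(f"{u}-{v}\n")          # static graph structure first
        f.write("all blue\n")              # initial state of every node
        f.write("----\n")                  # separator before the timeline
        for t, infected in enumerate(steps, start=1):
            f.write(f"{t}:" + ",".join(f"{n}=red" for n in infected) + "\n")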
Thanks
EDIT: the graph could be visualized on the web, in a Windows panel, in a Java applet, or something else; it is not important.
EDIT 2: I found igraph, which seems to work with R and Python, but it only produces static images, so it cannot show changes over time.
I found DynNetwork, which is a Cytoscape 3.0 app. It supports importing a graph from CSV, which is great, and it can visualize graph changes over time in terms of a variety of node and edge attributes (size, label, color, ...).
That is what I want.
EDIT: it has problems loading a graph with about 10,000 nodes, so I must find another solution.
Related
If a graph is simple, it's easy enough to just look through it. For more complex graphs, though, it's hard to make sense of the graph if it's not arranged in a way that resembles how developers conceptualize a class or method hierarchy. Understandably, NDepend wouldn't be able to do this automatically.
Can I move graph nodes around by hand? Or alternatively, is there another program that I can export the graph to and rearrange the nodes there?
No, so far NDepend's graph nodes cannot be re-arranged and cannot be exported to another tool that would let you do this. Did you try ticking or unticking the cluster settings, which will trigger a re-arrangement of a complex graph or sub-graph?
The same applies if you tinker with the layout settings:
Also, a screenshot of a graph that is not arranged in a way that resembles how developers conceptualize a class or method hierarchy would be welcome.
I have an existing system model, in the form of a DAG, representing services calling APIs that are backed by one or more implementations in (possibly) other services. I've been rendering this with GraphViz dot files' digraphs, which is fairly reasonable, even if it is sometimes difficult to enforce layering.
While considering various possible refactorings of services and APIs, I'd like to be able to chart alternative routes towards an end goal. Each refactoring step would yield a different DAG – represented in terms of diffs from the previous DAG (e.g., convert service A's use of API x in service B to API y in service C) – and similarly renderable.
What tools are there for creating such "refactoring paths" and then visualizing the flows between them and determining dependencies and parallelism? Extra bonus points for goal seeking (e.g., no dependencies from any service other than service A on service C; cheapest path based on weights) and for providing a loose ordering of refactorings that demonstrates their (presumably) monotonically increasing system value.
I am picturing two UI components:
a DAG diff that visually shows the nodes/edges that got replaced in the source graph with nodes/edges in the second
a controlling display, something like D3's force layout, that lets you navigate the loosely connected DAG of refactorings and select the refactoring whose before/after picture you'd like to see in the DAG diff.
That said, I'm totally open to other tooling, formats, etc. I'd just like to be able to produce these and show them to other people to explain why what we're doing is valuable (goal assertion) and why it is taking as long as it does (dependencies and Gantt charts).
Would it be feasible for you to create a separate tool that, given two similar DAGs, outputs the "merge" of them? If that's possible, then visualizing the merge DAG will probably tell you a lot about both DAGs. You can color-code the nodes by whether they appear in both DAGs or in only one of them.
That's how we originally designed the visual diffs of workflow graphs in VisTrails, see here.
If you insist on showing the two DAGs side by side, creating the merge DAG might still be the right idea, because then dot can lay the merge graph out, and you can simply hide the appropriate nodes for each subgraph. In this way, the shared structure will be laid out consistently by construction.
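To make the idea concrete, here is a rough sketch of building such a merge DAG with NetworkX and color-coding nodes by membership; the graph contents, colors, and output file name are illustrative assumptions, not part of VisTrails:

    # Sketch: merge two DAGs, tag each node by which DAG(s) it appears in,
    # and write a dot file so GraphViz lays out the shared structure once.
    import networkx as nx

    before = nx.DiGraph([("serviceA", "apiX"), ("apiX", "serviceB")])
    after = nx.DiGraph([("serviceA", "apiY"), ("apiY", "serviceC")])

    merged = nx.compose(before, after)              # union of nodes and edges
    for n in merged.nodes:
        if n in before and n in after:
            merged.nodes[n]["color"] = "black"      # shared structure
        elif n in before:
            merged.nodes[n]["color"] = "red"        # removed by the refactoring
        else:
            merged.nodes[n]["color"] = "green"      # introduced by the refactoring

    nx.nx_pydot.write_dot(merged, "merge.dot")      # requires pydot; lay out with dot

For the side-by-side view you can then hide the "red" or "green" nodes per panel while keeping the positions dot assigned to the merged graph.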
Please help me with these noob questions. I want to show a network with a large number of nodes (70,000) and 2.1 million links in a force layout, and I am looking for a good and scalable way to do this.
How do we practically show such a large network? Can we do some kind of approximation and show a semantically equivalent network (e.g. http://www.visualcomplexity.com/vc/project.cfm?id=76)?
How do we reduce such data on the back end (say, using KDE)? We cannot afford to use science.js on the front end, as the volume is large.
The initial view could be the network with pre-determined locations for the nodes or clusters. How do we pre-determine those locations on the back end, before sending the data to d3.js? Do we have to use TopoJSON?
Are any such examples available using d3.js (and a back end, say Java, Python, etc.)?
Sorry about the question, but do you really need to show all that information in one shot?
If you really need it, first have a look at it with Gephi and see what it looks like, then move on to the next step.
If you find that you can focus on specific nodes or patterns at the beginning and then explore the chart from there, that is probably the best solution from a performance point of view.
If the discovery approach works but you are still having trouble with too many items on the screen, just control the force layout with a time-based threshold. It's not perfect, but it will work for hundreds of nodes.
Next step
If you decide to go down this path anyway, I would recommend the following:
Aggregate: that's probably the most useful thing you can do here. Let the user interact with the data and dig into it to see more detail. It is also the best solution if you have to serve many clients.
Do not run the force-directed layout on the front end with the entire network as is: it will eat all the browser's resources for at least tens of minutes in any case.
Compute the layout on the back end, e.g. using JUNG or Gephi's core itself in Java, or NetworkX in Python, and then just display the result (a rough sketch follows after this list).
Cache the result of the point above as well: the layout computations are heavy for the server too if you have many clients, so cache them.
When the user drags the network, hide the links: it should speed up the computation (sigma.js uses this trick).
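As a rough illustration of the back-end layout point above, here is a sketch that pre-computes positions with NetworkX and caches them as JSON for the front end (e.g. d3.js); the graph, file name, and JSON shape are assumptions, not a required format:

    # Sketch: compute the layout server-side once, cache it, and only ship
    # coordinates to the browser instead of running the force layout there.
    import json
    import networkx as nx

    G = nx.barabasi_albert_graph(1000, 3)       # stand-in for the real 70k-node network
    pos = nx.spring_layout(G)                   # Fruchterman-Reingold; expensive, so run once

    payload = {
        "nodes": [{"id": n, "x": float(x), "y": float(y)} for n, (x, y) in pos.items()],
        "links": [{"source": u, "target": v} for u, v in G.edges()],
    }
    with open("layout.json", "w") as f:         # cache this and serve it to every client
        json.dump(payload, f)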
In a system I have a list of nodes which are connected as in a normal graph. We know the whole system and all of its connections, and we also have a start point. All my edges have a direction.
Now I want to draw all of these nodes and edges automatically. The problem is not the actual drawing, but calculating the (x,y) coordinates. So basically I would like to draw this whole graph so it looks good.
My data structure would be something like:
    class node
    {
        string text;
        List<edge> connections;
    }
There must be some well known algorithms for this problem? I haven't been able to find any, but I might be using the wrong keywords.
My thoughts:
One way would be to position our start node at (0,0) and have some constant "distance". Then, for each neighbor, it would add distance to the y position, and for the n-th neighboring node, set x = distance * n.
But this would really cause a lot of problems, so that's definitely not the way to go.
By far the most common approach for this is to use a force-directed layout instead of a deterministic one. The gist is that you have every node repel every other node (anti-gravity) and have any connected pair of nodes attract each other. After several iterations of a physics simulation you can get a reasonable layout.
There are many layout algorithms you can use, with vastly different results. The GraphViz fdp (Fruchterman & Reingold '91) and neato (Kamada & Kawai '89) algorithms work, but are rather old and there are much better alternatives. The Fruchterman & Reingold '91 algorithm is also available in Python in NetworkX.
Prefuse provides a ForceDirectedLayout Java class that is pretty fast and good. Hachul & Jünger '05 detail the FM^3 algorithm, which appears to do quite well in practice (Hachul & Jünger '06) and is available in C++ in Tulip.
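To illustrate the gist described above, here is a deliberately naive Python sketch of one force-directed iteration scheme (Fruchterman & Reingold style); real implementations such as GraphViz fdp, NetworkX's spring_layout, or Prefuse are far more refined, so treat this as a toy:

    # Toy force-directed layout: every pair of nodes repels, connected nodes attract.
    import math
    import random

    def force_layout(nodes, edges, iterations=200):
        k = math.sqrt(1.0 / len(nodes))            # ideal edge length
        pos = {n: [random.random(), random.random()] for n in nodes}
        temp = 0.1                                 # caps displacement per iteration
        for _ in range(iterations):
            disp = {n: [0.0, 0.0] for n in nodes}
            for i, u in enumerate(nodes):          # repulsion between every pair
                for v in nodes[i + 1:]:
                    dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                    dist = math.hypot(dx, dy) or 1e-9
                    f = k * k / dist
                    disp[u][0] += dx / dist * f
                    disp[u][1] += dy / dist * f
                    disp[v][0] -= dx / dist * f
                    disp[v][1] -= dy / dist * f
            for u, v in edges:                     # attraction along edges
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                dist = math.hypot(dx, dy) or 1e-9
                f = dist * dist / k
                disp[u][0] -= dx / dist * f
                disp[u][1] -= dy / dist * f
                disp[v][0] += dx / dist * f
                disp[v][1] += dy / dist * f
            for n in nodes:                        # move nodes, limited by temperature
                dx, dy = disp[n]
                d = math.hypot(dx, dy) or 1e-9
                step = min(d, temp)
                pos[n][0] += dx / d * step
                pos[n][1] += dy / d * step
            temp *= 0.95                           # cool down
        return pos

    print(force_layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))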
There are tons of other open source tools to visualize graphs, like
NodeXL (C#), a great introductory tool that integrates network analysis into Excel 2007/2010 (Disclaimer: I'm an advisor for it). Other awesome tools include Gephi (Java) and Cytoscape (Java), while Pajek, UCINet, yEd and Tom Sawyer are some proprietary alternatives.
In general this is a tricky problem, especially if you want to start dealing with edge routing and making things look pretty. You might look at http://www.graphviz.org/ and use either their command-line tools or the graphviz library to do your layout and get your x,y coordinates within your application.
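For instance, a small sketch of the command-line route: run dot with its "plain" output format and parse the node coordinates back into your application. This assumes GraphViz is installed and dot is on the PATH:

    # Sketch: let GraphViz compute the layout and read back x,y per node.
    # The "plain" format emits lines like: node <name> <x> <y> <width> <height> ...
    import subprocess

    def graphviz_positions(dot_source):
        out = subprocess.run(["dot", "-Tplain"], input=dot_source,
                             capture_output=True, text=True, check=True).stdout
        positions = {}
        for line in out.splitlines():
            parts = line.split()
            if parts and parts[0] == "node":
                positions[parts[1]] = (float(parts[2]), float(parts[3]))
        return positions

    print(graphviz_positions("digraph G { a -> b; b -> c; a -> c; }"))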
Sorry if this question seems a bit complex, but I think it's all related, so I wanted to try to get the answer in one shot. Basically, I have a layered graph* in which each set of data is connected only to the next set (so set1 has vertices with edges to set2, and so on, but set1 has nothing connecting to set3 or to anything other than set2; this might be relevant, I'm not sure). Generally, you can think of my data as one massive family tree (every set adds about a billion nodes) into which I keep loading new generations with every new set (families create new families, and no edges go backwards).
I have an HBase/Hadoop system running and I know how to use Java to add columns and values, but what I don't know how to do is:
Add data to HBase in a graph-type format (since it's HBase, I want to load it in a way that lets me add a ton of data and still scale, unlike other databases that limit graphs to the size of the system). I know how to add data, but I don't understand how to do it in a scalable, graph-oriented way.
Once the graph is loaded, I want to know how to apply some kind of analytics to it. PageRank is popular so I mention it, but really anything that is based on processing a graph would do.
I guess the simplified way of asking the question is: how do I specifically get a graph into HBase, and once it's there, how do I analyze it? Is there a tutorial? There's a lot of HBase information on the internet (I read the HBase book), but I could not find anything specific to graphs. I found Giraph, but I don't think it can connect to HBase (yet). Seeing how Hadoop/HBase are versions of MapReduce/Bigtable, I suspect there is a way to process graphs; I'm just not having any luck finding it.
*A layered graph is a directed graph with a level for each set of vertices, like so: http://en.wikipedia.org/wiki/Layered_graph_drawing
I think this question on SO could help:
https://stackoverflow.com/questions/9865738/is-it-possible-to-store-graphs-hbase-if-so-how-do-you-model-the-database-to-sup/9867563#9867563
This part of my answer to this question might be of use.
Using HBase/Accumulo as input to Giraph has been submitted recently (7 Mar 2012) as a new feature request to Giraph: HBase/Accumulo Input and Output formats (GIRAPH-153)
We use Giraph in this way: store only minimal data in each vertex, run the graph algorithm with Giraph, and then assemble the result with the rich data using Pig. For the PageRank algorithm, each vertex only needs to store the vertex id and its rank, so it can scale to almost a billion vertices.
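For the storage side, one common pattern (independent of Giraph) is an adjacency-list schema: one HBase row per vertex, with each outgoing edge stored as a column, so the table grows horizontally with the graph. Here is a rough sketch using the happybase Python client; the host, table name, and column families are assumptions and would need to be created beforehand (the same model is straightforward with the Java Put API):

    # Sketch: adjacency-list model, row key = vertex id, family 'edge' holds
    # one column per outgoing edge, family 'attr' holds vertex properties.
    import happybase

    connection = happybase.Connection("hbase-host")      # hypothetical host
    table = connection.table("graph")                    # assumes families 'edge' and 'attr'

    def add_vertex(vertex_id, out_edges, attrs=None):
        data = {f"edge:{dst}".encode(): b"1" for dst in out_edges}
        for key, value in (attrs or {}).items():
            data[f"attr:{key}".encode()] = str(value).encode()
        table.put(vertex_id.encode(), data)              # one row per vertex

    add_vertex("person_1", ["person_2", "person_3"], {"generation": 1})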