I can calculate some network centrality metrics, such as degree centrality and closeness centrality, using cytoscape.js, but I didn't see any built-in function to compute eigenvector centrality (eigencentrality). Is there any way to do that?
From your link, PageRank should suffice, as it's a type of eigenvector centrality: http://js.cytoscape.org/#eles.pageRank
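For intuition, here is a minimal Python sketch (names are mine, not from any library) of why PageRank is a form of eigenvector centrality: it is the dominant eigenvector of a damped, column-stochastic matrix built from the adjacency matrix, found here by power iteration. In cytoscape.js itself you would simply call eles.pageRank() as documented at the link above.

```python
import numpy as np

def pagerank_power_iteration(A, d=0.85, iters=100):
    """PageRank of the graph with adjacency matrix A, computed as the
    dominant eigenvector of the damped transition ('Google') matrix."""
    n = A.shape[0]
    out_deg = A.sum(axis=1).astype(float)
    out_deg[out_deg == 0] = 1.0        # guard against sinks (no out-edges)
    M = (A / out_deg[:, None]).T       # column-stochastic transition matrix
    G = d * M + (1 - d) / n            # damping mixes in uniform jumps
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = G @ r                      # power iteration converges to the
        r /= r.sum()                   # dominant eigenvector of G
    return r
```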
Related
I am looking for a simple-to-understand algorithm to calculate betweenness centrality. Input should be an adjacency and/or distance matrix.
The algorithm can be provided in any form or implementation (e.g. pseudocode, Python, ...). Running time and storage usage are not important, so a naive approach is fine. Understandability matters most.
I'm just wondering: just as we have the Levenshtein distance (or edit distance) between two strings, is there something similar for graphs?
I mean a scalar measure that gives the number of atomic operations (node and edge insertions/deletions) needed to transform a graph G1 into a graph G2.
I think graph edit distance is the measure you are looking for.
Graph edit distance measures the minimum number of graph edit operations needed to transform one graph into another; the allowed graph edit operations include:
Insert/delete an isolated vertex
Insert/delete an edge
Change the label of a vertex/edge (for labeled graphs)
However, computing the graph edit distance between two graphs is NP-hard. The standard exact algorithms are based on A* search, and there are also faster suboptimal algorithms.
You should look at the paper "A survey of graph edit distance".
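If you mainly need numbers rather than theory, note that networkx ships an exact graph_edit_distance function (exponential-time in the worst case, so only practical for small graphs):

```python
import networkx as nx

G1 = nx.cycle_graph(4)  # square: 4 nodes, 4 edges
G2 = nx.path_graph(4)   # path:   4 nodes, 3 edges
# Exact GED between two small graphs.
print(nx.graph_edit_distance(G1, G2))  # 1.0: delete one edge of the cycle
```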
For general graphs this is NP-hard, as others have mentioned in their answers. But for trees there are well-known polynomial-time algorithms; perhaps the most famous of them is the Zhang-Shasha algorithm, published in 1989.
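For the tree case, the third-party Python package zss implements Zhang-Shasha. A minimal sketch, assuming its documented Node/simple_distance API:

```python
from zss import Node, simple_distance  # pip install zss

# Two labeled trees that differ by a single relabeling (b -> c).
A = Node("f", [Node("a"), Node("b")])
B = Node("f", [Node("a"), Node("c")])
print(simple_distance(A, B))  # 1
```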
Note:
The Levenshtein distance (or edit distance) is defined between two strings, and a string has a fixed linear order, so matching positions is trivial.
In a graph there is no such order: you may have to search through up to N! correspondences to establish the identity of each vertex and edge.
Comparing two graphs is easy once every vertex and edge has a unique index; the hard part is defining that identity.
Deciding whether two graphs admit such a correspondence at all is the graph isomorphism problem, which is famously hard (it is not known to be solvable in polynomial time).
You can search for "graph isomorphism" to read more.
What is the time complexity of computing betweenness centrality if we are given the shortest path predecessor matrix of a graph?
Predecessor matrix cells look like this (a path-reconstruction sketch follows the list):
if node i and node j are directly connected, then the value in the cell is 0;
if node i and node j are not connected, then the value in the cell is -1;
otherwise the cell holds predecessor(j): a single predecessor if there is exactly one shortest path, or an array of predecessors if there is more than one shortest path between i and j.
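To make the encoding concrete, here is a small hedged sketch (my own helper, not from any library) that reconstructs one shortest i -> j path from such a matrix:

```python
def one_shortest_path(P, i, j):
    """Walk the predecessor cells back from j to i. Assumes P[i][j] is
    0 (adjacent), -1 (unreachable), a node index, or a list of them."""
    if i == j:
        return [i]
    if P[i][j] == -1:
        return None                      # i and j are not connected
    path = [j]
    while P[i][j] != 0:                  # 0 means j is directly connected to i
        cell = P[i][j]
        j = cell[0] if isinstance(cell, list) else cell  # pick any predecessor
        path.append(j)
    path.append(i)
    return path[::-1]
```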
Thank you for your answer.
I am familiar with Brandes' algorithm. However, Brandes' algorithm computes the betweenness of every node in the network, and the time spent computing CB for one vertex is the same as for all vertices, because the algorithm can't be adapted to the single-vertex case.
So my idea was to store the predecessor matrix and use it to compute CB for one particular vertex, without having to wait for the CB computations of all the other vertices.
I am aware that I can't achieve a smaller time complexity, but I think a real difference in running time can be gained by not computing CB for all 7000 vertices; with this matrix I can compute CB for a single vertex only.
I think it is possible to compute CB in O(n^2 * L), where L is the average shortest-path length, when we have the predecessor matrix; a sketch of the idea follows.
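Here is a hedged sketch of that idea. It assumes a normalized encoding (mine, slightly different from the matrix above): pred[s][t] is the list of predecessors of t on shortest s -> t paths ([s] when s itself is the predecessor, [] for t == s), or None when t is unreachable from s.

```python
def betweenness_of_one_vertex(pred, v):
    """Betweenness of a single vertex v from a predecessor structure.
    pred[s][t]: list of predecessors of t on shortest s -> t paths,
    [] for t == s, None if t is unreachable from s."""
    n = len(pred)

    def path_counts(s):
        # sigma[t] = number of shortest s -> t paths, by memoized
        # recursion over the shortest-path DAG rooted at s. (For very
        # deep graphs, raise the recursion limit or rewrite iteratively.)
        sigma = [None] * n

        def count(t):
            if sigma[t] is None:
                if t == s:
                    sigma[t] = 1
                elif pred[s][t] is None:
                    sigma[t] = 0
                else:
                    sigma[t] = sum(count(p) for p in pred[s][t])
            return sigma[t]

        for t in range(n):
            count(t)
        return sigma

    sigma_v = path_counts(v)  # path counts out of v, reused for every source
    cb = 0.0
    for s in range(n):
        if s == v:
            continue
        sigma_s = path_counts(s)
        if sigma_s[v] == 0:
            continue  # v lies on no shortest path out of s
        # Targets t for which v is on a shortest s -> t path are exactly
        # the descendants of v in the s-rooted shortest-path DAG.
        succ = [[] for _ in range(n)]
        for t in range(n):
            for p in (pred[s][t] or []):
                succ[p].append(t)
        stack, reach = [v], {v}
        while stack:
            u = stack.pop()
            for w in succ[u]:
                if w not in reach:
                    reach.add(w)
                    stack.append(w)
        for t in reach:
            if t != v:
                # fraction of shortest s -> t paths that pass through v
                cb += sigma_s[v] * sigma_v[t] / sigma_s[t]
    return cb
```

For an undirected graph this counts each (s, t) pair in both directions, so halve the result if you use the unordered-pair definition. The work per source is linear in the size of the shortest-path DAG, which is consistent with the O(n^2 * L) estimate above.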
What is your opinion about this concept?
As far as I can find out, the best known algorithm for computing betweenness centrality is the one described in this paper:
Ulrik Brandes (2001). A Faster Algorithm for Betweenness Centrality. Journal of Mathematical Sociology 25(2):163-177.
You'll see that this algorithm computes, as a first step, the number of shortest paths between every pair of nodes. It is natural to do so in a way that simultaneously computes the predecessor matrix too. So it appears that there's no benefit to pre-computing the predecessor matrix: you get it essentially for free anyway while executing Brandes' algorithm.
(Of course, this isn't a proof that it makes no difference, and maybe someone else knows better. You might want to ask on cstheory.stackexchange.com.)
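For reference, here is a minimal Python sketch of Brandes' algorithm for unweighted graphs (names are mine). You can see that the predecessor lists (pred) and the path counts (sigma) fall out of the BFS phase for free, which is the point made above.

```python
from collections import deque

def brandes_betweenness(adj):
    """Unnormalized betweenness of every vertex; adj[u] is an iterable
    of the neighbours of u. Halve the result for undirected graphs if
    you use the unordered-pair definition."""
    n = len(adj)
    cb = [0.0] * n
    for s in range(n):
        # Phase 1: BFS from s, recording shortest-path predecessors
        # (pred) and the number of shortest paths (sigma).
        pred = [[] for _ in range(n)]
        sigma = [0] * n
        dist = [-1] * n
        sigma[s], dist[s] = 1, 0
        order = []  # vertices in non-decreasing distance from s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:              # w discovered for the first time
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:   # a shortest path to w runs via u
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        # Phase 2: accumulate dependencies in order of decreasing distance.
        delta = [0.0] * n
        for w in reversed(order):
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return cb
```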
I have a huge graph that I would like to process using many machines.
I would like to compute whether the graph diameter is greater than 50.
How would I split the data, and how would I write a parallel algorithm to calculate this?
(The return value is a boolean.)
The graph diameter is the greatest distance between any pair of vertices.
The standard way to figure this out would be an all-pairs shortest path algorithm -- the Floyd-Warshall algorithm is a good place to start. Another option using Hadoop is located here.
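As a single-machine baseline (a sketch of the starting point, not a distributed implementation), Floyd-Warshall plus a max over the distance matrix answers the boolean question. For each fixed k, the i and j loops are independent of one another, which is exactly what block-decomposed parallel versions exploit.

```python
INF = float("inf")

def diameter_exceeds(w, threshold=50):
    """w[i][j] is the edge weight (1 for unweighted graphs) or INF when
    there is no edge. Assumes a connected graph; if it is disconnected,
    the diameter is infinite and the answer is trivially True."""
    n = len(w)
    dist = [row[:] for row in w]              # work on a copy
    for i in range(n):
        dist[i][i] = 0
    for k in range(n):                        # allow paths through vertex k
        for i in range(n):
            dik = dist[i][k]
            if dik == INF:
                continue                      # no path from i through k
            for j in range(n):
                if dik + dist[k][j] < dist[i][j]:
                    dist[i][j] = dik + dist[k][j]
    diameter = max(d for row in dist for d in row if d < INF)
    return diameter > threshold
```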
Take a look at Parallel implementation of graph diameter algorithms
Also: Parallel Graph Algorithms
In an n-dimensional Euclidean metric space, given a Voronoi decomposition of a set S of points (computed with the Bowyer-Watson algorithm), is it possible to obtain k (with k < |S|) geometric clusters by merging multiple Dirichlet domains?