Simple-to-understand algorithm for calculating betweenness centrality

I am looking for a simple-to-understand algorithm to calculate betweenness centrality. The input should be an adjacency and/or distance matrix.
The algorithm can be provided in any form or implementation (e.g. pseudocode, Python, ...). Running time and storage usage are not important, so a naive approach is fine. Understandability is what matters most.
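One naive way to do this, for an unweighted graph given as a 0/1 adjacency matrix, is to run a BFS from every node to get shortest-path distances and shortest-path counts, and then use the fact that a node v lies on a shortest s-t path exactly when dist(s,v) + dist(v,t) = dist(s,t). The following is only a sketch of that idea in Python (the function and variable names are my own), not an optimized implementation:

    from collections import deque

    def betweenness(adj):
        """Naive betweenness centrality for an unweighted graph given as an
        adjacency matrix (adj[i][j] truthy when there is an edge i -> j)."""
        n = len(adj)
        INF = float("inf")

        # BFS from every source s:
        #   dist[s][v]  = length of a shortest s-v path
        #   sigma[s][v] = number of distinct shortest s-v paths
        dist = [[INF] * n for _ in range(n)]
        sigma = [[0] * n for _ in range(n)]
        for s in range(n):
            dist[s][s] = 0
            sigma[s][s] = 1
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in range(n):
                    if adj[u][v]:
                        if dist[s][v] == INF:             # v seen for the first time
                            dist[s][v] = dist[s][u] + 1
                            queue.append(v)
                        if dist[s][v] == dist[s][u] + 1:  # u precedes v on a shortest path
                            sigma[s][v] += sigma[s][u]

        # v lies on a shortest s-t path iff dist(s,v) + dist(v,t) == dist(s,t),
        # and the number of such paths through v is sigma(s,v) * sigma(v,t).
        bc = [0.0] * n
        for s in range(n):
            for t in range(n):
                if s == t or sigma[s][t] == 0:
                    continue
                for v in range(n):
                    if v != s and v != t and dist[s][v] + dist[v][t] == dist[s][t]:
                        bc[v] += sigma[s][v] * sigma[v][t] / sigma[s][t]
        return bc

For an undirected graph every pair (s, t) is counted twice, so halve the result if you want the usual definition; for a weighted distance matrix you would replace the BFS with Dijkstra.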

Related

Do I have to implement an adjacency matrix with BFS?

I am trying to implement the BFS algorithm using a queue, and I do not want to look up any online code, for learning purposes. All I am doing is following the algorithm and trying to implement it. I have a question regarding the adjacency matrix (a data structure for graphs).
I know one of the common graph data structures is the adjacency matrix. So my question is: do I have to implement an adjacency matrix along with the BFS algorithm, or does it not matter?
I really got confused.
One of the things that confused me: where should the graph's data be stored if there is no data structure?
Sincerely
Breadth-first search assumes you have some way of representing the graph structure you're working with, and its efficiency depends on the representation you choose, but you aren't constrained to use an adjacency matrix. Many implementations of BFS have the graph represented implicitly somehow (for example, as a 2D array storing a maze, or as some sort of game state) and work just fine. You can also use an adjacency list, which is particularly efficient for use in BFS.
The particular code you'll be writing will depend on how the graph is represented, but don't feel constrained to do it one way. Choose whatever's easiest for your application.
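For example, here is a minimal BFS sketch over an adjacency-list representation (the names adj_list and start are just placeholders of my choosing):

    from collections import deque

    def bfs(adj_list, start):
        """Visit every node reachable from start in breadth-first order;
        adj_list[u] is the list of neighbours of u."""
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj_list[u]:
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return order

With adj_list = {0: [1, 2], 1: [2], 2: []}, bfs(adj_list, 0) returns [0, 1, 2].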
The best way to choose data structures is in terms of the operations. With a complete list of operations in hand, evaluate implementations with respect to the criteria important to the problem: space, speed, code size, etc.
For BFS, the operations are pretty simple:
Set<Node> getSources(Graph graph) // all in graph with no in-edges
Set<Node> getNeighbors(Node node) // all reachable from node by out-edges
Now we can evaluate graph data structure options in terms of n=number of nodes:
Adjacency matrix:
getSources is O(n^2) time
getNeighbors is O(n) time
Vector of adjacency lists (alone):
getSources is O(n) time
getNeighbors is O(1) time
"Clever" vector of adjacency lists:
getSources is O(1) time
getNeighbors is O(1) time
The cleverness is just maintaining the sources set as the graph is constructed, so the cost is amortized over edge insertions. I.e., as you create a node, add it to the sources set because it has no in-edges yet. As you add an edge, remove the to-node from the sources set.
Now you can make an informed choice based on run time. Do the same for space, simplicity, or whatever other considerations are in play. Then choose and implement.
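As a rough illustration of that bookkeeping, a sketch only (the class and method names are mine, mirroring the operations above):

    class Graph:
        """Adjacency-list graph that keeps the set of sources (nodes with no
        in-edges) up to date as edges are added, so get_sources is O(1)."""
        def __init__(self):
            self.out_edges = {}   # node -> list of successors
            self.sources = set()  # nodes with no in-edges so far

        def add_node(self, node):
            if node not in self.out_edges:
                self.out_edges[node] = []
                self.sources.add(node)   # no in-edges yet

        def add_edge(self, u, v):
            self.add_node(u)
            self.add_node(v)
            self.out_edges[u].append(v)
            self.sources.discard(v)      # v now has an in-edge

        def get_sources(self):
            return self.sources

        def get_neighbors(self, node):
            return self.out_edges[node]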

Why Kruskal clustering generates suboptimal classes?

I was trying to develop a clustering algorithm tasked with finding k classes in a set of 2D points (with k given as input), using the Kruskal algorithm lightly modified to find k spanning trees instead of one.
I compared my output to a proposed optimum (1) using the Rand index, which for k = 7 came out at 95.5%. The comparison can be seen at the link below.
Problem:
The set has 5 clearly separated clusters that are easily classified by the algorithm, but the results are rather disappointing for k > 5, which is when things start to get tricky. I believe that my algorithm is correct; maybe the data is just particularly bad for a Kruskal approach. Single-linkage agglomerative clustering, which is what Kruskal's approach amounts to, is known to perform badly on some problems, since it reduces the assessment of cluster quality to a single similarity between one pair of points.
The idea of the algorithm is very simple (a code sketch follows the steps below):
Make a complete graph from the data set, with the weight of each edge being the Euclidean distance between the pair of points.
Sort the edge list by weight.
For each edge (in order), add it to the spanning forest if it doesn't form a cycle. Stop when all the edges have been traversed or when the remaining forest has k trees.
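Here is a compact sketch of those steps in Python with a union-find forest; the function name and the assumption that the data is a list of (x, y) tuples are my own:

    from itertools import combinations
    from math import dist

    def kruskal_clusters(points, k):
        """Single-linkage clustering via Kruskal, as described above:
        stop once the spanning forest has exactly k trees."""
        n = len(points)
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        # Complete graph, edges sorted by Euclidean distance.
        edges = sorted(combinations(range(n), 2),
                       key=lambda e: dist(points[e[0]], points[e[1]]))

        trees = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:              # adding the edge does not form a cycle
                parent[ru] = rv
                trees -= 1
                if trees == k:        # the remaining forest has k trees
                    break

        # Label each point by the root of its tree; equal labels = same cluster.
        return [find(i) for i in range(n)]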
Bottom line:
Why is the algorithm failing like that? Is it Kruskal's fault? If so, why precisely? Any suggestions to improve the results without abandoning Kruskal?
(1): Gionis, A., H. Mannila, and P. Tsaparas. Clustering aggregation. ACM Transactions on Knowledge Discovery from Data (TKDD), 2007, 1(1): 1-30.
This is known as the single-link effect.
Kruskal seems to be a semi-clever way of computing single-linkage clustering. The naive approach for "hierarchical clustering" is O(n^3), and the Kruskal approach should be O(n^2 log n) due to having to sort the n^2 edges.
Note that SLINK can do single-linkage clustering in O(n^2) runtime and O(n) memory.
Have you tried loading your data set into, e.g., ELKI and comparing your result to its single-link clustering?
To get better results, try other linkages (usually O(n^3) runtime) or density-based clustering such as DBSCAN (O(n^2) without an index, O(n log n) with one). On this toy data set, epsilon=2 and minPts=5 should work well.
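If ELKI is not at hand, a rough equivalent of that suggestion with scikit-learn's DBSCAN (a substitute library on my part, assuming the 2D points are in an (n, 2) array) would be:

    from sklearn.cluster import DBSCAN

    # points: the (n, 2) array of 2D points from the question (name assumed here).
    # eps and min_samples correspond to the epsilon and minPts suggested above.
    labels = DBSCAN(eps=2, min_samples=5).fit_predict(points)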
The bridges between clusters that should be different are a classic example of Kruskal getting things wrong. You might try, for each point, overwriting the shortest distance from that point with the second shortest distance from that point - this might increase the lengths in the bridges without increasing other lengths.
By eye, this looks like something K-means might do well - except for the top left, the clusters are nearly circular.
You can try the Manhattan distance, but to get better results you could try a classic line and circle detection algorithm.

Is A* really better than Dijkstra in real-world path finding?

I'm developing a path-finding program. It is said that, in theory, A* is better than Dijkstra; in fact, the latter is a special case of the former. However, when testing it on real-world data, I began to doubt whether A* is really better.
I used data of New York City, from 9th DIMACS Implementation Challenge - Shortest Paths, in which each node's latitude and longitude is given.
When applying A*, I need to calculate the spherical distance between two points using the Haversine formula, which involves sin, cos, arcsin, and a square root. All of those are very time-consuming.
The result is,
Using Dijkstra: 39.953 ms, expanded 256540 nodes.
Using A*: 108.475 ms, expanded 255135 nodes.
Notice that A* expanded 1405 fewer nodes. However, the time spent computing the heuristic is much more than the time that saved.
To my understanding, the reason is that in a very large real graph the contribution of the heuristic is very small and its effect can be ignored, while the time spent computing it dominates.
I think you're more or less missing the point of A*. It is intended to be a very performant algorithm, partly by intentionally accepting some extra work in exchange for cheap heuristics, and you're tearing that to bits when you burden it with a heavy, extremely accurate prediction heuristic.
For node selection in A* you should just use an approximation of the distance. Simply using (latdiff^2) + (lngdiff^2) as the approximate distance should make your algorithm much more performant than Dijkstra, and it won't give much worse results in any real-world scenario. Actually, the results should even be exactly the same if you calculate the travelled distance of a selected node properly with the Haversine formula. Just use a cheap function for selecting potential next traversals.
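To make the trade-off concrete, here is a sketch of both heuristics side by side: the Haversine formula as the question describes it, and the cheap squared-difference approximation suggested above (function names are mine):

    from math import radians, sin, cos, asin, sqrt

    def haversine(lat1, lon1, lat2, lon2, r=6371.0):
        """Great-circle distance in kilometres: accurate, but each call pays for
        several trigonometric functions and a square root."""
        dlat = radians(lat2 - lat1)
        dlon = radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * r * asin(sqrt(a))

    def cheap_heuristic(lat1, lon1, lat2, lon2):
        """Squared coordinate difference, as suggested above: not a distance in
        any physical unit, but far cheaper to evaluate per expanded node."""
        return (lat1 - lat2) ** 2 + (lon1 - lon2) ** 2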
A* can be reduced to Dijkstra by setting some trivial parameters. There are three possible ways in which it does not improve on Dijkstra:
The heuristic used is incorrect: it is not a so-called admissible heuristic. A* should use a heuristic which does not overestimate the distance to the goal as part of its cost function.
The heuristic is too expensive to calculate.
The real-world graph structure is special.
In the case of the latter you should try to build on existing research, e.g. "Highway Dimension, Shortest Paths, and Provably Efficient Algorithms" by Abraham et al.
Like everything else in the universe, there's a trade-off. You could use Dijkstra's algorithm itself to precisely calculate the heuristic, but that would defeat the purpose, wouldn't it?
A* is a great algorithm in that it makes you lean towards the goal, having a general idea of which direction to expand first. That said, you should keep the heuristic as simple as possible, because all you need is a general direction.
In fact, more precise geometric calculations that are not based on the actual data do not necessarily give you a better direction. As long as they are not based on the data, all those heuristics just give you a direction, and they are all equally (in)correct.
In general A* is more performant than Dijkstra's, but it really depends on the heuristic function you use in A*. You'll want an h(n) that's optimistic, so it still finds the lowest-cost path: h(n) should not exceed the true cost. If h(n) overestimates the cost, then you'll end up in a situation like the one you've described.
Could you post your heuristic function?
Also bear in mind that the performance of these algorithms highly depends on the nature of the graph, which is closely related to the accuracy of the heuristic.
If you compare Dijkstra vs A* when navigating out of a labyrinth where each passage corresponds to a single graph edge, there is probably very little difference in performance. On the other hand, if the graph has many edges between "far-away" nodes, the difference can be quite dramatic. Think of a robot (or an AI-controlled computer game character) navigating in almost open terrain with a few obstacles.
My point is that, even though the New York dataset you used is definitely a good example of "real-world" graph, it is not representative of all real-world path finding problems.

General purpose algorithm for triangulating an undirected graph?

I am playing around with implementing a junction tree algorithm for belief propagation on a Bayesian Network. I'm struggling a bit with triangulating the graph so the junction trees can be formed.
I understand that finding the optimal triangulation is NP-complete, but can you point me to a general purpose algorithm that results in a 'good enough' triangulation for relatively simple Bayesian Networks?
This is a learning exercise (hobby, not homework), so I don't care much about space/time complexity as long as the algorithm results in a triangulated graph given any undirected graph. Ultimately, I'm trying to understand how exact inference algorithms work before I even try doing any sort of approximation.
I'm tinkering in Python using NetworkX, but any pseudo-code description of such an algorithm using typical graph traversal terminology would be valuable.
Thanks!
If Xi is a candidate variable (node) to be eliminated, then:
S(i) is the size of the clique created by eliminating this variable
C(i) is the sum of the sizes of the cliques of the subgraph induced by Xi and its adjacent nodes
Heuristic:
At each step, select a variable Xi, among the set of candidate variables, with minimal S(i)/C(i)
Reference: Heuristic Algorithms for the Triangulation of Graphs
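A sketch of that elimination heuristic on top of NetworkX (which the question already uses) might look like the following; the function name, and the use of nx.find_cliques to enumerate the cliques of the local subgraph, are my own reading of S(i) and C(i):

    import itertools
    import networkx as nx

    def triangulate(graph):
        """Elimination-based triangulation using the S(i)/C(i) heuristic above.
        Returns a chordal (triangulated) copy of the input graph."""
        g = graph.copy()        # working copy that gets nodes eliminated
        tri = graph.copy()      # output graph that accumulates fill-in edges

        while g.number_of_nodes() > 0:
            def score(node):
                nbrs = list(g.neighbors(node))
                # S(i): size of the clique formed by node and its neighbours
                # when node is eliminated (interpretation assumed here).
                s = len(nbrs) + 1
                # C(i): sum of clique sizes in the subgraph on node + neighbours.
                sub = g.subgraph(nbrs + [node])
                c = sum(len(clq) for clq in nx.find_cliques(sub))
                return s / c

            best = min(g.nodes, key=score)
            nbrs = list(g.neighbors(best))

            # Fill in: connect all neighbours of the eliminated node pairwise.
            for u, v in itertools.combinations(nbrs, 2):
                if not g.has_edge(u, v):
                    g.add_edge(u, v)
                    tri.add_edge(u, v)
            g.remove_node(best)

        return tri

Each elimination turns the neighbours of the chosen node into a clique, which is what guarantees the resulting graph is chordal.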

Incidence matrix instead of adjacency matrix

What kinds of graph problems are faster (in terms of big-O) to solve using an incidence matrix data structure instead of the more widespread adjacency matrix?
The space complexities of the structures are:
Adjacency: O(V^2)
Incidence: O(VE)
With the consequence that an incidence structure saves space if there are many more vertices than edges.
You can look at the time complexity of some typical graph operations:
Find all vertices adjacent to a vertex:
Adj: O(V)
Inc: O(VE)
Check if two vertices are adjacent:
Adj: O(1)
Inc: O(E)
Count the valence of a vertex:
Adj: O(V)
Inc: O(E)
And so on. For any given algorithm, you can use building blocks like the above to calculate which representation gives you better overall time complexity.
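For instance, the "find all vertices adjacent to a vertex" building block looks like this in the two representations (a sketch, assuming adj is a V x V matrix and inc a V x E matrix with inc[u][e] nonzero when vertex u is incident to edge e):

    def neighbors_adjacency(adj, u):
        """Neighbours of u from a V x V adjacency matrix: O(V)."""
        return [v for v, connected in enumerate(adj[u]) if connected]

    def neighbors_incidence(inc, u):
        """Neighbours of u from a V x E incidence matrix: O(VE), since every
        edge column is scanned and then resolved to its other endpoint."""
        result = []
        for e in range(len(inc[0])):            # scan all E edge columns
            if inc[u][e]:
                for v in range(len(inc)):       # find the other endpoint
                    if v != u and inc[v][e]:
                        result.append(v)
        return result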
As a final note, using a matrix of any kind is extremely space-inefficient for all but the densest graphs, and I recommend against using either unless you've consciously dismissed alternatives like adjacency lists.
I personally have never found a real application of the incidence matrix representation in a programming contest or research problem. I think it may be useful for proving some theorems or for some very special problems. One book gives "counting the number of spanning trees" as an example of a problem for which this representation is useful.
Another issue with this representation is that it makes little sense to store it, because it is really easy to compute dynamically (i.e. to answer what a given cell contains) from the list of edges.
It may seem more useful for hypergraphs, but only if they are dense.
So my opinion is: it is useful only for theoretical work.

Resources