Community Detection in complete and weighted networks - algorithm

I have a complete network graph in which every vertex is connected to every other vertex; the edges differ only in their weights. An example would be a trade network, where every country is connected to every other country in some way and the connections differ only in trading volume.
Now the question is how I could perform community detection on that kind of network. The usual suspects (algorithms) only perform well on either unweighted or incomplete networks. The main problem is that the geodesic distance is the same everywhere.
Two options came to mind:
Cut the network into smaller pieces by removing edges below a certain "weight-threshold level"
Or use a hierarchical clustering algorithm to turn the whole network into a blockmodel. But I think the problem of "no variance in geodesic terms" would remain.

Several methods were suggested.
One simple yet effective method was suggested in Fast unfolding of communities in large networks (Blondel et al., 2008). It supports weighted networks. Quoting from the abstract:
We propose a simple method to extract the community structure of large
networks. Our method is a heuristic method that is based on modularity
optimization. It is shown to outperform all other known community
detection methods in terms of computation time. Moreover, the quality
of the communities detected is very good, as measured by the so-called
modularity.
Quoting from the paper:
We now introduce our algorithm that finds high modularity partitions
of large networks in short time and that unfolds a complete
hierarchical community structure for the network, thereby giving
access to different resolutions of community detection.
So it is supposed to work well for complete graphs, but you should check this for your data.
A C++ implementation is available here (now maintained here).
Your other idea - using a weight threshold - may prove to be a good pre-processing step, especially for algorithms that won't partition complete graphs. I believe it is best to set the threshold to some percentile (e.g. the median) of the weights.
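For illustration, here is a minimal sketch of both ideas in Python with networkx (assuming a networkx version that ships louvain_communities, i.e. 2.8 or later); the random weights are just a stand-in for real trade volumes:

```python
import random
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Build a complete graph with random edge weights (stand-in for trade volumes).
n = 30
G = nx.complete_graph(n)
for u, v in G.edges():
    G[u][v]["weight"] = random.random()

# Option 1: run the Louvain method directly on the weighted complete graph.
communities = louvain_communities(G, weight="weight", seed=42)
print("Louvain communities:", communities)

# Option 2: threshold at the median weight first, then detect communities on the
# resulting (no longer complete) graph. Note that nodes whose every edge falls
# below the threshold drop out of the thresholded graph.
weights = sorted(d["weight"] for _, _, d in G.edges(data=True))
median = weights[len(weights) // 2]
H = nx.Graph((u, v, d) for u, v, d in G.edges(data=True) if d["weight"] >= median)
print("Communities after thresholding:", louvain_communities(H, weight="weight", seed=42))
```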

Related

top-down community detection in a network

I'm trying to find a way to find network communities in a top-down way. Most of the algorithms available (e.g. in the igraph package) work bottom-up - that is, they start by assuming all nodes are singleton communities and then combine them into larger communities. I want to go the other way around, similar to how decision trees are built: start with the whole network, then find a split that improves some "measure of information", etc.
Does anyone know of such an algorithm, or such a measure? I can't find one in the literature, but maybe I am missing something.
Also, what bothers me with some measures of modularity is that if you think of the whole network as one module, then all edges are within the module and no out-of-module edges exist, so this looks like a perfect partition into modules. Is there a measure that overcomes this limitation?
I think Newman's algorithm meets your requirements.
It works by computing "network modularity" and then splitting the network into two groups. After that it recursively applies the same principle to the newly formed groups until no further increase in modularity is possible.
It should also be implemented in igraph, at least in the R version.
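For reference, the same method is also exposed in python-igraph as community_leading_eigenvector; a minimal sketch, using the built-in Zachary karate club graph purely as an example:

```python
import igraph as ig

# Load a small built-in example graph.
g = ig.Graph.Famous("Zachary")

# Newman's leading-eigenvector method: recursively splits the graph
# as long as the split increases modularity.
clustering = g.community_leading_eigenvector()
print(clustering)
print("modularity:", clustering.modularity)
```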

Neural Network Basics

I'm a computer science student and for this year's project, I need to create and apply a Genetic Algorithm to something. I think Neural Networks would be a good thing to apply it to, but I'm having trouble understanding them. I understand the general concepts, but none of the websites out there really explain the following, which is blocking my understanding:
How the decision is made for how many nodes there are.
What the nodes actually represent and do.
What part the weights and bias actually play in classification.
Could someone please shed some light on this for me?
Also, I'd really appreciate it if you have any similar ideas for what I could apply a GA to.
Thanks very much! :)
Your question is quite complex and I don't think a small answer will fully satisfy you. Let me try, nonetheless.
First of all, there must be at least three layers in your neural network (assuming a simple feedforward one). The first is the input layer, with one neuron per input. The third layer is the output layer, with one neuron per output value (if you are classifying, there might be more than one if you want to assign a "belongs to" meaning to each neuron). The remaining layer is the hidden one, which stands between the input and the output. Determining its size is a complex task, as you can see in the following references:
comp.ai faq
a post on stack exchange
Nevertheless, the best way to proceed would be for you to state your problem more clearly (as far as industrial secrecy allows) and let us think a little more about your context.
The number of input and output nodes is determined by the number of inputs and outputs you have. The number of intermediate nodes is up to you. There is no "right" number.
Imagine a simple network: inputs( age, sex, country, married ), outputs( chance of death this year ). Your network might have 2 "hidden values", one depending on age and sex, the other depending on country and married. You put weights on each. For example, Hidden1 = age * weight1 + sex * weight2, and Hidden2 = country * weight3 + married * weight4. You then add another set of weights connecting Hidden1 and Hidden2 to the output variable.
Then you get data from, say, the census, and run it through your neural network to find out which weights best match the data. You can use genetic algorithms to test different sets of weights. This is useful when there are so many edges that you could not try every possible weighting exhaustively; a GA lets you "evolve" a good set of weights instead.
Then you test your weights on data from a different census to see how well it worked.
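Here is a toy sketch of that idea in Python: the data, the two hidden values and the hand-rolled GA are all made up purely for illustration, not a recommended implementation:

```python
import random

# Toy training data: (age, sex, country, married) -> chance of death this year.
# All values are made up purely for illustration.
DATA = [
    ((0.8, 1.0, 0.2, 1.0), 0.30),
    ((0.2, 0.0, 0.9, 0.0), 0.05),
    ((0.5, 1.0, 0.5, 1.0), 0.15),
    ((0.9, 0.0, 0.1, 0.0), 0.40),
]

def predict(weights, inputs):
    w1, w2, w3, w4, w5, w6 = weights
    age, sex, country, married = inputs
    hidden1 = age * w1 + sex * w2          # first hidden value
    hidden2 = country * w3 + married * w4  # second hidden value
    return hidden1 * w5 + hidden2 * w6     # output

def fitness(weights):
    # Lower mean squared error on the toy data -> higher fitness.
    mse = sum((predict(weights, x) - y) ** 2 for x, y in DATA) / len(DATA)
    return -mse

def mutate(weights, rate=0.1):
    return tuple(w + random.gauss(0, rate) for w in weights)

# Simple GA: keep the best half of the population, refill with mutated copies.
population = [tuple(random.uniform(-1, 1) for _ in range(6)) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print("best weights:", best, "fitness:", fitness(best))
```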
... my major barrier to understanding this though is understanding how the hidden layer actually works; I don't really understand how a neuron functions and what the weights are for...
Every node in the middle layer is a "feature detector" -- it will (hopefully) "light up" (i.e., be strongly activated) in response to some important feature in the input. The weights are what emphasize an aspect of the previous layer; that is, the set of input weights to a neuron correspond to what nodes in the previous layer are important for that feature.
If a weight connecting myInputNode to myMiddleLayerNode is 0, then you can tell that myInputNode is not important to whatever feature myMiddleLayerNode is detecting. If, though, the weight connecting myInputNode to myMiddleLayerNode is very large (either positive or negative), you know that myInputNode is quite important (if it's very negative it means "No, this feature is almost certainly not there", while if it's very positive it means "Yes, this feature is almost certainly there").
So a corollary of this is that you want the number of your middle-layer nodes to have a correspondence to how many features are needed to classify the input: too few middle-layer nodes and it will be hard to converge during training (since every middle-layer node will have to "double up" on its feature-detection) while too many middle-layer nodes may over-fit your data.
So... a possible use of a genetic algorithm would be to design the architecture of your network! That is, use a GA to set the number of middle-layer nodes and initial weights. Some instances of the population will converge faster and be more robust -- these could be selected for future generations. (Personally, I've never felt this was a great use of GAs since I think it's often faster just to trial-and-error your way into a decent NN architecture, but using GAs this way is not uncommon.)
You might find this wikipedia page on NeuroEvolution of Augmenting Topologies (NEAT) interesting. NEAT is one example of applying genetic algorithms to create the neural network topology.
The best way to explain an Artificial Neural Network (ANN) is to describe the biological process that it attempts to simulate - a neural network. The best example of one is the human brain. So how does the brain work (highly simplified for CS)?
The functional unit (for our purposes) of the brain is the neuron. It is a potential accumulator and "disperser". What that means is that after a certain amount of electric potential (think filling a balloon with air) has been reached, it "fires" (balloon pops). It fires electric signals down any connections it has.
How are neurons connected? Synapses. These synapses can have various weights (in real life due to stronger/weaker synapses from thicker/thinner connections). These weights allow a certain amount of a fired signal to pass through.
You thus have a large collection of neurons connected by synapses - the base representation for your ANN. Note that the input/output structures described by the others are an artifact of the type of problem to which ANNs are applied. Theoretically, any neuron can accept input as well. It serves little purpose in computational tasks however.
So now on to ANNs.
NEURONS: Neurons in an ANN are very similar to their biological counterpart. They are modeled either as step functions (which output "1" once a certain combined input signal is reached, and "0" at all other times) or as slightly more sophisticated activation functions (arctan, sigmoid, etc.) that produce a continuous output, though scaled similarly to a step. This is closer to the biological reality.
SYNAPSES: These are extremely simple in ANNs - just weights describing the connections between neurons. They are used simply to weight the neurons connected to the current one, but they still play a crucial role: synapses are the cause of the network's output. To clarify, training an ANN with a fixed structure and neuron activation function is simply the modification of the synapse weights. That is it. No other change is made in going from a "dumb" net to one that produces accurate results.
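A tiny sketch of the two neuron models mentioned above (step vs. sigmoid), in Python:

```python
import math

def step_neuron(inputs, weights, threshold=0.0):
    # Fires "1" once the combined weighted input exceeds the threshold.
    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0.0

def sigmoid_neuron(inputs, weights, bias=0.0):
    # Smooth, continuous version of the same idea.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(step_neuron([0.5, 0.9], [1.2, -0.4]))     # 1.0
print(sigmoid_neuron([0.5, 0.9], [1.2, -0.4]))  # ~0.56
```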
STRUCTURE:
There is no "correct" structure for a neural network. The structures are either
a) chosen by hand, or
b) allowed to grow as a result of learning algorithms (a la Cascade-Correlation Networks).
Assuming a hand-picked structure, these are actually chosen through careful analysis of the problem and the expected solution. Too few "hidden" neurons/layers and your structure is not complex enough to approximate a complex function. Too many, and your training time rapidly grows unwieldy. For this reason, the selection of inputs ("features") and the structure of a neural net are, IMO, 99% of the problem. The training and usage of ANNs is trivial in comparison.
To now address your GA concern: it is one of many, many techniques used to train the network by modifying the synapse weights. Why? Because in the end, a neural network's output is simply an extremely high-order surface in N dimensions. ANY surface optimization technique can be used to solve for the weights, and GAs are one such technique. The simple backpropagation method is akin to a dimension-reduced, gradient-based optimization technique.

How to sample a scale-free graph

Given a large scale-free graph (a social network graph), what's the best way to sample it such that the sample retains an acceptable abstraction of the properties of the original?
I have a large graph (Munmun's twitter dataset, if you know it). But I need a connected sample of that graph with a reasonably large diameter (tl;dr... reasons why on request... a diameter of 10 would be good).
The problem is that any kind of breadth-first search is likely to come across some massively connected nodes. So I start such a search, getting the friends of all nodes I come across. I inevitably hit some massively connected nodes and have to get all their friends. This is a problem because I end up with a large number of nodes that are close to each other in the graph. To make programmatic analysis feasible, I have to limit the number of nodes (and edges). The whole point of this exercise is to find shortest paths between nodes, so I'm generally interested in ALL of a node's neighbours. And that's the problem.
One hack around this is to limit the maximum number of connections per node that I accept into the sample. For instance, if I come across #barackobama in my breadth-first search, I ensure that I only accept some small proportion of his friends and ignore the rest. But would this hacked graph be worth a damn, or am I losing too much information in terms of finding shortest paths??
Hope that makes sense...
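For what it's worth, a minimal sketch of the degree-capping hack described in the question, in Python with networkx; the cap of 50 neighbours per node, the sample size, and the synthetic Barabási-Albert graph are arbitrary illustrative choices:

```python
import random
from collections import deque
import networkx as nx

def capped_bfs_sample(G, start, max_nodes=2000, neighbour_cap=50):
    """Breadth-first sample that accepts at most `neighbour_cap` neighbours
    of any single node, so hubs don't flood the sample."""
    sampled = {start}
    frontier = deque([start])
    while frontier and len(sampled) < max_nodes:
        node = frontier.popleft()
        neighbours = list(G.neighbors(node))
        if len(neighbours) > neighbour_cap:
            neighbours = random.sample(neighbours, neighbour_cap)
        for nb in neighbours:
            if nb not in sampled and len(sampled) < max_nodes:
                sampled.add(nb)
                frontier.append(nb)
    return G.subgraph(sampled).copy()

# Example on a synthetic scale-free graph standing in for the Twitter data.
G = nx.barabasi_albert_graph(50_000, 5)
sample = capped_bfs_sample(G, start=0)
print(sample.number_of_nodes(), sample.number_of_edges())
```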
Several sampling methods exist; how to choose one depends (amongst other things) on the properties you want to preserve. I found the literature review (section 3) in the thesis Sampling and Inference in Complex Networks [Maiya '11] very informative in that respect.
But you seem to have found a way of sampling your network, and now you want to find out if the sample is representative of the whole graph in terms of shortest paths. You could have a look at this paper: Complex Network Measurements: Estimating the Relevance of Observed Properties [Latapy & Magnien '08]. They describe a method to assess the representativeness of a sample with regard to various classic topological properties. To summarize their approach: they initially have access to the whole studied network, simulate some sampling process on these data with increasing sample size, monitor how the properties evolve as a function of sample size, and decide on an appropriate size once the properties of interest are stable enough. Their tool is freely available online.
Edit: the only ready-to-use tool I could find online is Albatross. The associated article Albatross Sampling: Robust and Effective Hybrid Vertex Sampling for Social Graphs [Jin et al. '11] also contains a nice review of existing sampling methods, some of which are implemented in the source code they provide.
Edit 2: I needed to use Albatross on a Linux system, so I did a Java port. It's very raw, but it seems to work fine. It's available on GitHub: https://github.com/vlabatut/Albatross
I am not sure if I understand your question correctly. I think your main question is how to compute the shortest path between two nodes in a giant, directed graph. Creating a subsample of the graph seems to be your attempt at an efficient solution. (But I may have misunderstood you completely.)
Perhaps this SO-Question has some pointers for you: Efficiently finding the shortest path in large graphs
The graphs in that question seem to be significantly smaller, though.
You might want to check the following: Gscaler: https://github.com/jayCool/Gscaler
This is a recent tool which produces synthetic scaled graphs.
It contains the jar file and the related paper for your reference.

What are good examples of problems that graphs can solve better than the alternative? [closed]

After reading Steve Yegge's Get That Job At Google article, I found this little quote interesting:
Whenever someone gives you a problem, think graphs. They are the most fundamental and flexible way of representing any kind of a relationship, so it's about a 50–50 shot that any interesting design problem has a graph involved in it. Make absolutely sure you can't think of a way to solve it using graphs before moving on to other solution types. This tip is important!
What are some examples of problems that are best represented and/or solved by graph data structures/algorithms?
One example I can think of: navigation units (a la Garmin, TomTom) that supply road directions from your current location to another utilize graphs and advanced pathfinding algorithms.
What are some others?
Computer Networks: Graphs intuitively model computer networks and the Internet. Often nodes represent end-systems or routers, while edges represent the connections between these systems.
Data Structures: Any data structure that makes use of pointers to link data together is making use of a graph of some kind. This includes tree structures and linked lists which are used all the time.
Pathing and Maps: Trying to find shortest or longest paths from some location to a destination makes use of graphs. This includes routing like you see in an application such as Google Maps, calculating paths for AI characters to take in a video game, and many other similar problems.
Constraint Satisfaction: A common problem in AI is to find some goal that satisfies a list of constraints. For example, for a university to set its course schedules, it needs to make sure that certain courses don't conflict, that a professor isn't teaching two courses at the same time, that the lectures occur during certain timeslots, and so on. Constraint satisfaction problems like this are often modeled and solved using graphs.
Molecules: Graphs can be used to model atoms and molecules for studying their interaction and structure among other things.
I am very interested in graph theory and I've used it to solve many different kinds of problems. You can solve a lot of path problems, matching problems, and structure problems using graphs.
Path problems have a lot of applications.
This was a CareerCup interview question.
Say you want to find the largest sum of a contiguous subarray. For example, [1, 2, 3, -1] has a largest sum of 6. Model it as a Directed Acyclic Graph (DAG): add a dummy source and a dummy destination, and connect consecutive positions with edges whose weights are the corresponding numbers. Now use the longest-path algorithm for DAGs to solve this problem.
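A sketch of that construction in Python: the longest-path DP simply walks the DAG's natural topological order (source, positions 0..n, destination) and is shown only to illustrate the reduction; Kadane's algorithm solves the same problem more directly:

```python
def max_subarray_via_dag(nums):
    n = len(nums)
    # Nodes are positions 0..n plus a dummy source S and destination T.
    # Edges: S -> i (weight 0), i -> i+1 (weight nums[i]), i -> T (weight 0).
    best = [0.0] * (n + 1)  # best[i] = longest path from S to position i
    for i in range(n):
        # Reach i+1 either by extending the best path through i,
        # or by starting fresh via the zero-weight edge S -> i+1.
        best[i + 1] = max(best[i] + nums[i], 0.0)
    # Longest S -> T path: the best value reachable at any position.
    return max(best)

print(max_subarray_via_dag([1, 2, 3, -1]))   # 6
print(max_subarray_via_dag([-2, 4, -1, 5]))  # 8
```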
Similarly, arbitrage problems in the financial world, or even geometry problems such as finding the longest overlapping structure, are path problems of the same flavour.
Some obvious ones would be network problems (where your network could contain computers, people, organisation charts, etc.).
You can glean a lot of structural information like
which point breaks the graph into two pieces
what is the best way to connect them
what is the best way to get from one place to another
is there a way to reach one place from another, etc.
I've solved a lot of project-management-related problems using graphs. A sequence of events can be pictured as a directed graph (if you don't have cycles then that's even better). So, now you can
sort the events according to their priority
you can find the event that is the most crucial (that is, the one that would unblock a lot of other tasks)
you can find the duration needed to complete the total project (a path problem), etc.; see the sketch after this list
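A small sketch in Python with networkx, using hypothetical task names and durations, that covers two of the points above: ordering the tasks (topological sort) and computing the total project duration (longest path through the DAG):

```python
import networkx as nx

# Hypothetical project: an edge means "must happen before", annotated with the
# duration (in days) of the predecessor task.
G = nx.DiGraph()
G.add_edge("design", "build", duration=5)
G.add_edge("design", "order parts", duration=5)
G.add_edge("order parts", "build", duration=3)
G.add_edge("build", "test", duration=10)
G.add_edge("test", "ship", duration=4)

# A valid execution order (topological sort).
print(list(nx.topological_sort(G)))

# The critical path and total project duration (longest path through the DAG).
print(nx.dag_longest_path(G, weight="duration"))
print(nx.dag_longest_path_length(G, weight="duration"))  # 22 days
```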
A lot of matching problems can be solved with graphs. For example, if you need to match processors to the workload, or match workers to their jobs. In my final exam, I had to match people to tables in restaurants. It follows the same principle (bipartite matching -> network flow algorithms). It's simple yet powerful.
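For the matching case, a minimal sketch with networkx's bipartite maximum matching; the workers and jobs are hypothetical:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical workers on one side, jobs on the other; an edge means
# "this worker can do this job".
B = nx.Graph()
workers = ["alice", "bob", "carol"]
jobs = ["welding", "painting", "wiring"]
B.add_nodes_from(workers, bipartite=0)
B.add_nodes_from(jobs, bipartite=1)
B.add_edges_from([
    ("alice", "welding"), ("alice", "painting"),
    ("bob", "painting"),
    ("carol", "painting"), ("carol", "wiring"),
])

# Maximum cardinality matching (Hopcroft-Karp under the hood).
matching = bipartite.maximum_matching(B, top_nodes=workers)
print({w: matching[w] for w in workers if w in matching})
```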
A special graph, a tree, has numerous applications in the computer science world. For example, in the syntax of a programming language, or in a database indexing structure.
Most recently, I also used graphs in compiler optimization problems. I am using Morgan's Book, which is teaching me fascinating techniques.
The list really goes on and on. Graphs are a beautiful mathematical abstraction for relations. You really can do wonders if you can model your problem correctly. And since graph theory has found so many applications, there is a lot of active research in the field. And that research, in turn, keeps producing even more applications, which fuels further research.
If you want to get started on graph theory, get a good beginner discrete math book (Rosen comes to my mind), and you can buy books from authors like Fould or Even. CLRS also has good graph algorithms.
Your source code is tree structured, and a tree is a type of graph. Whenever you hear people talking about an AST (Abstract Syntax Tree), they're talking about a kind of graph.
Pointers form graph structures. Anything that walks pointers is doing some kind of graph manipulation.
The web is a huge directed graph. Google's key insight, that led them to dominate in search, is that the graph structure of the web is of comparable or greater importance than the textual content of the pages.
State machines are graphs. State machines are used in network protocols, regular expressions, games, and all kinds of other fields.
It's rather hard to think of anything you do that does not involve some sort of graph structure.
An example most people are familiar: build systems. Make is the typical example, but almost any good build system relies on a Directed Acyclic Graph. The basic idea is that the direction models the dependency between a source and a target, and you should "walk" the graph in a certain order to build things correctly -> this is an example of topological sort.
Another example is a source control system: again based on a DAG. It is used during merging, for example, to find the common ancestor of two revisions.
Well, many program optimization algorithms that compilers use are based on graphs (e.g., building the call graph, control-flow analysis, lots of static analysis).
Many optimization problems are based on graphs. Since many problems are reducible to graph colouring and similar problems, many other problems are graph-based as well.
I'm not sure I agree that graphs are the best way to represent every relation and I certainly try to avoid these "got a nail, let's find a hammer" approaches. Graphs often have poor memory representations and many algorithms are actually more efficient (in practice) when implemented with matrices, bitsets, and other things.
OCR. Picture a page of text scanned at an angle, with some noise in the image, where you must find the space between lines of text. One way is to make a graph of pixels, and find the shortest path from one side of the page to the other, where the difference in brightness is the distance between pixels.
This example is from the Algorithm Design Manual, which has lots of other real world examples of graph problems.
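A rough sketch of that idea in Python with networkx: a tiny synthetic brightness grid stands in for the scanned page, edge weights are brightness differences, and the cheapest left-to-right path follows the gap between the two "text" rows:

```python
import networkx as nx

# Tiny synthetic "page": 0 = white background, 1 = dark text pixels.
page = [
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],  # a line of text
    [0, 0, 0, 0, 0, 0],  # the gap we want the path to follow
    [1, 1, 1, 1, 1, 1],  # another line of text
    [0, 0, 0, 0, 0, 0],
]
rows, cols = len(page), len(page[0])

# 4-connected pixel graph; edge weight = brightness difference between pixels.
G = nx.grid_2d_graph(rows, cols)
for (r1, c1), (r2, c2) in G.edges():
    G[(r1, c1)][(r2, c2)]["weight"] = abs(page[r1][c1] - page[r2][c2])

# Cheapest path from the left edge to the right edge of the gap row.
path = nx.dijkstra_path(G, (2, 0), (2, cols - 1), weight="weight")
print(path)  # stays inside row 2, the inter-line gap
```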
One popular example is garbage collection.
The collector starts with a set of references, then traverses all the objects they reference, then all the objects referenced there and so on. Everything it finds is added into a graph of reachable objects. All other objects are unreachable and collected.
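A toy sketch of the mark phase in Python, with the heap represented as a hypothetical dict of object references:

```python
# Hypothetical heap: object id -> ids of the objects it references.
heap = {
    "stack_frame": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["a"],        # cycle: still reachable
    "orphan": ["zombie"],
    "zombie": [],
}
roots = ["stack_frame"]

# Mark phase: traverse everything reachable from the roots.
reachable = set()
todo = list(roots)
while todo:
    obj = todo.pop()
    if obj not in reachable:
        reachable.add(obj)
        todo.extend(heap[obj])

# Sweep phase: everything not marked is garbage.
garbage = set(heap) - reachable
print("reachable:", reachable)  # stack_frame, a, b, c
print("garbage:", garbage)      # orphan, zombie
```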
To find out if two molecules can fit together. When developing drugs one is often interested in seeing if the drug molecules can fit into larger molecules in the body. The problem with determining whether this is possible is that molecules are not static. Different parts of the molecule can rotate around their chemical bindings so that a molecule can change into quite a lot of different shapes.
Each shape can be said to represent a point in a space of shapes. Solving this problem involves finding a path through this space. You can do that by creating a roadmap through the space, which is essentially a graph consisting of legal shapes, with edges saying which shape can turn into which. By using an A* graph search algorithm through this roadmap you can find a solution.
Okay that was a lot of babble that perhaps wasn't very understandable or clear. But my point was that graphs pop up in all kinds of problems.
Graphs are not data structures. They are mathematical representations of relations. Yes, you can think and theorize about problems using graphs, and there is a large body of theory about it. But when you need to implement an algorithm, you choose data structures that best represent the problem, not graphs. There are many data structures that represent general graphs, and even more for special kinds of graphs.
In your question, you mix these two things. The same theoretical solution may be stated in terms of graphs, but practical solutions may use different data structures to represent the graph.
The following are based on graph theory:
Binary trees and other trees such as Red-black trees, splay trees, etc.
Linked lists
Anything that's modelled as a state machine (GUIs, network stacks, CPUs, etc)
Decision trees (used in AI and other applications)
Complex class inheritance
IMHO most of the domain models we use in normal applications are in some respect graphs. If you look at UML diagrams, you will notice that, with a directed, labeled graph, you could easily translate them directly into a persistence model. There are some examples of that over at Neo4j.
Cheers
/peter
Social connections between people make an interesting graph example. I've tried to model these connections at the database level using a traditional RDBMS but found it way too hard. I ended up choosing a graph database, and it was a great choice because it makes it easy to follow connections (edges) between people (nodes).
Graphs are great for managing dependencies.
I recently started to use the Castle Windsor Container, after inspecting the Kernel I found a GraphNodes property. Castle Windsor uses a graph to represent the dependencies between objects so that injection will work correctly. Check out this article.
I have also used simple graph theory to develop a plugin framework, each graph node represent a plugin, once the dependencies have been defined I can traverse the graph to create a plugin load order.
I am planning on changing the algorithm to use Dijkstra's algorithm, so that each plugin is weighted with a specific version; that way a simple change will load only the latest version of each plugin.
I wish I had discovered this sooner. I like that quote: "Whenever someone gives you a problem, think graphs." I definitely think that's true.
Profiling and/or Benchmarking algorithms and implementations in code.
Anything that can be modelled as a foreign key in a relational database is essentially an edge between nodes in a graph.
Maybe that will help you think of examples, since most things are readily modelled in an RDBMS.
You could take a look at some of the examples in the Neo4j wiki,
http://wiki.neo4j.org/content/Domain_Modeling_Gallery
and the projects that Neo4j is used in (the known ones)
http://wiki.neo4j.org/content/Neo4j_In_The_Wild .
Otherwise, Recommender Algorithms are a good use for Graphs, see for instance PageRank, and other stuff at
https://github.com/tinkerpop/gremlin/wiki/pagerank
Analysing transaction serialisability in database theory.
You can utilise graphs anywhere you can define the problem domain objects into nodes and the solution as the flow of control and/or data amongst the nodes.
Considering the fact that trees are really connected acyclic graphs, there are even more areas where you can use graph theory.
Basically, nearly all common data structures - trees, lists, queues, etc. - can be thought of as types of graphs, some with different kinds of constraints.
In my experience, I have used graphs intensively for network flow problems, which arise in lots of areas like telecommunication network routing and optimisation, workload assignment, matching, supply chain optimisation and public transport planning.
Another interesting area is social network modelling, as a previous answer mentioned.
There are far more, like integrated circuit optimisation, etc.

Artificial Neural Network Question

Generally speaking what do you get out of extending an artificial neural net by adding more nodes to a hidden layer or more hidden layers?
Does it allow for more precision in the mapping, or does it allow for more subtlety in the relationships it can identify, or something else?
There's a very well known result in machine learning that states that a single hidden layer is enough to approximate any smooth, bounded function (the paper was called "Multilayer feedforward networks are universal approximators" and it's now almost 20 years old). There are several things to note, however.
The single hidden layer may need to be arbitrarily wide.
This says nothing about the ease with which an approximation may be found; in general, large networks are hard to train properly and fall victim to overfitting quite frequently (the exception is so-called "convolutional neural networks", which are really only meant for vision problems).
This also says nothing about the efficiency of the representation. Some functions require exponential numbers of hidden units if done with one layer but scale much more nicely with more layers (for more discussion of this, read Scaling Learning Algorithms Towards AI).
The problem with deep neural networks is that they're even harder to train. You end up with very very small gradients being backpropagated to the earlier hidden layers and the learning not really going anywhere, especially if weights are initialized to be small (if you initialize them to be of larger magnitude you frequently get stuck in bad local minima). There are some techniques for "pre-training" like the ones discussed in this Google tech talk by Geoff Hinton which attempt to get around this.
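A small numpy sketch of the vanishing-gradient point: in a deep sigmoid network with small random weights, the norm of the backpropagated gradient shrinks layer by layer (random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A deep feedforward net: 10 layers of 50 sigmoid units, small initial weights.
layers, width = 10, 50
weights = [rng.normal(0, 0.1, size=(width, width)) for _ in range(layers)]

# Forward pass on a random input, keeping the activations.
a = rng.normal(size=width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass of an arbitrary error signal; watch the gradient norm shrink
# as it moves toward the earlier layers.
delta = np.ones(width)
for i in reversed(range(layers)):
    delta = delta * activations[i + 1] * (1 - activations[i + 1])  # through the sigmoid of layer i
    delta = weights[i].T @ delta                                   # through the weights of layer i
    print(f"gradient norm at layer {i}: {np.linalg.norm(delta):.2e}")
```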
This is a very interesting question, but it's not so easy to answer. It depends on the problem you are trying to solve and on the type of neural network you are using. There are several neural network types.
In general, it's not so clear that more nodes mean more precision. Research shows that in most cases you need only one hidden layer. The number of nodes should be the minimal number required to solve the problem. If you don't have enough of them, you will not reach a solution.
On the other hand, once you have reached the number of nodes that is sufficient to solve the problem, you can add more and more of them and you will not see any further progress in the quality of the results.
That's why there are so many types of neural networks: they address different types of problems. So you have NNs for static problems, NNs for time-related problems, and so on. The number of nodes is not as important as their design.
When you add a hidden layer, you are creating combined features of the input. So, is the problem better tackled by more features of the existing input, or by higher-order features that come from combining existing features? This is the trade-off for a standard feed-forward network.
You have a theoretical reassurance that any function can be represented by a neural network with two hidden layers and non-linear activation.
Also, consider using additional resources for boosting, instead of adding more nodes, if you're not certain of the appropriate topology.
Very rough rules of thumb:
generally, use more elements per layer for bigger input vectors.
more layers may let you model more non-linear systems.
If the kind of network you are using has delays in propagation, more layers may allow modelling of time series. Take care to have time jitter in the delays or it won't work very well. If this is just gobbledegook to you, ignore it.
More layers let you insert recurrent features. This can be very useful for discrimination tasks. Your ANN implementation may not permit this.
HTH
The number of units per hidden layer accounts for the ANN's potential to describe an arbitrarily complex function. Some (complicated) functions may require many hidden nodes, or possibly more than one hidden layer.
When a function can be roughly approximated by a certain number of hidden units, any extra nodes will provide more accuracy... but this is only true if the training samples used are enough to justify this addition - otherwise what happens is "overconvergence". Overconvergence means that your ANN has lost its generalization ability because it has overemphasized the particular samples.
In general, it is best to use the fewest hidden units possible, as long as the resulting network gives good results. The additional training patterns required to justify more hidden nodes cannot be found easily in most cases, and accuracy is not the NNs' strong point.

Resources