How to sample a scale-free graph - social-networking

Given a large scale-free graph (a social network graph), what's the best way to sample it such that the sample retains an acceptable abstraction of the properties of the original?
I have a large graph (Munmun's twitter dataset, if you know it). But I need a connected sample of that graph with a reasonably large diameter (tl;dr... reasons why on request... a diameter of 10 would be good).
The problem is that any kind of breadth-first search is likely to come across some massively connected nodes. So I start such a search, getting the friends of every node I come across. Inevitably I hit some massively-connected nodes and have to fetch all their friends. This is a problem because I end up with a large number of nodes that are close to each other in the graph. To make programmatic analysis feasible, I have to limit the number of nodes (and edges). The whole point of this exercise is to find shortest paths between nodes, so I'm generally interested in ALL of a node's neighbours. And that's the problem.
One hack around this is to limit the maximum number of a node's connections that I'm interested in. For instance, if I come across #barackobama in my breadth-first search, I make sure I only accept some small proportion of his friends and ignore the rest. But would this hacked graph be worth a damn, or am I losing too much information in terms of finding shortest paths?
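For reference, here is roughly what I mean as a minimal Python sketch (get_friends stands in for however you actually fetch a node's neighbours; the cap and node limit are arbitrary numbers, not recommendations):

import random
from collections import deque

def capped_bfs_sample(seed, get_friends, max_nodes=10000, cap=50):
    # Snowball-sample by BFS, but keep at most `cap` neighbours of any hub
    # so massively-connected nodes don't flood the sample.
    visited = {seed}
    edges = []
    queue = deque([seed])
    while queue and len(visited) < max_nodes:
        node = queue.popleft()
        friends = list(get_friends(node))       # all neighbours of this node
        if len(friends) > cap:                  # hub: keep only a random subset
            friends = random.sample(friends, cap)
        for friend in friends:
            edges.append((node, friend))
            if friend not in visited:
                visited.add(friend)
                queue.append(friend)
    return visited, edges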
Hope that makes sense...

Several sampling methods exist; how to choose one depends (amongst other things) on the properties you want to preserve. I found the literature review (section 3) in the thesis Sampling and Inference in Complex Networks [Maiya '11] very informative in that respect.
But you seem to have found a way of sampling your network, and now you want to find out if the sample is representative of the whole graph in terms of shortest paths. You could have a look at this paper: Complex Network Measurements: Estimating the Relevance of Observed Properties [Latapy & Magnien '08]. They describe a method to assess the representativeness of a sample with respect to various classic topological properties. To summarize their approach: they initially have access to the whole studied network, and simulate some sampling process on these data with increasing sample size. They monitor how properties evolve as a function of sample size, and decide on an appropriate size once the properties of interest are stable enough. Their tool is freely available online.
Edit: the only ready-to-use tool I could find online is Albatross. The associated article Albatross Sampling: Robust and Effective Hybrid Vertex Sampling for Social Graphs [Jin et al. '11] also contains a nice review of existing sampling methods, some of which are implemented in the source code they provide.
Edit 2: I needed to use Albatross on a Linux system, so I did a Java port. It's very raw, but it seems to work fine. It's available on GitHub: https://github.com/vlabatut/Albatross
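If it helps, the stability check from the Latapy & Magnien approach above boils down to something like this rough Python sketch (using networkx; sample_fn, the property list and the assumption of undirected, connected-ish samples are all placeholders to adapt to your own pipeline):

import networkx as nx

def properties_vs_sample_size(G, sample_fn, sizes):
    # Simulate the sampling process at increasing sizes and watch how a few
    # topological properties evolve; the sample is "big enough" once the
    # curves flatten out.
    rows = []
    for n in sizes:
        S = sample_fn(G, n)                    # e.g. a BFS/snowball sample of n nodes
        S = S.subgraph(max(nx.connected_components(S), key=len)).copy()
        rows.append({
            "nodes": S.number_of_nodes(),
            "avg_degree": 2.0 * S.number_of_edges() / S.number_of_nodes(),
            "clustering": nx.average_clustering(S),
            "diameter": nx.diameter(S),        # exact diameter is expensive on large samples
        })
    return rows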

I am not sure if I understand your question correctly. I think your main question is how to compute the shortest path between two nodes in a giant, directed graph. Creating a subsample of the graph seems to be your attempt at an efficient solution. (But I may have misunderstood you completely.)
Perhaps this SO-Question has some pointers for you: Efficiently finding the shortest path in large graphs
The graphs in that question seem to be significantly smaller, though.

You might want to check the following: Gscaler: https://github.com/jayCool/Gscaler
This is a recent tool which produces synthetic scaled graphs.
It contains the jar file and the related paper for your reference.

Related

Community Detection in complete and weighted networks

I have a complete network graph where every vertex is connected to every other, and the edges differ only in their weights. An example would be a trade network, where every country is connected to every other country somehow and the connections differ only in trading volume.
Now the question is how I could perform community detection in that kind of network. The usual suspects (algorithms) only perform well on unweighted or incomplete networks. The main problem is that the geodesic distance is the same everywhere.
Two options came to mind:
Cut the network into smaller pieces by cutting it at a certain weight threshold
Or use a hierarchical clustering algorithm to turn the whole network into a blockmodel. But I think the problem of "no variance in geodesic terms" will remain.
Several methods were suggested.
One simple yet effective method was suggested in Fast unfolding of communities in large networks (Blondel et al., 2008). It supports weighted networks. Quoting from the abstract:
We propose a simple method to extract the community structure of large
networks. Our method is a heuristic method that is based on modularity
optimization. It is shown to outperform all other known community
detection methods in terms of computation time. Moreover, the quality
of the communities detected is very good, as measured by the so-called
modularity.
Quoting from the paper:
We now introduce our algorithm that finds high modularity partitions
of large networks in short time and that unfolds a complete
hierarchical community structure for the network, thereby giving
access to different resolutions of community detection.
So it is supposed to work well for complete graphs, but you had better check that for your data.
A C++ implementation is available here (now maintained here).
Your other idea - using a weight threshold - may prove to be a good pre-processing step, especially for algorithms which won't partition complete graphs. I believe it is best to set it to some percentile (e.g. the median) of the weights.
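If you work in Python, recent versions of networkx ship a Louvain implementation, so a quick check on your complete weighted graph could look like this sketch (the optional threshold argument is the pre-processing step just mentioned; the median weight is one reasonable choice for it):

import networkx as nx

def louvain_communities_weighted(G, threshold=None):
    # Optionally drop edges below a weight threshold (e.g. the median weight),
    # then run Louvain on what remains.
    H = G.copy()
    if threshold is not None:
        weak = [(u, v) for u, v, w in H.edges(data="weight") if w < threshold]
        H.remove_edges_from(weak)
    return nx.community.louvain_communities(H, weight="weight", seed=42)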

top-down community detection in a network

I'm trying to find a way to detect network communities in a top-down way. Most of the algorithms available (e.g. in the igraph package) work bottom up - that is, they start by assuming all nodes are singleton communities, and then combine them into larger communities. I want to go the other way around, similar to how decision trees are built: start with the whole network, then find a split that improves some "measure of information", and so on.
Does anyone know of such algorithm or such a measure? I can't find such in the literature, but maybe I am missing something.
Also, what bothers me with some measures of modularity is that if you think of the whole network as one module, then all edges are within the module and no between-module edges exist, so this looks like a perfect partition into modules. Is there a measure that overcomes this limitation?
I think Newman's algorithm meets your requirements.
It works by computing "network modularity" and then splitting the network into two groups. After that it recursively applies the same principle to the newly formed groups until no further increase in modularity is possible.
It should also be implemented in igraph, at least in the R version.
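If you ever need it outside R: python-igraph exposes the same divisive, modularity-based method (Newman's leading-eigenvector algorithm). A toy sketch, using one of igraph's built-in example networks:

from igraph import Graph

# Newman's leading-eigenvector method: repeatedly split the graph in two
# while modularity keeps improving, i.e. a top-down procedure.
g = Graph.Famous("Zachary")                  # toy network: Zachary's karate club
clusters = g.community_leading_eigenvector()
print(clusters.membership)                   # community id per vertex
print("modularity:", clusters.modularity)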

Which data clustering algorithm is appropriate to detect an unknown number of clusters in a time series of events?

Here's my scenario. Consider a set of events that happen at various places and times - as an example, consider someone high above recording the lightning strikes in a city during a storm. For my purposes, lightning strikes are instantaneous and can only hit certain locations (such as tall buildings). Also imagine each strike has a unique id so one can reference it later. There are about 100,000 such locations in this city (as you can guess, this is an analogy, as my current employer is sensitive about the actual problem).
For phase 1, my input is the set of (strike id, strike time, strike location) tuples. The desired output is the set of clusters of more than one event that hit the same location within a short time. The number of clusters is not known in advance (so k-means is not that useful here). What counts as 'short' could be predefined for a given clustering attempt. That is, I can set it to, say, 3 minutes, then run the algorithm; later try with 4 minutes or 10 minutes. Perhaps a nice touch would be for the algorithm to determine a 'strength' of clustering and recommend that for a given input, the most compact clustering is achieved by using a particular value for 'short', but this is not required initially.
For phase 2, I'd like to take into consideration the amplitude of the strike (i.e., a real number) and look for clusters that are both within a short time and with similar amplitudes.
I googled and checked the answers here about data clustering. The information is a bit bewildering (below is the list of links I found useful). AFAIK, k-means and related algorithms would not be useful because they require the number of clusters to be specified a priori. I'm not asking for someone to solve my problem (I like solving it), but some orientation in the large world of data clustering algorithms would be useful in order to save some time. Specifically, which clustering algorithms are appropriate when the number of clusters is unknown?
Edit: I realized the location is irrelevant, in the sense that although events happen all the time, I only need to cluster them per location. So each location has its own time-series of events that can thus be analyzed independently.
Some technical details:
- as the dataset is not that large, it can fit all in memory.
- parallel processing is a nice to have, but not essential. I only have a 4-core machine and MapReduce and Hadoop would be too much.
- the language I'm mostly familiar with is Java. I haven't yet used R and the learning curve for it would probably be too much for what time I was given. I'll have a look at it anyway in my spare time.
- for the time being, using tools to run the analysis is ok, I don't have to produce just code. I'm mentioning this because probably Weka will be suggested.
- visualization would be useful. Since the dataset is fairly large, the visualization should at least support zooming and panning. And to clarify: I don't need to build a visualization GUI, it's just a nice capability to use for checking the results produced with a tool.
Thank you. Questions that I found useful are: How to find center of clusters of numbers? statistics problem?, Clustering Algorithm for Paper Boys, Java Clustering Library, How to cluster objects (without coordinates), Algorithm for detecting "clusters" of dots
I would suggest you look into Mean Shift Clustering. The basic idea behind mean shift clustering is to take the data and perform a kernel density estimation, then find the modes in the density estimate; the regions in which data points converge towards the same mode define the clusters.
The nice thing about mean shift clustering is that the number of clusters does not have to be specified ahead of time.
I have not used Weka, so I am not sure if it has mean shift clustering. However if you are using MATLAB, here is a toolbox (KDE toolbox) to do it. Hope that helps.
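I don't know about Weka either, but if Python is an option, scikit-learn has a MeanShift implementation. A minimal sketch on per-location strike times (phase 2 would add an amplitude column and cluster in two dimensions; the toy numbers are made up):

import numpy as np
from sklearn.cluster import MeanShift

# toy data: strike times (in seconds) at one location
times = np.array([[10.0], [11.0], [12.5], [300.0], [301.0], [900.0]])

ms = MeanShift(bandwidth=180.0).fit(times)   # bandwidth plays the role of "short" (3 minutes here)
print(ms.labels_)                            # cluster id per strike
print(ms.cluster_centers_)                   # one centre per detected cluster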
Couldn't you just use hierarchical clustering with the difference in times of strikes as part of the distance metric?
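Concretely, with SciPy that could be a couple of lines per location, cutting the dendrogram at your 'short' window (a rough sketch with made-up times):

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

times = np.array([10.0, 11.0, 12.5, 300.0, 301.0, 900.0]).reshape(-1, 1)

Z = linkage(times, method="single")                   # single-link: merge by smallest time gap
labels = fcluster(Z, t=180.0, criterion="distance")   # cut the dendrogram at "short" = 3 minutes
print(labels)                                         # same label = same burst of strikes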
It's a bit late, but I'll still add this:
In R, there is a package called fpc which has a method pamk() that gives you the clusters. With pamk(), you do not need to specify the number of clusters initially; it estimates the number of clusters in the input data itself.

Ask for resource about fast ray-tracing algorithm

First, I am sorry for this rough question, but I don't want to introduce too many details, so I'll just ask for related resources like articles, libraries or tips.
My program needs to do intensive ray-triangle intersection computations (there are millions of rays and triangles), and my goal is to make it as fast as I can.
What I have done is:
Use the fastest ray-triangle algorithm that I know.
Use an octree (from Game Programming Gems 1, sections 4.10 and 4.11).
Use "An Efficient and Robust Ray–Box Intersection Algorithm", which is used in the octree traversal.
It is faster than before I applied those algorithms, but I believe it could be faster still. Could you please shed light on anything else that could make it faster?
Thanks.
The place to ask these questions is ompf2.com, a forum about realtime (although also non-realtime) raytracing.
OMPF forum is the right place for this question, but since I'm here today...
Don't use a ray/box intersection for octree traversal. You may use it for the root node of the tree, but that's it. Once you know the distances to the entry and exit of the root box, you can calculate the distances to the x, y, and z partition planes - the planes that subdivide the box. If the distances to the front (entry) and back (exit) are f and b respectively, then you can determine which child nodes of the box are hit by analyzing the f, b, x, y, z distances. You can also determine the order to traverse the child nodes and completely reject many of them.
At most 4 of the children can be hit since the ray starts in one octant and only changes octants when it crosses one of the 3 partition planes.
Also, since it becomes recursive you'll be needing the entry and exit distances for the child nodes. These distances are chosen from the set (f,b,x,y,z) which you've already computed.
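In case it helps to see the shape of it, here is a heavily stripped-down sketch in Python (node.center, node.children and the leaf intersection routine are placeholders for whatever your octree actually stores; a real implementation would be more careful and much faster):

def traverse(node, origin, direction, t_enter, t_exit, intersect_leaf):
    # Descend the octree using the distances to the three partition planes,
    # visiting only the child octants the ray actually passes through,
    # front to back.
    if node is None or t_enter > t_exit:
        return
    if not node.children:                      # leaf: test its triangles
        intersect_leaf(node, origin, direction, t_enter, t_exit)
        return
    # distances along the ray to the x, y and z mid-planes of this node
    cuts = [t_enter, t_exit]
    for axis in range(3):
        if abs(direction[axis]) > 1e-12:
            t = (node.center[axis] - origin[axis]) / direction[axis]
            if t_enter < t < t_exit:
                cuts.append(t)
    cuts.sort()
    # each consecutive interval lies entirely inside one child octant
    for t0, t1 in zip(cuts, cuts[1:]):
        mid = 0.5 * (t0 + t1)
        point = [origin[i] + mid * direction[i] for i in range(3)]
        child = sum(1 << i for i in range(3) if point[i] >= node.center[i])
        traverse(node.children[child], origin, direction, t0, t1, intersect_leaf)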
I have been optimizing this for a very long time, and can safely say you have about an order of magnitude performance still on the table for trees many levels deep. I started right where you are now.
There are several optimizations you can do, but all of them depend on the exact domain of your problem. As far as general algorithms go, you are on the right track. Depending on the domain, you could:
Introduce a portal system
Move the calculations to a GPU and take advantage of parallel computation
Use Bounding Volume Hierarchies, a quite popular trend in raytracing recently
You've already gotten a good start using a spatial sort coupled with fast intersection algorithms. For tracing single rays at a time, one of the best structures out there (for static scenes) is a K-d tree built using the Surface Area Heuristic.
However, for truly high-speed ray tracing you need to take advantage of:
Coherent packets of rays
Frusta
SIMD
I would suggest you start with "Ray Tracing Animated Scenes using Coherent Grid Traversal". It gives an easy-to-follow example of such a modern approach. You can also follow the references to see how these ideas are applied to K-d trees and BVHs.
On the same page, also check out "State of the Art in Ray Tracing Animated Scenes".
Another great set of resources are all the SIGGRAPH publications over the years. This is a very competitive conference, so these papers tend to be top-notch.
Finally, if you're willing to use existing code, check out the project page for OpenRT.
A useful resource I've seen is the Journal of Graphics Tools. Depending on your scenes, another BVH might be more appropriate than an octree.
Also, if you haven't looked at your performance with a profiler then you should. Shark is great on OS X, and I've gotten good results with Very Sleepy on Windows.

What are good examples of problems that graphs can solve better than the alternative? [closed]

After reading Steve Yegge's Get That Job At Google article, I found this little quote interesting:
Whenever someone gives you a problem, think graphs. They are the most fundamental and flexible way of representing any kind of a relationship, so it's about a 50–50 shot that any interesting design problem has a graph involved in it. Make absolutely sure you can't think of a way to solve it using graphs before moving on to other solution types. This tip is important!
What are some examples of problems that are best represented and/or solved by graph data structures/algorithms?
One example I can think of: navigation units (à la Garmin, TomTom) that supply road directions from your current location to another make use of graphs and advanced path-finding algorithms.
What are some others?
Computer Networks: Graphs intuitively model computer networks and the Internet. Often nodes will represent end-systems or routers, while edges represent connections between these systems.
Data Structures: Any data structure that makes use of pointers to link data together is making use of a graph of some kind. This includes tree structures and linked lists which are used all the time.
Pathing and Maps: Trying to find shortest or longest paths from some location to a destination makes use of graphs. This can include pathing like you see in an application like Google maps, or calculating paths for AI characters to take in a video game, and many other similar problems.
Constraint Satisfaction: A common problem in AI is to find some goal that satisfies a list of constraints. For example, for a university to set its course schedules, it needs to make sure that certain courses don't conflict, that a professor isn't teaching two courses at the same time, that the lectures occur during certain timeslots, and so on. Constraint satisfaction problems like this are often modeled and solved using graphs.
Molecules: Graphs can be used to model atoms and molecules for studying their interaction and structure among other things.
I am very interested in graph theory and I've used it to solve many different kinds of problems. You can solve a lot of path-related problems, matching problems, and structural problems using graphs.
Path problems have a lot of applications.
This was a CareerCup interview question.
Say you want to find the largest sum of a subarray. For example, [1, 2, 3, -1] has a largest subarray sum of 6. Model it as a Directed Acyclic Graph (DAG): add a dummy source and a dummy destination, and connect each node to the next with an edge whose weight corresponds to the number. Now use the longest path algorithm on the DAG to solve this problem.
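To make the reduction concrete, here is a small sketch (Python with networkx; node i stands for the gap before element i, so crossing the edge from i to i+1 picks up a[i], and the dummy source and destination let the subarray start and end anywhere):

import networkx as nx

def max_subarray_sum_via_dag(a):
    # Maximum subarray sum as a longest path in a DAG.
    G = nx.DiGraph()
    for i, x in enumerate(a):
        G.add_edge(i, i + 1, weight=x)        # crossing position i picks up a[i]
    for i in range(len(a) + 1):
        G.add_edge("src", i, weight=0)        # the subarray may start at any gap
        G.add_edge(i, "dst", weight=0)        # ... and end at any gap
    return nx.dag_longest_path_length(G, weight="weight")

print(max_subarray_sum_via_dag([1, 2, 3, -1]))   # 6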
Similarly, arbitrage problems in finance, or even geometry problems such as finding the longest overlapping structure, are path problems of the same kind.
Some obvious ones would be the network problems (where your network could contain computers, people, organisation charts, etc.).
You can glean a lot of structural information like
which point breaks the graph into two pieces
what is the best way to connect them
what is the best way to reach one place from another
is there a way to reach one place from another, etc.
I've solved a lot of project management related problems using graphs. A sequence of events can be pictured as a directed graph (if you don't have cycles then that's even better). So, now you can
sort the events according to their priority
you can find the event that is the most crucial (that is, the one that would unblock a lot of other tasks)
you can find the duration needed to solve the total project (path problem), etc.
A lot of matching problems can be solved with graphs. For example, if you need to match processors to the workload, or match workers to their jobs. In my final exam, I had to match people to tables in restaurants. It follows the same principle (bipartite matching -> network flow algorithms). It's simple yet powerful.
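The restaurant example, for instance, is only a few lines with networkx (toy data; maximum_matching runs Hopcroft-Karp underneath):

import networkx as nx
from networkx.algorithms import bipartite

# toy bipartite graph: which person is willing to sit at which table
B = nx.Graph()
B.add_edges_from([("alice", "t1"), ("alice", "t2"),
                  ("bob", "t1"), ("carol", "t2"), ("carol", "t3")])

people = {"alice", "bob", "carol"}
matching = bipartite.maximum_matching(B, top_nodes=people)
print({p: matching[p] for p in people if p in matching})   # person -> table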
A special graph, a tree, has numerous applications in the computer science world. For example, in the syntax of a programming language, or in a database indexing structure.
Most recently, I also used graphs in compiler optimization problems. I am using Morgan's Book, which is teaching me fascinating techniques.
The list really goes on and on. Graphs are a beautiful mathematical abstraction for relations. You really can do wonders if you can model the problem correctly. And since graph theory has found so many applications, there is a lot of active research in the field, and that research keeps producing new applications, which in turn fuels further research.
If you want to get started on graph theory, get a good beginner discrete math book (Rosen comes to my mind), and you can buy books from authors like Fould or Even. CLRS also has good graph algorithms.
Your source code is tree structured, and a tree is a type of graph. Whenever you hear people talking about an AST (Abstract Syntax Tree), they're talking about a kind of graph.
Pointers form graph structures. Anything that walks pointers is doing some kind of graph manipulation.
The web is a huge directed graph. Google's key insight, that led them to dominate in search, is that the graph structure of the web is of comparable or greater importance than the textual content of the pages.
State machines are graphs. State machines are used in network protocols, regular expressions, games, and all kinds of other fields.
It's rather hard to think of anything you do that does not involve some sort of graph structure.
An example most people are familiar with: build systems. Make is the typical example, but almost any good build system relies on a Directed Acyclic Graph. The basic idea is that the direction models the dependency between a source and a target, and you should "walk" the graph in a certain order to build things correctly -> this is an example of topological sort.
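Python even ships this in the standard library (graphlib, 3.9+); here is a toy dependency graph as a sketch:

from graphlib import TopologicalSorter

# target -> the things it depends on
deps = {
    "app":     {"parser", "network"},
    "parser":  {"utils"},
    "network": {"utils"},
    "utils":   set(),
}
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)   # e.g. ['utils', 'parser', 'network', 'app'] - dependencies built first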
Another example is a source control system: again based on a DAG. It is used in merging, for example, to find a common ancestor.
Well, many program optimization algorithms that compilers use are based on graphs (e.g., building the call graph, control flow analysis, lots of static analysis).
Many optimization problems are based on graphs. Since many problems are reducible to graph colouring and similar problems, many other problems are also graph-based.
I'm not sure I agree that graphs are the best way to represent every relation and I certainly try to avoid these "got a nail, let's find a hammer" approaches. Graphs often have poor memory representations and many algorithms are actually more efficient (in practice) when implemented with matrices, bitsets, and other things.
OCR. Picture a page of text scanned at an angle, with some noise in the image, where you must find the space between lines of text. One way is to make a graph of pixels, and find the shortest path from one side of the page to the other, where the difference in brightness is the distance between pixels.
This example is from the Algorithm Design Manual, which has lots of other real world examples of graph problems.
One popular example is garbage collection.
The collector starts with a set of references, then traverses all the objects they reference, then all the objects referenced there and so on. Everything it finds is added into a graph of reachable objects. All other objects are unreachable and collected.
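The "mark" half of a mark-and-sweep collector really is just a graph traversal. A rough sketch (references_of stands in for however the runtime enumerates an object's outgoing references):

def mark_reachable(roots, references_of):
    # Follow references from the root set; everything reached is live,
    # everything else can be collected.
    reachable = set(roots)
    stack = list(roots)
    while stack:
        obj = stack.pop()
        for ref in references_of(obj):   # outgoing edges of this object
            if ref not in reachable:
                reachable.add(ref)
                stack.append(ref)
    return reachable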
To find out if two molecules can fit together. When developing drugs one is often interested in seeing if the drug molecules can fit into larger molecules in the body. The problem with determining whether this is possible is that molecules are not static. Different parts of the molecule can rotate around their chemical bindings so that a molecule can change into quite a lot of different shapes.
Each shape can be said to represent a point in a space consisting of shapes. Solving this problem involves finding a path through this space. You can do that by creating a roadmap through the space, which is essentially a graph of legal shapes, saying which shape can turn into which. By using an A* graph search algorithm on this roadmap you can find a solution.
Okay that was a lot of babble that perhaps wasn't very understandable or clear. But my point was that graphs pop up in all kinds of problems.
Graphs are not data structures. They are a mathematical representation of relations. Yes, you can think and theorize about problems using graphs, and there is a large body of theory about them. But when you need to implement an algorithm, you are choosing data structures to best represent the problem, not graphs. There are many data structures that represent general graphs, and even more for special kinds of graphs.
In your question, you mix these two things. The same theoretical solution may be in terms of graphs, but practical solutions may use different data structures to represent the graph.
The following are based on graph theory:
Binary trees and other trees such as Red-black trees, splay trees, etc.
Linked lists
Anything that's modelled as a state machine (GUIs, network stacks, CPUs, etc)
Decision trees (used in AI and other applications)
Complex class inheritance
IMHO most of the domain models we use in normal applications are in some respect graphs. If you look at UML diagrams, you will notice that a directed, labeled graph lets you translate them fairly directly into a persistence model. There are some examples of that over at Neo4j.
Cheers
/peter
Social connections between people make an interesting graph example. I've tried to model these connections at the database level using a traditional RDBMS but found it way too hard. I ended up choosing a graph database, and it was a great choice because it makes it easy to follow connections (edges) between people (nodes).
Graphs are great for managing dependencies.
I recently started to use the Castle Windsor Container, after inspecting the Kernel I found a GraphNodes property. Castle Windsor uses a graph to represent the dependencies between objects so that injection will work correctly. Check out this article.
I have also used simple graph theory to develop a plugin framework, where each graph node represents a plugin; once the dependencies have been defined I can traverse the graph to create a plugin load order.
I am planning to change the algorithm to implement Dijkstra's algorithm so that each plugin is weighted with a specific version; that way, a simple change will only load the latest version of a plugin.
I wish I had discovered this sooner. I like that quote "Whenever someone gives you a problem, think graphs." I definitely think that's true.
Profiling and/or Benchmarking algorithms and implementations in code.
Anything that can be modelled as a foreign key in a relational database is essentially an edge between two nodes in a graph.
Maybe that will help you think of examples, since most things are readily modelled in an RDBMS.
You could take a look at some of the examples in the Neo4j wiki,
http://wiki.neo4j.org/content/Domain_Modeling_Gallery
and the projects that Neo4j is used in (the known ones)
http://wiki.neo4j.org/content/Neo4j_In_The_Wild .
Otherwise, recommender algorithms are a good use of graphs; see for instance PageRank and other examples at
https://github.com/tinkerpop/gremlin/wiki/pagerank
Analysing transaction serialisability in database theory.
You can utilise graphs anywhere you can map the problem's domain objects onto nodes and the solution onto the flow of control and/or data amongst the nodes.
Considering that trees are just connected acyclic graphs, there are even more areas where you can use graph theory.
Basically, nearly all common data structures like trees, lists, queues, etc. can be thought of as a type of graph, some with different types of constraints.
In my experience, I have used graphs intensively in network flow problems, which come up in lots of areas like telecommunication network routing and optimisation, workload assignment, matching, supply chain optimisation and public transport planning.
Another interesting area is social network modelling, as previous answers mentioned.
There are far more, like integrated circuit optimisation, etc.
