I am a newbie. Could you please explain what would happen if a control flow graph contains multiple start and/or stop nodes?
I searched for an answer but was not able to find a proper one.
Thank you in advance.
I'm trying to find a way to detect network communities in a top-down way. Most of the available algorithms (e.g. in the igraph package) work bottom-up: they start by assuming all nodes are singleton communities and then merge them into larger communities. I want to go the other way around, similar to how decision trees are built: start with the whole network, then find a split that improves some "measure of information", and so on.
Does anyone know of such an algorithm, or such a measure? I can't find one in the literature, but maybe I am missing something.
Also, what bothers me about some measures of modularity is that if you think of the whole network as one module, then all edges are within the module and no out-module edges exist, so this seems like a perfect partition into modules. Is there a measure that overcomes this limitation?
I think Newman's algorithm meets your requirements.
It works by computing "network modularity" and then splitting the network into two groups. After that it recursively applies the same principle to the newly formed groups until no further increase in modularity is possible.
It should also be implemented in igraph, at least in the R version.
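If you want to try it quickly, here is a minimal sketch using python-igraph (the method name is from its current API; the R version exposes the same algorithm). The karate-club graph is just a stand-in for your network:

    # Minimal sketch: Newman's divisive, modularity-based splitting is
    # exposed in python-igraph as community_leading_eigenvector().
    import igraph as ig

    g = ig.Graph.Famous("Zachary")                 # stand-in test network
    clusters = g.community_leading_eigenvector()   # recursive two-way splits
    print(clusters)                                # community membership
    print("modularity:", g.modularity(clusters.membership))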
I'm reading through an algorithms textbook and I've come across yet another problem that I'm stuck on. I'm looking for some help solving it, and if anyone could point me to similar, already-existing problems that I could reference and follow similar steps with, that'd be great.
This is the problem:
Begin by trying to transform some (optimal/feasible) schedule into one (also optimal/feasible) that satisfies criterion (a). In your starting schedule there will always be at least two jobs with their deadlines in the opposite order, i.e., the later job has the earlier deadline. Think about what happens if you exchange the places of these two jobs in the schedule.
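To make the exchange step concrete, here is a small sketch. The (duration, deadline) job representation and the max-lateness objective are assumptions on my part, since the original problem statement isn't shown:

    # Hypothetical sketch of the exchange argument: a schedule is an ordering
    # of (duration, deadline) jobs; swapping an adjacent pair whose deadlines
    # are out of order should never increase the maximum lateness.
    def max_lateness(schedule):
        t, worst = 0, 0
        for duration, deadline in schedule:
            t += duration
            worst = max(worst, t - deadline)
        return worst

    schedule = [(3, 9), (2, 4), (1, 6)]            # deadlines 9 and 4 inverted
    for i in range(len(schedule) - 1):
        if schedule[i][1] > schedule[i + 1][1]:    # adjacent inversion found
            swapped = schedule[:]
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            assert max_lateness(swapped) <= max_lateness(schedule)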
I have a problem where I need to determine whether the following pseudocode solves the activity selection problem optimally (i.e., selects the maximum number of non-overlapping activities).
I have gone through a few tries on paper with it, and compared against the tried-and-true version (sorting the activities by ending time) it seems to work, but I am suspicious. Can anyone point me in the right direction to find a definitive proof of whether this does or doesn't work?
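Since the pseudocode itself isn't shown here, for comparison here is a sketch of the tried-and-true version (sort by finish time, then greedily take each activity that starts no earlier than the last one chosen); the tuple format is my assumption:

    # Reference greedy: sort by ending time, keep every activity that
    # starts no earlier than the finish of the last one kept.
    def select_activities(activities):             # list of (start, end)
        chosen, last_end = [], float("-inf")
        for start, end in sorted(activities, key=lambda a: a[1]):
            if start >= last_end:
                chosen.append((start, end))
                last_end = end
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9),
                             (5, 9), (6, 10), (8, 11), (8, 12), (2, 14),
                             (12, 16)]))
    # -> [(1, 4), (5, 7), (8, 11), (12, 16)]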
Given a large scale-free graph (a social network graph), what's the best way to sample it such that the sample retains an acceptable abstraction of the properties of the original?
I have a large graph (Munmun's twitter dataset, if you know it). But I need a connected sample of that graph with a reasonably large diameter (tl;dr... reasons why on request... a diameter of 10 would be good).
The problem is that any kind of breadth-first search is likely to come across some massively connected nodes. So I start such a search, getting the friends of all nodes I come across. I inevitably hit some massively-connected nodes and have to get all their friends. This is a problem because I end up with a large number of nodes that are close to each other in the graph. To make programmatic analysis feasible, I have to limit the number of nodes (and edges). The whole point of this exercise is to find shortest paths between nodes, so I'm generally interested in ALL of a node's neighbours. And that's the problem.
One hack around this is to limit the maximum number of accepted neighbours for any user I come across. For instance, if I hit #barackobama in my breadth-first search, I ensure that I only accept some small proportion of his friends and ignore the rest. But would this hacked graph be worth a damn, or am I losing too much information in terms of finding shortest paths?
Hope that makes sense...
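For what it's worth, that hack is easy to prototype. Here is a sketch (networkx, with a random cap per node; both the library choice and the cap value are arbitrary assumptions):

    # Capped BFS sample: expand at most `cap` random neighbours per node,
    # so hubs don't flood the sample with nodes that are all close together.
    import random
    from collections import deque
    import networkx as nx

    def capped_bfs_sample(g, seed, max_nodes=10_000, cap=50):
        sample, queue = {seed}, deque([seed])
        while queue and len(sample) < max_nodes:
            node = queue.popleft()
            neighbours = list(g.neighbors(node))
            if len(neighbours) > cap:              # hub: keep a random subset
                neighbours = random.sample(neighbours, cap)
            for n in neighbours:
                if n not in sample:
                    sample.add(n)
                    queue.append(n)
        return g.subgraph(sample)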
Several sampling methods exist; which one to choose depends (among other things) on the properties you want to preserve. I found the literature review (section 3) in the thesis Sampling and Inference in Complex Networks [Maiya '11] very informative, for that matter.
But you seem to have found a way of sampling your network, and now you want to find out whether the sample is representative of the whole graph in terms of shortest paths. You could have a look at this paper: Complex Network Measurements: Estimating the Relevance of Observed Properties [Latapy & Magnien '08]. It describes a method to assess the representativeness of a sample with respect to various classic topological properties. To summarize their approach: they initially have access to the whole studied network, simulate some sampling process on these data with increasing sample size, monitor how the properties evolve as the sample grows, and decide on an appropriate size once the properties of interest are stable enough. Their tool is freely available online.
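To give a feel for their approach, here is a rough sketch (networkx; the Barabási–Albert graph and average clustering are just stand-ins for your graph and your property of interest):

    # Latapy & Magnien style check: grow a BFS sample and watch a property
    # of interest stabilize; the size where it flattens out is "enough".
    import networkx as nx

    g = nx.barabasi_albert_graph(20_000, 3)            # stand-in graph
    order = [0] + [v for _, v in nx.bfs_edges(g, 0)]   # fixed BFS ordering
    for size in (500, 1_000, 2_000, 5_000, 10_000):
        sub = g.subgraph(order[:size])
        print(size, nx.average_clustering(sub))        # stable yet?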
Edit: the only ready-to-use tool I could find online is Albatross. The associated article Albatross Sampling: Robust and Effective Hybrid Vertex Sampling for Social Graphs [Jin et al. '11] also contains a nice review of existing sampling methods, some of which are implemented in the source code they provide.
Edit 2: I needed to use Albatross on a Linux system, so I did a Java port. It's very raw, but it seems to work fine. It's available on GitHub: https://github.com/vlabatut/Albatross
I am not sure if I understand your question correctly. I think your main question is how to compute the shortest path between two nodes in a giant directed graph, and creating a subsample of the graph is your attempt at an efficient solution. (But I may have misunderstood you completely.)
Perhaps this SO question has some pointers for you: Efficiently finding the shortest path in large graphs
The graphs in that question seem to be significantly smaller, though.
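If single-pair queries are what you need, a bidirectional search already cuts the explored frontier dramatically on unweighted graphs; networkx ships one (a sketch, with a generated stand-in graph):

    # Bidirectional BFS explores roughly 2*b^(d/2) nodes instead of b^d,
    # which is what makes single-pair queries tolerable on large graphs.
    import networkx as nx

    g = nx.barabasi_albert_graph(100_000, 3)       # stand-in for the real graph
    path = nx.bidirectional_shortest_path(g, 0, 99_999)
    print(len(path) - 1, "hops")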
You might want to check the following: Gscaler: https://github.com/jayCool/Gscaler
This is a recent tool which produces synthetic scaled graphs.
The repository contains the jar file and the related paper for your reference.
I am a graph/network enthusiast and this is just for my curiosity :)
I am trying to model the StackOverflow community as a graph/network. Assume that the people in the SO community are nodes and that an answer given to any question establishes a relationship between these nodes. The relationship can be taken as directed (a link from answer -> question) or undirected. The graph could be weighted, with the weights representing the number of vote-ups/downs (normalized on a scale of 0 to 1).
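To make that model concrete, here is a sketch of building such a graph with networkx; the answer records are made up, and putting the vote weight on the edges is one reading of the description:

    # Proposed model: one edge answerer -> asker per answer, weighted by
    # a normalized vote score. The records below are hypothetical.
    import networkx as nx

    answers = [                # (answerer, asker, votes) -- made-up data
        ("alice", "bob", 12),
        ("carol", "bob", 3),
        ("alice", "carol", 7),
    ]
    max_votes = max(v for _, _, v in answers)

    g = nx.DiGraph()
    for answerer, asker, votes in answers:
        g.add_edge(answerer, asker, weight=votes / max_votes)   # 0..1 scale

    print(list(g.edges(data=True)))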
What kind of graph/network does one end up with at any given snapshot in time? Is it scale-free? Is it a small world? The graph is continuously evolving over time, and I would like to understand its structure and dynamics.
Is there a way to retrieve this relationship data, maybe via SO APIs? Or could someone from SO help me out with (sample) data?
Clarification edit:
Scale-free network: a network whose degree distribution asymptotically follows a power law.
Small world: a network made up of sub-networks in which almost any two nodes are connected to each other, and where most pairs of nodes are connected by at least one short path.
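Given any snapshot graph, both properties are easy to eyeball. A sketch of the usual diagnostics (networkx; the generated graph is only a stand-in for real SO data):

    # Diagnostics for the two definitions above: degree distribution
    # (scale-free?), clustering and average path length (small world?).
    import collections
    import networkx as nx

    g = nx.barabasi_albert_graph(5_000, 3)         # stand-in for an SO snapshot

    degrees = collections.Counter(d for _, d in g.degree())
    print(sorted(degrees.items())[:10])            # heavy tail -> power-law-ish
    print("avg clustering:", nx.average_clustering(g))
    print("avg path length:", nx.average_shortest_path_length(g))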
To the second part of your question:
Is there a way to retrieve this relationship data, maybe via SO APIs? Or could someone from SO help me out with (sample) data?
Try these questions instead. There are several plans to implement an API for accessing SO data. Some things are in flux, but there are already ways to screen-scrape the data or access it via JSON (afaik).
Is there a guide to accessing StackOverflow data programmatically?
What would you want to see in a StackOverflow API?
Are there plans for a StackOverflow API?
Try it out. Good luck!
What kind of graph/network does one end up with at any given snapshot in time? Is it scale-free? Is it a small world? The graph is continuously evolving over time, and I would like to understand its structure and dynamics.
It takes only a few links between remote clusters to turn a random network into a small-world one, so it's quite likely to be a small world.
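That is essentially the Watts–Strogatz observation; a quick sketch (networkx, with arbitrary parameters) shows how a handful of rewired edges collapses the average path length while clustering stays high:

    # Rewiring even 1% of the edges of a ring lattice collapses the
    # average path length -- the small-world transition.
    import networkx as nx

    for p in (0.0, 0.01, 0.1):                     # rewiring probability
        g = nx.connected_watts_strogatz_graph(1_000, 10, p)
        print(p, nx.average_shortest_path_length(g), nx.average_clustering(g))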
As to whether it's scale-free, that would require there to be a few posters with lots of answers and many with only one or two. I seem to recall Jeff saying in one of the podcasts that there were lots of users with only one question; you might be better off asking the question there rather than here, as he will have the data.