NetworkX: How to create the incidence matrix from a huge graph - performance

I am reading a very big graph, specifically wiki-topcats (http://snap.stanford.edu/data/wiki-topcats.html). I can read and create the graph in acceptable time with:
graph = nx.read_edgelist("C:/.../wiki-topcats.txt", nodetype=int)
Then I need to extract the incidence matrix in order to create the line graph (that's my goal), but when I run:
C = nx.incidence_matrix(graph)
I get memory errors, and I cannot find an efficient way of dealing with this, or a way to work around it by creating the incidence matrix from the adjacency matrix. I must work with scipy sparse matrices due to the magnitude of the graph; e.g. this adjacency matrix works:
A = nx.to_scipy_sparse_matrix(graph)
Any help would be greatly appreciated.

NetworkX is clearly not designed for big graphs. It is not very performant either (in both memory consumption and speed). Graph databases are meant to solve this problem by storing graphs on a storage device; Neo4j is an example of such a database usable from Python. That being said, the graph you want to compute is not so big, and you could try alternative Python libraries that are more efficient, like graph-tool.
Assuming you really want/need to use NetworkX, be aware that the graph takes about 12 GiB of RAM, and many NetworkX functions return a full graph, resulting in several dozen GiB of RAM allocated, which is a complete waste of resources. Not to mention that most machines do not have that much memory. The graph file takes only about 400 MiB, and compact representations should be able to store it in RAM in less than 100 MiB. Thus, using 12 GiB for it requires >120 times more resources than expected. Not to mention that NetworkX takes a very long time to fill such a memory space.
Loading the file directly into a sparse matrix is much more efficient, assuming the sparse matrix type is carefully chosen. Indeed, a CSR matrix can store the adjacency matrix pretty efficiently, while a DOK matrix is very inefficient (it takes 6 GiB of RAM and is as slow as NetworkX).
Note that np.genfromtxt (which is intended to load this kind of file) goes crazy by using 45-50 GiB of RAM on my machine. This is insane, as it is actually ~200 times bigger than what is strictly needed! Fortunately, df = pd.read_table('wiki-topcats.txt', dtype=np.int32, header=None, sep=' ') from Pandas is able to load the file using less than 400 MiB. From that, you can easily convert the dataframe to NumPy with edges = df.to_numpy() and then build a relatively fast CSR matrix.
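Concretely, the adjacency step might be sketched like this (treating each line of the file as a directed src dst pair; symmetrize if an undirected adjacency is needed):

```python
import numpy as np
from scipy.sparse import csr_matrix

def adjacency_from_edges(edges, n=None):
    # Build an n x n CSR adjacency matrix from an (m, 2) integer edge array.
    if n is None:
        n = int(edges.max()) + 1
    data = np.ones(len(edges), dtype=np.int8)
    return csr_matrix((data, (edges[:, 0], edges[:, 1])), shape=(n, n))

# Loading as described above (path taken from the question):
# df = pd.read_table('wiki-topcats.txt', dtype=np.int32, header=None, sep=' ')
# adj = adjacency_from_edges(df.to_numpy())
```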
Note that you can build the (sparse CSR) incidence matrix directly with NumPy. One solution is to use idx = np.arange(len(df), dtype=np.int32) so as to assign an ID to each edge. Then, you directly know the positions of all the 1s in the sparse matrix (m[edges[:,0], idx] and m[edges[:,1], idx]), which should be enough to build it (be careful about possible duplicates caused by a node connected to itself).
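A minimal sketch of that construction, including the self-loop caveat (COO sums duplicate entries, so a self-loop would otherwise yield a 2):

```python
import numpy as np
from scipy.sparse import coo_matrix

def incidence_from_edges(edges, n=None):
    # Rows = nodes, columns = edges; entry is 1 where a node is an endpoint.
    m = len(edges)
    if n is None:
        n = int(edges.max()) + 1
    idx = np.arange(m, dtype=np.int32)          # one column ID per edge
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([idx, idx])
    data = np.ones(2 * m, dtype=np.int8)
    inc = coo_matrix((data, (rows, cols)), shape=(n, m)).tocsr()
    # A self-loop produces a duplicate (row, col) pair that COO sums to 2;
    # clip back to 1 for a 0/1 incidence matrix.
    inc.data = np.minimum(inc.data, 1)
    return inc
```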

Performance comparison: Algorithm S and Algorithm Z

Recently I ran into two sampling algorithms: Algorithm S and Algorithm Z.
Suppose we want to sample n items from a data set. Let N be the size of the data set.
When N is known, we can use Algorithm S
When N is unknown, we can use Algorithm Z (optimized atop Algorithm R)
Performance of the two algorithms:
Algorithm S
Time complexity: the average number of scanned items is n(N+1)/(n+1) (I computed the result; Knuth's book leaves this as an exercise), so we can say it is O(N)
Space complexity: O(1), or O(n) if returning an array
Algorithm Z (I searched the web and found the paper https://www.cs.umd.edu/~samir/498/vitter.pdf)
Time complexity: O(n(1+log(N/n)))
Space complexity: in TAOCP vol. 2, section 3.4.2, it mentions that Algorithm R's space complexity is O(n(1+log(N/n))), so I suppose Algorithm Z's might be the same
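For concreteness, Algorithm S (selection sampling) can be sketched in a few lines of Python; it needs N up front and accepts each item with probability (items still needed) / (items still remaining):

```python
import random

def algorithm_s(population, n):
    # Knuth's Algorithm S: scan the population once, accepting each item
    # with probability (n - selected so far) / (N - scanned so far).
    N = len(population)
    sample = []
    for seen, item in enumerate(population):
        if random.random() * (N - seen) < n - len(sample):
            sample.append(item)
        if len(sample) == n:
            break
    return sample
```

Note that the sample comes out in the population's original order, which matters for the tape-processing discussion below.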
My question
The model for Algorithm Z is: keep calling the next method on the data set until we reach the end. So even when N is known, we can still use Algorithm Z.
Based on the above performance comparison, Algorithm Z has better time complexity than Algorithm S, and worse space complexity.
If space is not a problem, should we use Algorithm Z even when N is known?
Is my understanding correct? Thanks!
Is the Postgres code mentioned in your comment actually used in production? In my opinion, it really should be reviewed by someone who has at least some understanding of the problem domain. The problem with random sampling algorithms, and random algorithms in general, is that it is very hard to diagnose biased sampling bugs. Most samples "look random" if you don't look too hard, and biased sampling is only obvious when you do a biased sample of a biased dataset. Or when your biased sample results in a prediction which is catastrophically divergent from reality, which will eventually happen but maybe not when you're doing the code review.
Anyway, by way of trying to answer the questions, both the one actually in the text of this post and the ones added or implied in the comment stream:
Properly implemented, Vitter's algorithm Z is much faster than Knuth's algorithm S. If you have a use case in which reservoir sampling is indicated, then you should probably use Vitter, subject to the code testing advice above: Vitter's algorithm is more complicated and it might not be obvious how to validate the implementation.
I noticed that the Postgres code just uses a threshold value of 22 to decide whether to use the more complicated code, based on testing done almost 40 years ago on hardware you'd be hard pressed to find today. It's possible that 22 is not a bad threshold, but it's essentially a number pulled out of thin air. At least some attempt should be made to verify or, more likely, correct it.
Forty years ago, when those algorithms were developed, large datasets were typically stored on magnetic tape. Magnetic tape is still used today, but applications have changed; I think that you're not likely to find a Postgres installation in which a live database is stored on tape. This matters because the way you get data off a tape drive is radically different from the way you get data from a file server. Or a sharded distributed collection of file servers, which also has its particular needs.
Data on a reel of tape can only be accessed linearly, although it is possible to skip tape somewhat faster than you can read it. On a file server, data is random access; there may be a slight penalty for jumping around in a file, but there might not be. (On the sharded distributed model, it might well be faster than linear reads.) But trying to read out of order on a tape drive might turn an input operation which takes an hour into one which takes a week. So it's very important to access the sample in order. Moreover, you really don't want to have to read the tape twice, which would take twice as long.
One of the other assumptions that was made in those algorithms is that you might not have enough memory to store the entire sample; in 1985, main memory was horribly expensive and databases were already quite large. So a common way to collect a large sample from a huge database was to copy the sampled blocks onto secondary memory, such as another tape drive. But there's a bit of a catch with reservoir sampling: as the sampling algorithm proceeds, some items which were initially inserted in the sample are later replaced with other items. But you can't replace data written on tape, so you need to just keep on appending the newly selected samples. What you do hold in random access memory is a list of locations of the sample; once you've finished selecting the sample, you can sort this list of locations and then use it to read out the final selection in storage order, skipping over the rejected items. That means that the temporary sample storage ends up holding both the final sample, and some number of later rejected items. The O(n(1+log(N/n))) space complexity in Algorithm R refers to precisely this storage, and it's actually a reasonably small multiplier, considering.
All that is irrelevant if you can just allocate enough random access storage somewhere to hold the entire sample. Or, even better, if you can read the data directly from the database. There could well still be good reasons to read the sample into local storage, but nothing stops you from replacing a block of local storage with a different block.
On the other hand, in many common cases, you don't need to read the data in order to sample it. You can just take a list of item numbers, select a sample of the desired size from that list, and then set about acquiring the items with the selected numbers. And that presents a rather different problem: how to choose an unbiased sample of size k from a set of K item indexes.
There's a fast and simple solution to that (also described by Knuth, unsurprisingly): make an array of all the item numbers (say, the integers from 0 to K-1), and then shuffle the array using the standard Knuth/Fisher-Yates shuffle, with a slight modification: you run the algorithm from front to back (instead of back to front, as it is often presented), and stop after k iterations. At that point the first k elements in the partially shuffled array are an unbiased sample. (In fact, you don't need the entire vector of K indices, as long as k is much smaller than K. You're only going to touch O(k) of the values, and you can keep the ones you touched in a hash table of size O(k).)
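A minimal sketch of that truncated front-to-back Fisher-Yates shuffle, using a dict in place of the full K-element array:

```python
import random

def sample_without_replacement(K, k):
    # Virtual array a[i] = i is never materialized; only the slots we
    # actually touch live in the dict. Stop after k swap steps and the
    # first k virtual slots are an unbiased sample of range(K).
    touched = {}
    for i in range(k):
        j = random.randrange(i, K)
        touched[i], touched[j] = touched.get(j, j), touched.get(i, i)
    return [touched[i] for i in range(k)]
```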
And there's an even simpler algorithm, again for the case where the sample is small relative to the dataset: just keep one bit for each item in the dataset, which indicates whether the item has been selected. Now select k items at random, marking the bit vector as you go; if the relevant bit is already marked, that item is already in the sample, so you just ignore that selection and continue with the next random choice. The expected number of ignored selections is very small unless the sample size is a significant fraction of the dataset size.
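That rejection scheme is just a few lines (a bytearray stands in for the bit vector here, for simplicity):

```python
import random

def sample_by_rejection(K, k):
    # One "selected" flag per item; redraw on collisions.
    # Efficient as long as k is much smaller than K.
    selected = bytearray(K)
    sample = []
    while len(sample) < k:
        i = random.randrange(K)
        if not selected[i]:
            selected[i] = 1
            sample.append(i)
    return sample
```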
There's one other criterion which weighed on the minds of Vitter and Knuth: you'll normally want to do something with the selected sample. And given the amount of time it takes to read through a tape, you want to be able to start processing each item immediately, as it is accepted. That precludes algorithms which include, for example, "sort the selected indices and then read the indicated items". (See above.) For immediate processing to be possible, you must not depend on being able to "deselect" already selected items.
Fortunately, both the quick algorithms mentioned at the end of point 2 do satisfy this requirement. In both cases, an item once selected will never be later rejected.
There is at least one use case for reservoir sampling which is still very much relevant: sampling a datastream which is too voluminous or too high-bandwidth to store. That might be some kind of massive social media feed, or it might be telemetry data from a large sensor array, or whatever. In that case, you might want to reduce the size of the datastream by extracting only a small sample, and reservoir sampling is a good candidate. However, that has nothing to do with the Postgres example.
In summary:
Yes, you can (and probably should) use Vitter's Algorithm Z in preference to Knuth's Algorithm S, even if you know how big the data set is.
But there are certainly better algorithms, some of which are outlined above.

Maximum matrix size in Scala Breeze or other Java/Scala matrix packages?

I am trying to write generic modelling code in Scala that relies on a grid (matrix or tensor). After much looking around, I decided to use Breeze for the matrix because (1) its API is in Scala, which is nice, (2) it is reasonably fast for what I need (not a lot of linear algebra, more a convenient data structure), and (3) it allows me to store non-primitive values and preserves their types (not everything is a numerical value in my application).
However, I cannot find any information about the maximum matrix size. I managed to blow my heap a few times by creating large tensors (a matrix of 100'000 x 10'000 cells, each containing a Vector of dimension 5), but I overcame that by increasing my heap size.
That matrix now works, but I get an 'interesting' error when I try to create a matrix of 100'000 x 100'000 cells, each containing a Vector of dimension 5. This is what it gives me:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 1410065408 out of bounds for length 1410065408
I suspect it has to do with how Breeze handles indexes internally, but I am not sure. I don't really mind having a size limit (that's life), but I would need to know it so I can catch this before it crashes the application. Does anyone have an idea?
Or is there a better package out there for what I need? I played with ojAlgo, which is nice but slower at creating the matrix than Breeze and can only store primitives or boxed primitives, not objects. Maybe Spark?
Thanks!
Breeze's DenseMatrix representation is backed by a single Java array, so the total number of elements is capped at (a little less than) 2^31.
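The reported index is consistent with 32-bit overflow: the requested element count is 100,000 × 100,000 = 10^10, which signed 32-bit `Int` arithmetic wraps around to exactly the number in the exception. A quick check (in Python for brevity):

```python
rows, cols = 100_000, 100_000
total = rows * cols            # 10_000_000_000 elements requested
wrapped = total % 2**32        # what 32-bit Int arithmetic produces
print(wrapped)                 # 1410065408, the index from the exception
```

So a pre-flight check of rows * cols against 2^31 - 1 (using 64-bit arithmetic) would let you fail gracefully before construction.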

Own fast Gamma Index implementation

My friends and I are writing our own implementation of the Gamma Index algorithm. It should compute the index within 1 second for standard-size 2D pictures (512 x 512), but should also be able to handle 3D pictures, and it should be portable and easy to install and maintain.
Gamma Index, in case you haven't come across this topic, is a method for comparing pictures. As input we provide two pictures (reference and target); each picture consists of points distributed over a regular fine grid; each point has a location and a value. As output we receive a picture of Gamma Index values. For each point of the target picture we calculate some function (called gamma) against every point of the reference picture (in the original version), or against the points of the reference picture that are closest to that target point (in the version usually used in Gamma Index calculation software). The Gamma Index for a given target point is the minimum of the gamma values calculated for it.
So far we have tried the following ideas, with these results:
use GPU - calculation time decreased 10 times. The problem is that it's fairly difficult to install on machines with a non-NVIDIA graphics card
use a supercomputer or cluster - the problem is the maintenance of this solution. Plus, every picture has to be encrypted for transfer over the network due to data sensitivity
iterate points ordered by their distance to the target point, with some extra stop criterion - this way we got 15 seconds at best (which is actually not ideally precise)
Currently we are writing in Python because of NumPy's awesome optimizations of matrix calculations, but we are open to other languages too.
Do you have any ideas how we can accelerate our algorithm(s) in order to meet the objectives? Do you think this level of performance is achievable?
Some more information about GI for anyone interested:
http://lcr.uerj.br/Manual_ABFM/A%20technique%20for%20the%20quantitative%20evaluation%20of%20dose%20distributions.pdf
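For reference, the nearest-neighbour variant described above can be sketched with SciPy's cKDTree; the tolerance parameters (dta, dd) and the search radius here are illustrative assumptions, not values from the question:

```python
import numpy as np
from scipy.spatial import cKDTree

def gamma_index(ref_pts, ref_vals, tgt_pts, tgt_vals,
                dta=3.0, dd=0.03, search_radius=9.0):
    # For each target point, gamma = min over nearby reference points of
    # sqrt((spatial distance / dta)^2 + (value difference / dd)^2).
    tree = cKDTree(ref_pts)
    gammas = np.full(len(tgt_pts), np.inf)
    neighbours = tree.query_ball_point(tgt_pts, search_radius)
    for i, idx in enumerate(neighbours):
        if not idx:
            continue  # no reference point within the search radius
        idx = np.asarray(idx)
        d2 = np.sum((ref_pts[idx] - tgt_pts[i]) ** 2, axis=1) / dta ** 2
        v2 = (ref_vals[idx] - tgt_vals[i]) ** 2 / dd ** 2
        gammas[i] = np.sqrt(np.min(d2 + v2))
    return gammas
```

The k-d tree keeps the per-point work proportional to the number of nearby reference points rather than the whole picture, which is the same idea as the distance-ordered iteration already tried, but with the candidate set bounded up front.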

OpenGL ES 2.0 on Tegra2: How many GPU cores are used in glDrawArrays/glDrawElements functions?

Does anybody have information about how many GPU cores are used when I call glDrawArrays/glDrawElements?
A bit more detail to explain my question.
The Tegra 2 processor has a 4-core GPU, and libGLESv2.so is used to work with it.
After all the preparatory work has been done (creating and linking shaders, uploading textures, etc.), I call a DRAW function which starts rasterization and creates the image in the framebuffer.
I think the DRAW function should use as many cores as possible to make rasterization faster.
But I can't find any documents which confirm my theory.
The OpenGL documentation only covers its own API level and, understandably, says nothing about the levels below it. NVIDIA doesn't document how libGLESv2.so is implemented.
Since nobody wants to answer, I will do it myself :)
After a few attempts, I got the following results:
Please note that I use the GPU for data computation, with linear data arrays, so the screen is defined with height = 1 and width = array size. To do the computation, a line with width = array size is drawn.
DRAW functions use as many cores as possible.
But it depends on how many vertices are sent to DRAW.
For example: drawing a single line (2 vertices) gives one level of performance. If the line is divided into several smaller lines (4 or more vertices), performance improves.
All in all, I suppose that to use all cores, it is necessary to call DRAW with a number of vertices at least equal to the number of GPU cores. In my case the computation was about 20% faster when the line was divided into two sublines. Further subdivision gave almost no performance increase.

Graph plotting: only keeping most relevant data

In order to save bandwidth, and so as not to have to generate pictures/graphs ourselves, I plan on using Google's charting API:
http://code.google.com/apis/chart/
which works by simply issuing a (potentially long) GET (or a POST); Google then generates and serves the graph itself.
As of now I've got graphs made of about two thousand entries, and I'd like to trim this down to some arbitrary number of entries (e.g. by keeping only 50% or 10% of the original entries).
How can I decide which entries I should keep so as to have my new graph the closest to the original graph?
Is this some kind of curve-fitting problem?
Note that I know I can POST up to 16K of data to Google's chart API, and this may be enough for my needs, but I'm still curious.
The flot-downsample plugin for the Flot JavaScript graphing library could do what you are looking for, up to a point.
The purpose is to try to retain the visual characteristics of the original line using considerably fewer data points.
The research behind this algorithm is documented in the author's thesis.
Note that it doesn't work for every kind of series, and in my experience it won't give meaningful results for downsampling factors beyond 10.
The problem is that it cuts the series into windows of equal size and then keeps one point per window. Since some windows may contain denser data than others, the result is not necessarily optimal. But it's efficient (it runs in linear time).
What you are looking to do is known as downsampling or decimation. Essentially you filter the data and then drop N - 1 out of every N samples (decimation or downsampling by a factor of N). A crude filter is just a local moving average. E.g. if you want to decimate by a factor of N = 10, replace every 10 points by their average.
Note that with the above scheme you may lose some high frequency data from your plot (since you are effectively low pass filtering the data) - if it's important to see short term variability then an alternative approach is to plot every N points as a single vertical bar which represents the range (i.e. min..max) of those N points.
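Both variants are one-liners with NumPy; a sketch:

```python
import numpy as np

def decimate_mean(y, n):
    # Crude low-pass + decimate: replace each block of n points by its mean.
    m = len(y) // n * n          # drop the ragged tail, if any
    return y[:m].reshape(-1, n).mean(axis=1)

def decimate_minmax(y, n):
    # Preserve short-term variability: one (min, max) pair per block of n
    # points, suitable for drawing vertical range bars.
    m = len(y) // n * n
    blocks = y[:m].reshape(-1, n)
    return blocks.min(axis=1), blocks.max(axis=1)
```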
Graph (time series) summarization is a very hard problem. It's like deciding which "relevant" parts of a text to keep in an automatic summary of it. I suggest you use one of the most respected libraries for finding "patterns of interest" in time series data, by Eamonn Keogh.
