Optimizing rank computation for very large sparse matrices - performance

I have a sparse matrix such as
A =
(1,1) 1
(3,1) 1
(1,2) 1
(2,2) 1
(1,3) 1
(3,3) 1
(4,3) 1
(4,4) 1
In full form, A looks like this:
full(A) =
1 1 1 0
0 1 0 0
1 0 1 0
0 0 1 1
I want to find the rank of matrix A in a fast way (because my matrix can grow to 10000 x 20000). I tried two ways, but they give different results:
Convert to a full matrix and compute the rank:
rank(full(A)) = 3
Compute the structural rank with sprank:
sprank(A) = 4
The true answer must be 3, i.e. the first way. However, it takes a long time to find the rank, especially for large matrices. I know why the second way gives 4: sprank computes the structural rank, i.e. the largest rank any matrix with the same non-zero pattern could have, while rank reports the actual rank of the matrix, which indicates how many rows of the matrix are linearly independent. sprank(A) is 4 but rank(A) is only 3 because the third row can be written as a linear combination of the other rows, specifically A(3,:) = A(1,:) - A(2,:).
My problem is how to find the rank of a sparse matrix with the lowest time consumption.
Update: I tried some other approaches; however, they took even more time:
%% Create random matrix
G = sparse(randi(2,1000,1000))-1;
A = sparse(G); %% because my input matrix is a sparse matrix
%% Measure performance
>> tic; rank(full(A)); toc
Elapsed time is 0.710750 seconds.
>> tic; svds(A); toc
Elapsed time is 1.130674 seconds.
>> tic; eigs(A); toc
Warning: Only 3 of the 6 requested eigenvalues converged.
> In eigs>processEUPDinfo at 1472
In eigs at 365
Elapsed time is 4.894653 seconds.

I don't know which algorithm is best suited for you, and I agree that it may be more appropriate to ask on math.stackexchange.com. While experimenting with the random matrix you supplied, G = sparse(randi(2,1000,1000))-1;, I noticed that there is little chance that its rank is < 1000, and whatever algorithm you use, its performance is likely to be very data-dependent. For instance, using eigs(G) on 2000 x 2000 matrices of rank (respectively) [198, 325, 503, 1026, 2000] yields the following timings (in seconds): [0.64, 0.90, 1.38, 1.57, 4.00], which shows that the performance of the eigs function is strongly related to the rank of the matrix.
I also searched for existing tools and gave a try to spnrank, which I think is not so data-dependent (it performs better than eigs for high ranks and worse if the rank is small).
In the end you may want to adapt your technical solution depending on the kind of matrices you are most likely to work with.
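For what it's worth, here is a rough sketch of the same comparison in Python/SciPy rather than MATLAB, using the 4x4 example from the question: scipy.sparse.csgraph.structural_rank plays the role of sprank, and a truncated SVD via svds estimates the numerical rank without densifying the matrix (the 1e-10 threshold and k=3 are illustrative choices, not fixed rules).

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank
from scipy.sparse.linalg import svds

# The 4x4 example from the question, converted to 0-based indices.
rows = [0, 2, 0, 1, 0, 2, 3, 3]
cols = [0, 0, 1, 1, 2, 2, 2, 3]
A = csr_matrix(([1.0] * 8, (rows, cols)), shape=(4, 4))

print(structural_rank(A))                  # 4, the analogue of sprank
print(np.linalg.matrix_rank(A.toarray()))  # 3, the true numerical rank

# For large sparse matrices, probe the top singular values instead of
# densifying; count how many exceed a tolerance to estimate the rank.
s = svds(A, k=3, return_singular_vectors=False)
print(int((s > 1e-10).sum()))              # 3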

Related

How to generate a function that will algebraically encode a sequence?

Is there any way to generate a function F that, given a sequence, such as:
seq = [1 2 4 3 0 5 4 2 6]
Then F(seq) will return a function that generates that sequence? That is,
F(seq)(0) = 1
F(seq)(1) = 2
F(seq)(2) = 4
... and so on
Also, if it is, what is the function of lowest complexity that does so, and what is the complexity of the generated functions?
EDIT
It seems like I'm not clear, so I'll try to exemplify:
F([1 3 5 7 9])
# returns something like:
F(x) = 1 + 2*x
# limited to the domain x ∈ [1 2 3 4 5]
In other words, I want to compute a function that can be used to algebraically restore a sequence of integers, using mathematical operations such as +, *, etc., even if you cleaned it from memory. I don't know if it is possible, but, as one could easily code an approximation of such a function for trivial cases, I'm wondering how far it goes and if there is some actual research concerning that.
EDIT 2 Answering another question, I'm only interested in sequences of integers - if that is important.
Please let me know if it is still not clear!
Well, if you just want to know a function with "+ and *", that is to say, a polynomial, you can go and check Wikipedia for Lagrange Polynomial (https://en.wikipedia.org/wiki/Lagrange_polynomial).
It gives you the lowest degree polynomial that encodes your sequence.
Unfortunately, you probably won't be able to store less than before, as the probability of the polynomial being of degree d = n-1, where n is the size of the array, is very high with random integers.
Furthermore, you will have to store rational numbers instead of integers.
And finally, accessing any element of the array will cost O(d) (using Horner's algorithm for polynomial evaluation), compared to O(1) with the array.
Nevertheless, if you know that your sequences may be very simple and very long, it might be an option.
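As a quick illustration of the Lagrange idea (Python, with SciPy's ready-made lagrange helper; the sequence is the one from the question), the degree comes out as n-1 = 8, as predicted:

import numpy as np
from scipy.interpolate import lagrange

seq = [1, 2, 4, 3, 0, 5, 4, 2, 6]
x = np.arange(len(seq))        # F(seq)(0) = 1, F(seq)(1) = 2, ...
poly = lagrange(x, seq)        # interpolating polynomial through all 9 points

print(np.round(poly(x)).astype(int).tolist())  # reproduces the sequence
print(poly.order)                              # 8, i.e. degree n-1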
If the sequence comes from a polynomial with a low degree, an easy way to find the unique polynomial that generates it is using Newton's series. Constructing the polynomial for n numbers has O(n²) time complexity, and evaluating it has O(n).
In Newton's series the polynomial is expressed in terms of x, x(x-1), x(x-1)(x-2) etc. instead of the more familiar x, x², x³. To get the coefficients, you basically compute the differences between subsequent items in the sequence, then the differences between the differences, and so on until only one is left or you get a row of all zeros. The first number of each row, divided by the factorial of the degree of the term, gives you the coefficients. For example, with the first sequence you get these differences:
1 2 4 3 0 5 4 2 6
1 2 -1 -3 5 -1 -2 4
1 -3 -2 8 -6 -1 6
-4 1 10 -14 5 7
5 9 -24 19 2
4 -33 43 -17
-37 76 -60
113 -136
-249
The polynomial that generates this sequence is therefore:
f(x) = 1 + x(1 + (x-1)(1/2 + (x-2)(-4/6 + (x-3)(5/24 + (x-4)(4/120
+ (x-5)(-37/720 + (x-6)(113/5040 + (x-7)(-249/40320))))))))
It's the same polynomial you get using other techniques, like Lagrange interpolation; this is just the easiest way to generate it as you get the coefficients for a polynomial form that can be evaluated with Horner's method, unlike the Lagrange form for example.
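Here is a rough Python sketch of this construction, using exact Fraction arithmetic so nothing is lost to rounding; newton_coeffs and newton_eval are just illustrative helper names.

from fractions import Fraction
from math import factorial

def newton_coeffs(seq):
    # First entry of each difference row: 1, 1, 1, -4, 5, 4, -37, 113, -249
    coeffs = [seq[0]]
    row = list(seq)
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        coeffs.append(row[0])
    return coeffs

def newton_eval(coeffs, x):
    # Horner-style evaluation of
    #   f(x) = c0 + x*(c1/1! + (x-1)*(c2/2! + (x-2)*(c3/3! + ...)))
    acc = Fraction(coeffs[-1], factorial(len(coeffs) - 1))
    for k in range(len(coeffs) - 2, -1, -1):
        acc = Fraction(coeffs[k], factorial(k)) + (x - k) * acc
    return acc

seq = [1, 2, 4, 3, 0, 5, 4, 2, 6]
c = newton_coeffs(seq)
print([int(newton_eval(c, x)) for x in range(len(seq))])  # reproduces seq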
There is no magic if you say that the sequence could be completely random. Interpolation is always possible, but it won't save you memory: any interpolation method requires the same amount of memory in the worst case, because if it didn't, it would be possible to compress everything down to a single bit.
On the other hand, it is sometimes possible to use brute force, some heuristics (like genetic algorithms), or numerical methods to reproduce some kind of mathematical expression of a specified type, but good luck with that :)
Just use some archiving tools instead in order to save memory usage.
I think it will be useful for you to read about this: http://en.wikipedia.org/wiki/Entropy_(information_theory)

random number generator test

How will you test if the random number generator is generating actual random numbers?
My approach: first build a hash table of size M, where M is a prime number. Then take each number generated by the random number generator, take it mod M,
and see whether it fills the whole table or only part of it.
That's my approach. Can we prove it with visualization?
Since I have very little knowledge about testing, can you suggest a thorough approach to this question? Thanks in advance.
You should be aware that you cannot guarantee the random number generator is working properly. Note that even with a perfect uniform distribution over the range [1,10], there is a 10^-10 chance of getting ten 10s in a random sample of 10 numbers.
Is it likely? Of course not.
So - what can we do?
We can statistically prove that the combination (10,10,....,10) is unlikely if the random number generator is indeed uniformly distributed. This concept is called Hypothesis testing. With this approach we can say "with certainty level of x% - we can reject the hypothesis that the data is taken from a uniform distribution".
A common way to do it is using Pearson's Chi-Squared test. The idea is similar to yours: you fill in a table, check what the observed (generated) number of numbers is for each cell, and what the expected number of numbers is for each cell under the null hypothesis (in your case, the expected count is k/M, where M is the range's size and k is the total number of numbers drawn).
You then do some manipulation on the data (see the Wikipedia article for more info on what this manipulation is exactly) and get a number (the test statistic). You then check if this number is likely to be drawn from a Chi-Square distribution. If it is, you cannot reject the null hypothesis; if it is not, you can be certain with x% certainty that the data is not taken from a uniform random generator.
EDIT: example:
You have a cube, and you want to check if it is "fair" (uniformly distributed in [1,6]). Throw it 200 times (for example) and create the following table:
number: 1 2 3 4 5 6
empirical occurrences: 37 41 30 27 32 33
expected occurrences: 33.3 33.3 33.3 33.3 33.3 33.3
Now, according to Pearson's test, the statistic is:
X = ((37-33.3)^2)/33.3 + ((41-33.3)^2)/33.3 + ... + ((33-33.3)^2)/33.3
X = (13.69 + 59.29 + 10.89 + 39.69 + 1.69 + 0.09) / 33.3
X ≈ 3.8
For a random C~ChiSquare(5), the probability of being higher than 3.8 is roughly 0.58 (which is not improbable) (1).
So we cannot reject the null hypothesis, and we can conclude that the data is probably uniformly distributed in [1,6]
(1) We usually reject the null hypothesis if the p-value is smaller than 0.05, but this is very case dependent.
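For completeness, a small Python sketch of the same test using scipy.stats.chisquare on the dice counts above; the numbers agree with the hand calculation.

from scipy.stats import chisquare

observed = [37, 41, 30, 27, 32, 33]   # 200 throws of the die
stat, p = chisquare(observed)          # expected defaults to uniform (33.3 each)
print(stat, p)                         # statistic ~3.8, p-value ~0.58 > 0.05

# p is well above 0.05, so the null hypothesis (fair die) is not rejected.
# The same recipe works for an RNG: bucket the outputs into M bins and
# compare the observed counts with the uniform expectation k/M.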
My naive idea:
The generator is following a distribution (at least it should). Do a reasonable number of runs, then plot the values on a graph. Fit a regression curve to the points. If it correlates with the shape of the distribution, you're good. (This is also possible in 1D with projections and histograms, and it is fully automatable with the right tool, e.g. MATLAB.)
You can also use the diehard tests as it was mentioned before, that is surely better but involves much less intuition, at least on your side.
Let's say you want to generate a uniform distribution on the interval [0, 1].
Then one possible test is
def coverage(a, b, sample_size):
    count = 0
    for i in range(sample_size):
        if a < random_being_tested() < b:
            count += 1
    return count / sample_size
And see if the result is close to b-a (b minus a).
Of course you should define a function taking a, b between 0 and 1 as inputs, and returning the difference between counter/sample-size and b-a. Loop through possible a, b, say multiples of 0.01 with a < b, and print out a, b whenever the difference is larger than a preset epsilon, say 0.001.
Those are the a, b for which there are too many outliers.
If you let sample-size be 5000, your random-being-tested will be called about 5000 * 5050 times in total, hopefully not too bad.
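A rough Python version of the full procedure, with random_being_tested standing in for the generator under test (here just random.random), and the coverage helper repeated so the snippet runs on its own. One caveat: with sample-size 5000 the sampling noise alone is on the order of 0.01, so the epsilon below is deliberately looser than the 0.001 suggested above.

import random

def random_being_tested():
    return random.random()          # replace with the generator under test

def coverage(a, b, sample_size=5000):
    hits = sum(1 for _ in range(sample_size) if a < random_being_tested() < b)
    return hits / sample_size

epsilon = 0.02
grid = [round(i * 0.01, 2) for i in range(101)]
for i, a in enumerate(grid):
    for b in grid[i + 1:]:
        if abs(coverage(a, b) - (b - a)) > epsilon:
            print("suspicious interval:", a, b)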
I had the same problem.
When I finished writing my code (using an external RNG engine) I looked at the results and found that all of them failed the Chi-Square test whenever I had too many results.
My code generated random numbers and kept buckets counting how many fell in each result range.
I don't know why the Chi-Square test fails when I have a lot of results.
During my research I saw that C#'s Random.Next() fails on any range of random numbers and that some numbers have better odds than others; furthermore, I saw that the RNGCryptoServiceProvider random provider does not handle big numbers well.
When trying to get numbers in the range 0-1,000,000,000 the numbers in the lower range 0-300M had better odds of appearing...
As a result I'm using RNGCryptoServiceProvider, and if my range is higher than 100M I combine the number myself (RandomHigh*100M + RandomLow), so the ranges of both randoms are smaller than 100M, which works well.
Good Luck!

Eliminating symmetry from graphs

I have an algorithmic problem in which I have derived a transfer matrix between a lot of states. The next step is to exponentiate it, but it is very large, so I need to do some reductions on it. Specifically it contains a lot of symmetry. Below are some examples on how many nodes can be eliminated by simple observations.
My question is whether there is an algorithm to efficiently eliminate symmetry in digraphs, similarly to the way I've done it manually below.
In all cases the initial vector has the same value for all nodes.
In the first example we see that b, c, d and e all receive values from a and one of each other. Hence they will always contain an identical value, and we can merge them.
In this example we quickly spot that the graph is identical from the point of view of a, b, c and d. Also, for their respective side nodes, it doesn't matter to which inner node each is attached. Hence we can reduce the graph down to only two states.
Update: Some people were, reasonably enough, not quite sure what was meant by "state transfer matrix". The idea here is that you can split a combinatorial problem into a number of state types for each n in your recurrence. The matrix then tells you how to get from n-1 to n.
Usually you are only interested in the value of one of your states, but you need to calculate the others as well, so you can always get to the next level. In some cases, however, multiple states are symmetrical, meaning they will always have the same value. Obviously it's quite a waste to calculate all of these, so we want to reduce the graph until all nodes are "unique".
Below is an example of the transfer matrix for the reduced graph in example 1.
[S_a(n)]   [1 1 1]   [S_a(n-1)]
[S_f(n)] = [1 0 0] * [S_f(n-1)]
[S_B(n)]   [4 0 1]   [S_B(n-1)]
Any suggestions or references to papers are appreciated.
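To make the recurrence concrete, here is a small NumPy sketch (illustrative only) of applying the reduced transfer matrix above with repeated squaring, starting from the all-ones initial vector; for much larger n you would switch to Python integers or work modulo a prime to avoid int64 overflow.

import numpy as np

# Reduced transfer matrix from example 1 and the all-ones initial vector.
M = np.array([[1, 1, 1],
              [1, 0, 0],
              [4, 0, 1]], dtype=np.int64)
v0 = np.ones(3, dtype=np.int64)

n = 20
vn = np.linalg.matrix_power(M, n) @ v0   # [S_a(n), S_f(n), S_B(n)]
print(vn)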
Brendan McKay's nauty (http://cs.anu.edu.au/~bdm/nauty/) is the best tool I know of for computing automorphisms of graphs. It may be too expensive to compute the whole automorphism group of your graph, but you might be able to reuse some of the algorithms described in McKay's paper "Practical Graph Isomorphism" (linked from the nauty page).
I'll just add an extra answer building on what userOVER9000 suggested, if anybody else is interested.
The below is an example of using nauty on Example 2, through the dreadnaut tool.
$ ./dreadnaut
Dreadnaut version 2.4 (64 bits).
> n=8 d g -- Starting a new 8-node digraph
0 : 1 3 4; -- Entering edge data
1 : 0 2 5;
2 : 3 1 6;
3 : 0 2 7;
4 : 0;
5 : 1;
6 : 2;
7 : 3;
> cx -- Calling nauty
(1 3)(5 7)
level 2: 6 orbits; 5 fixed; index 2
(0 1)(2 3)(4 5)(6 7)
level 1: 2 orbits; 4 fixed; index 4
2 orbits; grpsize=8; 2 gens; 6 nodes; maxlev=3
tctotal=8; canupdates=1; cpu time = 0.00 seconds
> o -- Output "orbits"
0:3; 4:7;
Notice it suggests joining nodes 0:3 which are a:d in Example 2 and 4:7 which are e:h.
The nauty algorithm is not well documented, but the authors describe it as exponential worst case, n^2 average.
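As a rough cross-check of the dreadnaut session (purely illustrative, and far slower than nauty), the same orbits can be recovered in Python with networkx by enumerating the automorphisms of the Example 2 digraph and merging nodes that some automorphism maps onto each other:

import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# Same 8-node digraph that was entered into dreadnaut above.
edges = {0: [1, 3, 4], 1: [0, 2, 5], 2: [3, 1, 6], 3: [0, 2, 7],
         4: [0], 5: [1], 6: [2], 7: [3]}
G = nx.DiGraph((u, v) for u, vs in edges.items() for v in vs)

parent = {v: v for v in G}                 # union-find labels for the orbits
def find(v):
    while parent[v] != v:
        v = parent[v]
    return v

# Automorphisms are simply isomorphisms of the graph onto itself.
for mapping in DiGraphMatcher(G, G).isomorphisms_iter():
    for u, v in mapping.items():
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[max(ru, rv)] = min(ru, rv)

orbits = {}
for v in G:
    orbits.setdefault(find(v), []).append(v)
# Two orbits, {0,1,2,3} and {4,5,6,7}, matching dreadnaut's "0:3; 4:7".
print([sorted(vs) for vs in orbits.values()])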
Computing symmetries seems to be a bit of a second order problem. Taking just a,b,c and d in your second graph, the symmetry would have to be expressed
a(b,c,d) = b(a,d,c)
and all its permutations, or some such. Consider a second subgraph a', b', c', d' added to it. Again, we have the symmetries, but parameterised differently.
For computing people (rather than math people), could we express the problem like so?
Each graph node contains a set of letters. At each iteration, all of the letters in each node are copied to its neighbours by the arrows (some arrows take more than one iteration and can be treated as a pipe of anonymous nodes).
We are trying to find efficient ways of determining things such as:
* what letters each set/node contains after N iterations,
* for each node, the N after which its set no longer changes,
* which sets of nodes wind up containing the same sets of letters (equivalence classes)?

Programming problem - Game of Blocks

Maybe you would have an idea of how to solve the following problem.
John decided to buy his son Johnny some mathematical toys. One of his favorite toys is blocks of different colors. John has decided to buy blocks of C different colors. For each color he will buy a googol (10^100) blocks. All blocks of the same color have the same length, but blocks of different colors may vary in length.
Johnny has decided to use these blocks to make a large 1 x n block. He wonders how many ways he can do this. Two ways are considered different if there is a position where the color differs. The example shows a red block of size 5, a blue block of size 3 and a green block of size 3, and that there are 12 ways of making a large block of length 11.
Each test case starts with an integer 1 ≤ C ≤ 100. The next line consists of C integers; the i-th integer 1 ≤ len_i ≤ 750 denotes the length of the i-th color. The next line is a positive integer N ≤ 10^15.
This problem should be solved in 20 seconds for T <= 25 test cases. The answer should be calculated MOD 100000007 (prime number).
This can be reduced to a matrix exponentiation problem, which can be solved relatively efficiently in O(d^2.376 * log N), where d = max(len_i), using the Coppersmith-Winograd algorithm and fast exponentiation. But it seems that a more efficient algorithm is required, as Coppersmith-Winograd implies a large constant factor. Do you have any other ideas? It could possibly be a number theory or divide-and-conquer problem.
Firstly, note that the number of blocks of each colour you have is a complete red herring, since 10^100 > N always. So the number of blocks of each colour is practically infinite.
Now notice that at each position p (if there is a valid configuration that leaves no spaces, etc.) there must be a block of some colour c. There are len[c] ways for this block to lie so that it still covers position p.
My idea is to try all possible colours and positions at a fixed position (N/2, since that halves the range), and then for each case there are b cells before this fixed coloured block and a cells after it. So if we define a function ways(i) that returns the number of ways to tile i cells (with ways(0)=1), then the number of ways to tile n cells with a fixed colour block at a given position is ways(b)*ways(a). Adding up all possible configurations yields the answer for ways(n).
Now I chose the fixed position to be N/2 since that halves the range, and you can halve a range at most ceil(log(N)) times. Since you are moving a block around N/2 you will have to calculate from about N/2-750 to N/2+750, where 750 is the maximum length a block can have. So you will have to calculate about 750*ceil(log(N)) (a bit more because of the variance) lengths to get the final answer.
So in order to get good performance you have to throw in memoisation, since this is inherently a recursive algorithm.
So using Python(since I was lazy and didn't want to write a big number class):
T = int(raw_input())
for case in xrange(T):
#read in the data
C = int(raw_input())
lengths = map(int, raw_input().split())
minlength = min(lengths)
n = int(raw_input())
#setup memoisation, note all lengths less than the minimum length are
#set to 0 as the algorithm needs this
memoise = {}
memoise[0] = 1
for length in xrange(1, minlength):
memoise[length] = 0
def solve(n):
global memoise
if n in memoise:
return memoise[n]
ans = 0
for i in xrange(C):
if lengths[i] > n:
continue
if lengths[i] == n:
ans += 1
ans %= 100000007
continue
for j in xrange(0, lengths[i]):
b = n/2-lengths[i]+j
a = n-(n/2+j)
if b < 0 or a < 0:
continue
ans += solve(b)*solve(a)
ans %= 100000007
memoise[n] = ans
return memoise[n]
solve(n)
print "Case %d: %d" % (case+1, memoise[n])
Note I haven't exhaustively tested this, but I'm quite sure it will meet the 20 second time limit, if you translated this algorithm to C++ or somesuch.
EDIT: Running a test with N = 10^15 and a block of length 750, I get that memoise contains about 60000 elements, which means the non-lookup part of solve(n) is called about the same number of times.
A word of caution: in the case c=2, len1=1, len2=2, the answer will be the N'th Fibonacci number, and the Fibonacci numbers grow (approximately) exponentially with a growth factor of the golden ratio, phi ~ 1.61803399. For the huge value N=10^15, the answer will be about phi^(10^15), an enormous number. The answer will have storage requirements on the order of (ln(phi^(10^15))/ln(2)) / (8 * 2^40) ~ 79 terabytes. Since you can't even access 79 terabytes in 20 seconds, it's unlikely you can meet the speed requirements in this special case.
Your best hope occurs when C is not too large, and leni is large for all i. In such cases, the answer will still grow exponentially with N, but the growth factor may be much smaller.
I recommend that you first construct the integer matrix M which will compute the (i+1, ..., i+k) terms in your sequence based on the (i, ..., i+k-1) terms (only row k+1 of this matrix is interesting). Compute the first k entries "by hand", then calculate M^(10^15) with the repeated squaring trick, and apply it to terms (0, ..., k-1).
The (integer) entries of the matrix will grow exponentially, perhaps too fast to handle. If this is the case, do the very same calculation, but modulo p, for several moderate-sized prime numbers p. This will allow you to obtain your answer modulo p, for various p, without using a matrix of bigints. After using enough primes so that you know their product is larger than your answer, you can use the so-called "Chinese remainder theorem" to recover your answer from your mod-p answers.
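Here is a compact Python sketch of this matrix-power approach. Since the problem statement already fixes the modulus 100000007, the computation is done directly modulo that value, so the multi-prime/CRT step described above is not needed; count_tilings and the other helper names are mine.

MOD = 100000007

def mat_mult(A, B, mod):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            if A[i][k]:
                aik = A[i][k]
                for j in range(p):
                    C[i][j] = (C[i][j] + aik * B[k][j]) % mod
    return C

def mat_pow(M, e, mod):
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]   # identity
    while e:                                                   # repeated squaring
        if e & 1:
            R = mat_mult(R, M, mod)
        M = mat_mult(M, M, mod)
        e >>= 1
    return R

def count_tilings(lengths, N, mod=MOD):
    d = max(lengths)
    coeff = [0] * (d + 1)          # coeff[L] = number of colours of length L
    for L in lengths:
        coeff[L] += 1
    # Companion matrix for ways(n) = sum_L coeff[L] * ways(n-L), ways(0) = 1,
    # acting on the state vector [ways(n-1), ..., ways(n-d)].
    M = [[0] * d for _ in range(d)]
    M[0] = [coeff[L] for L in range(1, d + 1)]
    for i in range(1, d):
        M[i][i - 1] = 1
    # Initial state for n = 0 is [ways(0), ways(-1), ...] = [1, 0, ..., 0].
    return mat_pow(M, N, mod)[0][0] % mod

print(count_tilings([1, 2], 10))   # Fibonacci-style check: prints 89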
I'd like to build on the earlier @JPvdMerwe solution with some improvements. In his answer, @JPvdMerwe uses a Dynamic Programming / memoisation approach, which I agree is the way to go on this problem. Dividing the problem recursively into two smaller problems and remembering previously computed results is quite efficient.
I'd like to suggest several improvements that would speed things up even further:
Instead of going over all the ways the block in the middle can be positioned, you only need to go over the first half and multiply the solution by 2, because the second half of the cases is symmetrical. For odd-length blocks you would still need to treat the centered position as a separate case.
In general, iterative implementations can be several orders of magnitude faster than recursive ones, because a recursive implementation incurs bookkeeping overhead for each function call. It can be a challenge to convert a solution into its iterative cousin, but it is usually possible. The @JPvdMerwe solution can be made iterative by using a stack to store intermediate values.
Modulo operations are expensive, as are multiplications to a lesser extent. The number of multiplications and modulos can be decreased by approximately a factor C=100 by switching the color-loop with the position-loop. This allows you to add the return values of several calls to solve() before doing a multiplication and modulo.
A good way to test the performance of a solution is with a pathological case. The following could be especially daunting: length 10^15, C=100, prime block sizes.
Hope this helps.
In the above answer,
ans += 1
ans %= 100000007
could be much faster without a general modulo:
ans += 1
if ans == 100000007: ans = 0
Please see TopCoder thread for a solution. No one was close enough to find the answer in this thread.

Algorithm for modeling expanding gases on a 2D grid

I have a simple program; at its heart is a two-dimensional array of floats, supposedly representing gas concentrations. I have been trying to come up with a simple algorithm that will model the gas expanding outwards, like a cloud, eventually ending up with the same concentration of gas everywhere across the grid.
For example a given state progression could be:
(using ints for simplicity)
starting state
00000
00000
00900
00000
00000
state after 1 pass of algorithm
00000
01110
01110
01110
00000
One more pass should give a 5x5 grid all containing the value 0.36 (9/25).
I've tried it out on paper but no matter how I try, I can't get my head around an algorithm to do this.
So my question is, how should I set about trying to code this algorithm? I've tried a few things, such as applying a convolution, or taking each grid cell in turn and distributing it to its neighbours, but they all end up having undesirable effects, such as eventually ending up with less gas than I originally started with, or all of the gas movement being in one direction instead of expanding outwards from the centre. I really can't get my head around it at all and would appreciate any help.
It's either a diffusion problem if you ignore convection or a fluid dynamics/mass transfer problem if you don't. You would start with equations for conservation of mass and momentum for an Eulerian (fixed control volume) viewpoint if you were solving from scratch.
It's a transient problem, so you need to perform an integration to advance the state from time t(n) to t(n+1). You show a grid, but nothing about how you're solving in time. What integration scheme have you tried? Explicit? Implicit? Crank-Nicolson? If you don't know, you're not approaching the problem correctly.
One book that I really liked on this subject was S.W. Patankar's "Numerical Heat Transfer and Fluid Flow". It's a little dated now, but I liked the treatment. It's still good after 29 years, but there might be better texts since I was reading on the subject. I think it's approachable for somebody looking into it for the first time.
In the example you give, your second stage has a core of 1's. Usually diffusion requires a concentration gradient, so most diffusion related techniques won't change the 1 in the middle on the next iteration (nor would they have got to that state after the first one, but it's a bit easier to see once you've got a block of equal values). But as the commenters on your post say, that's not likely to be the cause of a net movement. Reducing the gas may be edge effects, but can also be a question of rounding errors - set the cpu to round half even, and total the gas and apply a correction now and again.
It looks like you're trying to implement a finite difference solver for the heat equation with Neumann boundary conditions (insulation at the edges). There's a lot of literature on this kind of thing. The Wikipedia page on finite difference method describes a simple but stable method, but for Dirichlet boundary conditions (constant density at edges). Modifying the handling of the boundary conditions shouldn't be too difficult.
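As a concrete starting point, here is a minimal explicit finite-difference step in Python/NumPy with zero-flux (Neumann) boundaries; alpha = 0.2 is an illustrative choice (for stability of this explicit scheme it must stay below 0.25), and the update conserves the total amount of gas.

import numpy as np

def diffuse(grid, alpha=0.2):
    # Pad by repeating edge values: reflecting/insulated boundaries,
    # so no gas leaks out of the grid.
    p = np.pad(grid, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * grid)
    return grid + alpha * lap

g = np.zeros((5, 5))
g[2, 2] = 9.0
for _ in range(200):
    g = diffuse(g)

print(g.round(3))   # tends towards 0.36 everywhere (9 / 25)
print(g.sum())      # total gas stays (numerically) at 9.0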
It looks like what you want is something like a smoothing algorithm, often used in programs like Photoshop, or old school demo effects, like this simple Flame Effect.
Whatever algorithm you use, it will probably help you to double buffer your array.
A typical smoothing effect will be something like:
begin loop forever
    for every x and y
    {
        b2[x,y] = (b1[x,y] + (b1[x+1,y]+b1[x-1,y]+b1[x,y+1]+b1[x,y-1])/8) / 2
    }
    swap b1 and b2
end loop forever
See Tom Forsyth's Game Programming Gems article. Looks like it fulfils your requirements, but if not then it should at least give you some ideas.
Here's a solution in 1D for simplicity:
The initial setup has a concentration of 9 at the origin (shown in parentheses below), and 0 at all other positive and negative coordinates.
initial state:
0 0 0 0 (9) 0 0 0 0
The algorithm to find next iteration values is to start at the origin and average current concentrations with adjacent neighbors. The origin value is a boundary case and the average is done considering the origin value, and its two neighbors simultaneously, i.e. average among 3 values. All other values are effectively averaged among 2 values.
after iteration 1:
0 0 0 3 (3) 3 0 0 0
after iteration 2:
0 0 1.5 1.5 (3) 1.5 1.5 0 0
after iteration 3:
0 .75 .75 2 (2) 2 .75 .75 0
after iteration 4:
.375 .375 1.375 1.375 (2) 1.375 1.375 .375 .375
You do these iterations in a loop, outputting the state every n iterations. You may introduce a time constant to control how many iterations represent one second of wall-clock time. This is also a function of what length units the integer coordinates represent. For a given H/W system, you can tune this value empirically. You may also introduce a steady-state tolerance value to control when the program says "all neighbor values are within this tolerance" or "no value changed between iterations by more than this tolerance", and hence that the algorithm has reached a steady-state solution.
The concentration for each iteration given a starting concentration can be obtained by the equation:
concentration = startingConcentration/(2*iter + 1)**2
where iter is the time iteration. So for your example:
startingConcentration = 9
iter = 0
concentration = 9/(2*0 + 1)**2 = 9
iter = 1
concentration = 9/(2*1 + 1)**2 = 1
iter = 2
concentration = 9/(2*2 + 1)**2 = 9/25 = 0.36
You can set the values of the array after each "time step".
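A tiny Python sketch of that closed form, valid while the (2*iter + 1) x (2*iter + 1) square still fits inside the grid; grid_at is just an illustrative helper name.

import numpy as np

def grid_at(k, size=5, start=9.0):
    # Concentration start/(2k+1)**2 over a (2k+1) x (2k+1) square centred
    # on the grid; only meaningful while the square fits inside the grid.
    g = np.zeros((size, size))
    c = size // 2
    g[c - k:c + k + 1, c - k:c + k + 1] = start / (2 * k + 1) ** 2
    return g

print(grid_at(0))   # single centre cell of 9
print(grid_at(1))   # 3x3 block of 1.0
print(grid_at(2))   # whole 5x5 grid at 0.36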
