Distribution of pairwise distances between many integers - algorithm

We have M unique integers between 1 and N. In real life, N is a few million, and M is between N/10 and N/3. I need to compute the distribution of pairwise distances between the M integers.
The brute-force approach has complexity O(M^2), but the output is only N numbers. So the natural question is whether there is a faster algorithm. Even an algorithm with complexity on the order of N * sqrt(M) would be sufficient for our purposes.
The problem appeared as a subset of the following problem. We have a large virtual square symmetric matrix, a few million by a few million elements. Some rows and columns of the matrix are masked out, and we need to find how many masked-out elements lie on each diagonal of the matrix. One can easily calculate how many masked-out bins intersect each diagonal. But often a masked-out row and a masked-out column intersect right on the diagonal, masking out only one bin instead of two. To avoid double-counting these, we need the distribution of pairwise distances between the masked-out columns.

You can do this in O(N log N) using the Fourier transform.
The idea is that you first compute a histogram H(x) of your M integers where H(x) is the number of times the value x appears in your input (which will be either 0 or 1 if all M are distinct - but this is not essential).
Then what you want to compute is A(d), where A(d) is defined as the number of pairs of integers that are exactly d apart.
This can be computed as A(d) = sum(H(x)*H(x+d) for all x)
This is the autocorrelation of H (equivalently, the convolution of H with its reverse), and it can be computed efficiently by taking the Fourier transform, multiplying the result by its complex conjugate, and then computing the inverse transform. Care needs to be taken to zero-pad appropriately so that the transform computes a linear rather than circular convolution.
If you use Python, this is particularly easy as you can call scipy.signal.fftconvolve to do this operation.
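If it helps, here is a minimal Python sketch of the approach described above (the function name and indexing conventions are my own, not from the answer):

import numpy as np
from scipy.signal import fftconvolve

def pairwise_distance_histogram(values, n):
    # H(x) = 1 if x is one of the M integers, 0 otherwise (indices 1..n).
    h = np.zeros(n + 1)
    h[np.asarray(values)] = 1.0
    # Convolving H with its reverse gives the autocorrelation; fftconvolve
    # zero-pads internally, so there is no circular wrap-around.
    corr = fftconvolve(h, h[::-1])
    center = len(h) - 1                      # index of lag d = 0
    a = np.rint(corr[center:]).astype(np.int64)
    return a                                 # a[d] = number of pairs exactly d apart

# Example: pairwise_distance_histogram([1, 3, 4, 7], 7)
# gives a[0] = 4 (each value paired with itself), a[1] = 1, a[2] = 1,
# a[3] = 2, a[4] = 1, a[6] = 1.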

Related

speed-up computation of sum over several subsets

Let's say I have a huge array of doubles w[] indexed from 0 to n-1.
I also have a list of m subsets of [0; n-1]. For each subset S, I am trying to compute the sum of w[i] over all i in S.
Obviously I can compute this separately for each subset, which is going to be in O(m * n).
However is there any faster way to do this? I'm talking from a practical standpoint, as I think you can't have a lower asymptotic bound. Is it possible to pre-process all the subsets and store them in such a way that computing all the sums is faster?
Thanks!
Edit: to give some order of magnitude, my n would be around 20 million, and m around 200.
For subsets that are dense (or nearly dense) you may be able to speed up the computation by computing a running sum of the elements. That is, create another array in parallel with w, where each element in the parallel array contains the sum of the elements of w up to that point.
To compute the sum for a dense subset, you take the running sums at the starting and ending positions of the range in the parallel array, and subtract the running sum at the start from the running sum at the end. The difference between the two is (ignoring rounding errors) the sum for that subset.
For a nearly dense subset, you start by doing the same, then subtract off the values of the (relatively few) items in that range that aren't part of the set.
These may not produce exactly the same result as you'd get by naively summing the subset though. If you need better accuracy, you'd probably want to use Kahan summation for your array of running sums, and possibly preserve its error residual at each point, to be taken into account when doing the subtraction.
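A minimal sketch of the running-sum idea in Python (the helper names and the half-open range convention are my own):

import numpy as np

def build_prefix(w):
    # prefix[i] holds w[0] + ... + w[i-1], so prefix[0] = 0.
    prefix = np.zeros(len(w) + 1)
    prefix[1:] = np.cumsum(w)
    return prefix

def nearly_dense_sum(w, prefix, lo, hi, excluded=()):
    # Sum of w over the half-open range [lo, hi), minus the few indices
    # in that range that do not belong to the subset.
    total = prefix[hi] - prefix[lo]
    for i in excluded:
        total -= w[i]
    return total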

Representation of binary matrix with space complexity of O(n)

I have n x n binary matrices (i.e. matrices whose elements are 0 or 1). Using a two-dimensional array (that is, storing the value of each element) has a space complexity of O(n^2).
Is there any way to store them such that the space complexity is O(n)? Support for operations like addition, subtraction, etc. is welcome.
The matrices are not sparse so using list of non-zero elements is out of question.
No, you can not store an n x n binary matrix in O(n) space.
The proof is just pigeonhole principle.
Suppose you devise a way to store an arbitrary n x n binary matrix.
There are 2^(n x n) possible binary matrices of that size.
If you use k bits for the storage, there are 2^k possible contents of your storage.
Now, if k < n x n, we have 2^k < 2^(n x n), and by the pigeonhole principle, there exist two different matrices (say, A and B) which are stored the same way (say, as X).
So, when you have that X stored, you can not say whether the matrix you actually intended to store was A or B (or maybe some other matrix).
Thus you cannot uniquely decode your storage back into the form of the stored matrix, which destroys the whole purpose of storing it.
First proof: an n*n bit matrix has 2^(n*n) possible states, but with an n-bit string you can only represent 2^n states. So unless n >= n*n (i.e. n = 1), there is no way to encode n*n bits in an n-bit sequence.
Second proof, less abstract but also less complete:
Imagine you have a 16*16 matrix with 256 bits, and somehow manage to store it in 16 bits.
Now, of course, you could take those 16 bits and store them in a 4x4 matrix, apply your algorithm again, and end up with 4 bits. Then you store those 4 bits in a 2x2 matrix and compress them into 2 bits.
--> Essentially, such an algorithm would be able to compress any imaginable amount of data into just 2 bits. While this is not an actual proof, it should be quite obvious that such an algorithm cannot exist.
I don't think it can guarantee you O(n) space, but you can look at a compression algorithm called LZW (Lempel-Ziv-Welch).
It's quite simple to code, it's easy to understand why and how it works, and it should work well for binary arrays; the bigger your matrix is, the better the compression rate will be.
Anyway, if you know some structure of the matrix, you can try to represent it in an array in a way you can restore. For example:
if your matrix is 32x32, you can take each row and represent it as a single int, so a whole row becomes a single number and you may have your O(n).
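A minimal sketch of that row-packing idea (helper names are my own). Note that this still occupies n machine words of 32 bits each, i.e. n*n bits in total, so it does not contradict the pigeonhole argument above:

def pack_rows(matrix):
    # matrix: 32 rows, each a list of 32 zeros and ones.
    return [sum(bit << j for j, bit in enumerate(row)) for row in matrix]

def unpack_rows(packed, width=32):
    return [[(word >> j) & 1 for j in range(width)] for word in packed]

# Row-wise operations then act on whole rows at once, e.g. addition mod 2
# (XOR) of two packed matrices:
# summed = [ra ^ rb for ra, rb in zip(packed_a, packed_b)]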

Fastest algorithm to determine if matrix has maximum rank

What is the fastest algorithm to determine if an N x N matrix has rank N?
In my case, I need an algorithm that is efficient for N around 30. Is there any better way than computing the determinant and checking whether it is zero? I have the feeling that the determinant somehow carries more information than I need with respect to the rank... and I don't even want to know the rank, I just want to know whether it is maximal.
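As a hedged sketch of one common approach (not necessarily the fastest): rather than comparing a floating-point determinant against zero, you can ask for the numerical rank directly, e.g. via NumPy's SVD-based matrix_rank:

import numpy as np

def has_full_rank(a):
    # Rank via SVD is robust to rounding, whereas a nearly singular matrix
    # can still produce a "nonzero" floating-point determinant.
    a = np.asarray(a, dtype=float)
    return np.linalg.matrix_rank(a) == a.shape[0]

# has_full_rank(np.eye(30))          -> True
# has_full_rank(np.ones((30, 30)))   -> False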

Adding square matrices in O(n) time?

Say we have two square matrices of the same size n, named A and B.
A and B share the property that every entry on their main diagonal is the same value (i.e., A[0,0] = A[1,1] = A[2,2] = ... = A[n-1,n-1], and likewise for B).
Is there a way to represent A and B so that they can be added to each other in O(n) time, rather than O(n^2)?
In general: No.
For an nxn matrix, there are n^2 output values to populate; that takes O(n^2) time.
In your case: No.
Even if O(n) of the input/output values are dependent, that leaves O(n^2) that are independent. So there is no representation that can reduce the overall runtime below O(n^2).
But...
In order to reduce the runtime, it is necessary (but not necessarily sufficient) to increase the number of dependent values to O(n^2). Obviously, whether or not this is possible is dictated by the particular scenario...
To complement Oli Charlesworth's answer, I'd like to point out that in the specific case of sparse matrices, you can often obtain a runtime of O(n).
For instance, if you happen to know that your matrices are diagonal, you also know that the resulting matrix will be diagonal, and hence you only need to compute n values.
Similarly, band matrices can be added in O(n), as can more "random" sparse matrices. In general, in a sparse matrix the number of non-zero elements per row is more or less constant (you obtain such matrices from finite element computations, graph adjacency structures, etc.), so with an appropriate representation such as compressed row storage (CRS) or compressed column storage (CCS), adding your two matrices ends up taking O(n) operations.
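As a small illustration (my own example, not from the answer), adding two diagonal matrices stored in SciPy's compressed sparse row format only touches the n stored entries:

import numpy as np
from scipy.sparse import diags

n = 5
a = diags([np.arange(1.0, n + 1)], [0], format="csr")   # diagonal 1..5
b = diags([np.full(n, 7.0)], [0], format="csr")         # constant diagonal of 7s
c = a + b                # only the n stored values are combined
print(c.diagonal())      # [ 8.  9. 10. 11. 12.]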
Also a special mention for sublinear randomized algorithms, which only promise a result that is "not too far" from the exact answer, up to random errors.

What is the complexity of matrix addition?

I have found some mentions in another question of matrix addition being a quadratic operation. But I think it is linear.
If I double the size of a matrix, I need to calculate double the additions, not quadruple.
The main point of divergence seems to be what the size of the problem is. To me, it's the number of elements in the matrix. Others take it to be the number of rows or columns, hence the O(n^2) complexity.
Another problem I have with seeing it as a quadratic operation is that it would mean adding 3-dimensional matrices is cubic, adding 4-dimensional matrices is O(n^4), etc., even though all of these problems can be reduced to the problem of adding two vectors, which has an obviously linear solution.
Am I right or wrong? If wrong, why?
As you already noted, it depends on your definition of the problem size: is it the total number of elements, or the width/height of the matrix? Whichever is correct actually depends on the larger problem of which the matrix addition is a part.
NB: on some hardware (GPUs, vector machines, etc.) the addition might run faster than expected, because the hardware can perform multiple additions in one step; the asymptotic complexity is still the same. For a bounded problem size (like n < 3) it might even be a single step.
It's O(M*N) for a 2-dimensional matrix with M rows and N columns.
Or you can say it's O(L) where L is the total number of elements.
Usually the problem is defined using square matrices "of size N", meaning NxN. By that definition, matrix addition is an O(N^2) operation, since you must visit each of the NxN elements exactly once.
By that same definition, matrix multiplication (using square NxN matrices) is O(N^3) because you need to visit N elements in each of the source matrices to compute each of the NxN elements in the product matrix.
Generally, all matrix operations have a lower bound of O(N^2) simply because you must visit each element at least once to compute anything involving the whole matrix.
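To make the O(N^3) multiplication count above concrete, here is a straightforward (unoptimized) Python sketch of the triple loop; it is only an illustration, not a recommended implementation:

def matmul_naive(a, b, n):
    # N x N multiplication: N*N output cells, each needing N multiply-adds,
    # hence N^3 scalar operations in total.
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c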
Think of the general case implementation:

for i in range(n):          # rows
    for j in range(m):      # columns
        c[i][j] = a[i][j] + b[i][j]

If we take the simple square matrix (m = n), that is n x n additions.
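The same count holds for a vectorized call; as a quick illustration (my own, not part of the original answer), NumPy's elementwise addition still performs one addition per element, so the work remains proportional to n*m even though it runs much faster in practice:

import numpy as np

n, m = 1000, 1000
a = np.random.rand(n, m)
b = np.random.rand(n, m)
c = a + b    # one scalar addition per element: n*m operations, i.e. O(n*m)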
