Parallelize least squares for large (> 30k x 30k) non-square dense matrices - parallel-processing

Let RG = A for dense, unstructured matrices with (roughly) these shapes: R: (30k x 40k, entries float32), G: (40k x 50k, entries either 0.0 or 1.0, roughly equally often), and of course A: (30k x 50k, entries float32).
Given A and G, I want to find the least squares solution for R.
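Explicitly, what I mean is the R minimizing the Frobenius norm of the residual; as far as I understand, the problem decouples over the rows of R, each row r_i being an ordinary overdetermined least-squares problem against G^T (a_i is the corresponding row of A):

$$\min_R \|RG - A\|_F^2 \;=\; \sum_i \min_{r_i} \left\| G^\top r_i^\top - a_i^\top \right\|_2^2$$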
I can use hundreds of CPU cores, hundreds of GB of RAM and also an A40 GPU. What is the best way to use such resources to solve the problem? I'm using Julia 1.7 in the examples below but I'm open to other options!
First question: Can I somehow exploit that the entries of G are only zeros and ones?
Trying to use Julia LinearAlgebra with many CPUs
I've tried two methods: "Penrose inverse" and "right division"
using LinearAlgebra
@show BLAS.get_num_threads()
# defaults to 8. Can change using BLAS.set_num_threads(N)
# build toy problem (order of magnitude smaller sizes)
R_true = rand(Float32, 3_000, 4_000)
G = rand([0., 1.], 4_000, 5_000)
# note: using true/false here gives same results but is much slower!
A = R_true * G
# solve toy problem using matrix (right) division
R_fitted_rdiv = A / G
# solve toy problem using Penrose inverse
R_fitted_pinv = (pinv(G') * A')'
First, setting BLAS.set_num_threads(64) (or any bigger number) actually only gives me BLAS.get_num_threads() returning 32. Apparently that's an upper limit. Second, using 32 BLAS threads is actually slower than using 8.
(e.g. performing right division with sizes (4000, 9800) / (8500, 9800) takes less than 50 seconds on 8 threads but more than 55 seconds on 32 threads. I ran things multiple times to exclude compilation time issues.) I don't know why this is or if it's normal. How can I make use of my computing power for this problem?
I think that the matrix division is faster than the Penrose inverse method. Should this be expected? I don't know what either of the functions do exactly for these inputs. The docs say that left division (\) uses pivoted QR factorization. I couldn't find what algorithm(s) are used for pinv or right division (/) (although it's probably the same as \ since they are related by transposing the matrices). I'd rather not delve too deeply because my knowledge in numerical linear algebra is quite limited.
The issue is that for my large matrices either method takes forever. Is there a way to make use of my ~100 cores somehow?
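For what it's worth, here is how I would express the same transposed-system formulation outside Julia, using SciPy (my own sketch, not benchmarked at full size; scipy.linalg.lstsq ends up in LAPACK's gelsd/gelsy, which are threaded through whatever BLAS/LAPACK SciPy is linked against):
import numpy as np
from scipy.linalg import lstsq

# toy sizes mirroring the Julia example above
R_true = np.random.rand(3_000, 4_000).astype(np.float32)
G = np.random.randint(0, 2, size=(4_000, 5_000)).astype(np.float32)
A = R_true @ G

# R G = A is equivalent to G^T R^T = A^T, a tall least-squares problem
Rt, res, rank, sv = lstsq(G.T, A.T, lapack_driver='gelsy')
R_fitted = Rt.T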
Trying to use the GPU:
Using CUDA.jl, matrices of size around 10k work fine and take about a minute to pinv:
using CUDA
@time matrix = CUDA.rand(Float32, 10_000, 10_500) # 0.003037 seconds (5 allocations: 160 bytes)
@time pinv(matrix) # 57.417559 seconds (678 allocations: 172.094 KiB)
However, when I try to do matrices around size 20k, I get right away the error InexactError: trunc(Int32, 4811456640). I assume this is due to CUBLAS using int32 for indexing, even though I don't understand why it leads to an error in this case. (edit: it's about the size of the array in bytes fitting into 31 bits.)
Trying to use right division with CuArrays gives the error "DimensionMismatch("LU factored matrix A must be square!")". I guess I have to choose a different algorithm manually? I don't know what it's called. (Although, it probably would still crash for large matrices...?)
To summarize, it doesn't look like I can use the GPU from Julia easily to solve my problem. Should I keep trying to use the GPU for this task or stick to the many CPUs?
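One idea I'm considering (just a sketch on the toy sizes, untested at the full problem size): avoid the giant factorization entirely and form the normal equations, R = A G^T (G G^T)^{-1}, so that the only factorization is of the 40k x 40k matrix G G^T and everything else is plain GEMM, which should stay within CUBLAS's size limits. I'm aware this squares the condition number compared to QR-based least squares. In PyTorch it would look roughly like:
import torch

m, k, n = 3_000, 4_000, 5_000                     # toy sizes; the real ones are ~10x larger
R_true = torch.rand(m, k, device='cuda')
G = torch.randint(0, 2, (k, n), device='cuda', dtype=torch.float32)
A = R_true @ G

GGt = G @ G.T                                     # (k, k), symmetric positive semi-definite
AGt = A @ G.T                                     # (m, k)
L = torch.linalg.cholesky(GGt)                    # fails if G G^T is singular / too ill-conditioned
R_fitted = torch.cholesky_solve(AGt.T, L).T       # solves (G G^T) R^T = (A G^T)^T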
Yes, this is really my problem; please refrain from commenting "nobody should ever need such large least squares".

Naive answer
Using PyTorch, this will require at least 30 GB of GPU memory.
import torch
A = torch.randint(0, 2, (50000, 40000), device='cuda', dtype=torch.float32).T
G = torch.randint(0, 2, (50000, 30000), device='cuda', dtype=torch.float32).T
R = torch.lstsq(G.T, A.T)
If the system can sustain the same operation throughput as my laptop, you should have an answer in about 15 minutes.
I would suggest trying a generalized version, scaling up the dimensions, to get a better feeling for how your system will handle it:
def try_it(a,b,c):
A = torch.randint(0, 2, (a, b), device='cuda', dtype=torch.float32).T
G = torch.randint(0, 2, (a, c), device='cuda', dtype=torch.float32).T
R = torch.lstsq(G.T, A.T)
I transposed the dimensions in the generation in order to make sure G.T and A.T would be contiguous.
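Note that torch.lstsq has since been deprecated (and later removed) in favor of torch.linalg.lstsq, which takes its arguments as (A, B) for AX = B and returns a named tuple. With the current API, the generalized version would look roughly like this (same assumptions as above):
import torch

def try_it(a, b, c):
    A = torch.randint(0, 2, (a, b), device='cuda', dtype=torch.float32).T
    G = torch.randint(0, 2, (a, c), device='cuda', dtype=torch.float32).T
    # solve A.T @ X = G.T in the least-squares sense; X has shape (b, c)
    # on CUDA this uses the 'gels' driver, which assumes A.T has full rank
    return torch.linalg.lstsq(A.T, G.T).solution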
You can't take much advantage of the entries being integers. This type of problem is easier to solve over the reals than over the integers: finding integer solutions would require searching the solution space, while the real-valued solution can be found by algebraic manipulation.

Related

Parallel method to get all the eigenvalues of a large sparse matrix

Is it possible to compute all the eigenvalues of a large sparse matrix using multiple CPUs?
If yes, then is it possible to do it without storing the full dense matrix in memory? (using only the stored sparse matrix)
If yes, then what's a good (rapid and low memory usage) method to do it?
Can numpy or scipy do it?
My matrix is complex, non-Hermitian, as sparse as the identity matrix, and of dimension N x N, where N = BinomialCoefficient(L, Floor(L/2)) and we need to take L as large as possible.
For example, with L = 20, N = 184 756 and the matrix is 99.9995% sparse, having just N non-zero elements. So the memory usage of the sparse matrix is ~0.1GB but would be ~10TB for the dense matrix.
With L = 30, N = 155 117 520 and we use ~60GB (sparse) and ~10EB (dense). So it's impractical to store the full dense matrix in memory.
I have access to Intel® Gold 6148 Skylake @ 2.4 GHz CPUs with up to 752 GB of RAM each. I could use Python, C (ScaLAPACK, OpenBLAS, MAGMA, ELPA, MUMPS, SuperLU, SuiteSparse, PETSc, Lis, ...), C++ (Armadillo, Eigen, Blitz++, Trilinos, ...), Matlab, R, Perl, Fortran, mpi4py, CUDA, Intel® Math Kernel Library, and a few other software packages.
I build my matrix using Python (scipy.sparse, numpy and multiprocessing). I've tried using numpy.linalg.eigvals() and scipy.linalg.eigvals(), but it seems that they only use the cores of one CPU. I could look further into those, but I won't if there's a better way to solve my matrix.
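To make it concrete, here is roughly the situation (with a stand-in matrix): the dense call is the part that cannot scale, and the sparse ARPACK routine (scipy.sparse.linalg.eigs) avoids the dense matrix but only returns k << N eigenvalues, not the full spectrum I need.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 184_756                       # the L = 20 case
# stand-in for my matrix: complex, non-Hermitian, ~N non-zero entries
rows = np.arange(N)
cols = np.random.randint(0, N, size=N)
vals = np.random.rand(N) + 1j * np.random.rand(N)
M = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

some_eigs = spla.eigs(M, k=50, which='LM', return_eigenvectors=False)  # partial spectrum only
# full_spectrum = np.linalg.eigvals(M.toarray())  # needs the dense N x N matrix in memory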
For the curious ones, my matrix comes from a representation of a non-Hermitian operator on a subset of states of a length-L quantum spin-1/2 chain with strong interactions. I need the full spectrum because it allows me to study the level spacing distribution of the energy spectrum for a fixed set of quantum numbers.
I'm far from being a professional in computer science, so if I missed some basic concept please be clement.

Non-intuitive perf diff between `matrix * vector`, `matrix' * vector` and `copy(matrix') * vector` Usage Performance blas

Question from Julia Discourse
I’m using Julia 1.2. This is my test:
using LinearAlgebra, BenchmarkTools
a = rand(1000, 1000)
b = adjoint(a)
c = copy(b)
@btime a * x setup=(x=rand(1000)) # 114.757 μs
@btime b * x setup=(x=rand(1000)) # 94.179 μs
@btime c * x setup=(x=rand(1000)) # 110.325 μs
I was expecting a and c to be at very least not slower.
After inspecting stdlib/LinearAlgebra/src/matmul.jl, it turns out that Julia passes b.parent (i.e. a) to BLAS.gemv, not b, and instead switches BLAS's dgemv_ into a different and apparently faster mode.
Am I correct in assuming that the speedup comes from the fact that the memory is aligned in a more favorable way for whatever dgemv_ does, when it’s in a trans = T mode? If so, then I’m guessing this isn’t actionable, besides possibly mentioning the gotcha in the docs somehow. If my assumption is wrong though, is there something to be done about this?
Answer from @stevengj in the same Discourse thread:
Am I correct in assuming that the speedup comes from the fact that the memory is aligned in a more favorable way for whatever dgemv_ does, when it’s in a trans = T mode?
Close. It does have to do with memory, but it’s about locality, not alignment. The basic thing to understand is that it is more efficient to access consecutive (or at least nearby) data from memory than data that is separated, due to the existence of cache lines. (Consecutive access also has some advantages in utilizing SIMD instructions.)
Julia stores matrices in column-major order, so that the columns are contiguous in memory. When you multiply a transposed matrix (that has not been copied) by a vector, therefore, it can compute it as the dot product of the contiguous column (= transposed row) with the contiguous vector, which has good spatial locality and therefore utilizes cache lines efficiently.
For multiplying a non-transposed matrix by a vector, in contrast, you are taking the dot products of non-contiguous rows of the matrix with the vector, and it is harder to efficiently utilize cache lines. To improve spatial locality in this case, an optimized BLAS like OpenBLAS actually computes the dot products of several rows at a time (a “block”) with the vector, I believe — that’s why it’s only 10% slower and not much worse. (In fact, even the transposed case may do some blocking to keep the vector in cache.)
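(A quick way to see the cache-line effect in isolation, sketched here in NumPy rather than Julia: reading the same number of elements with a large stride touches a new cache line per element and is much slower than reading them contiguously.)
import time
import numpy as np

data = np.random.rand(64_000_000)
contiguous = data[:1_000_000]          # 1M adjacent float64s
strided = data[::64]                   # 1M float64s spaced 512 bytes apart

def bench(f, n=10):
    t0 = time.perf_counter()
    for _ in range(n):
        f()
    return (time.perf_counter() - t0) / n

print("contiguous sum:", bench(lambda: contiguous.sum()))
print("strided sum:   ", bench(lambda: strided.sum()))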

CUDA implementation for arbitrary precision arithmetic

I have to multiply two very large (~2000 x 2000) dense matrices whose entries are floats with arbitrary precision (I am using GMP and the precision is currently set to 600 bits). I was wondering if there is any CUDA library that supports arbitrary precision arithmetic? The only library that I have found is called CAMPARY, however it seems to be missing some references to some of the used functions.
The other solution that I was thinking about was implementing a version of the Karatsuba algorithm for multiplying matrices with arbitrary precision entries. The end step of the algorithm would just be multiplying matrices of doubles, which could be done very efficiently using cuBLAS. Is there any similar implementation already out there?
Since nobody has suggested such a library so far, let's assume that one doesn't exist.
You could always go with the naive implementation (a small sketch follows this list):
One grid thread for each pair of coordinates in the output matrix.
Each thread performs an inner product of a row and a column in the input matrices.
Individual element operations will use code taken from GMP (hopefully not much more than copy-and-paste).
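(For reference, the naive scheme sketched on the CPU in Python, with mpmath standing in for GMP; on the GPU, each output element below would be computed by one grid thread.)
from mpmath import mp, mpf, fsum

mp.prec = 600        # binary precision, matching the question's GMP setting

def naive_matmul(A, B):
    # A is n x k, B is k x m, entries are mpf; one inner product per output element
    n, k, m = len(A), len(B), len(B[0])
    return [[fsum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[mpf(1) / (i + j + 1) for j in range(4)] for i in range(4)]
B = [[mpf(i - j) for j in range(4)] for i in range(4)]
C = naive_matmul(A, B)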
But you can also do better than this - just like you can do better for regular-float matrix multiplication. Here's my idea (likely not the best of course):
Consider the worked example of matrix multiplication using shared memory in the CUDA C Programming Guide. It suggests putting small submatrices in shared memory. You can still do this - but you need to be careful with shared memory sizes (they're small...):
A typical GPU today has 64 KB shared memory usable per grid block (or more)
They take a 16 x 16 submatrix.
Times 2 (for the two multiplicands).
Times ceil(801/8) bytes per element (assuming the GMP representation uses 600 bits for the mantissa, one bit for the sign and 200 bits for the exponent).
So 512 * 101 bytes < 64 KB!
That means you can probably just use the code in their worked example as-is, again replacing the float multiplication and addition with code from GMP.
You may then want to consider something like parallelizing the GMP code itself, i.e. using multiple threads to work together on single pairs of 600-bit-precision numbers. That would likely help your shared memory reading pattern. Alternatively, you could interleave the placement of 4-byte sequences from the representation of your elements, in shared memory, for the same effect.
I realize this is a bit hand-wavy, but I'm pretty certain I've waved my hands correctly and it would be a "simple matter of coding".

How to Optimally Add, Multiply and Average Very Large Data Sets in MATLAB Using parfor

I would like to introduce an interesting MATLAB programming problem I’ve encountered in my research. The solution may be of use to people doing computations on very large data sets. It involves striking a balance between RAM and CPU usage using parfor. Because my data is so large, files must be read in over and over again to be processed. The other issue it introduces is finding an optimal algorithm for multiplication, summation and averaging of very large vectors and matrices.
I have found a solution, but it’s time intensive and I would like to see if the community sees any room for improvement. Here’s the general form of the problem.
Suppose we have about 30,000 functions that we’ve taken the Fourier transforms of. Each transform has the form e(k)=a(k)+b(k)*i where k is the magnitude of a wavevector, a is the real component and b is the imaginary component. Each of these transforms is saved to file as a 2-column table with the structure below. The number of elements in each vector is about N=1e6. This means that each of these files is 1/64 GB in size. Note that the values of k_i are not necessarily in order.
k | Re(e) Im(e)
k_1 | a(1) b(1)
k_2 | a(2) b(2)
... |
k_N | a(N) b(N)
The goal is to cross-multiply each pair of modes and average the results over a set of about 50 fixed k-bands. So for example, if we let the elements of vectors 5 and 7 be represented respectively as e5=a5+b5*i and e7=a7+b7*i we need
a5(1)*a7(1) + b5(1)*b7(1)
a5(2)*a7(2) + b5(2)*b7(2)
...
a5(N)*a7(N) + b5(N)*b7(N)
Each element of the above N-dimensional vector belongs within a single k-bin. All the elements in each bin must be averaged and returned. So at the end of one mode-mode comparison we end up with just 50 numbers.
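(Written compactly, one mode-mode comparison is just an elementwise product followed by a grouped mean over the 50 bins; here is a small sketch in NumPy-style notation, where bin_index is a hypothetical length-N integer array mapping each k to its bin.)
import numpy as np

def pair_coefficients(a5, b5, a7, b7, bin_index, n_bins=50):
    # cross-multiply one pair of modes and average inside each k-bin
    mode_mult = a5 * a7 + b5 * b7                              # length-N vector
    sums = np.bincount(bin_index, weights=mode_mult, minlength=n_bins)
    counts = np.bincount(bin_index, minlength=n_bins)
    return sums / counts                                       # the 50 numbers for this pair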
I have at my disposal a computer with 512GB of RAM and 48 processors. My version of MATLAB 2013a limits me to only opening 12 processors with parfor simultaneously on each instance of MATLAB that I run. So what I’ve been doing is opening 4 versions of MATLAB, allocating 10 processors each, and sending the maximum amount of data to each processor without spilling over my self-imposed limit of 450 GB.
Naturally this involves my breaking the problem into blocks. If I have 30000 vectors e, then I will have 30000^2 sets of these 50 cross-coefficients. (It’s actually about half this since the problem is symmetric). I decide to break my problem into blocks of size m=300. This means I’d have 100 rows and columns of these blocks. The code I’m running looks like this (I’m simplifying a bit to just include the relevant bits):
for ii=1:100 % for each of the block-rows
[a,b] = f_ReadModes(ii); % this function reads modes 1 through 300 if ii=1,
% modes 301 through 600 if ii=2 and so on
% “a” and “b” are matrices of size Nx300
% “a” contains the real vectors stored in columns,
% “b” contains the imaginary vectors stored in columns
for jj=ii:100 % for each of the block-columns in the upper triangle
[c,d] = f_ReadModes(jj); % same as above except this reads in a different
% set of 300 modes
block = zeros(300,300,50); % allocates space for the results. The first
% dimension corresponds to the "ii modes".
% The 2nd dimension corresponds to the “jj”
% modes. The 3rd dimension is for each k-bin.
parfor rr=1:300
A = zeros(1,300,50); % temporary storage to keep parfor happy
ModeMult = bsxfun(@times,a(:,rr),c)+bsxfun(@times,b(:,rr),d);
% My strategy is to cross multiply one mode by all the others before
% moving on. So when rr=6, I’m multiplying the 6th mode in a (and b)
% by all the modes in c and d. This would help fill in the 6th row
% of block, i.e. block(6,:,:).
for kk=1:50 % Now I average the results in each of the k-bins
ind_dk = f_kbins(kk); % this returns the rows of a,b,c,d and
% ModeMult that lie within the kk^th bin
A(1,:,kk) = mean(ModeMult(ind_dk,:)); % average results in each bin
end
block(rr,:,:) = A; % place the results in more permanent storage.
end
f_WriteBlock(block); % writes the resulting cross-coefficient block to disk
end
end
There are three bottlenecks in this problem:
1) Read-in time
2) Computing the products ac and bd then summing them (this is ModeMult)
3) Averaging the results of step 2 in each k-bin
Bigger blocks are preferable since they necessitate fewer read-ins. However, the computations in steps 2 and 3 don’t automatically parallelize, so they have to be sent to individual processors using parfor. Because the computational costs are high, utilizing all the processors seems necessary.
The way my code is written, each processor needs enough memory to hold 4*N*m elements. When m=300, this means each processor is using about 10 GB of RAM. It would be nice if the memory requirement for each processor could be lowered somehow. It would also be great if the computations in steps 2 and 3 could be rewritten to run more efficiently. Any thoughts?

Hiding communication in Matrix Vector Product with MPI

I have to solve a huge linear equation for multiple right-hand sides (let's say 20 to 200). The matrix is stored in a sparse format and distributed over multiple MPI nodes (let's say 16 to 64). I run a CG solver on the rank 0 node. It's not possible to solve the linear equation directly, because the system matrix would be dense (Sys = A^T * S * A).
The basic Matrix-Vector multiplication is implemented as:
broadcast x
y = A_part * x
reduce y
While the collective operations are reasonably fast (OpenMPI seems to use a binary-tree-like communication pattern + InfiniBand), they still account for quite a large part of the runtime. For performance reasons we already calculate 8 right-hand sides per iteration (basically SpM * DenseMatrix, just to be complete).
I'm trying to come up with a good scheme to hide the communication latency, but I did not have a good idea yet. I also try to refrain from doing 1:n communication, although I did not yet measure if scaling would be a problem.
Any suggestions are welcome!
If your matrix is already distributed, would it be possible to use a distributed sparse linear solver instead of running it only on rank 0 and then broadcasting the result (if I'm reading your description correctly...)? There are plenty of libraries for that, e.g. SuperLU_DIST, MUMPS, PARDISO, Aztec(OO), etc.
The "multiple rhs" optimization is supported by at least SuperLU and MUMPS (haven't checked the others, but I'd be VERY surprised if they didn't support it!), since they solve AX=B where X and B are matrices with potentially > 1 column. That is, each "rhs" is stored as a column vector in B.
If you don't need the results of an old right-hand side before starting the next run, you could try non-blocking communication (Isend, Irecv) and communicate the result while already calculating the next right-hand side.
But make sure you call MPI_Wait before reading the content of the communicated array, in order to be sure you're not reading "old" data.
If the matrices are big enough (i.e. it takes long enough to calculate the matrix-product) you don't have any communication delay at all with this approach.
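A minimal sketch of that idea with mpi4py and non-blocking collectives (names and sizes are made up; it assumes an MPI library with MPI-3 non-blocking collectives, and that each rank holds a partial matrix whose products are summed by the reduce):
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n, n_rhs = 1_000, 8
A_part = np.random.rand(n, n) / size               # this rank's share of the matrix
xs = [np.empty(n) for _ in range(n_rhs)]
results = [np.empty(n) if rank == 0 else None for _ in range(n_rhs)]

requests, keep_alive = [], []
for i, x in enumerate(xs):
    if rank == 0:
        x[:] = np.random.rand(n)                   # rank 0 produces the next right-hand side
    comm.Bcast(x, root=0)                          # broadcast x
    y_local = A_part @ x                           # y = A_part * x
    keep_alive.append(y_local)                     # buffer must stay valid until the reduce completes
    requests.append(comm.Ireduce(y_local, results[i], op=MPI.SUM, root=0))  # start reduce, keep computing

MPI.Request.Waitall(requests)                      # wait (cf. MPI_Wait above) before reading any result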
