I have an NxN symmetric matrix A, and I want to extract elements of this matrix to construct several smaller matrices. For example,
A_1 = A(1:10,1:10), A_2 = A(11:N,11:N), A_3 = A(1:10,11:N).
In MATLAB, a sparse matrix can be sliced exactly like this. In my Fortran code, however, it cannot, because matrix A is stored in CSR format. Moreover, all of the resulting matrices also have to be stored in CSR format. What should I do in this case?
Question: Is there any library available for Fortran, like MATLAB's sparse functionality, that makes it easy to handle sparse matrices?
Thanks.
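For reference, extracting a sub-block from a CSR matrix only takes a single pass over the row-pointer array, and libraries such as SPARSKIT (a classic Fortran toolkit for sparse matrices) ship CSR manipulation routines of this kind. Below is a minimal sketch of the idea in C++ with 0-based indices; the struct and function names are made up for illustration, and the loop translates directly to Fortran's usual 1-based CSR arrays.

#include <vector>

// A minimal CSR container; names are made up for illustration.
// 0-based indexing here; Fortran CSR is usually 1-based.
struct Csr {
    int rows = 0, cols = 0;
    std::vector<int> rowPtr;    // length rows + 1
    std::vector<int> colIdx;    // length nnz
    std::vector<double> val;    // length nnz
};

// Extract the sub-block A(r0:r1-1, c0:c1-1) as a new CSR matrix:
// walk each requested row and keep entries whose column lies in [c0, c1).
Csr extractBlock(const Csr& A, int r0, int r1, int c0, int c1) {
    Csr B;
    B.rows = r1 - r0;
    B.cols = c1 - c0;
    B.rowPtr.assign(B.rows + 1, 0);
    for (int i = r0; i < r1; ++i) {
        for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k) {
            const int j = A.colIdx[k];
            if (j >= c0 && j < c1) {
                B.colIdx.push_back(j - c0);   // shift to local column numbering
                B.val.push_back(A.val[k]);
            }
        }
        B.rowPtr[i - r0 + 1] = static_cast<int>(B.val.size());
    }
    return B;
}

With this, A_1 above is extractBlock(A, 0, 10, 0, 10) and A_3 is extractBlock(A, 0, 10, 10, N) in 0-based terms. One caveat: if only one triangle of the symmetric A is stored, an off-diagonal block like A_3 also needs the entries held in the other triangle.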
My original goal was to do some juggling with a couple of matrices. It's simple, and I explain it for 2D matrices below:
Given a certain matrix Matrix1 and a binary matrix Matrix2 (example matrices omitted), I want to allocate the elements from Matrix1 into Matrix2 so that the final matrix Matrix3 carries the entries of Matrix1 at the positions where Matrix2 is nonzero.
The following one-liner worked for me:
(Matrix3 = zeros(eltype(Matrix1),size(Matrix2)))'[Matrix2'[:]] .= Matrix1'[:]
Now I need to extend this to higher dimensions, i.e. 3D or more. So suppose Matrix1 has dimensions (4,6,6) and the binary matrix Matrix2 has dimensions (4,12,12). The allocation problem remains the same. How would you approach it then? Can someone kindly help me with it (preferably with a one-liner)? Note that both matrices have the same size along the first dimension (4 in this case), and along the remaining two dimensions each matrix is individually square.
I'm looking for a way to implement block-diagonal matrices in TensorFlow. Specifically, I have a block-diagonal matrix A with N blocks of size S x S each. Further, I have a vector v of length N*S. I want to calculate A dot v. Is there an efficient way to do this in TensorFlow?
Also, I would prefer an implementation that supports a batch dimension for v (i.e. its real shape is batch_size x (N*S)) and that is memory efficient, keeping only the block-diagonal part of A in memory.
Thanks for any help!
You can simply convert your tensor to a sparse tensor, since a block-diagonal matrix is just a special case of one. Then the operations are done in an efficient way. If you already have a dense representation of the tensor, you can cast it using sparse_tensor = tf.contrib.layers.dense_to_sparse(dense_tensor). Otherwise, you can construct it with the tf.SparseTensor(...) constructor. To get the indices, you might use tf.strided_slice; see this post for more information.
I have code that was written to use the UMFPACK sparse matrix solver, but I need to convert it to use Eigen's sparse matrices, and I am running into memory problems.
I have Ap (column pointers), Ai (row indices) and Ax (the values array), and I am trying to solve Ax = b. How can I pass these arrays to Eigen, or what do I need to change?
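Eigen (3.3 and newer) can wrap existing compressed-storage arrays in place via Eigen::Map<Eigen::SparseMatrix<...>>, so you do not have to rebuild the matrix element by element. Here is a minimal sketch, assuming UMFPACK's usual compressed-column (CSC) layout (Ap = column pointers, Ai = row indices, Ax = values) with int indices; the sample data is made up. If your arrays really are row-compressed, map an Eigen::SparseMatrix<double, Eigen::RowMajor> instead.

#include <Eigen/Sparse>
#include <iostream>

int main() {
    // Hypothetical 3x3 matrix in UMFPACK's CSC layout.
    int    Ap[] = {0, 2, 3, 5};          // column pointers
    int    Ai[] = {0, 2, 1, 0, 2};       // row indices
    double Ax[] = {4.0, 1.0, 3.0, 2.0, 5.0};
    const int n = 3, nnz = 5;

    // View the raw arrays as a sparse matrix; no copy is made here.
    Eigen::Map<Eigen::SparseMatrix<double>> A(n, n, nnz, Ap, Ai, Ax);

    Eigen::VectorXd b(3);
    b << 1.0, 2.0, 3.0;

    // SparseLU plays the role UMFPACK played; the factorization is where
    // new memory gets allocated.
    Eigen::SparseMatrix<double> Amat(A);   // one copy for the solver
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(Amat);
    Eigen::VectorXd x = solver.solve(b);
    std::cout << x << std::endl;
}

The Map itself allocates nothing, so you can use it directly in matrix-vector products; the one explicit copy above exists only because the LU factorization needs its own matrix anyway.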
I have the following problem. There is a matrix A of size NxN, where N = 200,000. It is very sparse: each row contains exactly M elements, where M = {6, 18, 40, 68, 102} (I have 5 different scenarios), and the rest are zeros.
Now I would like to get all the eigenvalues and eigenvectors of matrix A.
The problem is that I cannot fit matrix A into memory, as it is around 160 GB of data. What I am looking for is software that stores the sparse matrix compactly (without the zeros, my matrix is just a few MB) and then feeds this compact representation to an algorithm that computes the eigenvalues and eigenvectors.
Can any of you recommend software for that?
EDIT: I found out that I can reorder my matrix A so that it becomes a band matrix. Then I could use LAPACK to get the eigenvalues and eigenvectors (concretely: http://software.intel.com/sites/products/documentation/doclib/iss/2013/mkl/mklman/GUID-D3C929A9-8E33-4540-8854-AA8BE61BB08F.htm). The problem is that I need all the vectors, and since my matrix is NxN, I cannot let LAPACK store the whole solution (all eigenvectors) in memory. The best option would be a routine that gives me the first K eigenvectors, after which I rerun the program to get the next K eigenvectors, and so on, so I can save the results to a file.
You may try to use the SLEPC library http://www.grycap.upv.es/slepc/description/summary.htm :
"SLEPc the Scalable Library for Eigenvalue Problem Computations, is a software library for the solution of large sparse eigenproblems on parallel computers."
Read the second chapter of their users'manual, "EPS: Eigenvalue Problem Solver". They are focused on methods that preserve sparcity...but a limited number of eigenvalues and eigenvectors are computed.
I hope our matrices have good properties (positive definite for instance...).
EPSIsPositive(EPS eps,PetscBool *pos);
You may be interested in "spectrum slicing" to compute all eigenvalues in a given interval... Or you may set a target and compute the eigenvalues closest to that target.
See http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html#EPSSetWhichEigenpairs
See examples http://www.grycap.upv.es/slepc/documentation/current/src/eps/examples/tutorials/index.html
Why do you need to compute all the eigenvectors of such large matrices?
Bye,
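To give a flavour of the EPS interface, here is a minimal sketch of the calling sequence (SLEPc exposes a C API, so this is plain C-style code, equally valid as C++). It computes the nev smallest eigenpairs of an already assembled symmetric PETSc Mat; error checking is omitted and the function name is made up.

#include <slepceps.h>

PetscErrorCode smallest_eigenpairs(Mat A, PetscInt nev)
{
    EPS         eps;
    PetscInt    i, nconv;
    PetscScalar kr, ki;
    Vec         xr, xi;

    MatCreateVecs(A, &xr, NULL);
    MatCreateVecs(A, &xi, NULL);

    EPSCreate(PETSC_COMM_WORLD, &eps);
    EPSSetOperators(eps, A, NULL);        /* standard problem A x = k x */
    EPSSetProblemType(eps, EPS_HEP);      /* Hermitian: real eigenvalues */
    EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);
    EPSSetDimensions(eps, nev, PETSC_DEFAULT, PETSC_DEFAULT);
    EPSSolve(eps);

    EPSGetConverged(eps, &nconv);
    for (i = 0; i < nconv; i++) {
        EPSGetEigenpair(eps, i, &kr, &ki, xr, xi);
        /* write kr and xr to disk here instead of keeping them in memory */
    }

    EPSDestroy(&eps);
    VecDestroy(&xr);
    VecDestroy(&xi);
    return 0;
}

Rerunning with a shifted target (EPSSetTarget together with EPS_TARGET_REAL in EPSSetWhichEigenpairs), or spectrum slicing over intervals (EPSSetInterval), is how you would sweep the spectrum in chunks of K eigenpairs and write each chunk to a file, as asked above.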
I am using Armadillo mostly for symmetric and triangular matrices, and I want to be efficient in terms of memory storage. However, it seems there is no way around creating a new mat and filling the lower/upper part of the matrix with zeros (for triangular) or with duplicates (for symmetric).
Is there a more efficient way of using triangular/symmetric matrices using Armadillo?
Thanks,
Antoine
There is no specific support for triangular or banded matrices in Armadillo. However, since version 3.4, support for sparse matrices has gradually been added. Depending on which Armadillo functions you need, and on the sparsity of your matrix, you might gain from using SpMat<type>, which implements the compressed sparse column (CSC) format. For each nonzero value in your matrix, the CSC format stores the row index along with the value, so you would likely not save much memory for a triangular matrix. A band matrix should, however, consume significantly less memory.
symmatu()/symmatl() and trimatu()/trimatl()
may be what you are looking for:
http://arma.sourceforge.net/docs.html
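For instance, a short sketch of how these fit together (illustrative only; the result is still a dense matrix, so this is about convenience rather than memory savings):

#include <armadillo>

int main() {
    arma::mat A(4, 4, arma::fill::randu);

    // Keep only the upper triangle; the strictly lower part becomes zero.
    arma::mat U = arma::trimatu(A);

    // Rebuild a symmetric matrix by reflecting the upper triangle downwards.
    arma::mat S = arma::symmatu(U);

    S.print("S:");
    return 0;
}

If memory is the real constraint, combining these functions with the SpMat<type> class mentioned in the answer above is probably the closer fit.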