I need to compute the determinant of a complex symmetric matrix. The size ranges from 500x500 to 2000x2000. Is there a subroutine I can call? By the way, I use ifort to compile.
The easiest way would be to do an LU decomposition as described here. I would suggest using LAPACK for this task...
This article has some code in C doing that for a real-valued symmetric matrix, so you would need to exchange dspsv for zspsv to handle double-precision complex matrices.
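For illustration, here is a minimal sketch of the LU route via LAPACK's zgetrf (the general LU factorization; the determinant is the product of U's diagonal entries, with a sign flip for each row interchange). This is an untested sketch; note that for matrices near 2000x2000 the determinant can easily overflow, in which case you may want to accumulate its logarithm instead.

program det_complex
  implicit none
  integer, parameter :: n = 500
  complex*16 :: a(n,n), det
  integer :: ipiv(n), info, i
  ! ... fill a with your complex symmetric matrix ...
  call zgetrf(n, n, a, n, ipiv, info)   ! LU factorization; a is overwritten
  if (info /= 0) stop 'factorization failed'
  det = (1.0d0, 0.0d0)
  do i = 1, n
    det = det * a(i,i)                  ! product of U's diagonal
    if (ipiv(i) /= i) det = -det        ! sign change per row interchange
  end do
  print *, 'det = ', det
end program det_complex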
I need to compute the following matrices:
M = XSX^T
and
V = XSy
What I'd like to know is the most efficient implementation using BLAS, knowing that S is a symmetric positive definite matrix of dimension n, X has m rows and n columns, and y is a vector of length n.
My implementation is the following:
I compute A = XS using dsymm; then M = AX^T is obtained with dgemm, while dgemv is used to obtain V = Ay.
I think that at least M can be computed more efficiently, since I know that M is symmetric and positive definite.
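In BLAS calls, the scheme described above might look like this (a sketch assuming column-major double-precision arrays; the array names are illustrative):

! A = X*S   (S symmetric, applied from the right)
call dsymm('R', 'U', m, n, 1.0d0, S, n, X, m, 0.0d0, A, m)
! M = A*X^T
call dgemm('N', 'T', m, m, n, 1.0d0, A, m, X, m, 0.0d0, M, m)
! V = A*y
call dgemv('N', m, n, 1.0d0, A, m, y, 1, 0.0d0, V, 1)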
Your code is the best BLAS can do for you. There is no BLAS operation that can exploit the fact that M is symmetric.
You are right, though: technically you'd only need to compute the upper triangular part of the gemm product and then copy it to the strictly lower triangular part. But there is no routine for that.
May I inquire about the sizes? And may I also suggest some other sources of performance gains: your own build of your BLAS implementation, and a comparison of MKL, ACML, OpenBLAS, and ATLAS. You could obviously code your own version using AVX and FMA intrinsics; you should be able to do better than a generalised library. Also, what is the precision of your floating-point variables?
I seriously doubt that you would gain much by coding it yourself anyway. But what I would definitely suggest is converting everything to single precision (floats) and testing whether it gives you the same result, with a significant speed-up in compute time. Only very seldom have I seen cases where it did not, and those were more in the ODE-solving domain and the numerical integration of nasty functions.
But you did not address my question regarding the BLAS implementation and machine type.
Again, optimisation beyond this point is not possible without considerably more effort :(. But seriously, don't be too worried about this. There is a reason why BLAS does not offer the optimisation you ask for: it might not be worth the hassle. Go with your solution.
And don't forget to investigate the use of floats rather than doubles. In R, convert everything to float; for the LAPACK calls, use only the sgemX variants.
Without knowing the details of your problem, it can be useful to recognise zeros in the matrices: partitioning the matrices to exploit them can provide significant benefits. Is M the sum of many XSX^T sub-matrices?
For V = XSy, where y is a vector and X and S are matrices, calculating Sy first and then X(Sy) should be better, unless XS is needed anyway for M.
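A sketch of that ordering in BLAS calls (assuming double precision, with t an illustrative temporary vector of length n):

! t = S*y   (symmetric matrix-vector product, O(n^2))
call dsymv('U', n, 1.0d0, S, n, y, 1, 0.0d0, t, 1)
! V = X*t   (O(m*n); the O(m*n^2) product XS is never formed)
call dgemv('N', m, n, 1.0d0, X, m, t, 1, 0.0d0, V, 1)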
I have a piece of Fortran 90 code in which I have to solve both a non-linear system (for which I have to invert the Jacobian matrix) and a linear system of equations. Both are small: n unknowns for both operations, with n <= 4. Unfortunately, n is not known a priori. What do you think is the best option?
I thought of writing explicit formulas for the cases n = 1, 2 and using other methods for n = 3, 4 (e.g. some routines from the Intel MKL libraries) for the sake of performance. Is this sensible, or should I write explicit formulas for the inverse matrix for n = 3, 4 as well?
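For the n = 3, 4 cases, a hedged sketch of the LAPACK route (dgesv is the general solver that MKL ships; variable names here are illustrative, and the right-hand side is overwritten with the solution):

integer :: n, ipiv(4), info
double precision :: jac(4,4), b(4)
! ... fill jac(1:n,1:n) with the Jacobian and b(1:n) with the RHS ...
call dgesv(n, 1, jac, 4, ipiv, b, 4, info)   ! on exit b(1:n) holds the solution
if (info /= 0) stop 'singular Jacobian'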
My question concerns mixed-integer programming (MIP) in SCIP:
I have the following problem:
$$\min \operatorname{trace}(X)$$
subject to
$$(A+D)^T X + X(A+D) = I,\qquad d_i \in \{0,1\} \mbox{ for } i=1,\ldots,n,$$
where $A$ is an $n \times n$ matrix and $D = \operatorname{diag}(d_1,\ldots,d_n)$ is a diagonal matrix.
Since the matrix constraint is linear, the equation can be transformed into a system of linear equations (via the Kronecker product and the vectorization operation), but this is limited to small n. Is it possible to solve the matrix equation directly with SCIP? Is there a way to embed an external solver? Or do I have to write my own solver for the continuous Lyapunov matrix equation?
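For reference, the transformation mentioned above uses the identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$: writing $C = A+D$, the constraint becomes
$$\left(I \otimes C^T + C^T \otimes I\right)\operatorname{vec}(X) = \operatorname{vec}(I),$$
an $n^2 \times n^2$ linear system in $\operatorname{vec}(X)$, which is why this route only scales to small $n$.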
You could try using the PIP file format, which is used for polynomial constraints and objectives; see http://polip.zib.de/ and http://polip.zib.de/pipformat.php
You would have to do the matrix operations yourself or use ZIMPL.
Matrix equations cannot be handled in SCIP; you would need to transform them into linear equations. Also, all the data has to be loaded into an LP solver at some point and needs to be formulated as ordinary constraints there as well. So even if SCIP itself were able to handle matrix equations, you would sooner or later be required to expand the problem.
Can anyone help me? I want to generate a matrix whose elements are zero-mean, unit-variance, independent and identically distributed (i.i.d.) circularly symmetric Gaussian variables, using MATLAB. Does anyone know the code for this and how to do it?
It is easy to generate a matrix of real Gaussian elements with zero mean and unit variance by using this command in MATLAB:
normrnd(mu, sigma)
where mu is the mean and sigma is the standard deviation; for more detail see help normrnd in MATLAB. Note, however, that normrnd produces real values. For circularly symmetric complex Gaussian variables with zero mean and unit variance, draw the real and imaginary parts independently, each with variance 1/2:
H = (randn(m, n) + 1i*randn(m, n)) / sqrt(2);
I am very new to programming, and to Fortran in particular. I am using the LAPACK (Linear Algebra PACKage) library for Fortran to find the eigenvalues and eigenvectors of a large symmetric real matrix. Specifically, I calculate a scalar from each eigenvector, and I want to graph it against the associated eigenvalue.
I am using the subroutine DSYEV of LAPACK to do this. However, DSYEV outputs the eigenvalues in ascending order, and I'm not sure how it orders the eigenvectors. Is there a way to associate each eigenvector with its eigenvalue?
Edit: The official page for DSYEV is here: http://www.netlib.org/lapack/double/dsyev.f
Here is another page about it: http://www.nag.co.uk/numeric/fl/nagd...F08/f08faf.xml
They are in the same order: the i-th column of the eigenvector array corresponds to the i-th eigenvalue in W. You can actually check this by matrix multiplication, which is much easier and faster than finding the eigenvectors.
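A sketch of that multiplication check (illustrative names and size; Aorig is a copy saved before the call, since DSYEV overwrites its input with the eigenvectors):

integer, parameter :: n = 1000               ! illustrative size
double precision :: A(n,n), Aorig(n,n), w(n), r(n), work(3*n-1)
integer :: info, i
! ... fill A with the symmetric matrix ...
Aorig = A                                    ! keep a copy; dsyev destroys A
call dsyev('V', 'U', n, A, n, w, work, 3*n-1, info)
do i = 1, n
  r = matmul(Aorig, A(:,i)) - w(i)*A(:,i)    ! residual A*v - lambda*v, ~0
  print *, i, w(i), maxval(abs(r))
end do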