Efficient way of computing matrix product AXA'?

I'm currently using the BLAS routine DSYMM to compute Y = AX and then DGEMM for YAᵀ, but I'm wondering: is there a more efficient way of computing the matrix product AXAᵀ, where A is an arbitrary n×n matrix and X is a symmetric n×n matrix?
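The DSYMM-then-DGEMM pipeline described above can be sketched with SciPy's low-level BLAS wrappers (an assumption, since the question names no language binding). One point worth noting: AXAᵀ is itself symmetric, so after Y = AX the final product can also be formed with DSYR2K, which only fills one triangle, since ½(YAᵀ + AYᵀ) = AXAᵀ.

```python
import numpy as np
from scipy.linalg.blas import dsymm, dgemm, dsyr2k

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # arbitrary n x n matrix
X = rng.standard_normal((n, n))
X = (X + X.T) / 2                 # symmetric n x n matrix

# Step 1: Y = A @ X, exploiting the symmetry of X with DSYMM.
# side=1 places the symmetric operand on the right: C := alpha * B * A_sym.
Y = dsymm(alpha=1.0, a=X, b=A, side=1)

# Step 2: Z = Y @ A.T with DGEMM (trans_b=1 transposes the second operand).
Z = dgemm(alpha=1.0, a=Y, b=A, trans_b=1)
assert np.allclose(Z, A @ X @ A.T)

# Alternative step 2: DSYR2K computes alpha*(A*B^T + B*A^T) into one
# triangle; with alpha = 1/2, a = Y, b = A this equals A X A^T, at roughly
# half the flops of the full GEMM.
Zl = dsyr2k(0.5, Y, A, lower=1)
assert np.allclose(np.tril(Zl), np.tril(A @ X @ A.T))
```

The DSYR2K variant leaves the untouched triangle unreferenced, so only the lower triangle is compared.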

Related

How to generate an orthogonal symmetric matrix?

I want to create an orthogonal symmetric matrix of size 5 and of size 6. Is there a way to generate this type of matrix? An orthogonal symmetric matrix is a matrix A that is equal to both its transpose and its inverse.
I've tried searching for ways to do this, but all I find is how to generate an orthogonal matrix or how to generate a symmetric matrix; I couldn't find a way to generate a matrix that is both.
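One route: a matrix that is both symmetric and orthogonal satisfies A² = I, so its eigenvalues are ±1 and it can always be written A = Q D Qᵀ with Q orthogonal and D = diag(±1). A minimal NumPy sketch of that construction (NumPy is an assumption; the question names no language):

```python
import numpy as np

def random_orthogonal_symmetric(n, seed=None):
    """Build A = Q D Q^T with Q orthogonal and D = diag(+/-1).

    Any such A satisfies A = A^T and A @ A = I, i.e. it is both
    symmetric and orthogonal (geometrically, a reflection across
    the subspace spanned by the +1 eigenvectors).
    """
    rng = np.random.default_rng(seed)
    # Random orthogonal Q via QR of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    # Random +/-1 signs on the diagonal.
    d = rng.choice([-1.0, 1.0], size=n)
    return Q @ np.diag(d) @ Q.T

for n in (5, 6):
    A = random_orthogonal_symmetric(n, seed=42)
    assert np.allclose(A, A.T)              # symmetric
    assert np.allclose(A @ A.T, np.eye(n))  # orthogonal
```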

Diagonal Matrix of Sigma Values in Julia

If I compute the SVD of a matrix A in Julia, it gives the singular values of the matrix, but NOT in matrix form. If I want to assemble the singular values of A into a diagonal matrix, is there any way to do this other than manually typing the values into the Diagonal() function?
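In Julia, `svd(A)` returns a factorization object whose `S` field holds the singular values as a vector, and `Diagonal(svd(A).S)` (with `LinearAlgebra` loaded) assembles them into a diagonal matrix with no manual typing. The same pattern in NumPy, shown here so the example is self-contained (the translation to Python is this answer's choice, not the question's):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# The SVD returns the singular values as a 1-D array, not a matrix...
U, s, Vt = np.linalg.svd(A)

# ...but np.diag (the analogue of Julia's Diagonal) builds the Sigma matrix.
Sigma = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, A)
```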

How we can use CUR decomposition in place of SVD decomposition?

I have understood how CUR and SVD work, but I am not able to understand:
How can we use CUR in place of the SVD decomposition?
Do the C and R matrices in CUR have the same properties as the U and V matrices in the SVD?
If we want to reduce the dimension of the original matrix, say from n to k, which matrix of CUR should we use to project the original matrix so that we get k-dimensional data points?
There is a paper called Finding Structure with Randomness that addresses some points about all of these decompositions, as well as the SVD, which is covered in Trefethen and Bau.
The interpolative decomposition is used in different places. A paper that explores it is here.
U and V are unitary matrices. C is a matrix containing a subset of the columns of A, and R a subset of its rows.
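A minimal sketch of how C and R are used in place of an SVD: sample column and row subsets, then form the middle factor U = C⁺AR⁺ so that A ≈ CUR; for dimensionality reduction, projecting with C⁺ gives k-dimensional coordinates, the role Uₖᵀ plays for the SVD. The sketch below uses uniform sampling for simplicity; practical CUR algorithms sample by leverage scores, and the test matrix is built with exact rank k so the factorization reconstructs it exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a rank-3 test matrix so an exact CUR with k = 3 exists.
n, r = 8, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

k = 3
cols = rng.choice(n, size=k, replace=False)   # sampled column indices
rows = rng.choice(n, size=k, replace=False)   # sampled row indices

C = A[:, cols]   # actual columns of A (interpretable, unlike the SVD's U)
R = A[rows, :]   # actual rows of A    (interpretable, unlike the SVD's V)
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # middle factor

assert np.allclose(C @ U @ R, A)   # exact here because rank(A) = k

# Dimensionality reduction: express each column of A in terms of the
# k sampled columns, giving k-dimensional data points.
A_reduced = np.linalg.pinv(C) @ A
assert A_reduced.shape == (k, n)
```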

Can 2D convolution be represented as matrix multiplication?

Discrete convolution can be represented as multiplication of the input by a matrix M, where M is a special case of a Toeplitz matrix: a circulant matrix.
The question is: can 2D convolution also be represented as matrix multiplication?
P.S. By discrete convolution I mean discrete convolution with indices taken modulo N, i.e. the discrete signal is periodic:
... x[N-1] x[0] x[1] ... x[N-1] x[0] ...
Yes, it can, but it will generally be a rather big matrix. If your data set is on a grid of size N×M, then the convolution is a matrix operating on a vector of length N·M; the convolution matrix has N²M² elements.
If your convolution kernel is small, the matrix will typically be a band matrix where the width of the band is at least N or M.
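The matrix in question is doubly block circulant (circulant blocks arranged in a circulant pattern). A toy-size NumPy sketch, building it explicitly and checking against FFT-based circular convolution; this is an illustration, not an efficient implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 4, 5                       # grid size
x = rng.standard_normal((N, M))   # input signal
k = rng.standard_normal((3, 3))   # small kernel

# Zero-pad the kernel to the grid size (periodic/circular convolution).
kpad = np.zeros((N, M))
kpad[:3, :3] = k

# Doubly block circulant matrix with N^2 M^2 entries: row (i*M + j)
# holds kpad[(i - a) mod N, (j - b) mod M] for each input position (a, b).
Mat = np.zeros((N * M, N * M))
for i in range(N):
    for j in range(M):
        for a in range(N):
            for b in range(M):
                Mat[i * M + j, a * M + b] = kpad[(i - a) % N, (j - b) % M]

y_matrix = (Mat @ x.ravel()).reshape(N, M)

# Reference: circular 2D convolution via the FFT convolution theorem.
y_fft = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(kpad)))

assert np.allclose(y_matrix, y_fft)
```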

Finding Matrix inverse using SIMPLEX Method

How can we find the inverse of a matrix using the simplex method? Does the matrix need to be square, or can the inverse be found for any matrix? Also, is there an upper bound on the matrix size?
During the simplex method, a matrix inverse is required only for the basis matrix (basis inversion).
The basis matrix is a square matrix of dimensions m×m, where m is the total number of constraints.
This matrix inversion is carried out using either the product form of the inverse or an LU decomposition.
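The product form of the inverse mentioned above can be sketched as follows: when a simplex pivot swaps one column of the basis, the new basis inverse is an eta (elementary) matrix E times the old inverse, so the inverse is updated rather than refactored from scratch. A toy NumPy check of that update (the variable names are this sketch's own):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 5
B = rng.standard_normal((m, m)) + 5 * np.eye(m)   # basis matrix (well conditioned)
Binv = np.linalg.inv(B)

# Simplex pivot: column j of the basis is replaced by the entering column a.
j = 2
a = rng.standard_normal(m)
u = Binv @ a          # entering column expressed in the current basis

# Eta matrix: identity with column j replaced, chosen so E @ Binv = inv(B_new).
eta = -u / u[j]
eta[j] = 1.0 / u[j]
E = np.eye(m)
E[:, j] = eta

B_new = B.copy()
B_new[:, j] = a
assert np.allclose(E @ Binv, np.linalg.inv(B_new))
```

Because E differs from the identity in a single column, applying it costs O(m²) instead of the O(m³) of a full inversion; a degenerate pivot with u[j] ≈ 0 would have to be rejected, which the fixed seed here avoids.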
