I have an N by P matrix mu in which the n-th row is a P-vector representing the mean of a multivariate Gaussian, and a P by P matrix Sigma representing a covariance matrix shared by all N Gaussians.
Is there any way to sample from all N multivariate Gaussians in NumPy faster than using a for loop?
import numpy as np

Normal = np.random.multivariate_normal
X = np.empty((N, P))  # preallocate: one sampled P-vector per row
for n in range(N):
    X[n] = Normal(mean=mu[n], cov=Sigma)
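One way to avoid the Python loop (a sketch, not from the original post) is to factor the shared Sigma once with a Cholesky decomposition and transform standard-normal draws; the names L and Z below are introduced for illustration.

import numpy as np

# Factor the shared covariance once: Sigma = L @ L.T
L = np.linalg.cholesky(Sigma)
# Draw N independent standard-normal P-vectors
Z = np.random.standard_normal((N, P))
# Each row of Z @ L.T has covariance L @ L.T = Sigma; adding mu shifts the means
X = mu + Z @ L.T

This draws all N samples with a single matrix multiplication instead of N separate calls.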
Given two column vectors a and b, let P be the matrix formed by their outer product:
P = a * b^T
where ^T denotes the transpose.
Also given: a sparse matrix S whose entries are only 1s and 0s.
I want to compute the following matrix:
S % P = S % ( a * b^T )
where % denotes element-wise multiplication of the two matrices.
In other words, I want the matrix whose elements (i,j) are:
The product of elements a_i * b_j for S_ij = 1, or
Zero for S_ij = 0.
The formula S % (a * b^T) involves computing many products that are set to zero anyway, so this does not seem very efficient. Another way would be to loop through the elements of the sparse matrix S and compute each product a_i * b_j by hand, but I wonder whether there is a faster matrix/vector computation for this.
Thanks
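One way to do this (a sketch with made-up example data, assuming SciPy is available) is to work directly on the stored nonzeros of S, so the products that would be zeroed out are never computed:

import numpy as np
import scipy.sparse as sp

# Example data; in practice a, b, and S come from your problem
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
S = sp.coo_matrix(np.array([[1, 0, 0],
                            [0, 1, 1],
                            [1, 0, 0]]))

# Compute a_i * b_j only at the positions where S_ij = 1,
# using the COO row/column index arrays of S
result = sp.coo_matrix((a[S.row] * b[S.col], (S.row, S.col)), shape=S.shape)

This touches only nnz(S) entries rather than forming the full outer product.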
I am trying to use a Cholesky decomposition to generate a matrix of multivariate normal samples with this: Y = U + X*L
U is the mean vector: n x m
L from cholesky: m x m
X is a matrix with univariate normal vectors: n x m
After calculating the mean of the simulated matrix, I realized it was off. The reason seems to be that the mean vector is very close to zero, so when I add it to X*L, the X*L term dominates U. Does anyone know how to work around this issue?
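For reference, here is a minimal sketch of the standard construction in NumPy (the sizes, Sigma, and rng below are illustrative assumptions, not from the question). Note that for finite n the sample mean of X*L is only approximately zero, so the sample mean of Y only approaches U as n grows.

import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 3                          # illustrative sizes
U = np.full((n, m), 0.01)                 # small mean, as in the question
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

L = np.linalg.cholesky(Sigma)             # Sigma = L @ L.T
X = rng.standard_normal((n, m))           # univariate standard-normal draws
Y = U + X @ L.T                           # each row ~ N(U[i], Sigma)

print(Y.mean(axis=0))                     # approaches 0.01 as n grows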
Is there a way to generate an N x N random diagonalizable matrix in MATLAB? I tried the following:
N = 10;
A = diag(rand(N,N))
but it is giving me an N x 1 vector. I also need the matrix to be symmetric.
Assuming that you are considering real-valued matrices: Every real symmetric matrix is diagonalizable. You can therefore randomly generate some matrix A, e.g. by using A = rand(N, N), and then symmetrize it, e.g. by
A = A + A'
For complex matrices the condition for diagonalizability is that the matrix is normal. If A is an arbitrary square random matrix, you can obtain a normal (in fact Hermitian) matrix from it via
A = A * A'
All full-rank matrices are diagonalizable by SVD or eigen-decomposition.
If you want a random symmetric matrix...
N = 5;
V = rand(N*(N+1)/2, 1);     % random values for the upper triangle
M = triu(ones(N));          % mask of the upper-triangular positions
M(M==1) = V;                % fill the upper triangle (including the diagonal)
M = M + tril(M.', -1);      % mirror it below the diagonal to make M symmetric
@DavidEisenstat is right. I tried his example. Sorry for the false statement. Here's a true statement that is relevant specifically to your situation, but is not as general: a random matrix (with entries drawn from a continuous distribution) is diagonalizable with probability 1, since it almost surely has distinct eigenvalues.
I've been reading a paper on Sparse PCA, which is:
http://stats.stanford.edu/~imj/WEBLIST/AsYetUnpub/sparse.pdf
It states that if you have n data points, each represented with p features, then the complexity of PCA is O(min(p^3, n^3)).
Can someone please explain how/why?
Computing the covariance matrix is O(p^2 n); its eigenvalue decomposition is O(p^3). So the complexity of PCA is O(p^2 n + p^3).
O(min(p^3, n^3)) would imply that you could analyze a two-dimensional dataset of any size in fixed time, which is patently false.
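As a concrete illustration (a sketch with made-up sizes, not part of the original answer), the two steps map to NumPy like this:

import numpy as np

n, p = 1000, 50                        # n samples, p features (illustrative)
X = np.random.randn(n, p)
Xc = X - X.mean(axis=0)                # center the data

C = (Xc.T @ Xc) / (n - 1)              # covariance matrix: O(p^2 n)
eigvals, eigvecs = np.linalg.eigh(C)   # its eigendecomposition: O(p^3)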
Assuming your dataset is $X \in \mathbb{R}^{n \times p}$, where n is the number of samples and p the dimension of a sample, you are interested in the eigenanalysis of $X^TX$, which is the main computational cost of PCA. Now the matrices $X^TX \in \mathbb{R}^{p \times p}$ and $XX^T \in \mathbb{R}^{n \times n}$ have the same $\min(n, p)$ nonnegative eigenvalues, and their eigenvectors are related as shown below. If p is less than n, you can solve the eigenanalysis in $O(p^3)$. If p is greater than n (for example, in computer vision the dimensionality of a sample, i.e. the number of pixels, is often greater than the number of samples available), you can perform the eigenanalysis in $O(n^3)$ time. In either case you can recover the eigenvectors of one matrix from the eigenvalues and eigenvectors of the other, so the overall cost is $O(\min(p, n)^3)$.
$$X^TX = V \Lambda V^T$$
$$XX^T = U \Lambda U^T$$
$$U = XV\Lambda^{-1/2}$$
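A minimal NumPy sketch of this trick (with made-up sizes, assuming p > n); only the strictly positive eigenvalues are kept so that $\Lambda^{-1/2}$ is well defined:

import numpy as np

n, p = 50, 2000                           # p >> n, illustrative sizes
X = np.random.randn(n, p)

# Eigendecompose the small n x n matrix instead of the p x p one: O(n^3)
lam, U = np.linalg.eigh(X @ X.T)          # X X^T = U diag(lam) U^T

# Keep strictly positive eigenvalues so Lambda^(-1/2) exists
keep = lam > 1e-10
lam, U = lam[keep], U[:, keep]

# Recover eigenvectors of X^T X: V = X^T U Lambda^(-1/2)
V = X.T @ U / np.sqrt(lam)

# Check: columns of V are (approximately) eigenvectors of X^T X
print(np.allclose((X.T @ X) @ V, V * lam))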
I have a 6000 x 16000 matrix D.
I need to compute the matrix C formed by the first l eigenvectors with the smallest eigenvalues of D (I have not chosen the right l yet).
What is the fastest way to compute C?
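Since D is rectangular it has no eigenvectors in the usual sense, so the sketch below rests on an assumption (not from the question): that "eigenvectors with the smallest eigenvalues" means the right singular vectors of D with the smallest singular values, i.e. the corresponding eigenvectors of D^T D. The sizes and the choice of l are scaled down for illustration.

import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((600, 1600))   # stand-in for your 6000 x 16000 matrix
l = 10                                 # illustrative choice of l

# Thin SVD; the singular values in s come back sorted in descending order
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# Right singular vectors belonging to the l smallest singular values,
# i.e. eigenvectors of D.T @ D with the l smallest nonzero eigenvalues
C = Vt[-l:].T                          # shape: (1600, l)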