Diagonalizable Matrix in MATLAB - algorithm

Is there a way to generate an N x N random diagonalizable matrix in MATLAB? I tried the following:
N = 10;
A = diag(rand(N,N))
but it gives me an N x 1 vector instead. I also need the matrix to be symmetric.

Assuming that you are considering real-valued matrices: Every real symmetric matrix is diagonalizable. You can therefore randomly generate some matrix A, e.g. by using A = rand(N, N), and then symmetrize it, e.g. by
A = A + A'
For complex matrices, a matrix is unitarily diagonalizable exactly when it is normal. If A is an arbitrary square random matrix, you can make it normal (in fact Hermitian) by
A = A * A'
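For the real symmetric case, a minimal sketch (variable names are just illustrative) that also verifies diagonalizability through the eigen-decomposition:
N = 10;
A = rand(N, N);
A = A + A';         % real symmetric, hence diagonalizable
[V, D] = eig(A);    % for symmetric A, V is orthogonal and A = V*D*V'
norm(A - V*D*V')    % should be near machine precision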

All full-rank matrices are diagonalizable by SVD or eigen-decomposition.
If you want a random symmetric matrix...
N = 5
V = rand(N*(N+1)/2, 1)
M = triu(ones(N))
M(M==1) = V
M = M + tril(M.',-1)
@DavidEisenstat is right. I tried his example. Sorry for the false statement. Here's a true statement that is relevant specifically to your situation, but is not as general: Random matrices are virtually guaranteed to be diagonalizable.
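If you want a rough numerical check that a given random A is diagonalizable (the tolerance used by rank is an arbitrary default, so treat this as a sanity check rather than a proof):
N = 10;
A = rand(N, N);
[V, D] = eig(A);
isDiagonalizable = (rank(V) == N)   % full set of independent eigenvectors
cond(V)                             % very large values indicate near-defectiveness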

Related

Compute the outer product of two matrices with a sparse matrix mask

Given: two column vectors a, b. Let the matrix from their outer product be P:
P = a * b^T
where ^T denotes the transpose.
Also given: a sparse matrix S whose entries are only 1s and 0s.
I want to compute the following matrix:
S % P = S % ( a * b^T )
where % denotes element-wise multiplication of the two matrices.
In other words, I want the matrix whose elements (i,j) are:
The product of elements a_i * b_j for S_ij = 1, or
Zero for S_ij = 0.
The formula S % (a * b^T) involves computing many products that are set to zero anyway, so this does not seem very efficient. Another way is to loop through the nonzero elements of the sparse matrix S and compute the products a_i * b_j manually, but I wondered if there is a faster matrix/vector computation to do this.
Thanks
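One sketch of how to avoid the discarded products in MATLAB, assuming S is stored as a sparse matrix and a, b are column vectors (variable names are illustrative):
[i, j] = find(S);                                % positions of the 1s in S
vals = a(i) .* b(j);                             % only the products that survive the mask
R = sparse(i, j, vals, size(S, 1), size(S, 2));  % equals S .* (a*b.') without forming a*b.'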

Generate multivariate normal matrix issue with accuracy

I am trying to use a Cholesky decomposition to generate a multivariate normal matrix with Y = U + X*L, where:
U is the mean vector: n x m
L from cholesky: m x m
X is a matrix with univariate normal vectors: n x m
After calculating the mean of the simulated matrix, I realized it was off. The reason is that the mean vector is very close to zero, so when adding it to X*L, the X*L term dominated U. Does anyone know how to work around this issue?
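For reference, a sketch of the usual construction in MATLAB (Sigma and the sizes are made up for illustration; note MATLAB's chol returns an upper-triangular R with R'*R = Sigma, so X*R has covariance Sigma). The sample mean of Y only approaches U at a rate of roughly sqrt(diag(Sigma)/n), so with a near-zero mean vector the X*R term dominating the sample mean is expected sampling noise rather than a bug; increasing n (or subtracting the empirical mean of X*R before adding U) tightens it:
n = 1e5; m = 3;
Sigma = [4 1 0; 1 2 0.5; 0 0.5 1];   % example covariance (positive definite)
mu = [0.01 0.02 0.03];               % near-zero mean
R = chol(Sigma);                     % upper triangular, R'*R = Sigma
X = randn(n, m);                     % iid standard normal rows
Y = repmat(mu, n, 1) + X*R;
mean(Y)   % differs from mu by roughly sqrt(diag(Sigma)'/n)
cov(Y)    % close to Sigma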

Making a customizable LCG that travels backward and forward

How would I go about making an LCG (a type of pseudo-random number generator) travel in both directions?
I know that travelling forward is (a*x+c)%m, but how would I be able to reverse it?
I am using this so I can store the seed at the position of the player in a map and be able to generate things around it by propagating backward and forward in the LCG (like some sort of randomized number line).
All LCGs cycle. In an LCG which achieves maximal cycle length there is a unique predecessor and a unique successor for each value x (which won't necessarily be true for LCGs that don't achieve maximal cycle length, or for other algorithms with subcycle behaviors such as von Neumann's middle-square method).
Suppose our LCG has cycle length L. Since the behavior is cyclic, that means that after L iterations we are back to the starting value. Finding the predecessor value by taking one step backwards is mathematically equivalent to taking (L-1) steps forward.
The big question is whether that can be converted into a single step. If you're using a Prime Modulus Multiplicative LCG (where the additive constant is zero), it turns out to be pretty easy to do. If x_{i+1} = a * x_i % m, then x_{i+n} = a^n * x_i % m. As a concrete example, consider the PMMLCG with a = 16807 and m = 2^31 - 1. This has a maximal cycle length of m - 1 (it can never yield 0 for obvious reasons), so our goal is to iterate m - 2 times. We can precalculate a^(m-2) % m = 1407677000 using readily available exponentiation/mod libraries. Consequently, a forward step is found as x_{i+1} = 16807 * x_i % (2^31 - 1), while a backwards step is found as x_{i-1} = 1407677000 * x_i % (2^31 - 1).
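A sketch of the two steps in MATLAB, using uint64 so the intermediate products stay exact (the seed value is arbitrary):
m  = uint64(2^31 - 1);
a  = uint64(16807);        % forward multiplier
ar = uint64(1407677000);   % a^(m-2) mod m, the backward multiplier
x0 = uint64(12345);
x1 = mod(a  * x0, m);      % one step forward
xb = mod(ar * x1, m);      % one step backward
isequal(xb, x0)            % true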
ADDITIONAL
The same concept can be extended to generic full-cycle LCGs by casting the transition in matrix form and doing fast matrix exponentiation to come up with the equivalent one-stage transform. The matrix formulation for x_{i+1} = (a * x_i + c) % m is X_{i+1} = T · X_i % m, where T is the matrix [[a c],[0 1]] and X is the column vector (x, 1) transposed. Multiple iterations of the LCG can be quickly calculated by raising T to any desired power through fast exponentiation techniques using squaring and halving the power. After noticing that powers of matrix T never alter the second row, I was able to focus on just the first row calculations and produced the following implementation in Ruby:
# Returns the first row [a', c'] of T^power (mod m), where T = [[a c],[0 1]]
# and ary = [a, c]; works for power >= 1 via recursive squaring.
def power_mod(ary, mod, power)
  return ary.map { |x| x % mod } if power < 2
  # First row of T*T is [a*a, (a+1)*c]
  square = [ary[0] * ary[0] % mod, (ary[0] + 1) * ary[1] % mod]
  square = power_mod(square, mod, power / 2)
  return square if power.even?
  # Odd power: multiply the squared-and-recursed result by T once more
  return [square[0] * ary[0] % mod, (square[0] * ary[1] + square[1]) % mod]
end
where ary is a vector containing a and c, the multiplicative and additive coefficients.
Using this with power set to the cycle length - 1, I was able to determine coefficients which yield the predecessor for various LCGs listed in Wikipedia. For example, to "reverse" the LCG with a = 1664525, c = 1013904223, and m = 2^32, use a = 4276115653 and c = 634785765. You can easily confirm that the latter set of coefficients reverses the sequence produced by using the original coefficients.
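A quick sanity check of that pair in MATLAB (uint64 keeps every intermediate product below 2^64, so the modular arithmetic stays exact; the starting value is arbitrary):
m  = uint64(2^32);
x0 = uint64(42);
x1 = mod(uint64(1664525)    * x0 + uint64(1013904223), m);   % forward step
xb = mod(uint64(4276115653) * x1 + uint64(634785765),  m);   % backward step
isequal(xb, x0)   % true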

N x N identity matrix in MATLAB

I am having difficulty creating a generic N x N identity matrix in Matlab.
I am given a system where
A_{i,j} =
  { 1, if i ≠ j
  { n, if i = j
I am asked to compute this for n = 10 and n = 20.
What I don't see is how to apply matrix indexing here. That is easy enough to do, but how do I account for the given linear system?
There is a built-in function for creating an identity matrix, called eye.
Have a look at the documentation: http://au.mathworks.com/help/matlab/ref/eye.html?requestedDomain=au.mathworks.com
Also, ones(n,m) creates a matrix of ones.
For a square matrix use (n-1)*eye(n) + ones(n) and for non-square
(n-1)*eye(n, m) + ones(n, m)
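For example, with n = 4 for easy display:
n = 4;
A = (n-1)*eye(n) + ones(n)   % diagonal entries are n, off-diagonal entries are 1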

Algorithm to create a vector based puzzle

I am working on a little puzzle-game project. The basic idea is built around projecting multi-dimensional data down to 2D. My only problem is how to generate the randomized scenario data. Here is the problem:
I have multiple randomized vectors v_i and a target vector t, all 2D. Now I want to randomize scalar values c_i such that:
t = sum c_i v_i
Because there are more than two v_i, this is an underdetermined system. I also took care that the linear combination of the v_i is actually able to reach t.
How can I create (randomized) values for my c_i?
Edit: After finding this question I can additionally state that it is also possible for me to (slightly) change the v_i.
All values are doubles.
Let's say your v_i form a matrix V with 2 rows and n columns, each vector is a column. The coefficients c_i form a column vector c. Then the equation can be written in matrix form as
V×c = t
Now apply a Singular Value Decomposition to matrix V:
V = A×D×B
with A being an orthogonal 2×2 matrix, D is a 2×n matrix and B an orthogonal n×n matrix. The original equation now becomes
A×D×B×c = t
multiply this equation with the inverse of A; the inverse is the same as the transposed matrix A^T:
D×B×c = A^T×t
Let's introduce new symbols c' = B×c and t' = A^T×t:
D×c' = t'
The solution of this equation is simple, because Matrix D looks like this:
u 0 0 0 ... // n columns
0 v 0 0 ...
The solution is
c1' = t1' / u
c2' = t2' / v
And because all the other columns of D are zero, the remaining components c3'...cn' can be chosen freely. This is the place where you can create random numbers for c3'...cn'. Having vector c' you can calculate c as
c = B^T×c'
with B^T being the inverse/transpose of B.
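A compact MATLAB sketch of this recipe (note that MATLAB's svd returns V = A*D*B', so the B below plays the role of the transposed B above; the sizes are made up for illustration):
n = 5;
V = rand(2, n);              % the v_i as columns
t = V * rand(n, 1);          % a target that is reachable by construction
[A, D, B] = svd(V);          % V = A*D*B'
tp = A' * t;                 % t' = A^T × t
cp = zeros(n, 1);
cp(1) = tp(1) / D(1,1);      % components fixed by the equation
cp(2) = tp(2) / D(2,2);
cp(3:end) = randn(n-2, 1);   % free components: choose them at random
c = B * cp;                  % undo the change of variables
norm(V*c - t)                % near machine precision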
Since the v_i are linearly dependent, there are non-trivial solutions to 0 = sum l_i v_i.
If you have n vectors you can find n-2 independent such solutions.
If you now have one solution to t = sum c_i v_i, you can add any multiple of l_i to c_i and you will still have a solution: c_i' = p l_i + c_i.
For each independent solution of the homogeneous problem determine a random p_j and calculate
c_i'' = c_i + sum_j p_j l_{i,j}.
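In MATLAB the same idea can be written with null (a sketch reusing V, t, and n from the snippet above; pinv just supplies one particular solution):
Nmat = null(V);                  % n x (n-2) orthonormal basis for sum l_i v_i = 0
c0 = pinv(V) * t;                % one particular solution of V*c = t
c = c0 + Nmat * randn(n-2, 1);   % random multiples of the homogeneous solutions
norm(V*c - t)                    % still near machine precision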
