Hermitian matrix of an H-chain using np.diag

I hope you are well.
Greatly appreciate any help provided.
I am attempting to create an N x N Hermitian matrix for a molecular chain of N hydrogen atoms, using a function together with np.diag. Here is my attempt:
def H_chain(N, E, A):
    y = np.ones((N, N))
    x = np.arange(E).reshape((N, N))
    z = np.arange(A).reshape((N, N))
    g = np.diag(np.diag(y)*x) + np.diag(np.diag(y, k=-1)*-z + np.diag(np.diag(y, k=1)*-z))
    return g
# For N = 5, the function should return something that looks like this:
# [[ E, -A,  0,  0,  0],
#  [-A,  E, -A,  0,  0],
#  [ 0, -A,  E, -A,  0],
#  [ 0,  0, -A,  E, -A],
#  [ 0,  0,  0, -A,  E]]
That is, the function should return an N x N matrix representing the Hamiltonian of an N-atom chain molecule with site energy E and inter-site coupling A.
Ideally I should be summing three matrices built with np.diag, using np.ones to fill the diagonals as well.
I'd much appreciate any help.
Thank you.
Kind regards,
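One way to get exactly that structure is indeed to sum three diagonal matrices built with np.diag and np.ones, as the question suggests. A minimal sketch, assuming E and A are scalars:

import numpy as np

def H_chain(N, E, A):
    """Hamiltonian of an N-atom chain: E on the main diagonal,
    -A on the first super- and sub-diagonals."""
    main = E * np.diag(np.ones(N))              # site energies
    upper = -A * np.diag(np.ones(N - 1), k=1)   # coupling above the diagonal
    lower = -A * np.diag(np.ones(N - 1), k=-1)  # coupling below the diagonal
    return main + upper + lower

print(H_chain(5, 2.0, 1.0))

When a 1-D array is passed to np.diag with an offset k, it builds the full N x N matrix with that array on the k-th diagonal, so the three terms can simply be added.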

Related

Quaternion(Hurwitz Integers) gcrd algorithm

I want a reference, a pseudo-algorithm, or an actual algorithm for quaternion GCD. I need this to find the four squares that make up any given integer $n$; I have done all the other work, but I am stuck on this, since there is no information on Wikipedia or arXiv on how to do such a GCD.
Thanks.
I tried to extend the complex (Gaussian integer) GCD, but with no success.
The key for these things is the Euclidean algorithm:
from itertools import product

def euclidean_rightdiv_hurwitz(B, D):
    # Returns q, r such that B = q*D + r, with r == 0 or norm(r) < norm(D).
    nor = norm(D)
    bound = int(sqrt(nor))
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        r = quaternion(a, b, c, d)
        if r == 0 or norm(r) < nor:
            diff = B - r
            if hurwitz_is_rightdivisible(diff, D):
                return diff * inv(D), r
To implement hurwitz_is_rightdivisible, notice that if diff is right-divisible by D, then diff*inv(D) must have integer coordinates, so just compute it and check each coordinate.
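A minimal sketch of both pieces, representing quaternions as plain 4-tuples (the helpers quaternion, norm, and inv in the pseudocode above are assumed; here everything is spelled out, and the divisibility check only covers integer-coordinate Lipschitz quaternions, not the half-integer Hurwitz units):

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q):
    return sum(x * x for x in q)

def mul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def hurwitz_is_rightdivisible(diff, D):
    # diff * inv(D) = diff * conj(D) / norm(D); it has integer
    # coordinates iff norm(D) divides every coordinate of diff * conj(D).
    return all(x % norm(D) == 0 for x in mul(diff, conj(D)))

def gcrd(B, D):
    # Greatest common right divisor via the usual Euclidean loop:
    # repeatedly write B = q*D + r and replace (B, D) by (D, r).
    while norm(D) != 0:
        _, r = euclidean_rightdiv_hurwitz(B, D)
        B, D = D, r
    return B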

Numerical diagonalization of a unitary matrix

To numerically diagonalize a unitary matrix I use the LAPACK routine zgeev.
The problem is that, in the case of degeneracies, the degenerate subspace is not orthonormalized, since the routine is written for general matrices.
However, since in my case the matrices are unitary, the basis can be always orthonormalized. Is there a better solution than applying QR-algorithm afterwards to the degenerate subspace?
Short answer: Schur decomposition!
If a square matrix A is complex, then its Schur factorization is A=ZTZ*, where Z is unitary and T is upper triangular.
If A happens to be unitary, T must also be unitary. Since T is both unitary and triangular, it is diagonal: its columns are orthonormal, and an induction over the columns forces every off-diagonal entry to vanish.
Let's consider the vectors Z e_i, where the e_i are the canonical basis vectors. These vectors obviously form an orthonormal basis. Moreover, they are eigenvectors of A: since T is diagonal, A (Z e_i) = Z T Z* Z e_i = Z (T e_i) = T_ii (Z e_i).
Hence, the columns of the unitary matrix Z are eigenvectors of the unitary matrix A and form an orthonormal basis.
As a consequence, computing a Schur decomposition of a unitary matrix is equivalent to finding an orthonormal basis of its eigenvectors.
ZGEESX computes the eigenvalues, the Schur form, and, optionally, the matrix of Schur vectors for GE matrices
The resulting T can also be tested to check that A is unitary.
Here is a piece of Python code testing it; scipy's scipy.linalg.schur uses LAPACK's zgees for the Schur decomposition. I used hpaulj's code to generate a random unitary matrix, as shown in How to create random orthonormal matrix in python numpy.
import numpy as np
import scipy.linalg

# from hpaulj, https://stackoverflow.com/questions/38426349/how-to-create-random-orthonormal-matrix-in-python-numpy
def rvs(dim=3):
    random_state = np.random
    H = np.eye(dim)
    D = np.ones((dim,))
    for n in range(1, dim):
        x = random_state.normal(size=(dim-n+1,))
        D[n-1] = np.sign(x[0])
        x[0] -= D[n-1]*np.sqrt((x*x).sum())
        # Householder transformation
        Hx = np.eye(dim-n+1) - 2.*np.outer(x, x)/(x*x).sum()
        mat = np.eye(dim)
        mat[n-1:, n-1:] = Hx
        H = np.dot(H, mat)
    # Fix the last sign such that the determinant is 1
    D[-1] = (-1)**(1-(dim % 2))*D.prod()
    # Equivalent to np.dot(np.diag(D), H) but faster, apparently
    H = (D*H.T).T
    return H

n = 42
A = rvs(n).astype(complex)
T, Z = scipy.linalg.schur(A, output='complex')

normT = np.linalg.norm(T)  # Frobenius norm

# Strip the diagonal of T; what remains should vanish if A is unitary.
eigenvalues = []
for i in range(n):
    eigenvalues.append(T[i, i])
    T[i, i] = 0.
normTu = np.linalg.norm(T)
print('must be very low if A is unitary:', normTu/normT)

# Check that each column of Z is an eigenvector of A.
for i in range(n):
    v = Z[:, i]
    w = A.dot(v) - eigenvalues[i]*v
    print(i, 'must be very low if column i of Z is an eigenvector of A:',
          np.linalg.norm(w)/np.linalg.norm(v))

Numpy Hermitian Matrix class

Are you aware of something like a Hermitian matrix class in numpy? I'd like to optimize matrix calculations like
B = U * A * U.H
where A (and thus B) is Hermitian. Without a specialized routine, all matrix elements of B are calculated, even though exploiting the symmetry should save a factor of about 2 here. Am I missing something?
The method I need should take the upper/lower triangle of A and the full matrix of U, and return the upper/lower triangle of B.
I don't think there exists a method for your specific problem, but with a little thought you might be able to build an algorithm from the low-level BLAS routines that are wrapped in SciPy. For example, dgemm, dsymm, and dtrmm do general, symmetric, and triangular matrix products, respectively. Here's an example of using them:
import numpy as np
from scipy.linalg.blas import dgemm, dsymm, dtrmm
A = np.random.rand(10, 10)
B = np.random.rand(10, 10)
S = np.dot(A, A.T) # symmetric matrix
T = np.triu(S) # upper triangular matrix
# normal matrix-matrix product
assert np.allclose(dgemm(1, A, B), np.dot(A, B))
# symmetric mat-mat product using only upper-triangle
assert np.allclose(dsymm(1, T, B), np.dot(S, B))
# upper-triangular mat-mat product
assert np.allclose(dtrmm(1, T, B), np.dot(T, B))
There are many other low-level BLAS routines available; I find the NETLIB page to be a good resource to learn what they do. You may be able to cleverly use some combination of the available routines to efficiently solve the problem you have in mind.
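For the product in the question, here is a sketch along those lines, using zhemm (the Hermitian analogue of dsymm), which reads only one triangle of A; the second multiply is a plain gemm, so the factor of two is saved only on the first product:

import numpy as np
from scipy.linalg.blas import zhemm, zgemm

n = 6
U = np.linalg.qr(np.random.rand(n, n) + 1j*np.random.rand(n, n))[0]  # unitary
A = np.random.rand(n, n) + 1j*np.random.rand(n, n)
A = A + A.conj().T  # Hermitian

AUH = zhemm(1.0, A, U.conj().T)  # A @ U.H, reading only A's upper triangle
B = zgemm(1.0, U, AUH)           # U @ (A @ U.H)

assert np.allclose(B, U @ A @ U.conj().T)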
Edit: it looks like there are LAPACK routines that compute exactly what you want, dsytrd and zhetrd, but unfortunately these don't appear to be wrapped directly in scipy.linalg.lapack, though scipy does provide Cython wrappers for them. Best of luck!
I needed tridiagonal reduction of a symmetric/Hermitian matrix A,
T = Q^H * A * Q
– presumably OP's underlying problem – and I've just submitted a pull request to SciPy for properly interfacing LAPACK's {s,d}sytrd (for real symmetric matrices) and {c,z}hetrd (for Hermitian matrices). All routines use either only the upper or the lower triangular part of the matrix.
Once this has been merged, it can be used like
import numpy as np
from scipy.linalg.lapack import sytrd, sytrd_lwork

n = 3
A = np.zeros((n, n), dtype=float)
A[np.triu_indices_from(A)] = np.arange(1, 2*n+1, dtype=float)

# query lwork -- optional
lwork, info = sytrd_lwork(n)
assert info == 0

data, d, e, tau, info = sytrd(A, lwork=lwork)
assert info == 0
The vectors d and e now contain the main diagonal and the (identical) upper and lower off-diagonals of T, respectively.
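If you want T explicitly, it can be rebuilt from those outputs; a small sketch, assuming the d and e from the call above:

T = np.diag(d) + np.diag(e, k=1) + np.diag(e, k=-1)
assert np.allclose(T, T.T)  # symmetric tridiagonal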

Spark distributed matrix multiply and pseudo-inverse calculating

I am very new to Apache Spark and Scala. Can you help me with some operations?
I have two distributed matrices H and Y in Spark (Scala).
I want to compute the pseudo-inverse of H and then multiply H and Y.
How can I do this?
Here is an implementation for the inverse.
import org.apache.spark.mllib.linalg.{Vectors, Vector, Matrix, SingularValueDecomposition, DenseMatrix, DenseVector}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

def computeInverse(X: RowMatrix): DenseMatrix = {
  val nCoef = X.numCols.toInt
  val svd = X.computeSVD(nCoef, computeU = true)
  if (svd.s.size < nCoef) {
    sys.error("RowMatrix.computeInverse called on singular matrix.")
  }
  // Create the inverse diagonal matrix from S
  val invS = DenseMatrix.diag(new DenseVector(svd.s.toArray.map(x => math.pow(x, -1))))
  // U cannot stay a RowMatrix, so collect it into a local DenseMatrix
  val U = new DenseMatrix(svd.U.numRows().toInt, svd.U.numCols().toInt, svd.U.rows.collect.flatMap(x => x.toArray))
  // If you could make V distributed, then this may be better. However, it's already local, so maybe this is fine.
  val V = svd.V
  // inv(X) = V * inv(S) * transpose(U) --- U is already transposed.
  (V.multiply(invS)).multiply(U)
}
To calculate the pseudo-inverse of non-square matrices you need to be able to calculate the transpose (easy) and the matrix inverse (others have supplied that functionality). There are two different calculations, depending on whether M has full column rank or full row rank.
Full column rank means that the columns of the matrix are linearly independent, which requires that the number of columns is less than or equal to the number of rows. (In pathological cases, an m x n matrix with m >= n might still not have full column rank, but we'll ignore that statistical impossibility. If it is a possibility in your case, the matrix inversion step below will fail.) For full column rank, the pseudo-inverse is
M^+ = (M^T M)^{-1} M^T
where M^T is the transpose of M. Matrix-multiply M^T by M, then take the inverse, then matrix-multiply by M^T again. (I'm assuming M has real entries; if the entries are complex numbers, you also have to take complex conjugates.)
A quick check to make sure you have calculated the pseudo-inverse correctly is to compute M^+ M. It should be the identity matrix (up to floating point error).
On the other hand, if M has full row rank, in other words M is m x n with m <= n, the pseudo-inverse is
M^+ = M^T (M M^T)^{-1}
To check whether you have the correct pseudo-inverse in this case, multiply with the original matrix on the other side: M M^+ should equal the identity matrix, up to floating point error.
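To make the full-column-rank case concrete, here is a small numpy check of both the formula and the M^+ M test (plain Python rather than Spark, purely to illustrate the algebra):

import numpy as np

M = np.random.rand(5, 3)               # tall matrix: full column rank almost surely
M_pinv = np.linalg.inv(M.T @ M) @ M.T  # M^+ = (M^T M)^{-1} M^T

assert np.allclose(M_pinv @ M, np.eye(3))      # left inverse: M^+ M = I
assert np.allclose(M_pinv, np.linalg.pinv(M))  # agrees with numpy's pinv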
Matrix multiplication is the easier one: there are several Matrix implementations with a multiply method in packages org.apache.spark.mllib.linalg and org.apache.spark.mllib.linalg.distributed. Pick whatever fits your needs most.
I have not seen a (pseudo-)inverse anywhere in the Spark API. But RowMatrix is able to compute the singular value decomposition, which can be used to calculate the inverse of a matrix. Here is a very naive implementation, inspired by "How can we compute Pseudoinverse for any Matrix" (warning: dimensions of the 2x2 matrix are hard-coded):
import org.apache.spark.mllib.linalg.Matrices

val m = new RowMatrix(sc.parallelize(Seq(Vectors.dense(4, 3), Vectors.dense(3, 2))))
val svd = m.computeSVD(2, computeU = true)
val v = svd.V
val sInvArray = svd.s.toArray.toList.map(x => 1.0 / x).toArray
val sInverse = new DenseMatrix(2, 2, Matrices.diag(Vectors.dense(sInvArray)).toArray)
val uArray = svd.U.rows.collect.toList.map(_.toArray.toList).flatten.toArray
val uTranspose = new DenseMatrix(2, 2, uArray) // already transposed because DenseMatrix is column-major
val inverse = v.multiply(sInverse).multiply(uTranspose)
// -1.9999999999998297  2.999999999999767
//  2.9999999999997637 -3.9999999999996767
Unfortunately, a lot of conversion from Matrix to Array and so forth is necessary. If you need a fully distributed implementation, try using DistributedMatrix instead of DenseMatrix. If not, maybe using Breeze is preferable here.

Octave Matrix of discretized Legendre polynomials

I need to get an N x columns(L) matrix of Legendre polynomials evaluated over L, for arbitrary N.
Is there a better way of computing the matrix than explicitly evaluating the polynomial vector for each row? The code snippet for this approach (N = 4) is here:
L = linspace(-1,1,800);
# How to do this in a better way?
G = [legendre_Pl(0,L); legendre_Pl(1,L); legendre_Pl(2,L); legendre_Pl(3,L)];
Thanks,
Vojta
Create an anonymous function. Documentation at http://www.gnu.org/software/octave/doc/interpreter/Anonymous-Functions.html
f = @(x) legendre_Pl(x,L);
Then use arrayfun to apply the function f to the column vector of degrees [0:N-1]' (degrees start at 0 to match the example above; a column vector makes the pieces concatenate as rows later). Documentation at http://www.gnu.org/software/octave/doc/interpreter/Function-Application.html
CellArray = arrayfun(f, [0:N-1]', "UniformOutput", false);
That gives you a cell array. If you want the answer in a matrix, use cell2mat
G = cell2mat(CellArray);
