I have a curve-fitting problem. I have two functions:
y = ax+b
y = ax^2+bx-2.3
I have one set of data for each of the above functions. I need to find a and b using the least squares method, combining both functions. I was using fminsearch to minimize the sum of the squared errors of the two functions, but I am unable to do the same with lsqcurvefit.
Kindly help me.
Regards,
Ram
I think you'll need to worry less about which library routine to use and more about the math. Assuming you mean vertical-offset least squares, you'll want to minimize
D = sum_{i=1..m} (y_Li - a x_Li - b)^2 + sum_{j=1..n} (y_Pj - a x_Pj^2 - b x_Pj + 2.3)^2
where there are m points (x_Li, y_Li) on the line and n points (x_Pj, y_Pj) on the parabola. Now find the partial derivatives of D with respect to a and b. Setting them to zero gives two linear equations in the two unknowns a and b. Solve this linear system.
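As a sketch of this recipe (my own illustration, not from the thread): minimizing D is the same as solving one stacked linear least-squares problem in (a, b), which lstsq handles directly; xL, yL, xP, yP are hypothetical sample data, and the MATLAB translation via backslash is direct.

import numpy as np

xL = np.array([0., 1., 2., 3.])
yL = 2.0 * xL + 1.0                  # hypothetical line data with a = 2, b = 1
xP = np.array([-1., 0., 1., 2.])
yP = 2.0 * xP**2 + 1.0 * xP - 2.3    # hypothetical parabola data, same a, b

# Rows [x, 1] with target y for the line; rows [x^2, x] with target y + 2.3
# for the parabola, so that ||M [a, b]^T - t||^2 equals D above.
M = np.vstack([np.column_stack([xL, np.ones_like(xL)]),
               np.column_stack([xP**2, xP])])
t = np.concatenate([yL, yP + 2.3])

(a, b), *_ = np.linalg.lstsq(M, t, rcond=None)
print(a, b)   # recovers a = 2, b = 1 for this noise-free synthetic data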
y = ax+b
y = ax^2+bx-2.3
To avoid confusing the y of the first equation with the y of the second, we use distinct notations:
u = ax+b
v = ax^2+bx+c
The method of combined linear regression for the two functions is shown on the attached page.
HINT: if you want to derive the matrix equation shown there yourself, follow Gene's answer; the resulting system is sketched below.
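Following that hint (my own derivation, not the original attachment), with $c$ fixed at $-2.3$ as in the question, setting $\partial D/\partial a = \partial D/\partial b = 0$ yields the linear system
\begin{equation}
\begin{pmatrix}
\sum_i x_{Li}^2+\sum_j x_{Pj}^4 & \sum_i x_{Li}+\sum_j x_{Pj}^3\\
\sum_i x_{Li}+\sum_j x_{Pj}^3 & m+\sum_j x_{Pj}^2
\end{pmatrix}
\begin{pmatrix} a\\ b\end{pmatrix}=
\begin{pmatrix}
\sum_i x_{Li}\,y_{Li}+\sum_j x_{Pj}^2\,(y_{Pj}+2.3)\\
\sum_i y_{Li}+\sum_j x_{Pj}\,(y_{Pj}+2.3)
\end{pmatrix}.
\end{equation}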
To numerically diagonalize a unitary matrix I use the LAPACK routine zgeev.
The problem is: in case of degeneracies, the degenerate subspace is not orthonormalized, since the routine is written for general matrices.
However, since in my case the matrices are unitary, the basis can always be orthonormalized. Is there a better solution than applying the QR algorithm to the degenerate subspace afterwards?
Short answer: Schur decomposition!
If a square matrix A is complex, then its Schur factorization is A=ZTZ*, where Z is unitary and T is upper triangular.
If A happens to be unitary, T must also be unitary. Since T is both unitary and triangular, it is diagonal, as the following short argument shows.
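The first column of an upper-triangular unitary $T$ is $t_{11}e_1$ with $|t_{11}|=1$, and orthonormality of the columns gives
\begin{equation}
0=\langle T e_1,\,T e_j\rangle=\overline{t_{11}}\,t_{1j}\;\Longrightarrow\;t_{1j}=0\quad(j>1),
\end{equation}
so the first row of $T$ is diagonal; induction on the trailing $(n-1)\times(n-1)$ block finishes the proof.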
Let's consider the vectors Z e_i, where the e_i are the vectors of the canonical basis. These vectors obviously form an orthonormal basis. Moreover, since A Z = Z T and T is diagonal, A (Z e_i) = T_ii (Z e_i): these vectors are eigenvectors of A.
Hence, the columns of the unitary matrix Z are eigenvectors of the unitary matrix A and form an orthonormal basis.
As a consequence, computing a Schur decomposition of a unitary matrix is equivalent to finding an orthonormal basis of its eigenvectors.
ZGEESX computes the eigenvalues, the Schur form, and, optionally, the matrix of Schur vectors for GE matrices
The resulting T can also be tested to check that A is unitary.
Here is a piece of Python code testing it, though SciPy's scipy.linalg.schur uses LAPACK's zgees for the Schur decomposition. I used hpaulj's code to generate a random unitary matrix, as shown in How to create random orthonormal matrix in python numpy:
import numpy as np
import scipy.linalg

# from hpaulj, https://stackoverflow.com/questions/38426349/how-to-create-random-orthonormal-matrix-in-python-numpy
def rvs(dim=3):
    random_state = np.random
    H = np.eye(dim)
    D = np.ones((dim,))
    for n in range(1, dim):
        x = random_state.normal(size=(dim-n+1,))
        D[n-1] = np.sign(x[0])
        x[0] -= D[n-1]*np.sqrt((x*x).sum())
        # Householder transformation
        Hx = np.eye(dim-n+1) - 2.*np.outer(x, x)/(x*x).sum()
        mat = np.eye(dim)
        mat[n-1:, n-1:] = Hx
        H = np.dot(H, mat)
    # Fix the last sign such that the determinant is 1
    D[-1] = (-1)**(1-(dim % 2))*D.prod()
    # Equivalent to np.dot(np.diag(D), H) but faster, apparently
    H = (D*H.T).T
    return H

n = 42
A = rvs(n)
A = A.astype(complex)
T, Z = scipy.linalg.schur(A, output='complex', lwork=None, overwrite_a=False, sort=None, check_finite=True)

# A is unitary iff T is diagonal: zero out the diagonal and compare norms
normT = np.linalg.norm(T, ord=None)  # Frobenius norm
eigenvalues = []
for i in range(n):
    eigenvalues.append(T[i, i])
    T[i, i] = 0.
normTu = np.linalg.norm(T, ord=None)
print('must be very low if A is unitary: ', normTu/normT)

# Each column of Z should be an eigenvector of A
for i in range(n):
    v = Z[:, i]
    w = A.dot(v) - eigenvalues[i]*v
    print(i, 'must be very low if column i of Z is eigenvector of A: ',
          np.linalg.norm(w, ord=None)/np.linalg.norm(v, ord=None))
I am new to quadratic programming and am having trouble running the function QPmat in the package popbio, which uses a matrix of stage-class counts to calculate stage-class transition probabilities.
The code I am running:
####Create a matrix of time series stage class counts
Total <- matrix(c(17,74,86,41,17, 11,75,84,46,25, 7,60,90,46,24, 10,61,82,44,25),
                nrow=5, ncol=4)
Total
## list nonzero elements counting by column, indices
nonzero <- c(1,2,7,8,13,14,19,20,25)
## create a constraint matrix, C
C <- rbind(diag(-1,5), c(1,1,0,0,0), c(0,0,1,0,0), c(0,0,0,0,1))
C
## calculate b vector
b <- apply(C, 1, max)
b
QPmat(Total,C,b,nonzero)
This call returns the error "Amat and dvec are incompatible!"
I think the problem is in the constraint matrix, C, but I have been unable to troubleshoot it. I have worked through a couple of examples of the solve.QP function in quadprog, but to no avail.
I had the constraint matrix completely wrong. I checked Caswell (2001) for the actual example and saw what the constraints were meant to accomplish.
For the constraint matrix C in the code above, substitute:
C<-rbind(diag(-1,9), c(1,1,0,0,0,0,0,0,0), c(0,0,1,1,0,0,0,0,0),
c(0,0,0,0,1,1,0,0,0),c(0,0,0,0,0,0,1,1,0),c(0,0,0,0,0,0,0,0,1))
This guarantees that all nonzero output matrix elements are nonnegative, that the sums of consecutive pairs of nonzero elements are at most 1, and that the last nonzero element is at most 1; the sketch below spells out what each row encodes.
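As an illustration of those rows (my own sketch, assuming the convention C p <= b; quadprog itself phrases constraints as A^T x >= b0, so popbio may flip signs internally), with a hypothetical vector p of transition probabilities:

import numpy as np

# Rows of C: -I enforces -p_i <= 0, i.e. p_i >= 0; each remaining row sums a
# consecutive pair (or the lone last element) and caps it at 1.
C = np.vstack([-np.eye(9),
               [1, 1, 0, 0, 0, 0, 0, 0, 0],
               [0, 0, 1, 1, 0, 0, 0, 0, 0],
               [0, 0, 0, 0, 1, 1, 0, 0, 0],
               [0, 0, 0, 0, 0, 0, 1, 1, 0],
               [0, 0, 0, 0, 0, 0, 0, 0, 1]])
b = C.max(axis=1)            # 0 for the -I rows, 1 for the sum rows, as in apply(C, 1, max)

p = np.full(9, 0.4)          # a hypothetical vector of transition probabilities
print(np.all(C @ p <= b))    # True: every element >= 0 and every pair sum <= 1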
This is a very quick way to get a projection matrix with transition probabilities when stage class counts are the data and not individual fates.
In the paper "The fractional Laplacian operator on bounded domains as a special case of the nonlocal diffusion operator", the author solves a fractional Laplacian equation on a bounded domain as a non-local diffusion equation.
I am trying to implement the finite element approximation of the one-dimensional problem (please refer to page 14 of the above-mentioned paper) in MATLAB.
I am using the following definition of $\phi_k$, since the paper states that $\phi$ is a hat function:
\begin{equation}
\phi_{k}(x)=\begin{cases}
\dfrac{x-x_{k-1}}{x_{k}-x_{k-1}} & \mbox{if } x \in [x_{k-1},x_{k}], \\
\dfrac{x_{k+1}-x}{x_{k+1}-x_{k}} & \mbox{if } x \in [x_{k},x_{k+1}], \\
0 & \mbox{otherwise},
\end{cases}
\end{equation}
$\Omega=(-1,1)$ and $\Omega_I=(-1-\lambda,-1) \cup (1,1+\lambda)$ so that $\Omega\cup\Omega_I=(-1-\lambda,1+\lambda)$
For the integers K,N we define the partition of $\overline{\Omega\cup\Omega_I}=[-1-\lambda,1+\lambda]$ as,
\begin{equation}
-1-\lambda=x_{-K}<\dots<x_{0}=-1<x_{1}<\dots<x_{N}=1<\dots<x_{N+K}=1+\lambda.
\end{equation}
Finally, the equations that we have to solve to get the solution $\tilde{u}_N=\sum_{j=-K}^{K+N}U_j\phi_j(x)$, for some coefficients $U_j$, are:
where $i=1,\dots,N-1$.
I need pointers to simplify and solve the LHS double integral in MATLAB. The paper says (page 15) that I should use four-point Gauss quadrature for the inner integral and the quadgk.m function for the outer integral, but since the limits of the inner integral are in terms of x, how can I apply four-point Gauss quadrature to it? Any help will be appreciated.
Thanks.
You can find the original question here. (Since SO does not support LaTeX.)
For a first stab at the problem, take a look at dblquad and/or quad2d.
In the end, you'll want custom quadrature methods, so you should do something like the following:
% The integrand is of course a function of both x and y; elementwise operators
% (.*, ./, .^) let quadgk evaluate it on vectors of quadrature points
integrand = @(x,y) (phi_j(y) - phi_j(x)).*(phi_i(y) - phi_i(x))./abs(y-x).^(2*s+1);
% The inner integral is a function of x, and integrates over y
inner = @(x) quadgk(@(y) integrand(x,y), x-lambda, x+lambda);
% The inner integral is then integrated over x (via arrayfun, because quadgk
% hands the outer integrand a whole vector of x values at once)
dblIntegral = quadgk(@(x) arrayfun(inner, x), -(1+lambda), 1+lambda)
where I've used quadgk twice, but you can replace by any other (custom) quadrature method you please.
By the way: what is the reason for the authors to suggest a (non-adaptive) 4-point Gauss method? That way, you have no estimate of (or control over) the error made in the inner integral...
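To address the "limits depend on x" part of the question: a fixed-order Gauss rule lives on a reference interval and is mapped affinely onto [x-lambda, x+lambda]. A NumPy sketch of my own, with a smooth stand-in integrand (the real one is singular at y = x and would need the interval split there):

import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(4)   # 4-point rule on [-1, 1]

def inner_integral(f, x, lam):
    # Integrate f(x, y) dy over [x - lam, x + lam]: midpoint x, half-width lam,
    # so y = x + lam * t with t the reference nodes, and the Jacobian is lam.
    y = x + lam * nodes
    return lam * np.dot(weights, f(x, y))

f = lambda x, y: np.exp(-(y - x)**2)   # hypothetical smooth integrand
print(inner_integral(f, 0.3, 0.5))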
You can do a 4-point 1D Gaussian quadrature. You seem to assume that it means a 2D integral. Not so: it means a higher-order quadrature rule over a 1D interval.
If you're solving a 1D finite element problem, it makes no sense whatsoever to integrate over a 2D domain.
I didn't read the paper, but that's what I recall from the FEA I learned.
I'm trying to take the square root of a matrix, that is, find a matrix B such that B*B = A. None of the methods I've found gives a working result.
First I found this formula on Wikipedia:
Set Y_0 = A and Z_0 = I, then iterate:
Y_{k+1} = 0.5*(Y_k + Z_k^{-1}),
Z_{k+1} = 0.5*(Z_k + Y_k^{-1}).
Then Y_k should converge to B.
However, implementing the algorithm in Python (using numpy for matrix inverses) gave me rubbish results:
>>> from numpy import matrix
>>> def denbev(Y, Z, n):
...     if n == 0: return Y, Z
...     return denbev(.5*(Y + Z**-1), .5*(Z + Y**-1), n-1)
>>> denbev(matrix('1,2;3,4'), matrix('1,0;0,1'), 3)[0]**2
matrix([[ 1.31969074, 1.85986159],
[ 2.78979239, 4.10948313]])
>>> denbev(matrix('1,2;3,4'), matrix('1,0;0,1'), 100)[0]**2
matrix([[ 1.44409972, 1.79685675],
[ 2.69528512, 4.13938485]])
As you can see, iterating 100 times gives worse results than iterating three times, and neither result is within a 40% error margin.
Then I tried the scipy sqrtm method, but that was even worse:
>>> scipy.linalg.sqrtm(matrix('1,2;3,4'))**2
array([[ 0.09090909+0.51425948j, 0.60606061-0.34283965j],
[ 1.36363636-0.77138922j, 3.09090909+0.51425948j]])
>>> scipy.linalg.sqrtm(matrix('1,2;3,4')**2)
array([[ 1.56669890+0.j, 1.74077656+0.j],
[ 2.61116484+0.j, 4.17786374+0.j]])
I don't know a lot about matrix square roots, but I figure there must be algorithms that perform better than the above?
(1) The square root of the matrix [1,2;3,4] has to be complex, since one of that matrix's eigenvalues is negative. So your real-valued iteration can't be correct to begin with.
(2) linalg.sqrtm returns an array, NOT a matrix. Hence, applying * or ** to it is elementwise, not matrix, multiplication. Your result is in fact correct; you're just not seeing it.
Edit: try the following; you'll see it's correct:
asmatrix(scipy.linalg.sqrtm(matrix('1,2;3,4')))**2
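Equivalently, staying with plain arrays and using true matrix multiplication (@) instead of the elementwise **:

import numpy as np
import scipy.linalg

A = np.array([[1., 2.], [3., 4.]])
B = scipy.linalg.sqrtm(A)          # complex, since A has a negative eigenvalue
print(np.allclose(B @ B, A))       # True: B really is a square root of A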
Your matrix [1 2; 3 4] has a negative eigenvalue, so there is no solution to the problem in the domain of real matrices.
What is the purpose of the matrix square root you're computing? In a practical application the matrix may well be symmetric positive definite (e.g. a covariance matrix), so you shouldn't encounter complex numbers.
In that case you can compute a Cholesky decomposition, A = L L^T, which behaves like a scaled LU factorization; see http://en.wikipedia.org/wiki/Cholesky_decomposition and the sketch below.
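A minimal sketch for the symmetric positive definite case:

import numpy as np

A = np.array([[4., 2.], [2., 3.]])   # a small SPD example
L = np.linalg.cholesky(A)            # lower triangular factor
print(np.allclose(L @ L.T, A))       # True: L plays the role of a square root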
Another practical example is when your matrices are rotations: you can take the matrix logarithm, divide by 2 in log space, and map back with the matrix exponential, as sketched below. In any event, asking for a 'generic matrix square root' sounds strange; you probably want to understand your specific application in more depth.
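A sketch of that log/exp route for a 2D rotation, using scipy.linalg.logm and expm:

import numpy as np
from scipy.linalg import expm, logm

theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
half = expm(logm(R) / 2)             # rotation by theta / 2
print(np.allclose(half @ half, R))   # True: half is again a rotation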