sympy regression example, solve after partial derivative - matrix

I am trying to solve a matrix calculus problem with sympy and get stuck at the solver after differentiating with respect to a vector.
As a short example, let's take ordinary least squares regression, i.e. the sum of the squared differences between the target y and the prediction y_hat, where the prediction y_hat = X.T * w is a linear combination and thus a matrix-vector multiplication.
We therefore want to minimize the least-squares error with respect to the weight vector w.
By hand we can derive that, starting from
Err(w) = norm(y - X.T * w)^2
differentiating, setting the derivative to zero and solving for w gives
w_opt = (X*X.T)^-1 * X * y
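Expanding the error gives Err(w) = y.T*y - 2*w.T*X*y + w.T*X*X.T*w, so the gradient is dErr/dw = -2*X*y + 2*X*X.T*w; setting it to zero yields the normal equations X*X.T*w = X*y.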
How can we derive w_opt using sympy?
My rather naïve approach was:
from sympy import *
# setup matrix and vectors
X = MatrixSymbol('X',3,5)
y = MatrixSymbol('y',5,1)
w = MatrixSymbol('w',3,1)
# define error function
E = (y - X.T*w).T * (y - X.T*w)
# differentiate
Edw = [E.diff(wi) for wi in w]
# solve for w
solve(Edw,w)
At solve(Edw,w), however, I get the AttributeError: 'Mul' object has no attribute 'shape'.
I also tried calling E.as_explicit() before differentiating. This, however, resulted in the AttributeError: 'str' object has no attribute 'is_Piecewise'.
I know from the hand calculation that the derivative should be -2*X*y + 2*X*X.T*w. The derivative in Edw is displayed but not evaluated. How can I verify this intermediate step? My first guess was the .doit() method, which unfortunately is not defined in this case.
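For what it's worth, a minimal sketch of the same setup, assuming a recent sympy in which matrix expressions support .diff() with respect to a MatrixSymbol:

from sympy import MatrixSymbol

X = MatrixSymbol('X', 3, 5)
y = MatrixSymbol('y', 5, 1)
w = MatrixSymbol('w', 3, 1)

E = (y - X.T*w).T * (y - X.T*w)
grad = E.diff(w)  # should match -2*X*y + 2*X*X.T*w up to rearrangement
print(grad)

# solve() does not handle MatrixSymbol unknowns, but since the gradient
# is linear in w the stationary point can be written down directly:
w_opt = (X*X.T).inverse() * X * y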

Related

How to speed up the solving of multiple optimization problems?

Currently, I'm writing a simulation that assesses the performance of a positioning algorithm by measuring the mean error of the position estimator at different points around the room. Unfortunately, the running times are pretty slow, so I am looking for ways to speed up my code.
The position estimator is based on the MUSIC algorithm. It takes an autocorrelation matrix (sized 12x12, complex-valued in general) as input and follows these steps:
Find the 12 eigenvalues and eigenvectors of the autocorrelation matrix R.
Construct a new 12x11 matrix EN whose columns are the 11 eigenvectors corresponding to the 11 smallest eigenvalues.
Using the matrix EN, construct the function P = 1/(a' EN EN' a), where a is a 12x1 complex vector and a' is the Hermitian conjugate of a. The components of a are functions of three variables (named x, y and z), so the scalar P is also a function P(x,y,z).
Finally, find the values (x0,y0,z0) which maximize P and return them as the position estimate (a sketch of these steps follows below).
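The steps above, sketched in Python/numpy for illustration (the steering function passed in is a hypothetical placeholder, not part of the original MATLAB code):

import numpy as np

def music_pseudospectrum(R, steering):
    # eigh returns eigenvalues in ascending order for the Hermitian matrix R (12x12)
    vals, vecs = np.linalg.eigh(R)
    EN = vecs[:, :11]  # eigenvectors of the 11 smallest eigenvalues (noise subspace)
    def P(x, y, z):
        a = np.asarray(steering(x, y, z)).ravel()  # 12-element complex steering vector
        return 1.0 / np.real(np.vdot(a, EN @ (EN.conj().T @ a)))  # P = 1/(a' EN EN' a)
    return P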
In my code, I choose some constant z and create a grid of points in the plane at height z, parallel to the xy plane. For each point I make n4Avg repetitions and calculate the error of the estimated position. At the end of the parfor loop (and after some reshaping) I have an error matrix with dimensions (nx) x (ny) x (n4Avg), and the mean error is calculated by taking the mean of the error matrix along the 3rd dimension.
nx=30 is the number of points along the x axis.
ny=15 is the number of points along the y axis.
n4Avg=100 is the number of repetitions used for calculating the mean error at each point.
nGen=100 is the number of generations in the GA algorithm (100 was tested to be good enough).
x = linspace(-20,20,nx);
y = linspace(0,20,ny);
z = 5;
[X,Y] = meshgrid(x,y);
parfor ri = 1:nx*ny
    rT = [X(ri); Y(ri); z];
    [ENs] = getEnNs(rT,stdv,n4R,n4Avg); % create n4Avg EN matrices
    for rep = 1:n4Avg
        pos_est = estPos_helper(squeeze(ENs(:,:,rep)),nGen);
        posEstErr(ri,rep) = vecnorm(pos_est(:)-rT(:));
    end
end
The matrices EN are generated by the following code:
function [ENs] = getEnNs(rT,stdv,n4R,nEN)
% generate nEN simulated EN matrices, each using n4R simulated phases
f_c = 2402e6; % center frequency [Hz]
c0 = 299702547; % speed of light [m/s]
load antennaeArr1.mat antennaeArr1;
% generate initial phases.
phi0 = 2*pi*rand(n4R*nEN,1);
k0 = 2*pi.*(f_c)./c0;
I = cos(-k0.*vecnorm(antennaeArr1 - rT(:),2,1)-phi0);
Q = -sin(-k0.*vecnorm(antennaeArr1 - rT(:),2,1)-phi0);
phases = I+1i*Q;
phases = phases + stdv/sqrt(2)*(randn(size(phases)) + 1i*randn(size(phases)));
phases = reshape(phases',[12,n4R,nEN]);
Rxx = pagemtimes(phases,pagectranspose(phases));
ENs = zeros(12,11,nEN);
for i = 1:nEN
    [ENs(:,:,i),~] = eigs(squeeze(Rxx(:,:,i)),11,'smallestabs');
end
end
The position estimator uses a solver based on a genetic algorithm (chosen because it performed best of all the solvers tried).
function pos_est = estPos_helper(EN,nGen)
load antennaeArr1.mat antennaeArr1; % 3x12 constant matrix
antennae_array = antennaeArr1;
x0 = [0;10;5];
lb = [-20;0;0];
ub = [20;20;10];
    function y = myfun(x)
        k0 = 2*pi*2.402e9/299702547;
        a = exp( -1i*k0*sqrt( (x(1)-antennae_array(1,:)').^2 + (x(2) - antennae_array(2,:)').^2 + (x(3)-antennae_array(3,:)').^2 ) );
        y = 1/real((a')*(EN)*(EN')*a);
    end
% Create optimization variables
x3 = optimvar("x",3,1,"LowerBound",lb,"UpperBound",ub);
% Set initial starting point for the solver
initialPoint2.x = x0;
% Create problem
problem = optimproblem("ObjectiveSense","Maximize");
% Define problem objective
problem.Objective = fcn2optimexpr(@myfun,x3);
% Set nondefault solver options
options2 = optimoptions("ga","Display","off","HybridFcn","fmincon",...
"MaxGenerations",nGen);
% Solve problem
solution = solve(problem,initialPoint2,"Solver","ga","Options",options2);
% Clear variables
clearvars x3 initialPoint2 options2
pos_est = solution.x;
end
The current runtime of the code, with the parameters set as shown above, is around 700-800 seconds. This is a problem, as I would like to increase the number of grid points and repetitions to get more accurate results.
The main ways I've tried to tackle this are parallel computing (the parfor loop) and collapsing the nested loops over x and y into a single vectorized loop over all grid points.
That indeed helped, but not quite enough.
I apologize for the messy code.
Michael.

Computing a single element of the adjugate or inverse of a symbolic binary matrix

I'm trying to get a single element of an adjugate A_adj of a matrix A, both of which need to be symbolic expressions, where the symbols x_i are binary and the matrix A is symmetric and sparse. Python's sympy works great for small problems:
from sympy import zeros, symbols
size = 4
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += 0.5*x_i[i]
    A[i+1,i+1] += 0.5*x_i[i]
    A[i,i+1] = A[i+1,i] = -0.3*(i+1)*x_i[i]
A_adj_0 = A[1:,1:].det()
A_adj_0
This calculates the first element A_adj_0 of the cofactor matrix (which is the corresponding minor) and correctly gives me 0.125*x0*x1*x2 - 0.28*x0*x2**2 - 0.055*x1**2*x2 - 0.28*x1*x2**2, which is the expression I need, but there are two issues:
This is completely unfeasible for larger matrices (I need this for sizes of ~100).
The x_i are binary variables (i.e. either 0 or 1) and there seems to be no way for sympy to simplify expressions of binary variables, i.e. simplifying polynomials x_i^n = x_i.
The first issue can be partly addressed by instead solving the linear equation system Ay = b, where b is set to the first basis vector [1, 0, 0, 0], so that y is the first column of the inverse of A; the first entry of y is then the first element of the inverse of A:
b = zeros(size,1)
b[0] = 1
y = A.LUsolve(b)
s = {x_i[i]: 1 for i in range(size)}
print(y[0].subs(s) * A.subs(s).det())
print(A_adj_0.subs(s))
The problem here is that the expression for the first element of y is extremely complicated, even after using simplify() and so on. With simplification of binary expressions, as mentioned in point 2 above, it would be a very simple expression. This method is faster, but still unfeasible for larger matrices.
This boils down to my actual question:
Is there an efficient way to compute a single element of the adjugate of a sparse and symmetric symbolic matrix, where the symbols are binary values?
I'm open to using other software as well.
Addendum 1:
It seems simplifying binary expressions in sympy is possible with a simple custom substitution that I wasn't aware of:
A_subs = A_adj_0
for i in range(size):
    A_subs = A_subs.subs(x_i[i]*x_i[i], x_i[i])
A_subs
You should make sure to use Rational rather than floats in sympy, so S(1)/2 or Rational(1, 2) rather than 0.5.
There is a new (undocumented and, for the moment, internal) implementation of matrices in sympy called DomainMatrix. It is likely to be a lot faster for a problem like this and always produces polynomial results in fully expanded form. I expect it to be much faster for this kind of problem, but it still seems fairly slow here because it is not sparse internally (yet; that will probably change in the next release) and it does not take advantage of the simplification from the symbols being binary-valued. It can be made to work over GF(2), but not with symbols that are assumed to be in GF(2), which is something different.
In case it is helpful, though, this is how you would use it in sympy 1.7.1:
from sympy import zeros, symbols, Rational
from sympy.polys.domainmatrix import DomainMatrix
size = 10
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += Rational(1, 2)*x_i[i]
    A[i+1,i+1] += Rational(1, 2)*x_i[i]
    A[i,i+1] = A[i+1,i] = -Rational(3, 10)*(i+1)*x_i[i]
# Convert to DomainMatrix:
dM = DomainMatrix.from_list_sympy(size-1, size-1, A[1:, 1:].tolist())
# Compute determinant and convert back to normal sympy expression:
# Could also use dM.det().as_expr() although it might be slower
A_adj_0 = dM.charpoly()[-1].as_expr()
# Reduce powers:
A_adj_0 = A_adj_0.replace(lambda e: e.is_Pow, lambda e: e.args[0])
print(A_adj_0)

numerical diagonalization of a unitary matrix

To numerically diagonalize a unitary matrix I use the LAPACK routine zgeev.
The problem is: in case of degeneracies, the degenerate subspace is not orthonormalized, since the routine is written for general matrices.
However, since in my case the matrices are unitary, the basis can always be orthonormalized. Is there a better solution than applying the QR algorithm afterwards to the degenerate subspace?
Short answer: Schur decomposition!
If a square matrix A is complex, then its Schur factorization is A=ZTZ*, where Z is unitary and T is upper triangular.
If A happens to be unitary, T must also be unitary. Since T is both unitary and triangular, it is diagonal: the columns of T are orthonormal, and working through them in order forces every off-diagonal entry to vanish.
Let's consider the vectors Z e_i, where the e_i are the vectors of the canonical basis. These vectors obviously form an orthonormal basis. Moreover, A Z e_i = Z T Z* Z e_i = Z T e_i = T_ii Z e_i, so these vectors are eigenvectors of the matrix A.
Hence, the columns of the unitary matrix Z are eigenvectors of the unitary matrix A and form an orthonormal basis.
As a consequence, computing a Schur decomposition of a unitary matrix is equivalent to finding an orthonormal basis of its eigenvectors.
ZGEESX computes the eigenvalues, the Schur form, and, optionally, the matrix of Schur vectors for GE matrices
The resulting T can also be tested to check that A is unitary.
Here is a piece of Python code testing it; scipy's scipy.linalg.schur makes use of LAPACK's zgees for the Schur decomposition. I used hpaulj's code to generate a random unitary matrix, as shown in How to create random orthonormal matrix in python numpy:
import numpy as np
import scipy.linalg
#from hpaulj, https://stackoverflow.com/questions/38426349/how-to-create-random-orthonormal-matrix-in-python-numpy
def rvs(dim=3):
    random_state = np.random
    H = np.eye(dim)
    D = np.ones((dim,))
    for n in range(1, dim):
        x = random_state.normal(size=(dim-n+1,))
        D[n-1] = np.sign(x[0])
        x[0] -= D[n-1]*np.sqrt((x*x).sum())
        # Householder transformation
        Hx = (np.eye(dim-n+1) - 2.*np.outer(x, x)/(x*x).sum())
        mat = np.eye(dim)
        mat[n-1:, n-1:] = Hx
        H = np.dot(H, mat)
    # Fix the last sign such that the determinant is 1
    D[-1] = (-1)**(1-(dim % 2))*D.prod()
    # Equivalent to np.dot(np.diag(D), H) but faster, apparently
    H = (D*H.T).T
    return H

n = 42
A = rvs(n)
A = A.astype(complex)
T, Z = scipy.linalg.schur(A, output='complex', lwork=None, overwrite_a=False, sort=None, check_finite=True)
# print(T)
normT = np.linalg.norm(T, ord=None)  # Frobenius norm
eigenvalues = []
for i in range(n):
    eigenvalues.append(T[i,i])
    T[i,i] = 0.
normTu = np.linalg.norm(T, ord=None)
print('must be very low if A is unitary: ', normTu/normT)
# print(Z)
for i in range(n):
    v = Z[:,i]
    w = A.dot(v) - eigenvalues[i]*v
    print(i, 'must be very low if column i of Z is an eigenvector of A: ', np.linalg.norm(w, ord=None)/np.linalg.norm(v, ord=None))

Solving a Nonlinear equation with Julia

I am trying to solve a nonlinear equation with Julia. I have the following nonlinear equation:
Nfoc(k,k1,z,n) = (1-α)*exp(z)*(k/n)^α/(exp(z)*(k^α)*(n^(1-α))+k*(1-δ)-k1) - A/(1-n)
I have a grid of values for k, k1 and z, and I am trying to find the values of n that are roots of this equation for each k, k1 and z, using this loop:
MatrixN=zeros(nkk,M,nkk)
for i = 1:nkk, j = 1:M
    for i2 = 1:nkk
        MatrixN[i,j,i2] = roots(Nfoc[K[i],K[i2],z(j),n])
    end
end
However, it is obvious that the roots command is not working.
I would deeply appreciate any help, in the least technical way possible!
I don't have enough knowledge to work on your use case, but in general, one way to find roots of a parametric function could be:
using FastAnonymous # Creating efficient "anonymous functions" in Julia
using Roots
f(x,k,k1,z,n) = exp(x) - x^4 + k + k1 + z + n
function f_gen(k,k1,z,n)
    @anon x -> f(x,k,k1,z,n)
end
fzero(f_gen(0,0,0,0), 1) # => finds x so f(x,0,0,0,0) = 0 using a derivative free method

example algorithm for generating random value in dataset with normal distribution?

I'm trying to generate some random numbers with a simple non-uniform probability distribution to mimic lifelike data for testing purposes. I'm looking for a function that accepts mu and sigma as parameters and returns x, where the probability of x being within certain ranges follows a standard bell curve, or thereabouts. It needn't be super precise or even efficient. The resulting dataset needn't match the exact mu and sigma that I set. I'm just looking for a relatively simple non-uniform random number generator. Limiting the set of possible return values to ints would be fine. I've seen many suggestions out there, but none that seem to fit this simple case.
Box-Muller transform in a nutshell:
First, get two independent, uniform random numbers from the interval (0, 1], call them U and V.
Then you can get two independent, unit-normal distributed random numbers from the formulae
X = sqrt(-2 * log(U)) * cos(2 * pi * V);
Y = sqrt(-2 * log(U)) * sin(2 * pi * V);
This gives you iid random numbers for mu = 0, sigma = 1; to set sigma = s, multiply your random numbers by s; to set mu = m, add m to your random numbers.
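For reference, a minimal Python sketch of the transform as described above (only the cosine branch is used, so the second independent number Y is discarded):

import math
import random

def box_muller(mu=0.0, sigma=1.0):
    u = 1.0 - random.random()  # uniform on (0, 1], so log(u) is finite
    v = random.random()        # uniform on [0, 1)
    x = math.sqrt(-2.0 * math.log(u)) * math.cos(2.0 * math.pi * v)
    return mu + sigma * x      # scale and shift to the requested mu and sigma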
My first thought is: why can't you use an existing library? I'm sure most languages already have a library for generating normal random numbers.
If for some reason you can't use an existing library, then the method outlined by @ellisbben is fairly simple to program. An even simpler (approximate) algorithm is just to sum 12 uniform numbers:
X = -6  # minus the mean of the sum of 12 uniform(0,1) draws
for i in 1 to 12:
    X += U  # U is a fresh uniform(0,1) draw each iteration
The value of X is approximately normal; plotting 10^5 draws from this algorithm against the normal distribution shows a close match.
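The same approximation, sketched in Python (the function name approx_normal is mine):

import random

def approx_normal(mu=0.0, sigma=1.0):
    # 12 uniforms on [0, 1) have mean 6 and variance 12 * (1/12) = 1,
    # so subtracting 6 gives an approximately unit-normal value
    x = sum(random.random() for _ in range(12)) - 6.0
    return mu + sigma * x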
