Separate mutation probabilities for each part of the solution (Genetic Algorithms) - genetic-algorithm

I am working with the Deap library (Python) for evolutionary computation. I am interested in the following mutation function:
deap.tools.mutGaussian(individual, mu, sigma, indpb)
where indpb, according to the documentation, is the probability of mutating each element of the solution.
My question is: how does one specify a higher (or lower) mutation probability for certain parts of the solution (certain indices)?
In other words, how can indpb be a vector of per-index probabilities rather than a scalar?

You most likely need to implement your own mutation function. Perhaps something like the following:
import random

def mutGaussian(individual, mu, sigma, indpb):
    # mu, sigma and indpb are sequences with one entry per position
    for i, (m, s, p) in enumerate(zip(mu, sigma, indpb)):
        if random.random() < p:  # per-index mutation probability
            individual[i] += random.gauss(m, s)
    return individual,  # DEAP mutation operators return a tuple
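For example (a hypothetical setup; the values are only illustrative), the last two positions below mutate ten times more often than the first two:
ind = [0.0, 0.0, 0.0, 0.0]
mu = [0.0] * 4
sigma = [1.0] * 4
indpb = [0.05, 0.05, 0.5, 0.5]  # higher mutation probability for the tail
mutated, = mutGaussian(ind, mu, sigma, indpb)
When registering this with a DEAP toolbox, you can pass the vectors as keyword arguments just as you would with the scalar indpb.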

Related

Adaptive Simpsons Quadrature Algorithm for Double Integrals?

I'm currently using Numerical Analysis, 10th edition, by Richard L. Burden as a reference for approximate integration techniques. It describes the adaptive Simpson's quadrature rule, which takes only the bounds and an error tolerance, and returns the approximate integral to within that tolerance. This method is much more effective than the standard Simpson's rule, where you have to input the number of iterations without knowing how close you are to the actual solution. However, the book goes on to describe a method for double integrals using Simpson's rule, but not an adaptive Simpson's quadrature algorithm for double integrals. Does anyone know a pseudo-algorithm for an adaptive Simpson's rule for double integrals?
For reference, this is the pseudo-algorithm for the composite Simpson's rule for single integrals. It takes the bounds (a, b) and the number of subintervals n:
`NAME: compositeSimpsons(a, b, n):
h = (b-a)/n
first = f(a)
last = f(b)
sum = 0
x = a + h
for(i = 1 : n-1)        // interior nodes x_1 ... x_(n-1)
    if(i % 2 == 1)      // odd index gets weight 4
        sum += 4*f(x)
    else                // even index gets weight 2
        sum += 2*f(x)
    x += h
end for
return (h/3) * (first + sum + last)`
And here is the pseudo-algorithm for adaptive Simpson's quadrature for single integrals. It takes the bounds (a, b) and a tolerance (tol):
`NAME: adaptiveQuadratureSimpsons(a, b, tol):
myStack.push(a)
myStack.push(b)
I = 0
while(myStack is not empty)
    bb = myStack.pop()
    aa = myStack.pop()
    I1 = compositeSimpsons(aa, bb, 2)
    m = (aa+bb)/2
    I2 = compositeSimpsons(aa, m, 2) + compositeSimpsons(m, bb, 2)
    if(|I2-I1|/15 < (bb-aa)*tol)
        I += I2
    else                // refine: push both halves
        myStack.push(m)
        myStack.push(bb)
        myStack.push(aa)
        myStack.push(m)
end while
return I`
The algorithm for Simpson's rule for double integrals gets very complex fast, as you replace the x variable at each iteration with a different subdivision, so I won't detail it here unless necessary. However, I know the problem isn't that algorithm, as I've tried it many times and it works fine for many different double-integral problems. I tried to use the same logic found in the adaptive Simpson's rule in my double-integral adaptive Simpson's rule by replacing compositeSimpsons() with my compositeSimpsonsDouble(), but it entered an infinite loop because the difference between I2 and I1 was always less than the tolerance. Any help? I'm coding this in Java.
In the lingo of numerical quadrature, "double integrals" don't play as big a role as the domain you want to integrate your function over. In 1D it's always an interval; in 2D it can be a disk, a rectangle, a triangle, the plane with weight function exp(-r**2), etc. Perhaps your double integral is over one of these. For all these different domains, you have different integration techniques. See https://github.com/nschloe/quadpy for some examples.
For adaptive quadrature in 2D, my first impulse would be to check if the domain can be approximated well by a number of triangles. Like intervals in 1D, those can easily be split into smaller triangles if the error estimator recommends it.
Check https://github.com/nschloe/quadpy/wiki/Adaptive-quadrature for how to do this with quadpy.
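If the domain is simply a rectangle, the 1D divide-and-check idea from the question carries over directly. Below is a minimal sketch (in Python rather than Java, and my own construction, not quadpy's API): estimate each cell with a tensor-product Simpson rule, compare against the sum over its four sub-cells, and recurse where the discrepancy exceeds the (here naively split) tolerance:

def simpson2d(f, ax, bx, ay, by):
    # Tensor-product Simpson rule on the rectangle [ax, bx] x [ay, by]
    xs = (ax, (ax + bx) / 2, bx)
    ys = (ay, (ay + by) / 2, by)
    w = (1, 4, 1)
    hx, hy = (bx - ax) / 2, (by - ay) / 2
    return (hx / 3) * (hy / 3) * sum(
        w[i] * w[j] * f(xs[i], ys[j]) for i in range(3) for j in range(3))

def adaptiveSimpson2d(f, ax, bx, ay, by, tol):
    mx, my = (ax + bx) / 2, (ay + by) / 2
    cells = [(ax, mx, ay, my), (mx, bx, ay, my),
             (ax, mx, my, by), (mx, bx, my, by)]
    coarse = simpson2d(f, ax, bx, ay, by)
    fine = sum(simpson2d(f, *c) for c in cells)
    if abs(fine - coarse) / 15 < tol:  # same Richardson-style test as in 1D
        return fine
    return sum(adaptiveSimpson2d(f, *c, tol / 4) for c in cells)

For example, adaptiveSimpson2d(lambda x, y: x * y, 0, 1, 0, 1, 1e-8) returns 0.25 with no recursion, since Simpson's rule is exact for this integrand.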

Latent factor recovery with probabilistic matrix factorization using Edward

I implemented a probabilistic matrix factorization model (R = U'V) following the example in Edward's repo:
import numpy as np
import tensorflow as tf
from edward.models import Normal

# N, M: shape of the rating matrix; D: latent dimension (defined elsewhere)
# data
U_true = np.random.randn(D, N)
V_true = np.random.randn(D, M)
R_true = np.dot(np.transpose(U_true), V_true) + np.random.normal(0, 0.1, size=(N, M))
# model
I = tf.placeholder(tf.float32, [N, M])  # indicator placeholder for observed entries
U = Normal(loc=tf.zeros([D, N]), scale=tf.ones([D, N]))
V = Normal(loc=tf.zeros([D, M]), scale=tf.ones([D, M]))
R = Normal(loc=tf.matmul(tf.transpose(U), V), scale=tf.ones([N, M]))
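(The question omits the inference step; for context, a typical variational setup in Edward, along the lines of the repo's PMF example, would be something like the following sketch, where the variable names are my own.)
import edward as ed

# Fully factorized normal variational approximations for U and V
qU = Normal(loc=tf.get_variable("qU/loc", [D, N]),
            scale=tf.nn.softplus(tf.get_variable("qU/scale", [D, N])))
qV = Normal(loc=tf.get_variable("qV/loc", [D, M]),
            scale=tf.nn.softplus(tf.get_variable("qV/scale", [D, M])))

inference = ed.KLqp({U: qU, V: qV}, data={R: R_true.astype(np.float32)})
inference.run(n_iter=1000)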
I get good performance when predicting the data in matrix R. However, when I evaluate the inferred traits in U and V, the error varies a lot and can get very high.
I tried a latent space of small dimension (e.g. 2) and checked whether the latent traits were simply permuted. They sometimes are, but even after realigning them the error is still significant.
To throw some numbers out: for a synthetic R matrix generated from U and V, both normally distributed (mean 0 and variance 1), I can achieve a mean absolute error of 0.003 on R, but on U and V it's usually around 0.5.
I know this model is symmetric, but I am not sure about the implications. I would like to ask:
Is it actually possible to guarantee the recovery of the original latent traits in some way?
If so, how could it be achieved, preferably using Edward?
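Regarding the symmetry: the likelihood depends on U and V only through U'V, which is unchanged by any orthogonal rotation Q of the latent space, and the standard normal priors are rotation-invariant as well, so the factors are at best identifiable up to rotation/reflection. A quick numerical check of this invariance (my own sketch, not part of the question):
import numpy as np

D, N, M = 2, 4, 3
U = np.random.randn(D, N)
V = np.random.randn(D, M)

# Random orthogonal Q from a QR decomposition
Q, _ = np.linalg.qr(np.random.randn(D, D))

# (QU)'(QV) = U'Q'QV = U'V: rotated factors reconstruct R identically
assert np.allclose(U.T @ V, (Q @ U).T @ (Q @ V))
This is consistent with the reported numbers (0.003 error on R but ~0.5 on U and V): inference has no way to prefer one rotation over another. Aligning the inferred factors to the true ones with a Procrustes-style rotation before comparing is one way to measure recovery up to this symmetry.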

Finding parameters of exponentially decaying sinusoids (Matrix Pencil Method)

The matrix pencil method is an algorithm which can be used to find the parameters (frequency, amplitude, decay factor and initial phase) of the individual exponentially decaying sinusoids in a signal that is a sum of several such components. I am trying to implement the algorithm, which can be found in the paper at either of these links:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=370583 OR
http://krein.unica.it/~cornelis/private/IEEE/IEEEAntennasPropagMag_37_48.pdf
In order to test the algorithm, I created a synthetic signal composed of four exponentially decaying sinusoids, generated as follows:
fs=2205;
t=0:1/fs:249/fs;
f(1)=80;
f(2)=120;
f(3)=250;
f(4)=560;
a(1)=.4;
a(2)=1;
a(3)=0.89;
a(4)=.65;
d(1)=70;
d(2)=50;
d(3)=90;
d(4)=80;
for i=1:4
    x(i,:)=a(i)*exp(-d(i)*t).*cos(2*pi*f(i)*t);
end
y=x(1,:)+x(2,:)+x(3,:)+x(4,:);
I then feed this signal to the algorithm described in the paper as follows:
function [f, d] = mpencil(y)
% construct the Hankel matrix
N = size(y,2);
L1 = ceil(1/3 * N);
L2 = floor(2/3 * N);
L = ceil((L1 + L2) / 2);    % pencil parameter, between N/3 and 2N/3
fs = 2205;                  % sampling rate of the test signal
for i = 1:1:(N-L)
    Y(i,:) = y(i:(i+L));
end
Y1 = Y(:,1:L);
Y2 = Y(:,2:(L+1));
% SVD and truncation of the small singular values
[U,S,V] = svd(Y);
D = diag(S);
tol = 1e-3;
m = 0;
l = length(D);
for i = 1:l
    if( abs(D(i)/D(1)) >= tol )
        m = m + 1;
    end
end
Ss = S(:,1:m);
Vnew = V(:,1:m);
a = size(Vnew,1);
Vs1 = Vnew(1:(a-1),:);
Vs2 = Vnew(2:end,:);
Y1 = U*Ss*(Vs1');
Y2 = U*Ss*(Vs2');
% eigenvalues of the pencil give the signal poles
D_fil = (pinv(Y1))*Y2;
z = eig(D_fil);
l = length(z);
for i = 1:2:l   % poles come in complex-conjugate pairs
    f((i+1)/2) = (angle(z(i))*fs)/(2*pi);
    d((i+1)/2) = -real(z(i))*fs;
end
In the output of the above code, I correctly get the four constituent frequency components, but I do not get their decay factors. If anybody has prior experience with this algorithm or some understanding of why this discrepancy occurs, I would be very grateful for your help. I have tried rewriting the code from scratch multiple times, but it has been of no help; I get the same results.
Any help would be highly appreciated.
I found the problem.
There are two small glitches in the code:
1. The SVD output is the complex conjugate of the right singular matrix, i.e., Vh, and according to the IEEE paper it needs to be converted to V first. This V is then filtered to reduce the dimension. After reducing the dimensions of V, V1 and V2 are calculated from V. (In your case, you are using Vh directly to calculate V1 and V2!) When calculating Y1 and Y2, the complex conjugates of V1 and V2 are used.
2. You considered only the real part of the complex eigenvalues, not their magnitude. The damping coefficient is zeta = log(|z|)/Ts, not -real(z)*fs.
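As a quick illustration of the second point (my own sketch in Python/NumPy, with a pole value chosen to match the question's second component), frequency and damping fall out of a discrete-time pole z as follows:
import numpy as np

fs = 2205.0  # sampling rate from the question

# For x(t) = exp(-d*t)*cos(2*pi*f*t), each pole is z = exp((-d + 1j*2*pi*f)/fs);
# here f = 120 Hz and d = 50, as in the question's second component.
z = np.exp((-50.0 + 1j * 2 * np.pi * 120.0) / fs)

f = np.angle(z) * fs / (2 * np.pi)  # frequency in Hz, as in the question's code
d = -np.log(np.abs(z)) * fs         # decay factor from |z|, not from -real(z)*fs
print(f, d)                         # ~120.0, ~50.0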

Numpy Hermitian Matrix class

Are you aware of something like a Hermitian matrix class in numpy? I'd like to optimize matrix calculations like
B = U * A * U.H
, where A (and thus B) is Hermitian. Without exploiting this, all matrix elements of B are calculated. In fact, it should be possible to save a factor of about 2 here. Am I missing something?
The method I need should take the upper/lower triangle of A and the full matrix of U, and return the upper/lower triangle of B.
I don't think there exists a method for your specific problem, but with a little thought you might be able to build an algorithm from the low-level BLAS routines that are wrapped in SciPy. For example, dgemm, dsymm, and dtrmm do general, symmetric, and triangular matrix products, respectively. Here's an example of using them:
import numpy as np
from scipy.linalg.blas import dgemm, dsymm, dtrmm

A = np.random.rand(10, 10)
B = np.random.rand(10, 10)
S = np.dot(A, A.T)  # symmetric matrix
T = np.triu(S)      # upper triangular part of S
# normal matrix-matrix product
assert np.allclose(dgemm(1, A, B), np.dot(A, B))
# symmetric mat-mat product using only upper-triangle
assert np.allclose(dsymm(1, T, B), np.dot(S, B))
# upper-triangular mat-mat product
assert np.allclose(dtrmm(1, T, B), np.dot(T, B))
There are many other low-level BLAS routines available; I find the NETLIB page to be a good resource to learn what they do. You may be able to cleverly use some combination of the available routines to efficiently solve the problem you have in mind.
Edit: it looks like there are LAPACK routines that quickly compute exactly what you want: dsytrd or zhetrd, but unfortunately these don't appear to be wrapped directly in scipy.linalg.lapack, though scipy does provide cython wrappers for them. Best of luck!
I needed tridiagonal reduction of a symmetric/Hermitian matrix A,
T = Q^H * A * Q
– presumably OP's underlying problem – and I've just submitted a pull request to SciPy for properly interfacing LAPACK's {s,d}sytrd (for real symmetric matrices) and {c,z}hetrd (for Hermitian matrices). All routines use either only the upper or the lower triangular part of the matrix.
Once this has been merged, it can be used like
import numpy as np
from scipy.linalg.lapack import get_lapack_funcs

n = 3
dtype = np.float64  # real symmetric case; use a complex dtype for the hetrd variants
A = np.zeros((n, n), dtype=dtype)
A[np.triu_indices_from(A)] = np.arange(1, 2*n+1, dtype=dtype)

# fetch the dtype-matched routines
sytrd, sytrd_lwork = get_lapack_funcs(('sytrd', 'sytrd_lwork'), (A,))

# query lwork -- optional
lwork, info = sytrd_lwork(n)
assert info == 0

data, d, e, tau, info = sytrd(A, lwork=lwork)
assert info == 0
The vectors d and e now contain the main diagonal and the off-diagonal of T, respectively (for a symmetric tridiagonal matrix the super- and subdiagonals coincide).
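For illustration (my addition, assuming the real symmetric case above), the tridiagonal matrix T = Q^H * A * Q can then be assembled from d and e:
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)  # main, super- and subdiagonal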

Solving double integral numerically in matlab

In the paper "The fractional Laplacian operator on bounded domains as a special case of the nonlocal diffusion operator", the author solves a fractional Laplacian equation on a bounded domain as a non-local diffusion equation.
I am trying to implement the finite element approximation of the one-dimensional problem (please refer to page 14 of the above-mentioned paper) in matlab.
I am using the following definition of $\phi_k$, since the paper states that $\phi$ is a hat function:
\begin{equation}
\phi_{k}(x)=\begin{cases} {x-x_{k-1} \over x_k\,-x_{k-1}} & \mbox{ if } x \in [x_{k-1},x_k], \\
{x_{k+1}\,-x \over x_{k+1}\,-x_k} & \mbox{ if } x \in [x_k,x_{k+1}], \\
0 & \mbox{ otherwise},\end{cases}
\end{equation}
$\Omega=(-1,1)$ and $\Omega_I=(-1-\lambda,-1) \cup (1,1+\lambda)$ so that $\Omega\cup\Omega_I=(-1-\lambda,1+\lambda)$
For the integers K,N we define the partition of $\overline{\Omega\cup\Omega_I}=[-1-\lambda,1+\lambda]$ as,
\begin{equation}
-1-\lambda=x_{-K}<\dots
\end{equation}
Finally, the equations that we have to solve to get the solution $\tilde{u}_N=\sum_{j=-K}^{K+N}U_j\phi_j(x)$, for some coefficients $U_j$, are:
where $i=1,\dots,N-1$.
I need pointers to simplify and solve the LHS double integral in matlab. The paper says (page 15) that I should use four-point Gauss quadrature for the inner integral and the quadgk.m function for the outer integral, but since the limits of the inner integral are in terms of x, how can I apply four-point Gauss quadrature to it? Any help will be appreciated.
Thanks.
You can find the original question here. (Since SO does not support LaTeX.)
For a first stab at the problem, take a look at dblquad and/or quad2d.
In the end, you'll want custom quadrature methods, so you should do something like the following:
% The integrand is of course a function of both x and y
integrand = @(x,y) (phi_j(y) - phi_j(x)).*(phi_i(y) - phi_i(x))./abs(y-x).^(2*s+1);
% The inner integral is a function of x, and integrates over y
inner = @(x) quadgk(@(y) integrand(x,y), x-lambda, x+lambda);
% The inner integral is integrated over x to yield the value of the double
% integral; arrayfun is needed because quadgk evaluates its integrand on vectors
dblIntegral = quadgk(@(x) arrayfun(inner, x), -(1+lambda), 1+lambda)
where I've used quadgk twice, but you can replace it by any other (custom) quadrature method you please.
By the way -- what is the reason for the authors to suggest a (non-adaptive) 4-point Gauss method? That way, you have no estimate of (and/or control over) the errors made in the inner integral...
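Regarding the asker's worry about the x-dependent inner limits: fixed-node Gauss quadrature is always defined on a reference interval, and variable limits enter only through a change of variables. A minimal sketch (in Python for concreteness; g stands in for the inner integrand at a fixed x):
import numpy as np

# Standard 4-point Gauss-Legendre nodes and weights on [-1, 1]
t, w = np.polynomial.legendre.leggauss(4)

def gauss4(g, a, b):
    # Map the reference nodes into [a, b] and scale by the Jacobian (b - a)/2
    y = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, g(y))
For the inner integral above, one would call gauss4 with a = x - lambda and b = x + lambda at every x the outer rule asks for; the dependence of the limits on x poses no obstacle.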
You can do a 4-point 1D Gaussian quadrature. You seem to assume that it implies a 2D integral. Not so: it is a higher-order quadrature rule over 1D.
If you're solving a 1D finite element problem, it makes no sense whatsoever to integrate over a 2D domain.
I didn't read the paper, but that's what I recall from the FEA I learned.
