Solving a double integral numerically in MATLAB - algorithm

In the paper "The fractional Laplacian operator on bounded domains as a special case of the nonlocal diffusion operator", the author solves a fractional Laplacian equation on a bounded domain as a nonlocal diffusion equation.
I am trying to implement the finite element approximation of the one-dimensional problem (please refer to page 14 of the above-mentioned paper) in MATLAB.
I am using the following definition of $\phi_k$, since the paper mentions that $\phi$ is a hat function:
\begin{equation}
\phi_{k}(x)=\begin{cases} {x-x_{k-1} \over x_k\,-x_{k-1}} & \mbox{ if } x \in [x_{k-1},x_k], \\
{x_{k+1}\,-x \over x_{k+1}\,-x_k} & \mbox{ if } x \in [x_k,x_{k+1}], \\
0 & \mbox{ otherwise},\end{cases}
\end{equation}
$\Omega=(-1,1)$ and $\Omega_I=(-1-\lambda,-1) \cup (1,1+\lambda)$, so that $\Omega\cup\Omega_I=(-1-\lambda,1+\lambda)$.
For the integers $K,N$ we define the partition of $\overline{\Omega\cup\Omega_I}=[-1-\lambda,1+\lambda]$ as
\begin{equation}
-1-\lambda=x_{-K}<\dots<x_0=-1<x_1<\dots<x_N=1<\dots<x_{K+N}=1+\lambda.
\end{equation}
Finally, the equations that we have to solve to obtain the solution $\tilde{u}_N=\sum_{j=-K}^{K+N}U_j\phi_j(x)$ for some coefficients $U_j$ are:
where $i=1,...,N-1$.
I need pointers in order to simplify and solve the LHS double integral in MATLAB. The paper (page 15) says that I should use four-point Gauss quadrature for the inner integral and the quadgk.m function for the outer integral, but since the limits of the inner integral are in terms of x, how can I apply four-point Gauss quadrature to it? Any help will be appreciated.
Thanks.
You can find the original question here (since SO does not support LaTeX).

For a first stab at the problem, take a look at dblquad and/or quad2d.
In the end, you'll want custom quadrature methods, so you should do something like the following:
% The integrand is of course a function of both x and y
% (use element-wise operators so that quadgk can pass a vector of y values)
integrand = @(x,y) (phi_j(y) - phi_j(x)).*(phi_i(y) - phi_i(x))./abs(y-x).^(2*s+1);
% The inner integral is a function of x, and integrates over y
innerScalar = @(x) quadgk(@(y) integrand(x,y), x-lambda, x+lambda);
% quadgk hands the outer integrand a vector of x values, so evaluate element-wise
inner = @(x) arrayfun(innerScalar, x);
% The inner integral is integrated over x to yield the value of the double integral
dblIntegral = quadgk(inner, -(1+lambda), 1+lambda)
where I've used quadgk twice, but you can replace it with any other (custom) quadrature method you please.
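If you do want to follow the paper's suggestion of a fixed four-point Gauss rule for the inner integral, the x-dependent limits are not a problem: the rule is defined on the reference interval [-1, 1], and for each x you map its nodes affinely onto [x-lambda, x+lambda]. A minimal sketch, reusing integrand and lambda from the snippet above and assuming phi_i, phi_j accept vector arguments (the node and weight values are the standard four-point Gauss-Legendre constants):
% Sketch only -- not the paper's exact scheme. Reference 4-point Gauss-Legendre
% nodes and weights on [-1, 1]:
t = [-0.8611363115940526, -0.3399810435848563, ...
      0.3399810435848563,  0.8611363115940526];
w = [ 0.3478548451374538,  0.6521451548625461, ...
      0.6521451548625461,  0.3478548451374538];
% For each x, map the nodes onto [x-lambda, x+lambda]: y = x + lambda*t,
% and scale by the half-length lambda of the interval
innerGauss = @(x) lambda * sum(w .* integrand(x, x + lambda*t));
% The outer integrand must still be vectorized for quadgk
dblIntegral = quadgk(@(x) arrayfun(innerGauss, x), -(1+lambda), 1+lambda)
Note that the Gauss nodes never coincide with y = x, so the singular point of the integrand is never evaluated directly -- which is also why a fixed rule gives you no error control there, as mentioned below.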
By the way -- what is the reason for the authors to suggest a (non-adaptive) 4-point Gauss method? That way, you have no estimate of (or control over) the errors made in the inner integral...

You can do a 4-point 1D Gaussian quadrature. You seem to assume that this implies a 2D integral. Not so -- it simply means using a higher-order quadrature rule over a 1D interval.
If you're solving a 1D finite element problem, it makes no sense whatsoever to integrate over a 2D domain.
I didn't read the paper, but that's what I recall from the FEA I learned.

Related

Adaptive Simpson's Quadrature Algorithm for Double Integrals?

I'm currently using Numerical Analysis, 10th edition, by Richard L. Burden as a reference for approximate integration techniques. It describes the adaptive Simpson's quadrature rule, which takes only the bounds and an error tolerance as input and returns the approximate integral to within that tolerance. This method is much more effective than the standard Simpson's rule, where you have to specify the number of iterations and never know how close the result is to the actual solution. However, the book goes on to describe a method for double integrals using Simpson's rule, but not an adaptive Simpson's quadrature algorithm for double integrals. Does anyone know a pseudo-algorithm for an adaptive Simpson's rule for double integrals?
For reference, this is the pseudo-algorithm for the composite Simpson's rule for single integrals. Inputs: bounds (a, b) and n, the (even) number of subintervals:
NAME: compositeSimpsons(a, b, n):
    h = (b - a) / n
    first = f(a)
    last = f(b)
    sum = 0
    x = a + h
    for (i = 1 : n-1)        // interior points x_1 ... x_(n-1)
        if (i % 2 == 1)      // odd index -> weight 4
            sum += 4*f(x)
        else                 // even index -> weight 2
            sum += 2*f(x)
        x += h
    end for
    return (h/3) * (first + sum + last)
And here is the pseudo-algorithm for adaptive Simpson's quadrature for single integrals. Inputs: bounds (a, b) and tolerance (tol):
NAME: adaptiveQuadratureSimpsons(a, b, tol):
    myStack.push(a)
    myStack.push(b)
    I = 0
    while (myStack is not empty)
        bb = myStack.pop()
        aa = myStack.pop()
        I1 = compositeSimpsons(aa, bb, 2)
        m = (aa + bb) / 2
        I2 = compositeSimpsons(aa, m, 2) + compositeSimpsons(m, bb, 2)
        if (|I2 - I1| / 15 < (bb - aa) * tol)
            I += I2
        else                 // refine: push the halves [aa, m] and [m, bb]
            myStack.push(m)
            myStack.push(bb)
            myStack.push(aa)
            myStack.push(m)
    end while
    return I
The algorithm for Simpson's rule for double integrals gets very complex fast, since you're replacing the x variable in each iteration with a different subdivision, so I won't detail it here unless necessary. However, I know that algorithm isn't the problem: I've tried it many times and it works fine for many different double-integral problems. I tried to use the same logic from the adaptive Simpson's rule in my double-integral adaptive Simpson's rule by replacing compositeSimpsons() with my compositeSimpsonsDouble(), but it entered an infinite loop, as the difference between I2 and I1 was always less than the tolerance. Any help? I'm coding this in Java.
In the lingo of numerical quadrature, "double integrals" don't play as big a role as the domain you want to integrate your function over. In 1D it's always an interval; in 2D it can be a disk, a rectangle, a triangle, the plane with weight function exp(-r**2), etc. Perhaps your double integral is over one of these. For all these different domains, you have different integration techniques. See https://github.com/nschloe/quadpy for some examples.
For adaptive quadrature in 2D, my first impulse would be to check if the domain can be approximated well by a number of triangles. Like intervals in 1D, those can be easily split into smaller triangles if the error estimator recommends so.
Check https://github.com/nschloe/quadpy/wiki/Adaptive-quadrature for how to do this with quadpy.
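If your domain is simply a rectangle, the 1D accept-or-subdivide scheme from the question carries over almost verbatim: estimate each rectangle with one tensor-product Simpson panel, compare against the sum over its four quadrants, and split when the discrepancy is too large. Here is a minimal MATLAB sketch of that idea (my own names such as adaptiveSimpson2D and simpson2d, not from the book or from quadpy); f must accept matrix arguments element-wise:
function I = adaptiveSimpson2D(f, ax, bx, ay, by, tol)
% Sketch: adaptive Simpson in 2D over the rectangle [ax,bx] x [ay,by].
% Tensor-product Simpson on one rectangle (3x3 stencil):
simpson2d = @(g, a, b, c, d) (b-a)*(d-c)/36 * ...
    sum(sum([1 4 1; 4 16 4; 1 4 1] .* ...
            g([a; (a+b)/2; b]*ones(1,3), ones(3,1)*[c, (c+d)/2, d])));
totalArea = (bx-ax)*(by-ay);
rects = {[ax bx ay by]};             % rectangles still to be processed
I = 0;
while ~isempty(rects)
    r = rects{end};  rects(end) = [];
    a = r(1); b = r(2); c = r(3); d = r(4);
    mx = (a+b)/2;  my = (c+d)/2;
    I1 = simpson2d(f, a, b, c, d);                        % coarse estimate
    I2 = simpson2d(f, a, mx, c, my) + simpson2d(f, mx, b, c, my) + ...
         simpson2d(f, a, mx, my, d) + simpson2d(f, mx, b, my, d);
    if abs(I2 - I1)/15 < tol * (b-a)*(d-c)/totalArea      % Richardson-style test
        I = I + I2;                  % accept the refined value on this rectangle
    else                             % otherwise split it into four quadrants
        rects(end+1:end+4) = {[a mx c my], [mx b c my], [a mx my d], [mx b my d]};
    end
end
end
For example, adaptiveSimpson2D(@(x,y) exp(-x.^2-y.^2), 0, 1, 0, 1, 1e-8) should agree with MATLAB's integral2 on the same integrand to within the tolerance.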

How to find least square fit for two combined functions

I have a curve-fitting problem. I have two functions:
y = ax+b
y = ax^2+bx-2.3
I have one set of data for each of the above functions. I need to find a and b using the least-squares method, combining both functions.
I was using the fminsearch function to minimize the sum of squared errors of these two functions, but I am unable to use this approach with lsqcurvefit.
Kindly help me.
Regards,
Ram
I think you'll need to worry less about which library routine to use and more about the math. Assuming you mean vertical-offset least squares, you'll want to minimize
D = sum_{i=1..m}(y_Li - a x_Li - b)^2 + sum_{j=1..n}(y_Pj - a x_Pj^2 - b x_Pj + 2.3)^2
where there are m points (x_Li, y_Li) on the line and n points (x_Pj, y_Pj) on the parabola. Now find the partial derivatives of D with respect to a and b. Setting them to zero provides two linear equations in the two unknowns a and b. Solve this linear system.
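Since both models are linear in a and b, you can also let MATLAB do the least-squares solve directly by stacking the two data sets into one overdetermined system; the normal equations of that system are exactly the two linear equations described above. A sketch, where xL, yL are assumed to hold the line data and xP, yP the parabola data:
% Sketch: xL,yL = data for y = a*x + b;  xP,yP = data for y = a*x^2 + b*x - 2.3
A   = [xL(:),    ones(numel(xL),1);   % rows coming from y = a*x + b
       xP(:).^2, xP(:)          ];    % rows coming from y = a*x^2 + b*x - 2.3
rhs = [yL(:); yP(:) + 2.3];           % move the known constant to the right-hand side
ab  = A \ rhs;                        % least-squares solution of the stacked system
a = ab(1);  b = ab(2);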
y = ax+b
y = ax^2+bx-2.3
In order not to confuse the y of the first equation with the y of the second equation, we use distinct notations:
u = ax+b
v = ax^2+bx+c
The method of linear regression combined for the two functions is shown on the attached page:
HINT: If you want to derive the matrix equation appearing above yourself, follow Gene's answer.

Why does Perlin noise use a hash function rather than computing random values?

I'm reading through this explanation of Perlin noise, which describes a hash function that calculates pseudorandom values for all x, y coordinates.
If the x, y coordinate hashes are generated randomly and are eventually used for computing the gradients and such, why couldn't I just generate random numbers on the fly?
Is it simply a question of optimization that we use a permutation table to look up our random values? The only reason I could think of is that permuting through the table somehow generates a smoothing effect, but I fail to see how.
Just for clarification, I'm referring to this section of the code:
private static readonly int[] p = { 151,160,137,91,90,15, // Hash lookup table as defined by Ken Perlin. This is a randomly
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23, // arranged array of all numbers from 0-255 inclusive.
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
int aaa, aba, aab, abb, baa, bba, bab, bbb;
aaa = p[p[p[ xi ]+ yi ]+ zi ];
aba = p[p[p[ xi ]+inc(yi)]+ zi ];
aab = p[p[p[ xi ]+ yi ]+inc(zi)];
abb = p[p[p[ xi ]+inc(yi)]+inc(zi)];
baa = p[p[p[inc(xi)]+ yi ]+ zi ];
bba = p[p[p[inc(xi)]+inc(yi)]+ zi ];
bab = p[p[p[inc(xi)]+ yi ]+inc(zi)];
bbb = p[p[p[inc(xi)]+inc(yi)]+inc(zi)];
Why don't we just initialize the values as follows?
aaa = random(255)
aab = random(255)
// ...
The key idea behind Perlin noise generation is to create a grid of points, each of which is assigned some vector value, and then to interpolate between those points in a specific way.
I checked out Ken Perlin's original paper on Perlin noise and it seems like as far back as the original paper he recommends using a hash function to do this:
Associate with each point in the integer lattice a pseudorandom value and x, y, and z gradient values. More precisely, map each ordered sequence of three integers into an uncorrelated ordered sequence of four real numbers [a,b,c,d] = H([x,y,z]), where [a,b,c,d] define a linear equation with gradient [a,b,c] and value d at [x,y,z]. H is best implemented as a hash function.
(Emphasis mine).
I suspect that the reason for this has to do with memory concerns. Perlin noise generation requires that the gradient function at different points in space be reevaluated multiple times over the course of the run of the algorithm. Accordingly, you could either
have some formula that, given a point in space, evaluates to the gradient, or
explicitly create a table and store all of the random values that you need.
Option (1) is what Ken Perlin is proposing. The advantage of this approach is that the memory usage required to store the gradients is minimal; you just need to use a hash function.
Option (2) is what you're proposing. This works just fine, but it uses a ton of memory (you need multiple values stored for each point in the integer lattice you're working with). Remember that Perlin's paper was written back in 1985 (!) when memory was much, much scarcer than it is today.
My suspicion is that you can get away with either approach, but given that you don't need true randomness, the pseudorandomness afforded by a good hash function should be sufficient.
I can't explain why the author of that article you read chose to use the particular hash function that they did, though. My guess is that it's "random enough" and sufficiently fast that it doesn't end up being the bottleneck in the computation; remember that the hash function gets called a lot of times in the noise generation code. This seems to be the standard approach to implementing Perlin noise; even Ken Perlin mentions using this hash function on his site.
What you can't do is the approach you're proposing of just letting the variables aaa, aab, aba, etc. be random. The reason is that the Perlin noise algorithm requires you to reevaluate the noise term at a given point multiple times and expects it to give back the same values every time. If you wanted to compute truly random values, you could do so, but you'd need to cache your results so that you give back consistent answers for the noise terms at each point.
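Language aside (the article's code is Java-like), here is a small MATLAB sketch of that caching idea, with a hypothetical function name latticeValue: the first query of a lattice point draws a random value, and every later query returns the cached one.
function g = latticeValue(xi, yi, zi)
% Hypothetical sketch: replaces the permutation-table lookup with rand-on-demand,
% but caches the value per lattice point so repeated queries stay consistent.
persistent cache
if isempty(cache)
    cache = containers.Map('KeyType','char','ValueType','double');
end
key = sprintf('%d_%d_%d', xi, yi, zi);
if ~isKey(cache, key)
    cache(key) = randi([0 255]);   % drawn once per lattice point, then reused
end
g = cache(key);
end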

Finding parameters of exponentially decaying sinusoids (Matrix Pencil Method)

The matrix pencil method is an algorithm which can be used to find the parameters (frequency, amplitude, decay factor and initial phase) of the individual exponentially decaying sinusoids in a signal consisting of multiple such signals added together. I am trying to implement the algorithm. The algorithm can be found in the paper at this link:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=370583 OR
http://krein.unica.it/~cornelis/private/IEEE/IEEEAntennasPropagMag_37_48.pdf
In order to test the algorithm, I created a synthetic signal composed of four exponentially decaying sinusoids generated as follows:
fs=2205;
t=0:1/fs:249/fs;
f(1)=80;
f(2)=120;
f(3)=250;
f(4)=560;
a(1)=.4;
a(2)=1;
a(3)=0.89;
a(4)=.65;
d(1)=70;
d(2)=50;
d(3)=90;
d(4)=80;
for i=1:4
x(i,:)=a(i)*exp(-d(i)*t).*cos(2*pi*f(i)*t);
end
y=x(1,:)+x(2,:)+x(3,:)+x(4,:);
I then feed this signal to the algorithm described in the paper as follows:
function [f d] = mpencil(y)
% construct the Hankel matrix
N  = size(y,2);
L1 = ceil(1/3 * N);
L2 = floor(2/3 * N);
L  = ceil((L1 + L2) / 2);
fs = 2205;
for i = 1:(N-L)
    Y(i,:) = y(i:(i+L));
end
Y1 = Y(:,1:L);
Y2 = Y(:,2:(L+1));
% SVD and estimation of the model order m from the singular-value ratios
[U,S,V] = svd(Y);
D   = diag(S);
tol = 1e-3;
m   = 0;
l   = length(D);
for i = 1:l
    if abs(D(i)/D(1)) >= tol
        m = m + 1;
    end
end
% build the filtered (reduced-rank) matrices and the pencil
Ss   = S(:,1:m);
Vnew = V(:,1:m);
a    = size(Vnew,1);
Vs1  = Vnew(1:(a-1),:);
Vs2  = Vnew(2:end,:);
Y1   = U*Ss*(Vs1');
Y2   = U*Ss*(Vs2');
D_fil = (pinv(Y1))*Y2;
z = eig(D_fil);
% extract frequency and damping from the eigenvalues (poles)
l = length(z);
for i = 1:2:l
    f((i+1)/2) = (angle(z(i))*fs)/(2*pi);
    d((i+1)/2) = -real(z(i))*fs;
end
In the output of the above code, I correctly obtain the four constituent frequency components, but I do not obtain their decay factors. If anybody has prior experience with this algorithm or some understanding of why this discrepancy might occur, I would be very grateful for your help. I have tried rewriting the code from scratch multiple times, but it has been of no help; it gives the same results.
Any help would be highly appreciated.
I found the problem.
There are two small glitches in the code:
1. The SVD output is the complex conjugate of the right singular matrix, i.e. Vh, and according to the IEEE paper it needs to be converted to V first. This V is then filtered to reduce the dimension, and only after that are V1 and V2 calculated from it. (In your case, you are using Vh directly for calculating V1 and V2!) When calculating Y1 and Y2, the complex conjugates of V1 and V2 are used.
2. You did not consider the absolute magnitude of the complex eigenvalues, but only the real part. The decay factor for the signal model exp(-d*t) comes from the magnitude: d = -log(abs(z))/Ts.
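Concretely, the last loop of mpencil can be replaced along these lines (a sketch using the z and fs already defined there; f_est, d_est and keep are my own names). Each pole is z_k = exp((-d_k + 1i*2*pi*f_k)/fs), so the phase gives the frequency and the magnitude gives the decay factor:
% Sketch: convert each eigenvalue (pole) back to its parameters
f_est = angle(z) * fs / (2*pi);    % frequency in Hz, from the phase
d_est = -log(abs(z)) * fs;         % decay factor in 1/s, from the magnitude (note the minus sign)
% Real sinusoids appear as conjugate pole pairs; keep one pole of each pair
keep  = f_est > 0;
f_est = f_est(keep);
d_est = d_est(keep);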

Least-squares approximation: how is this matrix calculation equation derived?

I am reading the book "Kernel Methods for Pattern Analysis". For the least-squares approximation, the goal is to minimise the sum of the squares of the discrepancies:
e=y-Xw
Therefore we minimize
L(w,S)=(y-Xw)'(y-Xw)
Leading to
$$ w=(X'X)^{-1} X'y $$
I understand everything up to this point.
But how does it lead to this? What is alpha exactly? Is it a constant?
The same way you would solve for the minima (or maxima) of a quadratic function in only one variable: By solving for the zero in the first derivative:
diff((y-Xw)' (y-Xw), w) = 0
(only that this "0" is a row vector with as many elements as w.)
After performing the differentiation we get the following (note that ' denotes the transpose, not a differentiation operator):
-2y'X + 2w'X'X = 0
we transpose the whole expression (so 0 is a column vector) and divide by two:
-X'y + X'Xw = 0
and finally solve for w:
w = (X'X)^-1 X'y
Regarding your second question: alpha is simply the whole expression X(X'X)^-2 X'y. The point is that w can be written as the product of X' and some vector, which means that w is a linear combination of the columns of X' (rows of X).
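A quick numerical sanity check of both expressions (a sketch with random data; the shapes assume X is n-by-p with full column rank and y is n-by-1):
% Sketch: verify w = (X'X)^{-1} X'y equals X' * alpha with alpha = X (X'X)^{-2} X'y
n = 50;  p = 3;
X = randn(n, p);                      % random design matrix (full column rank a.s.)
y = randn(n, 1);
w_primal = (X'*X) \ (X'*y);           % w = (X'X)^{-1} X'y
alpha    = X * ((X'*X)^2 \ (X'*y));   % alpha = X (X'X)^{-2} X'y
w_dual   = X' * alpha;                % w expressed through the rows of X
disp(norm(w_primal - w_dual))         % ~1e-15: both forms give the same w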
