Adaptive Simpsons Quadrature Algorithm for Double Integrals? - algorithm

I'm currently using Numerical Analysis, 10th edition, by Richard L. Burden as a reference for approximate integration techniques. It describes the Adaptive Simpson's Quadrature rule, which takes only the bounds and an error tolerance as input and returns the approximate integral to within that tolerance. This is much more convenient than the standard Simpson's rule, where you have to supply the number of subintervals and have no idea how close the result is to the actual solution. However, the book goes on to describe a method for double integrals using Simpson's rule, but not an adaptive Simpson's quadrature algorithm for double integrals. Does anyone know a pseudo-algorithm for an adaptive Simpson's rule for double integrals?
For reference, this is the pseudo-algorithm for the composite Simpson's rule for single integrals. It takes the bounds (a, b) and the number of subintervals n:
`NAME: compositeSimpsons(a, b, n):   // n must be even
h = (b-a)/n
first = f(a)
last = f(b)
sum = 0
x = a + h
for(i = 1 : n-1)          // loop over the n-1 interior points
    if(i % 2 == 1)        // odd-indexed points get weight 4
        sum += 4*f(x)
    else                  // even-indexed points get weight 2
        sum += 2*f(x)
    x += h
end for
return (h/3) * (first + sum + last)`
And here is the pseudo-algorithm for Adaptive Simpson's Quadrature for single integrals. It takes the bounds (a, b) and a tolerance (tol):
`NAME: adaptiveQuadratureSimpsons(a, b, tol):
myStack.push(a)
myStack.push(b)
I = 0
while(myStack is not empty)
    bb = myStack.pop()
    aa = myStack.pop()
    I1 = compositeSimpsons(aa, bb, 2)
    m = (aa+bb)/2
    I2 = compositeSimpsons(aa, m, 2) + compositeSimpsons(m, bb, 2)
    if(|I2-I1|/15 < (bb-aa)*tol)
        I += I2
    else                  // error estimate too large: split [aa, bb] at m and revisit both halves
        myStack.push(m)
        myStack.push(bb)
        myStack.push(aa)
        myStack.push(m)
end while
return I`
The algorithm for Simpson's rule for double integrals gets very complex fast, as you're replacing the x variable on each iteration with a different subdivision, so I won't detail it here unless necessary. However, I know that algorithm isn't the problem: I've tried it many times and it works fine for many different double integrals. I tried to use the same logic found in the adaptive Simpson's rule in my double-integral version by replacing compositeSimpsons() with my compositeSimpsonsDouble(), but it entered an infinite loop as the difference between I2 and I1 was always less than the tolerance. Any help? I'm coding this in Java.
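For reference, a direct Java translation of the two pseudo-algorithms above looks roughly like this (f is just a placeholder integrand here):

import java.util.ArrayDeque;
import java.util.Deque;

public class AdaptiveSimpson1D {
    static double f(double x) { return Math.sin(x); }   // placeholder integrand

    // Composite Simpson's rule on [a, b] with n (even) subintervals.
    static double compositeSimpsons(double a, double b, int n) {
        double h = (b - a) / n;
        double sum = f(a) + f(b);
        for (int i = 1; i < n; i++)
            sum += (i % 2 == 1 ? 4 : 2) * f(a + i * h);
        return h / 3 * sum;
    }

    static double adaptiveSimpsons(double a, double b, double tol) {
        Deque<double[]> stack = new ArrayDeque<>();
        stack.push(new double[]{a, b});
        double total = 0;
        while (!stack.isEmpty()) {
            double[] iv = stack.pop();
            double aa = iv[0], bb = iv[1], m = (aa + bb) / 2;
            double i1 = compositeSimpsons(aa, bb, 2);
            double i2 = compositeSimpsons(aa, m, 2) + compositeSimpsons(m, bb, 2);
            if (Math.abs(i2 - i1) / 15 < (bb - aa) * tol) {
                total += i2;                             // accept the refined estimate
            } else {                                     // split [aa, bb] and revisit both halves
                stack.push(new double[]{aa, m});
                stack.push(new double[]{m, bb});
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(adaptiveSimpsons(0, Math.PI, 1e-8)); // ~2.0
    }
}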

In the lingo of numerical quadrature, "double integrals" don't play as big a role as the domain you want to integrate your function over. In 1D it's always an interval; in 2D it can be a disk, a rectangle, a triangle, the plane with weight function exp(-r**2), etc. Perhaps your double integral is over one of these. For all these different domains, you have different integration techniques. See https://github.com/nschloe/quadpy for some examples.
For adaptive quadrature in 2D, my first impulse would be to check if the domain can be approximated well by a number of triangles. Like intervals in 1D, those can be easily split into smaller triangles if the error estimator recommends so.
Check https://github.com/nschloe/quadpy/wiki/Adaptive-quadrature for how to do this with quadpy.
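If your domain happens to be a rectangle, the stack-based logic from your 1D routine carries over almost directly: estimate the integral on each rectangle with a 3x3-point Simpson rule, compare it against the sum over the rectangle's four quadrants, and split whenever the estimated error is too large for the rectangle's area. A rough Java sketch (my own illustration, not quadpy; the names are made up and reusing the 1D error constant 15 is only a heuristic):

import java.util.ArrayDeque;
import java.util.Deque;

public class AdaptiveSimpson2D {
    static double f(double x, double y) { return Math.exp(-(x * x + y * y)); } // example integrand

    // Simpson's rule on [ax,bx] x [ay,by] with 2 subintervals per direction (9 points).
    static double simpson2D(double ax, double bx, double ay, double by) {
        double hx = (bx - ax) / 2, hy = (by - ay) / 2;
        double[] w = {1, 4, 1};                          // 1D Simpson weights for 3 points
        double sum = 0;
        for (int i = 0; i <= 2; i++)
            for (int j = 0; j <= 2; j++)
                sum += w[i] * w[j] * f(ax + i * hx, ay + j * hy);
        return sum * hx * hy / 9;                        // (hx/3) * (hy/3) * weighted sum
    }

    static double integrate(double ax, double bx, double ay, double by, double tol) {
        Deque<double[]> stack = new ArrayDeque<>();
        stack.push(new double[]{ax, bx, ay, by});
        double total = 0;
        while (!stack.isEmpty()) {
            double[] r = stack.pop();
            double coarse = simpson2D(r[0], r[1], r[2], r[3]);
            double mx = (r[0] + r[1]) / 2, my = (r[2] + r[3]) / 2;
            double fine = simpson2D(r[0], mx, r[2], my) + simpson2D(mx, r[1], r[2], my)
                        + simpson2D(r[0], mx, my, r[3]) + simpson2D(mx, r[1], my, r[3]);
            double area = (r[1] - r[0]) * (r[3] - r[2]);
            if (Math.abs(fine - coarse) / 15 < tol * area) {
                total += fine;                           // accept this rectangle's contribution
            } else {                                     // split into four quadrants and revisit
                stack.push(new double[]{r[0], mx, r[2], my});
                stack.push(new double[]{mx, r[1], r[2], my});
                stack.push(new double[]{r[0], mx, my, r[3]});
                stack.push(new double[]{mx, r[1], my, r[3]});
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(integrate(-3, 3, -3, 3, 1e-6)); // roughly pi for this integrand
    }
}

The same splitting logic works for triangles; only the per-element rule and the way an element is subdivided change.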

Related

Why does Perlin noise use a hash function rather than computing random values?

I'm reading through this explanation of Perlin noise, which describes a hash function that calculates pseudorandom values for all x, y coordinates.
If the x, y coordinate hashes, which are eventually used for computing the gradients and so on, are generated randomly, why couldn't I just generate random numbers on the fly?
Is it simply a question of optimization that we use a permutation table to look up our random values? The only reason I can think of is that permuting through the hash table somehow generates a smoothing effect, but I fail to see how.
Just for clarification, I'm referring to this section of the code:
private static readonly int[] p = { 151,160,137,91,90,15, // Hash lookup table as defined by Ken Perlin. This is a randomly
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23, // arranged array of all numbers from 0-255 inclusive.
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
int aaa, aba, aab, abb, baa, bba, bab, bbb;
aaa = p[p[p[ xi ]+ yi ]+ zi ];
aba = p[p[p[ xi ]+inc(yi)]+ zi ];
aab = p[p[p[ xi ]+ yi ]+inc(zi)];
abb = p[p[p[ xi ]+inc(yi)]+inc(zi)];
baa = p[p[p[inc(xi)]+ yi ]+ zi ];
bba = p[p[p[inc(xi)]+inc(yi)]+ zi ];
bab = p[p[p[inc(xi)]+ yi ]+inc(zi)];
bbb = p[p[p[inc(xi)]+inc(yi)]+inc(zi)];
Why don't we just initialize the values as follows?
aaa = random(255)
aab = random(255)
// ...
The key idea behind Perlin noise generation is to create a grid of points, each of which is assigned some vector value, and then to interpolate between those points in a specific way.
I checked out Ken Perlin's original paper on Perlin noise and it seems like as far back as the original paper he recommends using a hash function to do this:
Associate with each point in the integer lattice a pseudorandom value and x, y, and z gradient values. More precisely, map each ordered sequence of three integers into an uncorrelated ordered sequence of four real numbers [a,b,c,d] = H([x,y,z]), where [a,b,c,d] define a linear equation with gradient [a,b,c] and value d at [x,y,z]. H is best implemented as a hash function.
(Emphasis mine).
I suspect that the reason for this has to do with memory concerns. Perlin noise generation requires that the gradient function at different points in space be reevaluated multiple times over the course of the run of the algorithm. Accordingly, you could either
have some formula that, given a point in space, evaluates to the gradient, or
explicitly create a table and store all of the random values that you need.
Option (1) is what Ken Perlin is proposing. The advantage of this approach is that the memory usage required to store the gradients is minimal; you just need to use a hash function.
Option (2) is what you're proposing. This works just fine, but it uses a ton of memory (you need multiple values stored for each point in the integer lattice you're working with). Remember that Perlin's paper was written back in 1985 (!) when memory was much, much scarcer than it is today.
My suspicion is that you can get away with either approach, but given that you don't need true randomness, the pseudorandomness afforded by a good hash function should be sufficient.
I can't explain why the author of that article you read chose to use the particular hash function that they did, though. My guess is that it's "random enough" and sufficiently fast that it doesn't end up being the bottleneck in the computation; remember that the hash function gets called a lot of times in the noise generation code. This seems to be the standard approach to implementing Perlin noise; even Ken Perlin mentions using this hash function on his site.
What you can't do is the approach you're proposing of just letting the variables aaa, aab, aba, etc. be random. The reason is that the Perlin noise algorithm reevaluates the noise term at a given point multiple times and expects it to give back the same values every time. If you wanted to compute truly random values, you could do so, but you'd need to cache your results so that you give back consistent answers for the noise term at each point.
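To make that last point concrete, here is a small hypothetical sketch (not from the article) contrasting the two approaches: the permutation-table lookup returns the same value every time the same lattice point is queried, while a plain java.util.Random call does not, unless you add a cache keyed on the coordinates.

import java.util.Random;

public class LatticeValues {
    // A permutation of 0..255, doubled so the nested lookups never run off the end;
    // it is built once from a fixed seed and then reused for every query.
    private static final int[] p = new int[512];
    static {
        int[] perm = new int[256];
        for (int i = 0; i < 256; i++) perm[i] = i;
        Random shuffle = new Random(42);                 // fixed seed: table is reproducible
        for (int i = 255; i > 0; i--) {                  // Fisher-Yates shuffle
            int j = shuffle.nextInt(i + 1);
            int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (int i = 0; i < 512; i++) p[i] = perm[i & 255];
    }

    // Hash-based: the same (xi, yi, zi) always maps to the same value in 0..255.
    static int hashedValue(int xi, int yi, int zi) {
        return p[p[p[xi & 255] + (yi & 255)] + (zi & 255)];
    }

    // Naive: a fresh random value on every call, so repeated queries of the same
    // lattice point disagree and the interpolated noise is no longer consistent.
    private static final Random rng = new Random();
    static int naiveValue(int xi, int yi, int zi) {
        return rng.nextInt(256);
    }

    public static void main(String[] args) {
        System.out.println(hashedValue(3, 7, 1) == hashedValue(3, 7, 1)); // true
        System.out.println(naiveValue(3, 7, 1) == naiveValue(3, 7, 1));   // almost certainly false
    }
}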

Finding parameters of exponentially decaying sinusoids (Matrix Pencil Method)

The matrix pencil method is an algorithm which can be used to find the parameters (frequency, amplitude, decay factor and initial phase) of the individual exponentially decaying sinusoids in a signal consisting of a sum of such sinusoids. I am trying to implement the algorithm, which can be found in the paper at this link:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=370583 OR
http://krein.unica.it/~cornelis/private/IEEE/IEEEAntennasPropagMag_37_48.pdf
In order to test the algorithm, I created a synthetic signal composed of four exponentially decaying sinusoids generated as follows:
fs=2205;
t=0:1/fs:249/fs;
f(1)=80;
f(2)=120;
f(3)=250;
f(4)=560;
a(1)=.4;
a(2)=1;
a(3)=0.89;
a(4)=.65;
d(1)=70;
d(2)=50;
d(3)=90;
d(4)=80;
for i=1:4
x(i,:)=a(i)*exp(-d(i)*t).*cos(2*pi*f(i)*t);
end
y=x(1,:)+x(2,:)+x(3,:)+x(4,:);
I then feed this signal to the algorithm described in the paper as follows:
function [f d] = mpencil(y)
%construct hankel matrix
N = size(y,2);
L1 = ceil(1/3 * N);
L2 = floor(2/3 * N);
L = ceil((L1 + L2) / 2);
fs=2205;
for i=1:1:(N-L)
Y(i,:)=y(i:(i+L));
end
Y1=Y(:,1:L);
Y2=Y(:,2:(L+1));
[U,S,V] = svd(Y);
D=diag(S);
tol=1e-3;
m=0;
l=length(D);
for i=1:l
if( abs(D(i)/D(1)) >= tol)
m=m+1;
end
end
Ss=S(:,1:m);
Vnew=V(:,1:m);
a=size(Vnew,1);
Vs1=Vnew(1:(a-1),:);
Vs2=Vnew(2:end,:);
Y1=U*Ss*(Vs1');
Y2=U*Ss*(Vs2');
D_fil=(pinv(Y1))*Y2;
z = eig(D_fil);
l=length(z);
for i=1:2:l
f((i+1)/2)= (angle(z(i))*fs)/(2*pi);
d((i+1)/2)=-real(z(i))*fs;
end
In the output from the above code, I correctly get the four constituent frequency components, but I do not get their decay factors. If anybody has prior experience with this algorithm or some understanding of why this discrepancy might occur, I would be very grateful for your help. I have tried rewriting the code from scratch multiple times, but it gives the same results.
Any help would be highly appreciated.
I found the problem.
There are two small glitches in the code:
1. The SVD output is the complex conjugate (transpose) of the right singular matrix, i.e. Vh, and according to the IEEE paper it needs to be converted to V first. This V is then filtered to reduce the dimension. After reducing the dimensions of V, V1 and V2 are calculated from V (in your case, you are using Vh directly to calculate V1 and V2!). When calculating Y1 and Y2, the complex conjugates of V1 and V2 are used.
2. You considered only the real part of the complex eigenvalues, not their absolute magnitude. With Ts = 1/fs, the relation is log(|z|)/Ts = -d, i.e. the damping coefficient is d = -log(|z|)*fs for the exp(-d*t) model used in the test signal above.
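As a minimal illustration of that last point (my own sketch, not from the paper; the class and method names are made up): given a complex eigenvalue z of the pencil and the sampling rate fs, the frequency comes from the angle of z and the damping from the log of its magnitude.

public class PencilPoles {
    // Convert one pencil eigenvalue z = exp((-d + j*2*pi*f)/fs) into {f, d}.
    // zRe, zIm are the real and imaginary parts of z; fs is the sampling rate in Hz.
    // d matches the model a*exp(-d*t)*cos(2*pi*f*t) used for the test signal above.
    static double[] poleToParams(double zRe, double zIm, double fs) {
        double magnitude = Math.hypot(zRe, zIm);
        double angle = Math.atan2(zIm, zRe);
        double frequency = angle * fs / (2 * Math.PI);   // from the phase of z
        double damping = -Math.log(magnitude) * fs;      // from |z|, NOT from Re(z)
        return new double[]{frequency, damping};
    }

    public static void main(String[] args) {
        double fs = 2205, f = 120, d = 50;               // one component of the test signal
        double zRe = Math.exp(-d / fs) * Math.cos(2 * Math.PI * f / fs);
        double zIm = Math.exp(-d / fs) * Math.sin(2 * Math.PI * f / fs);
        double[] out = poleToParams(zRe, zIm, fs);
        System.out.printf("f = %.2f Hz, d = %.2f%n", out[0], out[1]); // f = 120.00 Hz, d = 50.00
    }
}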

How many thresholds and distance matrices are there in Eigenface?

I have edited my question to make it as short and precise as possible.
I am developing a prototype of a facial recognition system for my graduation project. I use Eigenfaces, and my main source is the paper by Turk and Pentland. It is available here: http://www.face-rec.org/algorithms/PCA/jcn.pdf.
My doubts concern steps 4 and 5.
I cannot work out how many thresholds there are: two types of threshold, or only one (notice that the text speaks of two types but uses the same symbol). My question is also whether this threshold (or these thresholds) is unique and global for everyone, or whether each person has their own value.
I follow the steps up to the point where a matrix O() of class weight vectors is calculated. This matrix O() has dimension M' x P, where M' is the number of eigenfaces chosen and P is the number of people.
What follows is what confuses me. The paper speaks of two distances: the distance of one class to another, and the distance of one face to another; I call them D1 and D2 respectively. NOTE: the training set contains M images in total, with F = M / P images per person.
I understand that the threshold(s) should be chosen empirically, but there must be a way to approximate them. I was initially designing a distance matrix D1() of dimension P x P, where row vector D(i) holds the distances from the average class vector O(i) to each O(j), j = 1..P; i.e. an "all vs. all" comparison.
This is as far as I got, and what follows depends on whether I should choose a single global threshold for everyone or a separate value for each person, and also on whether there are two types: one for class distances and one for face distances.
I have a theory about how to proceed, although it is not directly supported by Turk's concepts:
Pre-test stage:
Generate two distance matrices, D1 and D2:
D1 stores the distances between classes and D2 the distances between faces, based on the matrices W and A respectively.
Then, since the training set contains P people, take the F column vectors of D1 for each person and estimate a threshold T1 in the range [Min, Max]. This gives a T1(i), i = 1..P.
Separately, compute a T2 based on the range [Min, Max] over the whole matrix D2; this decides whether something is a face or not.
Test stage:
Build a test set with one image for each known person:
Itest = {Itest(1) ... Itest(P)}
For every test image Itest(i):
Calculate the mean-subtracted face Atest = Itest - Imean
Calculate the weight vector Otest = U^T * Atest
Calculate the distances:
dist1(j) = distance(Otest, O(j)), j = 1..P
Af = project(Otest, U)
dist2 = distance(Atest, Af)
Evaluate recognition:
MinDist = Min(dist1)
For each j = 1..P
If dist2 > T2 then "not a face" else:
If MinDist <= T1(j) then "Subject identified as j" else "subject unidentified"
Then I take into account the false-acceptance and false-rejection rates (TFA and TFR) and repeat the test process with different threshold values until I find the value that works best for each person.
Once the thresholds are defined, the system can be put into operation on unknown images; the algorithm is similar to the test stage.
I know I am going off the "script" of the official paper, but this reasoning is the most logical one I can come up with, and I would appreciate some guidance.
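To make my theory concrete, here is a minimal sketch of the decision rule I have in mind, written in Java (all identifiers are my own, not from Turk and Pentland):

public class EigenfaceDecision {
    // Decision rule for one test image.
    // dist1[j] : distance from the test weight vector Otest to class mean O(j), j = 0..P-1
    // dist2    : distance from Atest to its projection Af onto the face space
    // t1[j]    : per-person class-distance threshold
    // t2       : global face-space threshold
    // Returns the index of the recognised person, -1 for "unknown face", -2 for "not a face".
    static int classify(double[] dist1, double dist2, double[] t1, double t2) {
        if (dist2 > t2) return -2;                     // too far from face space: not a face
        int best = 0;
        for (int j = 1; j < dist1.length; j++)         // nearest class mean
            if (dist1[j] < dist1[best]) best = j;
        return dist1[best] <= t1[best] ? best : -1;    // close enough to that class, or unknown
    }

    public static void main(String[] args) {
        double[] dist1 = {5.1, 2.3, 7.8};
        double[] t1 = {3.0, 3.0, 3.0};
        System.out.println(classify(dist1, 1.0, t1, 4.0)); // 1: identified as person 1
        System.out.println(classify(dist1, 9.0, t1, 4.0)); // -2: not a face
    }
}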
EDIT:
I have nothing more to add that has not already been said and that might clarify things.
Could anyone tell me whether I am on the right track with my "theory"? I am moving forward with my project, and if this is not the right way, I would appreciate some guidance rather than pressing on and finding out later that it doesn't work.

Solving double integral numerically in matlab

In the paper "The fractional Laplacian operator on bounded domains as a special case of the nonlocal diffusion operator". Where the author has solved a fractional laplacian equation on bounded domain as a non-local diffusion equation.
I am trying to implement the finite element approximation of the one dimensional problem(please refer to page 14 of the above mentioned paper) in matlab.
I am using the following definition of $\phi_k$ as it is mentioned in the paper that $\phi$ is a $hat\;function$
\begin{equation}
\phi_{k}(x)=\begin{cases} {x-x_{k-1} \over x_k\,-x_{k-1}} & \mbox{ if } x \in [x_{k-1},x_k], \\
{x_{k+1}\,-x \over x_{k+1}\,-x_k} & \mbox{ if } x \in [x_k,x_{k+1}], \\
0 & \mbox{ otherwise},\end{cases}
\end{equation}
$\Omega=(-1,1)$ and $\Omega_I=(-1-\lambda,-1) \cup (1,1+\lambda)$ so that $\Omega\cup\Omega_I=(-1-\lambda,1+\lambda)$
For the integers K,N we define the partition of $\overline{\Omega\cup\Omega_I}=[-1-\lambda,1+\lambda]$ as,
\begin{equation}
-1-\lambda=x_{-K}<...
Finally, the equations that we have to solve to get the solution $\tilde{u}_N=\sum_{j=-K}^{K+N}U_j\phi_j(x)$ for some coefficients $U_j$ are the ones given in the paper,
where $i=1,...,N-1$.
I need pointers on how to simplify and solve the LHS double integral in MATLAB. The paper says (page 15) that I should use four-point Gauss quadrature for the inner integral and the quadgk.m function for the outer integral, but since the limits of the inner integral are in terms of x, how can I apply four-point Gauss quadrature to it? Any help will be appreciated.
Thanks.
You can find the original question here. (Since SO does not support LaTeX.)
For a first stab at the problem, take a look at dblquad and/or quad2d.
In the end, you'll want custom quadrature methods, so you should do something like the following:
% The integrand is of course a function of both x and y
integrand = @(x,y) (phi_j(y) - phi_j(x)).*(phi_i(y) - phi_i(x))./abs(y-x).^(2*s+1);
% The inner integral is a function of x, and integrates over y
inner = @(x) quadgk(@(y) integrand(x,y), x-lambda, x+lambda);
% The inner function is then integrated over x to yield the value of the double
% integral (arrayfun is needed because quadgk hands its integrand a vector of x values)
dblIntegral = quadgk(@(x) arrayfun(inner, x), -(1+lambda), 1+lambda)
where I've used quadgk twice, but you can replace it with any other (custom) quadrature method you please.
By the way, what is the reason for the authors to suggest a (non-adaptive) 4-point Gauss method? That way, you have no estimate of (and no control over) the error made in the inner integral...
You can do a 4-point 1D Gaussian quadrature. You seem to assume that it implies a 2D integral. Not so: it is simply a higher-order quadrature rule in 1D.
If you're solving a 1D finite element problem, it makes no sense to integrate over a 2D domain.
I didn't read the paper, but that's what I recall from the FEA I learned.
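Regarding the four-point Gauss rule the paper suggests for the inner integral: the x-dependent limits are handled by mapping the inner interval [x-lambda, x+lambda] affinely onto [-1, 1] before applying the standard nodes and weights. A minimal sketch (my own illustration, in Java and with a placeholder integrand g):

public class GaussInner {
    // Standard 4-point Gauss-Legendre nodes and weights on [-1, 1].
    static final double[] NODES = {
        -0.8611363115940526, -0.3399810435848563,
         0.3399810435848563,  0.8611363115940526
    };
    static final double[] WEIGHTS = {
        0.3478548451374538, 0.6521451548625461,
        0.6521451548625461, 0.3478548451374538
    };

    // Placeholder for the actual integrand g(x, y); replace it with the FEM integrand.
    static double g(double x, double y) { return Math.cos(x) * Math.sin(y); }

    // Inner integral over y in [a, b], with a and b allowed to depend on x:
    // y = ((b - a)*t + (a + b)) / 2, dy = (b - a)/2 dt maps [-1, 1] onto [a, b].
    static double innerIntegral(double x, double a, double b) {
        double half = (b - a) / 2, mid = (a + b) / 2;
        double sum = 0;
        for (int k = 0; k < 4; k++)
            sum += WEIGHTS[k] * g(x, half * NODES[k] + mid);
        return half * sum;
    }

    public static void main(String[] args) {
        double lambda = 0.5, x = 0.3;
        // The inner limits depend on x, exactly as in the quadgk example above.
        System.out.println(innerIntegral(x, x - lambda, x + lambda));
    }
}

The outer integral over x can then be handled by any adaptive 1D routine, as in the quadgk example above.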

matlab: optimum amount of points for linear fit

I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that up in a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution of.
One approach would be to compute the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intersepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intersepts, 3);
x([idx; 3] == 2) % the points with the intersepts closest to the linear one.
This requires the Statistics Toolbox (for kmeans). It is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slope between two points in the start or end range lies close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and checking whether the sum of squared errors remains small. I gave this up very quickly, because defining what "small" means is very subjective and must be done heuristically.
I then tried a more systematic and robust version of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii) P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
% ii represents fraction of range from center to end,
% So ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, and choose the coefficient which yields the smallest MSE.
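A minimal sketch of that idea (my own Java illustration, not a port of polyfit0): with a fixed intercept c, the least-squares slope over a segment is sum((y - c)*x) / sum(x^2), and each candidate slope is scored by its MSE against the full dataset.

public class SegmentSlopeFit {
    // Least-squares slope of y ~ c + m*x over indices [from, to), with the intercept c fixed.
    static double fixedInterceptSlope(double[] x, double[] y, double c, int from, int to) {
        double num = 0, den = 0;
        for (int i = from; i < to; i++) {
            num += (y[i] - c) * x[i];
            den += x[i] * x[i];
        }
        return num / den;
    }

    // Mean squared error of the line y = c + m*x over the whole dataset.
    static double mse(double[] x, double[] y, double c, double m) {
        double s = 0;
        for (int i = 0; i < x.length; i++) {
            double r = y[i] - (c + m * x[i]);
            s += r * r;
        }
        return s / x.length;
    }

    // Slide overlapping segments of a given length over the data and keep the
    // segment slope whose line fits the entire dataset best.
    static double bestSlope(double[] x, double[] y, double c, int segLen) {
        double best = Double.NaN, bestErr = Double.POSITIVE_INFINITY;
        for (int start = 0; start + segLen <= x.length; start += segLen / 2) {
            double m = fixedInterceptSlope(x, y, c, start, start + segLen);
            double err = mse(x, y, c, m);
            if (err < bestErr) { bestErr = err; best = m; }
        }
        return best;
    }

    public static void main(String[] args) {
        double c = 0.043;                               // known intercept, as in the question
        double[] x = new double[30], y = new double[30];
        for (int i = 0; i < 30; i++) {
            x[i] = 0.01 * (i + 1);
            y[i] = c + x[i];                            // linear part with slope 1
        }
        for (int i = 26; i < 30; i++) y[i] = c + x[25]; // the last few points level off
        System.out.println(bestSlope(x, y, c, 10));     // prints a slope close to 1
    }
}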
