I'm wondering if anyone knows of a fast (i.e. O(N log N)) method of calculating the average square difference function (ASDF) or average magnitude difference function (AMDF) for a periodic signal, or whether it is even possible.
I know that one can use the FFT to calculate the periodic cross correlation. For example, in Matlab code,
for i=1:N
xc(i)=sum(x1.*circshift(x2,i-1));
end
is equivalent to the much faster
xc=ifft(fft(x1).*conj(fft(x2)));
Is there a similar "fast" algorithm for
for i=1:N
ASDF(i)=sum((x1-circshift(x2,i-1)).^2)/N;
end
or
for i=1:N
AMDF(i)=sum(abs(x1-circshift(x2,i-1)))/N;
end
?
You can expand your definition of ASDF as follows:
for i = 1:N
asdf(i) = (sum(x1.^2) - 2*sum(x1.*circshift(x2,i-1)) + sum(x2.^2))/N;
end
which simplifies to
asdf = (-2*ifft(fft(x1).*conj(fft(x2))) + sum(x1.^2) + sum(x2.^2))/N;
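For example, a quick numerical check of the equivalence (x1 and x2 here are just arbitrary test signals, not from the original post):
N  = 64;
x1 = randn(N,1);
x2 = randn(N,1);
% Loop version of the ASDF
asdf_loop = zeros(N,1);
for i = 1:N
    asdf_loop(i) = sum((x1 - circshift(x2,i-1)).^2)/N;
end
% FFT-based version
asdf_fft = (-2*ifft(fft(x1).*conj(fft(x2))) + sum(x1.^2) + sum(x2.^2))/N;
max(abs(asdf_loop - real(asdf_fft)))   % agreement up to rounding error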
I am using a Newton-Raphson + successive substitution algorithm to perform a flash calculation (chemical process simulation).
The algorithm converges well when the input has low precision, like 0.1, but when the precision is increased to 0.11111 or 0.99999, it no longer converges.
When I use the quasi-Newton method with a BFGS update, the same problem occurs. How can I decrease the sensitivity of the code to the numerical precision of the input?
Here is a simple example using MATLAB to solve the Rachford-Rice equation. When comp_overall = [0.9, 1-0.9], it converges well. However, when the precision is increased to [0.99999, 1-0.99999], it does not converge.
K = [0.053154011443159 34.234731216532658];
comp_overall = [0.99999 1-0.99999];
phi = 0.5;          % initial value
epsilon = 1.0;
iter1 = 1;
while (epsilon >= 1.e-05)
    rc = 0.0;
    drc = 0.0;
    for i = 1:2
        % Rachford-Rice equation
        rc = comp_overall(i)*(K(i)-1.0)/(1.0+phi*(K(i)-1.0)) + rc;
        % Derivative
        drc = comp_overall(i)*(K(i)-1.0)^2/(1.0+phi*(K(i)-1.0))^2 + drc;
    end
    % Damped Newton-Raphson step
    phi1 = phi + 0.01*(rc/drc);
    % Convergence measure
    epsilon = abs((phi1-phi)/phi);
    phi = phi1;
    iter1 = iter1 + 1;
end
The Newton–Raphson method relies on the function being differentiable between any two consecutive approximations. Depending on the choice of the initial value, this may not be the case for z₁ = 0.99999. Let's look at how the Rachford-Rice function behaves near the root:
The root of this function is φ₀ ≈ –0.0300781429 and the nearest point of discontinuity is –1/(K₂-1) ≈ –0.0300890052. They are close enough for the Newton–Raphson method to overshoot and jump over that discontinuity.
For example:
φ₁ = –0.025
f(φ₁) ≈ -0.9229770571
f'(φ₁) ≈ 1.2416569960
φ₂ = φ₁ + 0.01 * f(φ₁) / f'(φ₁) ≈ -0.0324334302
φ₂ lies to the left of the discontinuity, so the following steps move away from, not towards, the root.
φ₃ = -0.0358986759 < φ₂
What can be done about it:
When the algorithm fails to converge, repeat it with smaller steps. For example, start with the coefficient 0.01 (as it is now) and reduce it by a factor of 10 after every failure.
Detect overshoots. On each iteration check if there is a discontinuity point (–1/(Kᵢ-1)) between the current approximation and the previous one. When it happens, discard the current approximation, decrease the coefficient and continue.
Limit the scope of the search. Are solutions outside of [0, 1] physically meaningful? If not, you can stop once the approximated value falls out of that range.
Use a different method. The function is monotonic on any interval between two consecutive discontinuity points, so you can perform a binary search on each such interval. It will be both faster and more robust than the Newton–Raphson method; a rough sketch follows below.
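As a minimal sketch of that last idea (variable names are illustrative; only the K values and the composition are taken from the question), assuming the physically relevant root lies between the two asymptotes 1/(1-Kmax) and 1/(1-Kmin):
K = [0.053154011443159 34.234731216532658];
z = [0.99999 1-0.99999];
f = @(phi) sum(z.*(K-1)./(1+phi*(K-1)));   % Rachford-Rice function
lo = 1/(1-max(K)) + 1e-12;                 % just right of the left asymptote
hi = 1/(1-min(K)) - 1e-12;                 % just left of the right asymptote
while hi - lo > 1e-12
    mid = 0.5*(lo+hi);
    if f(mid) > 0       % f is monotonically decreasing on this interval
        lo = mid;
    else
        hi = mid;
    end
end
phi = 0.5*(lo+hi)       % approx. -0.0300781, the root quoted above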
Calculation of Average clustering coefficient of a graph
I am getting the correct result, but it takes a huge amount of time when the graph dimension increases. I need an alternative approach that executes faster. Is there any way to simplify the code?
%// A is the adjacency matrix (N x N)
%// d is the degree
N=100;
d=10;
rand('state',0)
A = zeros(N,N);
kv=d*(d-1)/2;
%% Creating A matrix %%%
for i = 1:(d*N/2)
j = floor(N*rand)+1;
k = floor(N*rand)+1;
while (j==k)||(A(j,k)==1)
j = floor(N*rand)+1;
k = floor(N*rand)+1;
end
A(j,k)=1;
A(k,j)=1;
end
%% Calculation of clustering Coeff %%
for i=1:N
J=find(A(i,:));
et=0;
for ii=1:(size(J,2))-1
for jj=ii+1:size(J,2)
et=et+A(J(ii),J(jj));
end
end
Cv(i)=et/kv;
end
Avg_clustering_coeff=sum(Cv)/N;
The output I got:
Avg_clustering_coeff = 0.1107
The clustering-coefficient calculation could be vectorized using nchoosek to remove the two innermost nested loops, like so -
CvOut = zeros(1,N);
for k=1:N
J=find(A(k,:));
if numel(J)>1
idx = nchoosek(J,2);
CvOut(k) = sum(A(sub2ind([N N],idx(:,1),idx(:,2))));
end
end
CvOut=CvOut/kv;
Hopefully, this would boost up the performance quite a bit!
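For completeness, the average can then be taken from CvOut just like in the last line of the original code:
Avg_clustering_coeff = sum(CvOut)/N;   % equivalently, mean(CvOut)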
To speed up your code you can read my comment, but you are not going to reduce the computation time drastically, because the time complexity doesn't change.
However, if you don't need an exact result, you can use a probabilistic estimate.
probnum = cumsum(1:d);
probnum = mean(probnum(end-1:end)); % theoretical number of elements created by your second loop (for each row)
probfind = d*N/(N^2);               % probability of finding a nonzero value
coeff = probnum*probfind/kv;
This probabilistic coefficient will approach Avg_clustering_coeff for large N.
So you can use the exact method for small N and this estimate for large N.
I have a loop in which I use ppval to evaluate a set of values from a piecewise polynomial spline. The interpolation is easily the most time-consuming part of the loop, and I am looking for a way to improve its efficiency.
More specifically, I'm using a finite difference scheme to calculate transient temperature distributions in friction welds. To do this I need to recalculate the material properties (as a function of temperature and position) at each time step. The rate limiting factor is the interpolation of these values. I could use an alternate finite difference scheme (less restrictive in the time domain) but would rather stick with what I have if at all possible.
I've included a MWE below:
x=0:.1:10;
y=sin(x);
pp=spline(x,y);
tic
for n=1:10000
x_int=10*rand(1000,1);
y_int=ppval(pp,x_int);
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.265442 seconds.
Edit - I should probably mention that I would be more than happy with simple linear interpolation between values, but the interp1 function is even slower than ppval:
x=0:.1:10;
y=sin(x);
tic
for n=1:10000
x_int=10*rand(1000,1);
y_int=interp1(x,y,x_int,'linear');
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.957256 seconds.
This is slow because you're running into the single most annoying limitation of MATLAB's JIT. It's the cause of many, many questions in the MATLAB tag here on SO:
MATLAB's JIT accelerator cannot accelerate loops that call non-builtin functions.
Neither ppval nor interp1 is a built-in (check with type ppval or edit interp1). Their implementations are not particularly slow; they just aren't fast when called inside a loop.
Now I have the impression it's getting better in more recent versions of MATLAB, but there are still quite massive differences between "inlined" and "non-inlined" loops. Why their JIT doesn't automate this task by simply recursing into non-builtins, I really have no idea.
Anyway, to fix this, you should copy-paste the essence of what happens in ppval into the loop body:
% Example data
x = 0:.1:10;
y = sin(x);
pp = spline(x,y);
% Your original version
tic
for n = 1:10000
x_int = 10*rand(1000,1);
y_int = ppval(pp, x_int);
end
toc
% "inlined" version
tic
br = pp.breaks.';
cf = pp.coefs;
for n = 1:10000
x_int = 10*rand(1000,1);
% locate the interval of each query point
[~, inds] = histc(x_int, [-inf; br(2:end-1); +inf]);
% evaluate the cubic in the local coordinate of each interval
x_shf = x_int - br(inds);
zero  = ones(size(x_shf));
one   = x_shf;
two   = one .* x_shf;
three = two .* x_shf;
y_int = sum( [three two one zero] .* cf(inds,:), 2);
end
toc
Results on my crappy machine:
Elapsed time is 2.764317 seconds. % ppval
Elapsed time is 1.695324 seconds. % "inlined" version
The difference is actually less than I expected, but I think that's mostly due to the sum() -- in the cases where I inline ppval like this, I usually only need to evaluate a single site per iteration, which you can do without histc (with simple vectorized code) and with a matrix/vector product x*y (BLAS) instead of sum(x.*y) (fast, but not BLAS-fast).
Oh well, a ~60% reduction is not bad :)
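For the single-site case mentioned above, a minimal sketch of what such an inlined evaluation could look like (xq is a hypothetical single query point; pp is the same piecewise polynomial as before):
xq  = 3.7;                                   % one query point (illustrative)
ind = find(pp.breaks <= xq, 1, 'last');      % interval containing xq
ind = min(ind, numel(pp.breaks)-1);          % clamp to the last interval
dx  = xq - pp.breaks(ind);                   % local coordinate
yq  = [dx^3 dx^2 dx 1] * pp.coefs(ind,:).';  % BLAS dot product instead of sum(x.*y)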
It is a bit surprising that interp1 is slower than ppval, but having a quick look at its source code, it seems that it has to check for many special cases and has to loop over all the points, since it cannot be sure the step size is constant.
I didn't check the timing, but I guess you can speed up the linear interpolation by a lot if you can guarantee that the steps in x of your table are constant, and that the values to be interpolated are strictly within the given range, so that you do not have to do any checking. In that case, linear interpolation can be converted to a simple lookup problem, like so:
%data to be interpolated, on grid with constant step
x = 0:0.5:10;
y = sin(x);
x_int = 0:0.1:9.9;
%make sure it is interpolation, not extrapolation
assert(all(x(1) <= x_int & x_int < x(end)));
% compute mapping, this can be precomputed for constant grid
slope = (length(x) - 1) / (x(end) - x(1));
offset = 1 - slope*x(1);
%map x_int to the interval 1..length(x)
xmapped = offset + slope * x_int;
ind = floor(xmapped);
frac = xmapped - ind;
%interpolate by taking weighted sum of neighbouring points
y_int = y(ind) .* (1 - frac) + y(ind+1) .* frac;
% make plot to check correctness
plot(x, y, 'o-', x_int, y_int, '.')
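And a quick sanity check of the lookup against interp1 (the two results should agree up to rounding error):
y_ref = interp1(x, y, x_int, 'linear');
max(abs(y_int - y_ref))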
When I want to compute the series 1+x+x^2+x^3+..., I would prefer to do it like this: (1+x)(1+x^2)(1+x^4)... (which is a sort of repeated squaring), so that the number of multiplications is significantly reduced.
Now I want to compute the series 1+x/1!+(x^2)/2!+(x^3)/3!+... How can I use similar techniques to reduce the number of multiplications?
Any suggestions are warmly welcome!
The optimization method you refer to is probably Horner's method:
a + bx + cx^2 + dx^3 = ((c + dx)x + b)x + a
The alternating series A(1-x)(1+x^2)(1+x^4)(1+x^8)..., on the other hand, is useful for calculating an approximation of the division A/(1+x), where x is small.
The Taylor series Σ x^n/n! for exp(x) converges quite badly; other approximations are better suited for getting accurate values. If there's a trick to evaluate it with fewer multiplications, it is to iterate with a temporary value:
sum=1; temp=x; k=1;
// The sum after the first iteration is (1+x), i.e. 1 + x^1/1!
for (i=1;i<=N;i++) { sum = sum + temp/k; k = k*(i+1); temp = temp*x; }
// or
prod=1.0; for (i=N;i>0;i--) prod = prod * x/(double)i + 1.0;
Keeping the factorial in a separate variable should increase accuracy a bit. In a real-life situation it may be advisable either to combine it as temp = temp*x/(i+1), in order to be able to iterate much further, or to use a lookup table for the constants a_n/n!, as one typically needs just a few terms (4 or 5 terms for sin/cos).
As it turned out, Horner's rule didn't play much of a role in the transformation of the geometric series Σ x^n into product form. To calculate the exponential, other powerful techniques have to be applied -- typically range reduction combined with rational (Padé) or polynomial (Chebyshev) approximations and the like.
Converting comment to an answer:
Note that for the first series, there is an exact equivalence:
1+x+x^2+x^3+...+x^n = (1-x^(n+1))/(1-x)
Using it, you can compute it much, much faster.
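For example, a quick MATLAB check of the closed form against the naive sum (x and n here are arbitrary illustrative values):
x = 0.9; n = 20;
naive  = sum(x.^(0:n));            % 1 + x + x^2 + ... + x^n
closed = (1 - x^(n+1)) / (1 - x);  % same value from a single power evaluation
abs(naive - closed)                % agreement up to rounding error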
The second one is the series expansion of e^x; you might want to use the standard math library function exp(x) (or pow(e, x)) instead.
Regarding your approach for the first series, don't you think that using 1 + x(1 + x(1 + x(1 + x)...)) would be a better approach? A similar approach can be applied to the second series: 1 + x/1*(1 + x/2*(1 + x/3*(1 + x/4*(...))))
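A small MATLAB sketch of that nested (Horner-style) evaluation of the truncated exponential series, equivalent to the prod loop in the C-style snippet above (N is an arbitrary truncation order):
x = 1.5; N = 12;
s = 1.0;
for i = N:-1:1
    s = 1.0 + s*x/i;   % builds 1 + x/1*(1 + x/2*(1 + ... + x/N))
end
s                      % compare with exp(x)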
Hi,
I need to use this Kolmogorov filter in an application. You feed it some measured data, and the filter smooths it somewhat.
I tried to do it with nchoosek; however, when I run it with an inter of 50 or more it takes way too long.
Does anyone know how to do this in a faster way?
function [ filterd ] = kolmo(data, inter)
    temp = 0;
    temp1 = 0;
    filterd(1:10, 1) = NaN;            % leading samples without a full window
    for t = inter+1:(length(data)-inter)
        for o = -inter:inter
            % binomially weighted sum over the window t-inter .. t+inter
            temp = temp + (nchoosek(2*inter, (inter+o))*data(t+o));
            temp1 = temp1 + nchoosek(2*inter, (inter+o));
        end
        filterd(t, 1) = temp/temp1;    % normalised weighted average
        temp = 0;
        temp1 = 0;
    end
end
Thx
Andy
Here is a loop-less solution:
function y = MySoln(x, K)
%# Get the binomial coefficient terms
FacAll = factorial(0:1:2*K)';
BinCoefAll = FacAll(end) ./ (FacAll .* flipud(FacAll));
%# Get all numerator terms
NumerAll = conv(x, BinCoefAll, 'valid');
%# Rescale numerator terms into output
y = (1 / sum(BinCoefAll)) * NumerAll;
I've avoided using nchoosek and instead have calculated the binomial coefficients manually using the factorials. This ensures that each factorial calculation is only performed once. In contrast, the OP's solution potentially performs each factorial calculation hundreds of times.
Once the binomial coefficients are calculated, the solution from there is a straightforward application of conv, followed by scaling with the denominator term.
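A usage sketch (note that conv with 'valid' only returns the fully overlapping part, so the result corresponds to positions inter+1 through end-inter of the original kolmo output; the NaN padding below is just one way to keep the indexing aligned):
data  = randn(50,1);   % example input (illustrative)
inter = 5;
smoothed = NaN(size(data));
smoothed(inter+1:end-inter) = MySoln(data, inter);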
I did a quick speed test between the OP solution and my solution. The speed test uses a random vector x with 50 elements, and sets K to 5. Then I run 100 iterations over my solution versus the OP solution. Here are the results:
Elapsed time is 2.637597 seconds. %# OP Solution
Elapsed time is 0.010401 seconds. %# My Solution
I'm pretty happy with this. I doubt the method can be made much more efficient from this point (but would be happy to be proven wrong). :-)