Frequency computation and fast Fourier transform in Matlab - for-loop

I have a question related to the fast Fourier transform. I want to calculate the phase, take the FFT, and plot the power spectral density. However, when I calculate the frequency f, I get some errors. This is my program code:
n = 1:32768;
T = 0.2*10^-9; % Sampling period
Fs = 1/T; % Sampling frequency
Fn = Fs/2; % Nyquist frequency
omega = 2*pi*200*10^6; % Carrier frequency
L = 32768; % Length of signal
t = (0:L-1)*T; % Time vector
x_signal(n) = cos(omega*T*n + 0.1*randn(size(n))); % Additive phase noise (random)
y_signal(n) = sin(omega*T*n + 0.1*randn(size(n))); % Additive phase noise (random)
theta(n) = atan(y_signal(n)/x_signal(n));
f = (theta(n)-theta(n-1))/(2*pi)
Y = fft(f,t);
PSD = Y.*conj(Y); % Power Spectral Density
%Fv = linspace(0, 1, fix(L/2)+1)*Fn; % Frequency Vector

As posted, you would get the error
error: subscript indices must be either positive integers less than 2^31 or logicals
which refers to the operation theta(n-1) when n=1, which results in an index of 0 (out of bounds, since Matlab uses 1-based indexing). To avoid that you could use a subset of the indices in n:
f = (theta(n(2:end))-theta(n(1:end-1)))/(2*pi);
That said, if you are doing this to try to obtain an instantaneous measure of the frequency, then you will have a few more issues to deal with. The most trivial one is that you should also divide by T. Less obvious is the fact that, as given, the right-hand side of the theta assignment collapses to a single value, because the / operator (see Matlab's mrdivide) solves a linear system rather than performing element-wise division like the ./ operator does. So a better expression would be:
theta(n) = atan(y_signal(n)./x_signal(n));
Now, the next problem you might notice is that you are actually losing some phase information, since the result of atan lies in [-pi/2, pi/2] instead of the full [-pi, pi] range. To avoid this you should instead be using atan2:
theta(n) = atan2(y_signal(n), x_signal(n));
Even with this, you are likely to notice that the estimated frequency regularly shows spikes whenever the phase jumps between values near -pi and near pi. This can be avoided by computing the phase difference modulo 2*pi:
f = mod(theta(n(2:end))-theta(n(1:end-1)),2*pi)/(2*pi*T);
A final thing to note: when calling fft, you should not pass in a time variable (the input is implicitly assumed to be sampled at regular time intervals). You may, however, specify the desired length of the FFT. You would thus compute Y as follows:
Y = fft(f, L);
And you could then plot the resulting PSD using:
Fv = linspace(0, 1, fix(L/2)+1)*Fn; % Frequency Vector
plot(Fv, abs(PSD(1:L/2+1)));
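Putting these fixes together, a corrected version of the script could look like the sketch below (keeping the question's parameters and noise amplitude; note that f has L-1 samples, so fft(f, L) zero-pads by one sample):
n = 1:32768;
T = 0.2e-9;                  % Sampling period
Fs = 1/T;                    % Sampling frequency
Fn = Fs/2;                   % Nyquist frequency
omega = 2*pi*200e6;          % Carrier frequency
L = 32768;                   % Length of signal

x_signal = cos(omega*T*n + 0.1*randn(size(n)));   % Additive phase noise (random)
y_signal = sin(omega*T*n + 0.1*randn(size(n)));   % Additive phase noise (random)
theta = atan2(y_signal, x_signal);                % Phase over the full [-pi, pi] range

f = mod(diff(theta), 2*pi)/(2*pi*T);              % Instantaneous frequency estimate

Y = fft(f, L);
PSD = Y.*conj(Y);                                 % Power spectral density

Fv = linspace(0, 1, fix(L/2)+1)*Fn;               % Frequency vector
plot(Fv, abs(PSD(1:L/2+1)));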

Related

How to adapt np.fft.fft by group?

I am not very experienced with data structures and I need some help:
I want to adapt np.fft.fft to get specific amplitudes and phase angles for each barcode.
However, for each barcode there are 512 data points (rows) forming one signal, and I want to build a loop to generate the corresponding complex numbers.
That means indices [0] to [511] form a single period on which np.fft.fft is computed; the next barcode then runs from index [512] to [1023], and so on until the end.
Could someone give me some guidelines?
Many thanks in advance!!
And I've already written the code like this:
import math
import numpy as np

def generate_Nth_sine_wave(signaldata, N):
    """Extracts the Nth harmonic of a given signal.
    It assumes that the input belongs to a single period, spaced equally over the limits.
    Args:
        signaldata : list containing the signal values over one period
        N : Nth harmonic"""
    # Apply the Fourier transform to the signal to obtain the Fourier coefficients
    x = np.fft.fft(signaldata)
    # Initiate a blank list with the same length as the coefficient array
    Harmonic_list = [0] * len(x)
    # The Nth element of "x" corresponds to the coefficient of the Nth harmonic.
    # Isolate only the Nth element by assigning zero to the rest.
    Harmonic_list[N] = 1
    Specific_Harmonic = Harmonic_list * x
    # Apply the inverse FFT to the isolated coefficient to get back the curve
    # contributed by this specific harmonic
    Harmonic_Curve = np.fft.ifft(Specific_Harmonic) * 2
    Harmonic_Curve = Harmonic_Curve.real
    c = x[N]
    a = c.real
    b = c.imag
    phi = math.degrees(math.atan2(b, a)) % 360  # Phase angle
    hp = ((360 - phi) / N) % 360                # First higher peak position angle
    Magnitude = max(Harmonic_Curve)             # Magnitude of the harmonic curve
    return Magnitude, hp

Why does the algorithm fail when the number precision is increased? How can we decrease the sensitivity of the algorithm to the number precision?

I am using a Newton-Raphson + successive substitution algorithm to perform a flash calculation (chemical process simulation).
The algorithm converges well when the input has low precision, like 0.1, but when the precision is increased to 0.11111 or 0.99999, it no longer converges.
When I use a quasi-Newton method with a BFGS update, the same problem occurs. How can we decrease the sensitivity of the code to the numerical precision?
Here is a simple example using Matlab to solve the Rachford-Rice equation. When comp_overall = [0.9, 1-0.9], it converges well. However, when the precision is increased to something like [0.99999, 1-0.99999], it will not converge.
K = [0.053154011443159 34.234731216532658];
comp_overall = [0.99999 1-0.99999]; phi = 0.5;   % initial values
epsilon = 1.0;
iter1 = 1;
while (epsilon >= 1.e-05)
    rc = 0.0;
    drc = 0.0;
    for i = 1:2
        % Rachford-Rice equation
        rc = comp_overall(i)*(K(i)-1.0)/(1.0+phi*(K(i)-1.0)) + rc;
        % Derivative magnitude (the true derivative is -drc, hence the + sign below)
        drc = comp_overall(i)*(K(i)-1.0)^2/(1.0+phi*(K(i)-1.0))^2 + drc;
    end
    % Damped Newton-Raphson update
    phi1 = phi + 0.01*(rc/drc);
    epsilon = abs((phi1-phi)/phi);
    % Convergence
    phi = phi1;
    iter1 = iter1 + 1;
end
The Newton–Raphson method relies on the function being differentiable between any two consecutive approximations. Depending on the choice of the initial value, this may not be the case for z₁ = 0.99999. Let's look at the Rachford-Rice function for these inputs.
The root of this function is φ₀ ≈ –0.0300781429 and the nearest point of discontinuity is –1/(K₂-1) ≈ –0.0300890052. They are close enough for the Newton–Raphson method to overshoot, to jump over that discontinuity.
For example:
φ₁ = –0.025
f(φ₁) ≈ -0.9229770571
f'(φ₁) ≈ 1.2416569960
φ₂ = φ₁ + 0.01 * f(φ₁) / f'(φ₁) ≈ -0.0324334302
φ₂ lies to the left of the discontinuity, so the following steps will be away from, not towards the root.
φ₃ = -0.0358986759 < φ₂
What can be done about it:
When the algorithm fails to converge, repeat it with smaller steps. For example, start with the coefficient 0.01 (as it is now) and decrease it 10 times after every failure.
Detect overshoots. On each iteration check if there is a discontinuity point (–1/(Kᵢ-1)) between the current approximation and the previous one. When it happens, discard the current approximation, decrease the coefficient and continue.
Limit the scope of the search. Are solutions outside of [0, 1] physically meaningful? If not, you can stop once the approximated value falls out of that range.
Use a different method. The function is monotonic on any interval between two consecutive discontinuity points, so you can perform a binary search on each such interval. It will be both faster and more robust than the Newton–Raphson method; a sketch follows below.
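For illustration, here is a minimal sketch of that last suggestion, using the K values and overall composition from the question. The bracketing offsets and tolerance are arbitrary choices, and f here is the Rachford-Rice residual (the rc accumulated in the posted code):
K            = [0.053154011443159 34.234731216532658];
comp_overall = [0.99999 1-0.99999];

f = @(phi) sum(comp_overall.*(K-1)./(1 + phi*(K-1)));

% f is monotonically decreasing between the two asymptotes phi = 1/(1-K(i)),
% so bracket the physically relevant root just inside that interval.
lo = 1/(1-max(K)) + 1e-12;
hi = 1/(1-min(K)) - 1e-12;

while hi - lo > 1e-12
    mid = (lo + hi)/2;
    if f(mid) > 0
        lo = mid;    % the root lies to the right of mid
    else
        hi = mid;    % the root lies to the left of mid
    end
end
phi = (lo + hi)/2    % approx. -0.0300781429 for these inputs (the root quoted above)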

In matlab, in a product dense matrix * sparse matrix, how can I calculate only specific entries?

We have a matlab program in which we want to calculate the following expression:
sum( (M*x) .* x)
Here, M is a small dense matrix (say 100 by 100) and x is a sparse, fat matrix (say of size 100 by 1,000,000, with 5% non-zero entries). When I run the code, M*x is calculated first, which is a dense matrix; however, most of the computation that went into producing it is a complete waste of time, as most of it will be zeroed out by the point-wise product with x afterwards.
In other words: what I want to do is calculate only those entries (i,j) of M*x for which x(i,j) is non-zero. In the end, I am then also only interested in the sum of each column.
It seems pretty simple to start with, but I could not figure out how to tell matlab to do it, or how to reshape the calculation so that matlab does it efficiently. I would really like to avoid having to code up a mex-file for this operation, and this operation is eating up most of the computation time.
Here is a code snippet for comparison:
m = 100;
n = 100000;
density = 0.05;
M = randn(m); M = M * M';
x = sprandn(m,n,density);
tic
for i = 1:100
    xsi = sum((M * x).*x, 1);
end
toc
Elapsed time is 13.570713 seconds.
To compute (M*x) .* x: find which entries of the final result can be nonzero (using find), compute manually only for those (sum(M(...).'.*x(...)) .* nonzeros(x).'), and from that build the final matrix (using sparse):
[ii jj] = find(x);
R = sparse(ii, jj, sum(M(ii,:).'.*x(:,jj)) .* nonzeros(x).');
Of course, to compute sum((M*x) .* x) you then simply use
full(sum(R))
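As a quick sanity check (not part of the original answer), you can compare the sparse construction against the direct product on a smaller problem; the full() call and the explicit m, n arguments to sparse are defensive additions for this check:
m = 100; n = 1000; density = 0.05;       % smaller n than above, to keep the check cheap
M = randn(m); M = M*M';
x = sprandn(m, n, density);

[ii, jj] = find(x);
vals = full(sum(M(ii,:).' .* x(:,jj), 1)).' .* nonzeros(x);
R = sparse(ii, jj, vals, m, n);          % dimensions given so trailing all-zero columns are kept

xsi_direct = sum((M*x).*x, 1);
xsi_sparse = full(sum(R, 1));
max(abs(xsi_direct - xsi_sparse))        % should be on the order of round-off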

Efficient replacement for ppval

I have a loop in which I use ppval to evaluate a set of values from a piecewise polynomial spline. The interpolation is easily the most time-consuming part of the loop, and I am looking for a way to improve the function's efficiency.
More specifically, I'm using a finite difference scheme to calculate transient temperature distributions in friction welds. To do this I need to recalculate the material properties (as a function of temperature and position) at each time step. The rate limiting factor is the interpolation of these values. I could use an alternate finite difference scheme (less restrictive in the time domain) but would rather stick with what I have if at all possible.
I've included a MWE below:
x=0:.1:10;
y=sin(x);
pp=spline(x,y);
tic
for n = 1:10000
    x_int = 10*rand(1000,1);
    y_int = ppval(pp, x_int);
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.265442 seconds.
Edit - I should probably mention that I would be more than happy with a simple linear interpolation between values, but the interp1 function is slower than ppval:
x=0:.1:10;
y=sin(x);
tic
for n = 1:10000
    x_int = 10*rand(1000,1);
    y_int = interp1(x, y, x_int, 'linear');
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.957256 seconds.
This is slow, because you're running into the single most annoying limitation of JIT. It's the cause of many many many oh so many questions in the MATLAB tag here on SO:
MATLAB's JIT accelerator cannot accelerate loops that call non-builtin functions.
Both ppval and interp1 are not built in (check with type ppval or edit interp1). Their implementations are not particularly slow; they just aren't fast when placed in a loop.
Now I have the impression it's getting better in more recent versions of MATLAB, but there are still quite massive differences between "inlined" and "non-inlined" loops. Why their JIT doesn't automate this task by simply recursing into non-builtins, I really have no idea.
Anyway, to fix this, you should copy-paste the essence of what happens in ppval into the loop body:
% Example data
x = 0:.1:10;
y = sin(x);
pp = spline(x,y);
% Your original version
tic
for n = 1:10000
    x_int = 10*rand(1000,1);
    y_int = ppval(pp, x_int);
end
toc
% "inlined" version
tic
br = pp.breaks.';
cf = pp.coefs;
for n = 1:10000
    x_int = 10*rand(1000,1);
    [~, inds] = histc(x_int, [-inf; br(2:end-1); +inf]);
    x_shf = x_int - br(inds);
    zero  = ones(size(x_shf));
    one   = x_shf;
    two   = one .* x_shf;
    three = two .* x_shf;
    y_int = sum([three two one zero] .* cf(inds,:), 2);
end
toc
Results on my crappy machine:
Elapsed time is 2.764317 seconds. % ppval
Elapsed time is 1.695324 seconds. % "inlined" version
The difference is actually less than what I expected, but I think that's mostly due to the sum() -- for this ppval case, I usually only need to evaluate a single site per iteration, which you can do without histc (but with simple vectorized code) and matrix/vector multiplication x*y (BLAS) instead of sum(x.*y) (fast, but not BLAS-fast).
Oh well, a ~60% reduction is not bad :)
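For completeness, here is a minimal sketch of that single-site idea, assuming pp comes from spline as above (so each row of pp.coefs holds the cubic coefficients in descending powers of the local offset); xq is just an illustrative query point:
br = pp.breaks;
cf = pp.coefs;
xq = 3.7;                              % single query point (illustrative)
k  = find(br <= xq, 1, 'last');        % interval containing xq
k  = min(k, size(cf, 1));              % clamp: the last break has no piece of its own
dx = xq - br(k);
yq = cf(k,:) * [dx^3; dx^2; dx; 1];    % matrix/vector product instead of sum(x.*y)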
It is a bit surprising that interp1 is slower than ppval, but having a quick look at its source code, it seems that it has to check for many special cases and has to loop over all the points, since it cannot be sure the step size is constant.
I didn't check the timing, but I guess you can speed up the linear interpolation by a lot if you can guarantee that the steps in x of your table are constant, and that the values to be interpolated lie strictly within the given range, so that you do not have to do any checking. In that case, linear interpolation can be converted to a simple lookup problem, like so:
%data to be interpolated, on grid with constant step
x = 0:0.5:10;
y = sin(x);
x_int = 0:0.1:9.9;
%make sure it is interpolation, not extrapolation
assert(all(x(1) <= x_int & x_int < x(end)));
% compute mapping, this can be precomputed for constant grid
slope = (length(x) - 1) / (x(end) - x(1));
offset = 1 - slope*x(1);
% map x_int to the interval 1..length(x)
xmapped = offset + slope * x_int;
ind = floor(xmapped);
frac = xmapped - ind;
%interpolate by taking weighted sum of neighbouring points
y_int = y(ind) .* (1 - frac) + y(ind+1) .* frac;
% make plot to check correctness
plot(x, y, 'o-', x_int, y_int, '.')

matlab: optimum amount of points for linear fit

I want to make a linear fit to a few data points, as shown on the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case that would be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389];
What you have here is a rather difficult problem to find a general solution of.
One approach would be to compute all the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % the points whose intercepts are closest to the linear one
This requires the Statistics Toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points in the start and end ranges lie close to the slope of the line, those points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and checking whether the sum of squared errors remains small. I gave this up very quickly, because defining what "small" means is very subjective and must be done in some heuristic way.
I tried a more systematic and robust approach than the above:
function test
    %% example data
    slope = 2;
    intercept = 1.5;
    x = linspace(0.1, 5, 100).';
    y = slope*x + intercept;
    y(1:12) = log(x(1:12)) + y(12) - log(x(12));
    y(74:100) = y(74:100) + (x(74:100) - x(74)).^8;
    y = y + 0.2*randn(size(y));
    %% simple algorithm
    [X, fn] = fminsearch(@(ii) P(ii, x, y, intercept), [0.5 0.5])
    [~, inds] = P(X, x, y, intercept)
end

function [C, inds] = P(ii, x, y, intercept)
    % ii represents the fraction of the range from the center to each end,
    % so ii lies between 0 and 1.
    N = numel(x);
    n = round(N/2);
    ii = round(ii*n);
    inds = min(max(1, n + (-ii(1):ii(2))), N);
    % Solve the linear system with fixed intercept
    A = x(inds);
    b = y(inds) - intercept;
    % and return the sum of squared errors, divided by
    % the number of points included in the set. This
    % last step is required to prevent fminsearch from
    % reducing the set to 1 point (= minimum possible
    % squared error).
    C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it becomes more reliable, but I still wouldn't bet my life on it.
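A rough sketch of that multi-start idea, reusing the function P and the data x, y, intercept defined above (so in practice this would sit inside the test function; the number of restarts is arbitrary):
bestC = inf; bestX = [NaN NaN];
for k = 1:30
    [Xk, Ck] = fminsearch(@(ii) P(ii, x, y, intercept), rand(1,2));
    if Ck < bestC
        bestC = Ck;
        bestX = Xk;
    end
end
[~, inds] = P(bestX, x, y, intercept);   % indices of the detected linear region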
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember, you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
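A minimal sketch of this segment-based approach, assuming x, y and intercept are the sample data from the question (segment length and overlap are arbitrary choices):
segLen = 8;                                   % points per segment
step   = 4;                                   % 50% overlap
bestMSE = inf; bestSlope = NaN;
for s = 1:step:numel(x)-segLen+1
    idx   = s:s+segLen-1;
    slope = x(idx).' \ (y(idx).' - intercept);     % least-squares slope with fixed intercept
    mse   = mean((slope*x + intercept - y).^2);    % MSE of this line over the whole dataset
    if mse < bestMSE
        bestMSE = mse; bestSlope = slope;
    end
end
bestSlope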
