I am not very experienced with data structures, so I need some help:
I want to adapt np.fft.fft to get specific amplitudes and phase angles for each barcode.
However, for each barcode there are 512 data points (rows) forming a signal, and I want to build a loop that generates the corresponding complex numbers.
That means treating index [0] to [511] as a single period and computing np.fft.fft on it; the next barcode runs from index [512] to [1023], and so on until the end.
Could someone give me some guidelines?
Many thanks in advance!
I've already written the following code:
import math
import numpy as np

def generate_Nth_sine_wave(signal, N):
    """Extracts the Nth harmonic of a given signal.

    It assumes that the input covers a single period, spaced equally over the limits.

    Args:
        signal: array-like of signal values over one period
        N: index of the harmonic to extract
    """
    # Apply the Fourier transform to the signal to obtain the Fourier coefficients
    x = np.fft.fft(signal)
    # Initiate a blank array with the same length as the coefficient array.
    # The Nth element of "x" corresponds to the coefficient of the Nth harmonic,
    # so we isolate it by leaving all other entries at zero.
    Harmonic_mask = np.zeros(len(x))
    Harmonic_mask[N] = 1
    Specific_Harmonic = Harmonic_mask * x
    # Apply the inverse FFT to the isolated coefficient to get back the curve
    # contributed by that specific harmonic; the factor 2 compensates for the
    # discarded negative-frequency (conjugate) coefficient.
    Harmonic_Curve = np.fft.ifft(Specific_Harmonic) * 2
    Harmonic_Curve = Harmonic_Curve.real
    c = x[N]
    a = c.real
    b = c.imag
    phi = math.degrees(math.atan2(b, a)) % 360  # phase angle
    hp = ((360 - phi) / N) % 360                # first higher peak position angle
    Magnitude = max(Harmonic_Curve)             # magnitude of the harmonic curve
    return Magnitude, hp
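For the looping part, a minimal sketch of the per-barcode processing (assuming the full recording is a 1-D NumPy array whose length is a multiple of 512, and reusing the function above; the helper name is hypothetical):

import numpy as np

def harmonics_per_barcode(data, N, period=512):
    # Apply generate_Nth_sine_wave to each consecutive 512-sample block
    # and collect one (magnitude, peak angle) pair per barcode.
    results = []
    for start in range(0, len(data) - period + 1, period):
        block = data[start:start + period]
        results.append(generate_Nth_sine_wave(block, N))
    return results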
Currently, I'm writing a simulation that assesses the performance of a positioning algorithm by measuring the mean error of the position estimator at different points around the room. Unfortunately, the running times are pretty slow, so I am looking for ways to speed up my code.
The position estimator is based on the MUSIC algorithm. It takes an autocorrelation matrix (sized 12x12, complex-valued in general) as input and follows these steps:
Find the 12 eigenvalues and eigenvectors of the autocorrelation matrix R.
Construct a new 12x11 matrix EN whose columns are the 11 eigenvectors corresponding to the 11 smallest eigenvalues.
Using the matrix EN, construct the function P = 1/(a' EN EN' a), where a is a 12x1 complex vector and a' is the Hermitian conjugate of a. The components of a are functions of three variables (named x, y and z), so the scalar P is also a function P(x,y,z).
Finally, find the values (x0,y0,z0) that maximize P and return them as the position estimate.
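As a reference for those last two steps, here is a minimal NumPy sketch of evaluating P over candidate positions; the brute-force search over candidates stands in for the GA solver used below, and the steering function is a hypothetical placeholder for whatever the array geometry defines:

import numpy as np

def music_estimate(EN, steering, candidates):
    # EN: 12x11 complex noise-subspace matrix; steering: maps (x, y, z)
    # to the 12-element vector a; candidates: iterable of (x, y, z).
    G = EN @ EN.conj().T                      # 12x12 noise-subspace projector EN EN'
    best_P, best_pos = -np.inf, None
    for pos in candidates:
        a = steering(pos)
        P = 1.0 / np.real(np.vdot(a, G @ a))  # P = 1 / (a' EN EN' a)
        if P > best_P:
            best_P, best_pos = P, pos
    return best_pos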
In my code, I choose some constant z and create a grid of points in the plane at height z, parallel to the xy plane. For each point I make n4Avg repetitions and calculate the error of the estimated position. At the end of the parfor loop (and some reshaping), I have an error matrix with dimensions (nx) x (ny) x (n4Avg), and the mean error is calculated by taking the mean of the error matrix along the third dimension.
nx=30 is the number of points along the x axis.
ny=15 is the number of points along the y axis.
n4Avg=100 is the number of repetitions used for calculating the mean error at each point.
nGen=100 is the number of generations in the GA algorithm (100 was tested to be good enough).
x = linspace(-20,20,nx);
y = linspace(0,20,ny);
z = 5;
[X,Y] = meshgrid(x,y);
parfor ri = 1:nx*ny
rT = [X(ri);Y(ri);z];
[ENs] = getEnNs(rT,stdv,n4R,n4Avg); % create n4Avg EN matrices
for rep = 1:n4Avg
pos_est = estPos_helper(squeeze(ENs(:,:,rep)),nGen);
posEstErr(ri,rep) = vecnorm(pos_est(:)-rT(:));
end
end
The matrices EN are generated by the following code
function [ENs] = getEnNs(rT,stdv,n4R,nEN)
% generate nEN simulated EN matrices, each using n4R simulated phases
f_c = 2402e6; % center frequency [Hz]
c0 = 299702547; % speed of light [m/s]
load antennaeArr1.mat antennaeArr1;
% generate initial phases.
phi0 = 2*pi*rand(n4R*nEN,1);
k0 = 2*pi.*(f_c)./c0;
I = cos(-k0.*vecnorm(antennaeArr1 - rT(:),2,1)-phi0);
Q = -sin(-k0.*vecnorm(antennaeArr1 - rT(:),2,1)-phi0);
phases = I+1i*Q;
phases = phases + stdv/sqrt(2)*(randn(size(phases)) + 1i*randn(size(phases)));
phases = reshape(phases',[12,n4R,nEN]);
Rxx = pagemtimes(phases,pagectranspose(phases));
ENs = zeros(12,11,nEN);
for i=1:nEN
[ENs(:,:,i),~] = eigs(squeeze(Rxx(:,:,i)),11,'smallestabs');
end
end
The position estimator uses a solver utilizing a genetic algorithm (chosen because it performed the best of all the solvers tried).
function pos_est = estPos_helper(EN,nGen)
load antennaeArr1.mat antennaeArr1; % 3x12 constant matrix
antennae_array = antennaeArr1;
x0 = [0;10;5];
lb = [-20;0;0];
ub = [20;20;10];
function y = myfun(x)
k0 = 2*pi*2.402e9/299702547;
a = exp( -1i*k0*sqrt( (x(1)-antennae_array(1,:)').^2 + (x(2) - antennae_array(2,:)').^2 + (x(3)-antennae_array(3,:)').^2 ) );
y = 1/real((a')*(EN)*(EN')*a);
end
% Create optimization variables
x3 = optimvar("x",3,1,"LowerBound",lb,"UpperBound",ub);
% Set initial starting point for the solver
initialPoint2.x = x0;
% Create problem
problem = optimproblem("ObjectiveSense","Maximize");
% Define problem objective
problem.Objective = fcn2optimexpr(@myfun,x3);
% Set nondefault solver options
options2 = optimoptions("ga","Display","off","HybridFcn","fmincon",...
"MaxGenerations",nGen);
% Solve problem
solution = solve(problem,initialPoint2,"Solver","ga","Options",options2);
% Clear variables
clearvars x3 initialPoint2 options2
pos_est = solution.x;
end
The current runtime of the code, with the parameters set as shown above, is around 700-800 seconds. This is a problem, as I would like to increase the number of points in the grid and the number of repetitions to get a more accurate result.
The main ways I've tried to tackle this are using parallel computing (in the form of the parfor loop) and reducing the nested loops I had (one for x and one for y) into a single vectorized loop over all the points in the grid.
It indeed helped, but not quite enough.
I apologize for the messy code.
Michael.
Can someone explain the min_distance and min_angle optional parameters, please?
http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.hough_line_peaks
For min_angle=n, I thought it meant that the next line's angle had to be at least n elements away in my theta array before being accepted as a new line.
import numpy as np
from skimage.transform import hough_line,hough_line_peaks
iden = np.identity(200)
hspace, angles, dists = hough_line(iden,theta=np.linspace(-np.pi/2,np.pi/2,1800)) # 0.1 degree resolution
hspace, angles, dists = hough_line_peaks(hspace, angles, dists,min_distance=0,min_angle=20) # 2 degree minimum before accepting as new line?
print(hspace, angles*180/np.pi, dists)
output : [200 126 124] [-44.9749861 -45.27515286 -44.67481934] [ 0.50088496 -0.50088496 1.50265487]
The angle array shows that I'm getting this wrong. The parameter accepts only integers, and I'm not sure what it actually represents...
I don't think there is anything wrong with the function hough_line_peaks() itself.
min_angle and min_distance define a zone around an already found peak in which no other peak can be found (i.e., you consider that a peak that close to another peak is actually the same single peak).
In the accumulator of the Hough transform, the two dimensions are angles and distances. With these integers you basically set the number of accumulator bins that have to be ignored around an already found peak.
By setting min_distance to 0, you are only avoiding getting 2 peaks that have the exact same distance parameter AND an angle parameter difference of less than 20 * angle_resolution ~= 20 * 0.1 = 2 degrees. None of the 3 returned peaks have the same distance parameter, and therefore the condition you set is respected.
Also, be aware that your angle resolution is not exactly 0.1 degrees unless the third parameter of np.linspace is 1801; np.linspace takes the total number of points, not the step. hough_line_peaks just takes the returned vector as an input argument. You could also use np.arange, which lets you pass the step as an argument.
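For instance, to get an exact 0.1-degree bin width:

import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)  # 1801 points -> 1800 intervals
print(np.degrees(theta[1] - theta[0]))            # exactly 0.1 degrees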
Edit
The angle array returned is in degrees?! I would expect radians, as for the input... The values should correspond to some of the values of np.linspace(-np.pi/2, np.pi/2, 1800).
End of edit
Basically, it works this way:
Find the highest value in the accumulator -> 200, -44.9749861, 0.50088496 (200 means that 200 pixels have been assigned to this bin of the accumulator).
Set the accumulator bins around the peak bin, [bin - min_distance : bin + min_distance, bin - min_angle : bin + min_angle], to 0.
Find the second-highest value in the accumulator, and so on.
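A rough NumPy sketch of that suppression loop (a simplification for illustration, not the actual scikit-image implementation):

import numpy as np

def greedy_peaks(accumulator, num_peaks, min_distance, min_angle):
    # Greedy non-maximum suppression on a (distances x angles) accumulator.
    acc = accumulator.astype(float).copy()
    peaks = []
    for _ in range(num_peaks):
        d, a = np.unravel_index(np.argmax(acc), acc.shape)  # strongest remaining bin
        peaks.append((d, a))
        # zero out the exclusion zone around the found peak
        acc[max(0, d - min_distance):d + min_distance + 1,
            max(0, a - min_angle):a + min_angle + 1] = 0
    return peaks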
Edit 2:
Why the results accumulator_value = [200 126 124], angle_params = [a b c] and dist_params = [d e f] (with d != e and e != f) are consistent with the parameters min_angle = X and min_distance = 0:
The strongest peak in the accumulator is found at the bin with angle_param = a and dist_param = d.
The search for the second peak is carried out by discarding that bin in the accumulator, as well as the bins located at a number of bins <= X (side note: it may be X/2, but this does not change the reasoning here) in the angle "direction" and at a number of bins <= 0 in the distance "direction" from the peak's bin.
Only this. So the other peaks found in your case sit in bins whose distance parameter differs from that of every other peak found, and therefore there is no reason to discard them.
The accumulator is simply a 2-dimensional table of bins, one direction representing the angles and the other the distances.
I have a question related to the fast Fourier transform. I want to calculate the phase and compute an FFT to draw the power spectral density. However, when I calculate the frequency f, there are some errors. This is my program code:
n = 1:32768;
T = 0.2*10^-9; % Sampling period
Fs = 1/T; % Sampling frequency
Fn = Fs/2; % Nyquist frequency
omega = 2*pi*200*10^6; % Carrier frequency
L = 32768; % Length of signal
t = (0:L-1)*T; % Time vector
x_signal(n) = cos(omega*T*n + 0.1*randn(size(n))); % Additive phase noise (random)
y_signal(n) = sin(omega*T*n + 0.1*randn(size(n))); % Additive phase noise (random)
theta(n) = atan(y_signal(n)/x_signal(n));
f = (theta(n)-theta(n-1))/(2*pi)
Y = fft(f,t);
PSD = Y.*conj(Y); % Power Spectral Density
%Fv = linspace(0, 1, fix(L/2)+1)*Fn; % Frequency Vector
As posted, you would get the error
error: subscript indices must be either positive integers less than 2^31 or logicals
which refers to the operation theta(n-1) when n=1, resulting in an index of 0 (out of bounds, since MATLAB uses 1-based indexing). To avoid that, you could use a subset of the indices in n:
f = (theta(n(2:end))-theta(n(1:end-1)))/(2*pi);
That said, if you are doing this to try to obtain an instantaneous measure of the frequency, you will have a few more issues to deal with. The most trivial one is that you should also divide by T. Less obvious is the fact that, as given, theta is a scalar due to the use of the / operator (see MATLAB's mrdivide) rather than the ./ operator, which performs element-wise division. So a better expression would be:
theta(n) = atan(y_signal(n)./x_signal(n));
Now, the next problem you might notice is that you are actually losing some phase information, since the result of atan is in [-pi/2, pi/2] instead of the full [-pi, pi] range. To avoid this, you should instead use atan2:
theta(n) = atan2(y_signal(n), x_signal(n));
Even with this, you are likely to notice that the estimated frequency regularly has spikes whenever the phase jumps between near -pi and near pi. This can be avoided by computing the phase difference modulo 2*pi:
f = mod(theta(n(2:end))-theta(n(1:end-1)),2*pi)/(2*pi*T);
A final thing to note: when calling fft, you should not pass in a time variable (the input is implicitly assumed to be sampled at regular time intervals). You may, however, specify the desired length of the FFT. You would thus compute Y as follows:
Y = fft(f, L);
And you could then plot the resulting PSD using:
Fv = linspace(0, 1, fix(L/2)+1)*Fn; % Frequency Vector
plot(Fv, abs(PSD(1:L/2+1)));
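For completeness, a NumPy transliteration of the corrected pipeline (same constants as the MATLAB script; a sketch, not a drop-in replacement):

import numpy as np

T = 0.2e-9                          # sampling period [s]
Fs = 1.0 / T                        # sampling frequency [Hz]
Fn = Fs / 2                         # Nyquist frequency [Hz]
omega = 2 * np.pi * 200e6           # carrier frequency [rad/s]
L = 32768                           # length of signal
n = np.arange(L)

x_signal = np.cos(omega * T * n + 0.1 * np.random.randn(L))
y_signal = np.sin(omega * T * n + 0.1 * np.random.randn(L))

theta = np.arctan2(y_signal, x_signal)                    # full-range phase
f = np.mod(np.diff(theta), 2 * np.pi) / (2 * np.pi * T)   # instantaneous frequency

Y = np.fft.fft(f, L)                # zero-pads f (length L-1) up to L
PSD = np.real(Y * np.conj(Y))       # power spectral density
Fv = np.linspace(0, 1, L // 2 + 1) * Fn                   # frequency vector
# plotting Fv against PSD[:L // 2 + 1] then shows the spectrum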
I am running a speech enhancement algorithm based on a Gaussian mixture model. The problem is that the estimation algorithm underflows during the training process.
I am trying to calculate the PDF of a log-spectrum frame X given a Gaussian cluster, which is a product of the PDFs of each frequency component X_k (the FFT is done for k = 1..256).
What I get is a product of 256 factors exp(-v(k)) with v(k) >= 0.
Here is a snippet of the MATLAB calculation, where N is the number of frames, M is the number of mixtures, c_i is the weight of each mixture, and gamma(n,i) = c_i*f(X_n | I = i):
for i=1 : N
rep_DataMat(:,:,i) = repmat(DataMat(:,i),1,M);
gamma_exp(:,:) = (1./sqrt((2*pi*sigmaSqr_curr))).*exp(((-1)*((rep_DataMat(:,:,i) - mue_curr).^2)./(2*sigmaSqr_curr)));
gamma_curr(i,:) = c_curr.*(prod(10*gamma_exp(:,:),1));
alpha_curr(i,:) = gamma_curr(i,:)./sum(gamma_curr(i,:));
end
The product quickly goes to zero for K = 256, since the factors are smaller than one. Is there a way I can calculate this without causing an underflow (something like logsum or similar)?
You can perform the computations in the log domain.
The conversion of products into sums is straightforward.
Sums on the other hand can be converted with something such as logsumexp.
This works using the formula:
log(a + b) = log(exp(log(a)) + exp(log(b)))
= log(exp(loga) + exp(logb))
Where loga and logb are the respective representation of a and b in the log domain.
The basic idea is then to factor out the exponential with the largest argument (e.g., loga, for the sake of illustration):
log(exp(loga)+exp(logb)) = log(exp(loga)*(1+exp(logb-loga)))
= loga + log(1+exp(logb-loga))
Note that the same idea applies if you have more than 2 terms to add.
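A minimal NumPy sketch of that trick (SciPy also ships a ready-made scipy.special.logsumexp):

import numpy as np

def logsumexp(log_terms):
    # log(sum(exp(log_terms))), computed stably by factoring out the
    # largest exponent, exactly as in the formula above
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))

# The 256-factor likelihood product becomes a plain sum in the log domain,
#   log(prod_k p_k) = sum_k log(p_k),
# so only the sum over mixture components needs logsumexp.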
I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say 0.05), I want to fit only the points that are in the linear region with this particular intercept. In this case it would be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal set of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to solve in general.
One approach would be to compute the slopes/intercepts between all consecutive pairs of points, and then do a cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % pad idx to the length of x; these are the points with the intercepts closest to the linear one
This requires the Statistics Toolbox (for kmeans). This was the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes between two points in the start and end ranges lie close to the slope of the line, those points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to make work) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both endpoints, and checking whether the sum of squared errors remains small. I gave up on this very quickly, because defining what "small" means is highly subjective and must be done in some heuristic way.
I tried a more systematic and robust approach of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
% ii represents fraction of range from center to end,
% So ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it becomes more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember, you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between the resulting hypothetical line and the entire dataset, choosing the coefficient that yields the smallest MSE.
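A sketch of that idea in Python (the helper name and the segment size/step are hypothetical and would need tuning):

import numpy as np

def best_segment_slope(x, y, intercept, seg_len=8, step=4):
    # Fit a fixed-intercept slope on each overlapping segment and keep
    # the slope whose line has the smallest MSE over the whole dataset.
    best_mse, best_slope = np.inf, None
    for start in range(0, len(x) - seg_len + 1, step):
        xs = x[start:start + seg_len]
        ys = y[start:start + seg_len] - intercept
        slope = np.dot(xs, ys) / np.dot(xs, xs)  # least squares with fixed intercept
        mse = np.mean((slope * x + intercept - y) ** 2)
        if mse < best_mse:
            best_mse, best_slope = mse, slope
    return best_slope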