I have pulse-train data: amplitude samples taken at equal time intervals. Call the sampled amplitude array A and the time array t, so the plot is obtained with plot(t, A) in MATLAB.
Here below is a plot of the pulse train, and below that a zoomed version (green dots are samples, red circles are max points):
What I need is an algorithm that can detect the max point of each pulse (circled in red above) and save it into an array.
So far I have tried the following, but it didn't work:
kk = 0;
for i = 1:length(A)-3
    % peak at i+2: two rising steps followed by one falling step
    if A(i) > 0 && A(i+1) > A(i) && A(i+2) > A(i+1) && A(i+3) < A(i+2)
        kk = kk + 1;
        maxPointTime(kk) = t(i+2);
        maxPointVoltage(kk) = A(i+2);
    end
end
So you want to find the local maxima, right? MATLAB has a built-in function to do exactly that: islocalmax (see its documentation).
x = 1:100;
A = (1-cos(2*pi*0.01*x)).*sin(2*pi*0.15*x);  % example pulse train
TF = islocalmax(A);                          % logical mask marking local maxima
plot(x,A,x(TF),A(TF),'r*')
I have been attempting to detect peaks in sinusoidal time-series data in real time, but so far without success. I cannot find a real-time algorithm that detects peaks in sinusoidal signals with a reasonable level of accuracy: I either get no peaks detected, or a zillion points along the sine wave detected as peaks.
What is a good real-time algorithm for input signals that resemble a sine wave, and may contain some random noise?
As a simple test case, consider a stationary sine wave that is always the same frequency and amplitude. (The exact frequency and amplitude don't matter; I have arbitrarily chosen a frequency of 60 Hz, an amplitude of ±1 unit, and a sampling rate of 8 kS/s.) The following MATLAB code will generate such a sinusoidal signal:
dt = 1/8000;
t = (0:dt:(1-dt)/4)';
x = sin(2*pi*60*t);
Using the algorithm developed and published by Jean-Paul, I either get no peaks detected (left) or a zillion "peaks" detected (right):
I've tried just about every combination of values for these 3 parameters that I could think of, following the "rules of thumb" that Jean-Paul gives, but I have so far been unable to get my expected result.
I found an alternative algorithm, developed and published by Eli Billauer, that does give me the results that I want, e.g.:
Even though Eli Billauer's algorithm is much simpler and does tend to reliably produce the results that I want, it is not suitable for real-time applications.
As another example of a signal that I'd like to apply such an algorithm to, consider the test case given by Eli Billauer for his own algorithm:
t = 0:0.001:10;
x = 0.3*sin(t) + sin(1.3*t) + 0.9*sin(4.2*t) + 0.02*randn(1, 10001);
This is a more unusual (less uniform/regular) signal, with a varying frequency and amplitude, but still generally sinusoidal. The peaks are plainly obvious to the eye when plotted, but hard to identify with an algorithm.
What is a good real-time algorithm to correctly identify the peaks in a sinusoidal input signal? I am not really an expert when it comes to signal processing, so it would be helpful to get some rules of thumb that consider sinusoidal inputs. Or, perhaps I need to modify e.g. Jean-Paul's algorithm itself in order to work properly on sinusoidal signals. If that's the case, what modifications would be required, and how would I go about making these?
Case 1: sinusoid without noise
If your sinusoid does not contain any noise, you can use a very classic signal processing technique: taking the first derivative and detecting when it is equal to zero.
For example:
function signal = derivesignal( d )
% Identify signal
signal = zeros(size(d));
for i=2:length(d)
if d(i-1) > 0 && d(i) <= 0
signal(i) = +1; % peak detected
elseif d(i-1) < 0 && d(i) >= 0
signal(i) = -1; % trough detected
end
end
end
Using your example data:
% Generate data
dt = 1/8000;
t = (0:dt:(1-dt)/4)';
y = sin(2*pi*60*t);
% Add some trends
y(1:1000) = y(1:1000) + 0.001*(1:1000)';
y(1001:2000) = y(1001:2000) - 0.002*(1:1000)';
% Approximate first derivative (delta y / delta x)
d = [0; diff(y)];
% Identify signal
signal = derivesignal(d);
% Plot result
figure(1); clf; set(gcf,'Position',[0 0 677 600])
subplot(4,1,1); hold on;
title('Data');
plot(t,y);
subplot(4,1,2); hold on;
title('First derivative');
area(d);
ylim([-0.05, 0.05]);
subplot(4,1,3); hold on;
title('Signal (-1 for trough, +1 for peak)');
plot(t,signal); ylim([-1.5 1.5]);
subplot(4,1,4); hold on;
title('Signals marked on data');
markers = abs(signal) > 0;
plot(t,y); scatter(t(markers),y(markers),30,'or','MarkerFaceColor','red');
This yields:
This method will work extremely well for any type of sinusoid, with the only requirement that the input signal contains no noise.
Case 2: sinusoid with noise
As soon as your input signal contains noise, the derivative method will fail. For example:
% Generate data
dt = 1/8000;
t = (0:dt:(1-dt)/4)';
y = sin(2*pi*60*t);
% Add some trends
y(1:1000) = y(1:1000) + 0.001*(1:1000)';
y(1001:2000) = y(1001:2000) - 0.002*(1:1000)';
% Add some noise
y = y + 0.2.*randn(2000,1);
This will now generate the following result, because first differences amplify noise:
Now there are many ways to deal with noise, and the most standard way is to apply a moving average filter. One disadvantage of moving averages is that they are slow to adapt to new information, such that signals may be identified after they have occurred (moving averages have a lag).
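For example, a minimal sketch (not part of the original answer; the 50-sample window is an arbitrary choice) that smooths the noisy signal from above before differencing:
ySmooth = movmean(y, 50);   % moving average filter to suppress the noise
d = [0; diff(ySmooth)];     % first difference of the smoothed signal
signal = derivesignal(d);   % reuse the zero-crossing detector from Case 1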
Another very typical approach is to use Fourier analysis to identify all the frequencies in your input data, disregard all low-amplitude and high-frequency sinusoids, and use the remaining sinusoid as a filter. The remaining sinusoid will be (largely) cleansed of the noise and you can then use first-differencing again to determine the peaks and troughs (for a single sine wave, you know the peaks and troughs happen at 1/4 and 3/4 of the period, i.e. at phase pi/2 and 3*pi/2). I suggest you pick up any signal processing theory book to learn more about this technique; MATLAB also has some educational material about this.
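As a rough sketch of that idea (illustrative only, not from the original answer): keep only the dominant non-DC frequency bin and its conjugate mirror from the FFT of the noisy signal above, reconstruct, and difference as in Case 1:
Y = fft(y);
[~, k] = max(abs(Y(2:floor(end/2))));          % strongest positive-frequency bin
k = k + 1;                                     % shift index (bin 1 is DC)
Yf = zeros(size(Y));
Yf([k, numel(Y)-k+2]) = Y([k, numel(Y)-k+2]);  % keep the bin and its conjugate
yClean = real(ifft(Yf));                       % (largely) noise-free sinusoid
d = [0; diff(yClean)];
signal = derivesignal(d);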
If you want to use this algorithm in hardware, I would suggest you also take a look at WFLC (Weighted Fourier Linear Combiner) with e.g. 1 oscillator or PLL (Phase-Locked Loop) that can estimate the phase of a noisy wave without doing a full Fast Fourier Transform. You can find a Matlab algorithm for a phase-locked loop on Wikipedia.
I will suggest a slightly more sophisticated approach here that identifies the peaks and troughs in real time: fitting a sine-wave function to your data using moving least-squares minimization, with initial estimates from Fourier analysis.
Here is my function to do that:
function [result, peaks, troughs] = fitsine(y, t, eps)
% Fast fourier-transform
f = fft(y);
l = length(y);
p2 = abs(f/l);
p1 = p2(1:ceil(l/2+1));
p1(2:end-1) = 2*p1(2:end-1);
freq = (1/mean(diff(t)))*(0:ceil(l/2))/l;
% Find maximum amplitude and frequency
maxPeak = p1 == max(p1(2:end)); % disregard 0 frequency!
maxAmplitude = p1(maxPeak); % find maximum amplitude
maxFrequency = freq(maxPeak); % find maximum frequency
% Initialize guesses
p = [];
p(1) = mean(y); % vertical shift
p(2) = maxAmplitude; % amplitude estimate
p(3) = maxFrequency; % frequency estimate
p(4) = 0; % phase shift (no guess)
p(5) = 0; % trend (no guess)
% Create model
f = @(p) p(1) + p(2)*sin( p(3)*2*pi*t+p(4) ) + p(5)*t;
ferror = @(p) sum((f(p) - y).^2);
% Nonlinear least squares
% If you have the Optimization Toolbox, use lsqcurvefit instead!
options = optimset('MaxFunEvals',50000,'MaxIter',50000,'TolFun',1e-25);
[param,fval,exitflag,output] = fminsearch(ferror,p,options);
% Calculate result
result = f(param);
% Find peaks
peaks = abs(sin(param(3)*2*pi*t+param(4)) - 1) < eps;
% Find troughs
troughs = abs(sin(param(3)*2*pi*t+param(4)) + 1) < eps;
end
As you can see, I first perform a Fourier transform to find initial estimates of the amplitude and frequency of the data. I then fit a sinusoid to the data using the model a + b sin(ct + d) + et. The fitted values represent a sine wave of which I know that +1 and -1 are the peaks and troughs, respectively. I can therefore identify these values as the signals.
This works very well for sinusoids with (slowly changing) trends and general (white) noise:
% Generate data
dt = 1/8000;
t = (0:dt:(1-dt)/4)';
y = sin(2*pi*60*t);
% Add some trends
y(1:1000) = y(1:1000) + 0.001*(1:1000)';
y(1001:2000) = y(1001:2000) - 0.002*(1:1000)';
% Add some noise
y = y + 0.2.*randn(2000,1);
% Loop through data (moving window) and fit sine wave
window = 250; % How many data points to consider
interval = 10; % How often to estimate
result = nan(size(y));
signal = zeros(size(y));
for i = window+1:interval:length(y)
data = y(i-window:i); % Get data window
period = t(i-window:i); % Get time window
[output, peaks, troughs] = fitsine(data,period,0.01);
result(i-interval:i) = output(end-interval:end);
signal(i-interval:i) = peaks(end-interval:end) - troughs(end-interval:end);
end
% Plot result
figure(1); clf; set(gcf,'Position',[0 0 677 600])
subplot(4,1,1); hold on;
title('Data');
plot(t,y); xlim([0 max(t)]); ylim([-4 4]);
subplot(4,1,2); hold on;
title('Model fit');
plot(t,result,'-k'); xlim([0 max(t)]); ylim([-4 4]);
subplot(4,1,3); hold on;
title('Signal (-1 for trough, +1 for peak)');
plot(t,signal,'r','LineWidth',2); ylim([-1.5 1.5]);
subplot(4,1,4); hold on;
title('Signals marked on data');
markers = abs(signal) > 0;
plot(t,y,'-','Color',[0.1 0.1 0.1]);
scatter(t(markers),result(markers),30,'or','MarkerFaceColor','red');
xlim([0 max(t)]); ylim([-4 4]);
Main advantages of this approach are:
- You have an actual model of your data, so you can predict signals in the future before they happen (e.g. fix the model and calculate the result by inputting future time periods; see the sketch after this list).
- You don't need to estimate the model every period (see the parameter interval in the code).
The disadvantage is that you need to select a lookback window, but you will have this problem with any method that you use for real-time detection.
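As an illustration of the first advantage (a hypothetical sketch; fitsine as written above keeps param internal, so assume it is also returned):
% Evaluate the fitted model at future time points to anticipate peaks
tFuture = t(end) + (dt:dt:0.05)';   % the next 50 ms
yPredicted = param(1) + param(2)*sin(param(3)*2*pi*tFuture + param(4)) ...
             + param(5)*tFuture;
futurePeaks = abs(sin(param(3)*2*pi*tFuture + param(4)) - 1) < 0.01;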
Video demonstration
Data is the input data, Model fit is the fitted sine wave to the data (see code), Signal indicates the peaks and troughs and Signals marked on data gives an impression of how accurate the algorithm is. Note: watch the model fit adjust itself to the trend in the middle of the graph!
That should get you started. There are also a lot of excellent books on signal detection theory (just google that term), which will go much further into these types of techniques. Good luck!
Consider using findpeaks; it is fast, which may be important for real-time use. You should filter out high-frequency noise to improve accuracy; here I smooth the data with a moving window.
t = 0:0.001:10;
x = 0.3*sin(t) + sin(1.3*t) + 0.9*sin(4.2*t) + 0.02*randn(1, 10001);
[~,iPeak0] = findpeaks(movmean(x,100),'MinPeakProminence',0.5);
You can time the process (0.0015sec)
f0 = @() findpeaks(movmean(x,100),'MinPeakProminence',0.5);
disp(timeit(f0,2))
To compare, processing the slope directly is faster (0.00013 sec), but findpeaks has many useful options, such as a minimum interval between peaks.
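For instance, a minimum spacing between detected peaks can be enforced directly (the 100-sample distance below is an arbitrary illustration):
[pks, locs] = findpeaks(movmean(x,100), 'MinPeakProminence',0.5, 'MinPeakDistance',100);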
iPeaks1 = derivePeaks(x);
f1 = @() derivePeaks(x);
disp(timeit(f1,1))
Where derivePeaks is:
function iPeak1 = derivePeaks(x)
xSmooth = movmean(x,100);                         % smooth out high-frequency noise
goingUp = find(diff(xSmooth) > 0);                % indices where the slope is positive
% A peak ends each run of ascending samples (runs separated by >100 samples)
iPeak1 = unique(goingUp([1,find(diff(goingUp) > 100),end]));
iPeak1(iPeak1 == 1 | iPeak1 == length(x)) = [];   % drop endpoint artifacts
end
I am trying to implement an algorithm described in this paper:
Decomposition of biospeckle images in temporary spectral bands
Here is an explanation of the algorithm:
We recorded a sequence of N successive speckle images with a sampling
frequency fs. In this way it was possible to observe how a pixel
evolves through the N images. That evolution can be treated as a time
series and can be processed in the following way: Each signal
corresponding to the evolution of every pixel was used as input to a
bank of filters. The intensity values were previously divided by their
temporal mean value to minimize local differences in reflectivity or
illumination of the object. The maximum frequency that can be
adequately analyzed is determined by the sampling theorem and is half
of the sampling frequency fs. The latter is set by the CCD camera, the
size of the image, and the frame grabber. The bank of filters is
outlined in Fig. 1.
In our case, ten 5th-order Butterworth filters
were used, but this number can be varied according to the required
discrimination. The bank was implemented in a computer using MATLAB
software. We chose the Butterworth filter because, in addition to its
simplicity, it is maximally flat. Other filters, an infinite impulse
response, or a finite impulse response could be used.
By means of this
bank of filters, ten corresponding output signals, one per filter, were
obtained for each temporal pixel evolution. The average energy Eb
in each signal was then calculated as

    Eb = (1/N) * sum_{n=1..N} pb(n)^2

where pb(n) is the intensity of the filtered pixel in the nth image
for filter b, divided by its mean value, and N is the total number of
images. In this way, ten values of energy for each pixel were obtained,
each of them belonging to one of the frequency bands in Fig. 1.
With these values it is possible to build ten images of the active object,
each one of which shows how much energy of time-varying speckle there
is in a certain frequency band. False color assignment to the gray
levels in the results would help in discrimination.
and here is my MATLAB code based on that:
for i=1:520
for j=1:368
ts = [];
for k=1:600
ts = [ts D{k}(i,j)]; %%% kth image pixel i,j --- ts is time series
end
ts = double(ts);
temp = mean(ts);
if (temp==0)
for l=1:10
filtImag1{l}(i,j)=0;
end
continue;
end
ts = ts-temp;
ts = ts/temp;
N = 5; % filter order
W = [0.0 0.10;0.10 0.20;0.20 0.30;0.30 0.40;0.40 0.50;0.50 0.60 ;0.60 0.70;0.70 0.80 ;0.80 0.90;0.90 1.0];
[B,A]=butter(N,0.10,'low');
ts_f(1,:) = filter(B,A,ts);
N1 = 5;
for ind = 2:9
Wn = W(ind,:);
[B,A] = butter(N1,Wn);
ts_f(ind,:) = filter(B,A,ts);
end
[B,A]=butter(N,0.90,'high');
ts_f(10,:) = filter(B,A,ts);
for ind=1:10
%Following Paper Suggestion
filtImag1{ind}(i,j) =sum(ts_f(ind,:).^2);
end
end
end
for i=1:10
figure,imshow(filtImag1{i});
colorbar
end
pre_max = max(filtImag1{1}(:));
for i=1:10
new_max = max(filtImag1{i}(:));
if (pre_max<new_max)
pre_max=max(filtImag1{i}(:));
end
end
new_max = pre_max;
pre_min = min(filtImag1{1}(:));
for i=1:10
new_min = min(filtImag1{i}(:));
if (pre_min>new_min)
pre_min = min(filtImag1{i}(:));
end
end
new_min = pre_min;
%normalize
for i=1:10
temp_imag = filtImag1{i}(:,:);
x=isnan(temp_imag);
temp_imag(x)=0;
t_max = max(max(temp_imag));
t_min = min(min(temp_imag));
temp_imag = (double(temp_imag-t_min)).*((double(new_max)-double(new_min))/double(t_max-t_min))+(double(new_min));
%median filter
%temp_imag = medfilt2(temp_imag);
imag_test2{i}(:,:) = temp_imag;
end
for i=1:10
figure,imshow(imag_test2{i});
colorbar
end
for i=1:10
A=imag_test2{i}(:,:);
B=A/max(max(A));
B=histeq(A);
figure,imshow(B);
colorbar
imag_test2{i}(:,:)=B;
end
but I am not getting the same result as the paper. Does anybody have any idea why, or where I have gone wrong?
EDIT
By getting help from @Amro and using his code, I ended up with the following images.
Here is my original image from a 72-hour germinated lentil (400 images, at 5 frames per second):
And here are the resulting images for the 10 different bands:
A couple of issues I can spot:
- When you divide the signal by its mean, you need to check that the mean is not zero, otherwise the result will be NaN.
- The authors (I am following this article) used a bank of filters with frequency bands covering the entire range up to the Nyquist frequency. You are doing half of that. The normalized frequencies you pass to butter should go all the way up to 1 (which corresponds to fs/2).
- When computing the energy of each filtered signal, I think you should not divide by its mean (you have already accounted for that before). Instead simply do E = sum(sig.^2); for each of the filtered signals.
- In the last post-processing step, you should normalize to the range [0,1], and then apply the median filtering algorithm medfilt2. The computation doesn't look right; it should be something like:
img = ( img - min(img(:)) ) ./ ( max(img(:)) - min(img(:)) );
EDIT:
With the above points in mind, I tried to rewrite the code in a vectorized way. Since you didn't post sample input images, I can't test if the result is as expected... Plus I am not sure how to interpret the final images anyway :)
%# read biospeckle images
fnames = dir( fullfile('folder','myimages*.jpg') );
fnames = {fnames.name};
N = numel(fnames); %# number of images
Fs = 1; %# sampling frequency in Hz
sz = [209 278]; %# image sizes
T = zeros([sz N],'uint8'); %# store all images
for i=1:N
T(:,:,i) = imread( fullfile('folder',fnames{i}) );
end
%# timeseries corresponding to every pixel
T = reshape(T, [prod(sz) N])'; %# columns are the signals
T = double(T); %# work with double class
%# normalize signals before filtering (avoid division by zero)
mn = mean(T,1);
T = bsxfun(@rdivide, T, mn+(mn==0)); %# divide by temporal mean
%# bank of filters
numBanks = 10;
order = 5; % butterworth filter order
fCutoff = linspace(0, Fs/2, numBanks+1)'; % lower/upper cutoff freqs
W = [fCutoff(1:end-1) fCutoff(2:end)] ./ (Fs/2); % normalized frequency bands
W(1,1) = W(1,1) + 1e-5; % adjust first freq
W(end,end) = W(end,end) - 1e-5; % adjust last freq
%# filter signals using the bank of filters
Tf = cell(numBanks,1); %# filtered signals using each filter
for i=1:numBanks
[b,a] = butter(order, W(i,:)); %# bandpass filter
Tf{i} = filter(b,a,T); %# apply filter to all signals
end
clear T %# cleanup unnecessary stuff
%# compute average energy in each signal across frequency bands
Tf = cellfun(@(x)sum(x.^2,1), Tf, 'Uniform',false);
%# normalize each to [0,1], and build corresponding images
Tf = cellfun(@(x)reshape((x-min(x))./range(x),sz), Tf, 'Uniform',false);
%# show images
for i=1:numBanks
subplot(4,3,i), imshow(Tf{i})
title( sprintf('%g - %g Hz',W(i,:).*Fs/2) )
end
colormap(gray)
(I used the image from here for the above result)
EDIT#2
Made some changes and simplified the above code a bit. This should reduce the memory footprint. For example, I used a cell array instead of a single multidimensional matrix to store the results, so we don't allocate one big block of contiguous memory. I also reused the same variables instead of introducing new ones at each intermediate step...
The paper doesn't mention subtracting the mean of the time series; are you sure that's necessary? Also, you only compute new_max and new_min once, from the last image.
I have been querying Google for material about kd-trees and image comparison, but I couldn't make the 'link' between the techniques for image comparison using kd-trees.
Firstly, I found some articles talking about speed improvement with randomized kd-trees, then I was introduced to SIFT. After understanding basically how SIFT works, I read about nearest neighbor search.
My real question is: if I have a mesh of points from SIFT and I create the kd-tree for every image, how can the nearest neighbor search help me compare the images? At first, I thought that comparing images with a tree would work with some algorithm checking the tree structure and how near every point from an image A is to a point in the same node in an image B.
If the question is too dumb, please suggest material or some topic for search.
Thank you!
I'd suggest first understanding slow feature matching, without kdtrees.
input: 1000 reference features, e.g. of faces or flowers; call these F1 .. F1000
a query feature Q: which face or flower feature is most like, nearest, Q ?
As you know,
SIFT
reduces an image feature to 128 8-bit numbers, scaled so that
similarity( feature F, feature Q ) =
Euclidean distance( SIFT(F), SIFT(Q) ).
The simplest way to find which of F1 .. F1000 is most like Q
is just to look at F1, F2 ... one by one:
# find the feature of F1 .. F1000 nearest Q
nearestdistance = infinity
nearestindex = 0
for j in 1 .. 1000:
distance = Euclideandistance( SIFT(Fj), SIFT(Q) ) # 128 numbers vs. 128 numbers
if distance < nearestdistance:
nearestdistance = distance
nearestindex = j
(Of course one computes the SIFT numbers outside the loop.)
A Kdtree
is just a way of finding nearby vectors quickly;
it has little to do with what is being matched
(vectors of numbers representing ...), or how (Euclidean distance).
Now kdtrees are very fast for 2d, 3d ... up to perhaps 20d,
but may be no faster than a linear scan of all the data above 20d.
So how can a kdtree work for features in 128d ?
The main trick is to quit searching early.
The paper by Muja and Lowe,
Fast approximate nearest neighbors with automatic algorithm configuration,
2009, 10p, describes multiple randomized kdtrees for matching 128d SIFT features.
(Lowe is the inventor of SIFT.)
To compare two images I and Q, one finds a set of feature vectors --
several hundred up to a few thousand SIFT vectors -- for each,
and looks for near matches of these sets.
(One may think of images as molecules, features as atoms;
near-matching molecules is much harder than near-matching atoms,
but it helps to be able to match atoms quickly.)
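As a hedged sketch of such set matching in MATLAB (assuming hypothetical descriptor matrices descI and descQ, one 128-dimensional SIFT vector per row, and the Statistics and Machine Learning Toolbox for KDTreeSearcher/knnsearch):
tree = KDTreeSearcher(descI);                 % index the reference features
[idx, dist] = knnsearch(tree, descQ, 'K', 2); % two nearest neighbors per query
% Lowe's ratio test: keep matches whose nearest neighbor is clearly
% closer than the second nearest
good = dist(:,1) < 0.8 * dist(:,2);
matches = [find(good), idx(good,1)];          % [query index, reference index]
score = nnz(good);                            % crude similarity score for I vs Q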
Hope this helps.
If you are planning on using kd-trees for approximate NN search in higher dimensions, you might want to review the experiments here: http://zach.in.tu-clausthal.de/software/approximate_nn/
I suggest you extract the color-code values of each image and create a kd-tree using those feature vectors.
You can use the following MATLAB code to extract the color-code features.
im = imread('image.jpg');
len = size(im,3);
if(len == 1)                      % indexed image: convert to RGB first
    im = ind2rgb(im, colourMap);  % colourMap must be supplied by the caller
    im = uint8(im.*255);
end
% Quantize each channel into 4 levels (2 bits per channel)
im(logical(  0 <= im & im <=  63)) = 0;
im(logical( 64 <= im & im <= 127)) = 1;
im(logical(128 <= im & im <= 191)) = 2;
im(logical(192 <= im & im <= 255)) = 3;
% Combine the three 2-bit channels into a single 6-bit colour code (0..63)
im = im(:,:,1) * 16 + im(:,:,2) * 4 + im(:,:,3);
imHist = histc(im(:),0:63);       % 64-bin colour-code histogram
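To then build the kd-tree over a whole collection, a hypothetical sketch (imageFiles and the wrapper colourCodeHist are assumptions, and KDTreeSearcher/knnsearch require the Statistics and Machine Learning Toolbox):
% colourCodeHist wraps the extraction above and returns the 1-by-64 histogram
features = zeros(numel(imageFiles), 64);
for i = 1:numel(imageFiles)
    features(i,:) = colourCodeHist(imread(imageFiles{i}));
end
tree = KDTreeSearcher(features);              % kd-tree over all images
queryHist = colourCodeHist(imread('query.jpg'));
nearest = knnsearch(tree, queryHist);         % index of the most similar image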
I want to know the frequency of my data. I had a vague idea that this can be done using an FFT, but I am not sure how. Once I passed the entire data set to the FFT, it gave me two peaks, but how can I get the frequency from that?
Thanks a lot in advance.
Here's what you're probably looking for:
When you talk about computing the frequency of a signal, you probably aren't so interested in the component sine waves. That is what the FFT gives you. For example, if you sum sin(2*pi*10x)+sin(2*pi*15x)+sin(2*pi*20x)+sin(2*pi*25x), you probably want to detect the "frequency" as 5 (take a look at the graph of this function). However, the FFT of this signal will report a magnitude of 0 for the frequency 5.
What you are probably more interested in is the periodicity of the signal. That is, the interval at which the signal becomes most like itself. So most likely what you want is the autocorrelation. Look it up. This will essentially give you a measure of how self-similar the signal is to itself after being shifted over by a certain amount. So if you find a peak in the autocorrelation, that would indicate that the signal matches up well with itself when shifted over that amount. There's a lot of cool math behind it, look it up if you are interested, but if you just want it to work, just do this:
Window the signal using a smooth window (a cosine will do; the window should be at least twice as large as the largest period you want to detect, and three times as large will give better results). (See http://zone.ni.com/devzone/cda/tut/p/id/4844 if you are confused.)
Take the FFT. However, make sure the FFT size is twice as big as the window, with the second half padded with zeroes. If the FFT size is only the size of the window, you will effectively be taking the circular autocorrelation, which is not what you want (see https://en.wikipedia.org/wiki/Discrete_Fourier_transform#Circular_convolution_theorem_and_cross-correlation_theorem).
Replace all coefficients of the FFT with their squared magnitude (real^2+imag^2). This is effectively taking the autocorrelation.
Take the inverse FFT.
Find the largest peak in the inverse FFT. This is the strongest periodicity T of the waveform. You can actually be a little more clever in which peak you pick, but for most purposes this should be enough. To find the frequency, you just take f = 1/T.
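A minimal MATLAB sketch of these steps (all values illustrative: a 10 Hz test tone sampled at 1 kHz):
fs = 1000;                            % assumed sampling rate
t = (0:1/fs:1-1/fs)';
x = sin(2*pi*10*t) + 0.1*randn(size(t));
w = hann(numel(x));                   % smooth (cosine) window
X = fft(x.*w, 2*numel(x));            % FFT at twice the window size (zero-padded)
ac = real(ifft(abs(X).^2));           % squared magnitudes -> autocorrelation
ac = ac(1:numel(x));                  % keep the non-negative lags
iNeg = find(ac < 0, 1);               % skip the main lobe around lag 0
[~, iRel] = max(ac(iNeg:end));        % strongest periodicity
T = (iRel + iNeg - 2)/fs;             % period in seconds (ac(1) is lag 0)
f = 1/T                               % estimated frequency, ~10 Hz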
Suppose x[n] = cos(2*pi*f0*n/fs) where f0 is the frequency of your sinusoid in Hertz, n=0:N-1, and fs is the sampling rate of x in samples per second.
Let X = fft(x). Both x and X have length N. Suppose X has two peaks at n0 and N-n0.
Then the sinusoid frequency is f0 = fs*n0/N Hertz.
Example: fs = 8000 samples per second, N = 16000 samples. Therefore, x lasts two seconds.
Suppose X = fft(x) has peaks at 2000 and 14000 (=16000-2000). Therefore, f0 = 8000*2000/16000 = 1000 Hz.
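A quick numerical check of this relation (values from the example above):
fs = 8000; N = 16000; f0 = 1000;
x = cos(2*pi*f0*(0:N-1)/fs);
X = abs(fft(x));
[~, imax] = max(X(1:N/2));   % peak in the bottom half (1-based MATLAB index)
n0 = imax - 1;               % convert to a 0-based bin number
fs*n0/N                      % prints 1000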
If you have a signal with one frequency, for instance
y = sin(2*pi*f*t)
with y the time signal, f the central frequency, and t the time, then you'll get two peaks: one at a frequency corresponding to f, and one at a frequency corresponding to -f.
So, to get to a frequency, you can discard the negative-frequency part; it is located after the positive-frequency part. Furthermore, the first element in the array is the DC offset, i.e. frequency 0. (Beware that this offset is usually much more than 0, so the other frequency components might get dwarfed by it.)
In code (I've written it in Python, but it should be equally simple in C#):
import numpy as np
from pylab import *
x = np.random.rand(100) # create 100 random numbers of which we want the fourier transform
x = x - mean(x) # make sure the average is zero, so we don't get a huge DC offset.
dt = 0.1 #[s] 1/the sampling rate
fftx = np.fft.fft(x) # the frequency transformed part
# now discard the negative-frequency half, which we do not need
fftx = fftx[:len(fftx)//2]
# now create the frequency axis: it runs from 0 to the Nyquist frequency (sampling rate / 2)
freq_fftx = np.linspace(0, 1/(2*dt), len(fftx))
# and plot a power spectrum
plot(freq_fftx,abs(fftx)**2)
show()
Now the frequency is located at the largest peak.
If you are looking at the magnitude results from an FFT of the type most commonly used, then a strong sinusoidal frequency component of real data will show up in two places: once in the bottom half, plus its complex-conjugate mirror image in the top half. Those two peaks both represent the same spectral peak and the same frequency (for strictly real data). If the FFT result bin numbers start at 0 (zero), then the frequency of the sinusoidal component represented by the peak bin in the bottom half of the FFT result is:
Frequency_of_Peak = Data_Sample_Rate * Bin_number_of_Peak / Length_of_FFT
Make sure to work out your proper units within the above equation (to get units of cycles per second, per fortnight, per kiloparsec, etc.)
Note that unless the wavelength of the data is an exact integer submultiple of the FFT length, the actual peak will be between bins, thus distributing energy among multiple nearby FFT result bins. So you may have to interpolate to better estimate the frequency peak. Common interpolation methods to find a more precise frequency estimate are 3-point parabolic and Sinc convolution (which is nearly the same as using a zero-padded longer FFT).
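As an illustration (not from the original answer), the 3-point parabolic refinement can be written as follows, where mag is the FFT magnitude vector, k the 1-based index of the peak bin, Fs the sample rate, and N the FFT length (all assumed to exist):
a = mag(k-1); b = mag(k); c = mag(k+1);  % peak bin and its two neighbours
delta = 0.5*(a - c)/(a - 2*b + c);       % fractional bin offset in [-0.5, 0.5]
peakFreq = (k - 1 + delta)*Fs/N;         % refined frequency estimate in Hz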
Assuming you use a discrete Fourier transform to look at frequencies, then you have to be careful about how to interpret the normalized frequencies back into physical ones (i.e. Hz).
According to the FFTW tutorial on how to calculate the power spectrum of a signal:
#include <rfftw.h>
...
{
fftw_real in[N], out[N], power_spectrum[N/2+1];
rfftw_plan p;
int k;
...
p = rfftw_create_plan(N, FFTW_REAL_TO_COMPLEX, FFTW_ESTIMATE);
...
rfftw_one(p, in, out);
power_spectrum[0] = out[0]*out[0]; /* DC component */
for (k = 1; k < (N+1)/2; ++k) /* (k < N/2 rounded up) */
power_spectrum[k] = out[k]*out[k] + out[N-k]*out[N-k];
if (N % 2 == 0) /* N is even */
power_spectrum[N/2] = out[N/2]*out[N/2]; /* Nyquist freq. */
...
rfftw_destroy_plan(p);
}
Note it handles data lengths that are not even. In particular, if the data length is even, FFTW will give you a "bin" corresponding to the Nyquist frequency (sample rate divided by 2); otherwise, you don't get it (i.e. the last bin is just below Nyquist).
A MATLAB example is similar, but they choose a length of 1000 (an even number) for the example:
N = length(x);
xdft = fft(x);
xdft = xdft(1:N/2+1);
psdx = (1/(Fs*N)).*abs(xdft).^2;
psdx(2:end-1) = 2*psdx(2:end-1);
freq = 0:Fs/length(x):Fs/2;
In general, the details can be implementation (of the DFT) dependent. You should create a test pure sine wave at a known frequency and then make sure the calculation gives back the same number.
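For instance, a sanity check of the MATLAB snippet above with a known 100 Hz tone (illustrative values):
Fs = 1000;                       % assumed sampling rate for the test
x = cos(2*pi*100*(0:Fs-1)/Fs);   % one second of a pure 100 Hz sine wave
N = length(x);
xdft = fft(x);
xdft = xdft(1:N/2+1);
psdx = (1/(Fs*N)).*abs(xdft).^2;
psdx(2:end-1) = 2*psdx(2:end-1);
freq = 0:Fs/length(x):Fs/2;
[~, k] = max(psdx);
freq(k)                          % should print 100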
Frequency = speed/wavelength.
Wavelength is the distance between two successive peaks.