Confusion with FFT algorithm

I am trying to understand the FFT algorithm, and so far I think I understand the main concept behind it. However, I am confused about the difference between 'framesize' and 'window'.
Based on my understanding, it seems that they are redundant with each other. For example, if I present as input a block of samples with a framesize of 1024, then I have byte[1024] presented as input.
What then is the purpose of the windowing function? Initially I thought its purpose was to select the block of samples from the original data.
Thanks!

What then is the purpose of the windowing function?
It's to deal with so-called "spectral leakage": the FFT assumes an infinite series that repeats the given sample frame over and over again. If you have a sine wave that is an integral number of cycles within the sample frame, then all is good, and the FFT gives you a nice narrow peak at the proper frequency. But if you have a sine wave that is not an integral number of cycles, there's a discontinuity between the last and first sample, and the FFT gives you false harmonics.
Windowing functions lower the amplitudes at the beginning and the end of the sample frame, to reduce the harmonics caused by this discontinuity.
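To make the leakage concrete, here is a small sketch of my own (not from this answer), in the same pylab style as the code samples further down: an 8-cycle tone stays in one bin, an 8.5-cycle tone smears across the spectrum, and a Hann window pulls the smearing down.
from pylab import *   # numpy + matplotlib, as in the code samples below

N = 1024
n = arange(N)
tone_8 = sin(2*pi*8*n/N)      # exactly 8 cycles per frame: no discontinuity
tone_85 = sin(2*pi*8.5*n/N)   # 8.5 cycles: last sample doesn't wrap to the first

semilogy(abs(rfft(tone_8)), label='8 cycles, rectangular')
semilogy(abs(rfft(tone_85)), label='8.5 cycles, rectangular')
semilogy(abs(rfft(tone_85 * hanning(N))), label='8.5 cycles, Hann window')
legend(); grid(); show()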
[Diagrams from a National Instruments webpage on windowing: one showing a sine wave with an integral number of cycles per frame, the other with a non-integer number of cycles.]
for additional information:
http://www.tmworld.com/article/322450-Windowing_Functions_Improve_FFT_Results_Part_I.php
http://zone.ni.com/reference/en-XX/help/371361B-01/lvanlsconcepts/char_smoothing_windows/
http://www.physik.uni-wuerzburg.de/~praktiku/Anleitung/Fremde/ANO14.pdf

A rectangular window of length M has frequency response of sin(ω*M/2)/sin(ω/2), which is zero when ω = 2*π*k/M, for k ≠ 0. For a DFT of length N, where ω = 2*π*n/N, there are nulls at n = k * N/M. The ratio N/M isn't necessarily an integer. For example, if N = 40, and M = 32, then there are nulls at multiples of 1.25, but only the integer multiples will appear in the DFT, which is bins 5, 10, 15, and 20 in this case.
Here's a plot of the 1024-point DFT of a 32-point rectangular window:
from pylab import *   # provides ones, rfft, r_, plot, etc.

M = 32
N = 1024
w = ones(M)
W = rfft(w, N)        # window spectrum, zero-padded to N points
K = N // M            # integer bin spacing between nulls
nulls = abs(W[K::K])
plot(abs(W))
plot(r_[K:N//2 + 1:K], nulls, 'ro')
xticks(r_[:512:64])
grid(); axis('tight')
Note the nulls at every N/M = 32 bins. If N=M (i.e. the window length equals the DFT length), then there are nulls at all bins except at n = 0.
When you multiply a window by a signal, the corresponding operation in the frequency domain is the circular convolution of the window's spectrum with the signal's spectrum. For example, the DTFT of a sinusoid is a weighted delta function (i.e. an impulse with infinite height, infinitesimal extension, and finite area) located at the positive and negative frequency of the sinusoid. Convolving a spectrum with a delta function just shifts it to the location of the delta and scales it by the delta's weight. Therefore when you multiply a window by a sinusoid in the sample domain, the window's frequency response is scaled and shifted to the frequency of the sinusoid.
There are a couple of scenarios to examine regarding the length of a rectangular window. First let's look at the case where the window length is an integer multiple of the sinusoid's period, e.g. a 32-sample rectangular window of a cosine with a period of 32/8 = 4 samples:
x1 = cos(2*pi*8*r_[:32]/32) # ω0 = 8π/16, bin 8/32 * 1024 = 256
X1 = rfft(x1 * w, 1024)
plot(abs(X1))
xticks(r_[:513:64])
grid(); axis('tight')
As before, there are nulls at multiples of N/M = 32. But the window's spectrum has been shifted to bin 256 of the sinusoid and scaled by its magnitude, which is 0.5, split between the positive frequency and the negative frequency (I'm only plotting positive frequencies). If the DFT length had been 32, the nulls would line up at every bin, giving the appearance that there's no leakage. But that appearance is misleading and is only a function of the DFT length. If you pad the windowed signal with zeros (as above), you'll see the sinc-like response at frequencies between the nulls.
Now let's look at a case where the window length is not an integer multiple of the sinusoid's period, e.g. a cosine with an angular frequency of 7.5π/16 (the period is 64 samples):
x2 = cos(2*pi*15*r_[:32]/64) # ω0 = 7.5π/16, bin 15/64 * 1024 = 240
X2 = rfft(x2 * w, 1024)
plot(abs(X2))
xticks(r_[-16:513:64])
grid(); axis('tight')
The center bin location is no longer at an integer multiple of 32, but shifted down by half a bin spacing to bin 240. So let's see what the corresponding 32-point DFT would look like (i.e. with an implicit 32-point rectangular window). I'll compute and plot the 32-point DFT of x2[n] and also superimpose a 32x decimated copy of the 1024-point DFT:
X2_32 = rfft(x2, 32)
X2_sample = X2[::32]
stem(r_[:17],abs(X2_32))
plot(abs(X2_sample), 'rs') # red squares
grid(); axis([0,16,0,11])
As you can see in the previous plot, the nulls are no longer aligned at multiples of 32, so the magnitude of the 32-point DFT is non-zero at each bin. In the 32-point DFT, the window's nulls are still spaced every N/M = 32/32 = 1 bin, but since ω0 = 7.5π/16, the center is at 'bin' 7.5, which puts the nulls at 0.5, 1.5, etc., so they're not present in the 32-point DFT.
The general message is that spectral leakage of a windowed signal is always present, but it can be masked in the DFT if the signal spectrum, window length, and DFT length come together in just the right way to line up the nulls. Beyond that, you should ignore these DFT artifacts and concentrate on the DTFT of your signal (i.e. pad with zeros to sample the DTFT at higher resolution so you can clearly examine the leakage).
Spectral leakage caused by convolving with a window's spectrum will always be there, which is why the art of crafting particularly shaped windows is so important. The spectrum of each window type is tailored for a specific task, such as dynamic range or sensitivity.
Here's an example comparing the output of a rectangular window vs a Hamming window:
from pylab import *
import wave

fs = 44100
M = 4096
N = 16384

# load a sample of a guitar playing an open string 6
# with a fundamental frequency of 82.4 Hz
g = frombuffer(wave.open('dist_gtr_6.wav').readframes(-1),
               dtype='int16')   # frombuffer replaces the deprecated fromstring
L = len(g) // 4                 # integer sample index
g_t = g[L:L+M]
g_t = g_t / float64(max(abs(g_t)))   # normalize to [-1, 1]

# compute the response with rectangular vs Hamming window
g_rect = rfft(g_t, N)
g_hamm = rfft(g_t * hamming(M), N)

def make_plot():
    fmax = int(82.4 * 4.5 / fs * N)   # through the 4th harmonic
    subplot(211); title('Rectangular Window')
    plot(abs(g_rect[:fmax])); grid(); axis('tight')
    subplot(212); title('Hamming Window')
    plot(abs(g_hamm[:fmax])); grid(); axis('tight')

if __name__ == "__main__":
    make_plot()

If you don't modify the sample values, and select the same length of data as the FFT length, this is equivalent to using a rectangular window, in which case the frame and the window are identical. However multiplying your input data by a rectangular window in the time domain is the same as convolving the input signal's spectrum with a Sinc function in the frequency domain, which will spread any spectral peaks for frequencies which are not exactly periodic in the FFT aperture across the entire spectrum.
Non-rectangular windows are often used so that the resulting FFT spectrum is convolved with something a bit more "focused" than a Sinc function.
You can also use a rectangular window that is a different size than the FFT length or aperture. In the case of a shorter data window, the FFT frame can be zero-padded, which can result in a smoother-looking interpolated FFT result spectrum. You can even use a rectangular window that is longer than the length of the FFT by wrapping data around the FFT aperture in a summed circular manner for some interesting effects on the frequency resolution.
ADDED due to a request:
Multiplying by a window in the time domain produces the same result as convolving with the transform of that window in the frequency domain.
In general, a narrower time-domain window will produce a wider-looking frequency-domain transform. This is the reason that zero-padding produces a smoother frequency plot. The narrower time-domain window produces a wider Sinc with fatter and smoother curves in relation to the frame width than would a window the full width of the FFT frame, thus making the interpolated frequency results look smoother than a non-zero-padded FFT of the same frame length.
The converse is also true to some extent. A wider rectangular window will produce a narrower Sinc, with the nulls closer to the peak. Thus you might be able to use a carefully chosen wider window to produce a narrower-looking Sinc, and so null a frequency closer to a bin of interest than 1 frequency bin away. How do you use a wider window? Wrap the data around and sum, which is identical to using FT basis vectors that are not truncated to 1 FFT frame in length. However, since the FFT result vector is then shorter than the data, this is a lossy process which will introduce artifacts and some novel aliasing. But it will give you a sharper frequency selection peak at each bin, and notch filters that can be placed less than 1 bin away, say halfway between bins, etc.
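Here's a small sketch of my own (Python/pylab, not code from this answer) of the wrap-and-sum trick: with a rectangular window twice the FFT length, a tone exactly halfway between bins is nulled, while an on-bin tone comes through at full strength.
from pylab import *

N = 64                          # FFT length
M = 2 * N                       # rectangular window twice the FFT length
n = arange(M)
on_bin = cos(2*pi*10*n/N)       # lands exactly on bin 10
half_bin = cos(2*pi*10.5*n/N)   # halfway between bins 10 and 11

def wrapped_rfft(x, N):
    # wrap the long frame around the length-N aperture and sum, i.e. use
    # DFT basis vectors that are M samples long instead of N
    return rfft(x.reshape(-1, N).sum(axis=0) / (len(x) // N))

print(abs(rfft(on_bin[:N]))[10], abs(rfft(half_bin[:N]))[10])   # ~32 vs ~20: leaky
print(abs(wrapped_rfft(on_bin, N))[10], abs(wrapped_rfft(half_bin, N))[10])   # ~32 vs ~0: notched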


Average set of color images and standard deviation

I am learning image analysis and am trying to average a set of color images and get the standard deviation at each pixel.
I have done the following, though I am not averaging each RGB channel separately (for example, rchannel = I(:,:,1)).
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i = 1:length(filelist)
    imname = ['dir1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    ims = ims + rgbim;
end
avgset1 = ims/length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't get the matrix holding standard deviation.
Any help is appreciated.
If you are concerned with finding the mean RGB image, then your code is correct. What I like is that you converted the images using im2double before accumulating the mean, so everything is in double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to find the mean image of a set of images before doing image classification, as it allows the dynamic range of each pixel to fall within a normalized range. This helps the training of the learning algorithm converge quickly to the optimum solution and provides the best set of parameters to facilitate the best classification accuracy.
If you want to find the mean RGB colour which is the average colour over all images, then no your code is not correct.
You have accumulated the per-pixel sums over all images in sumrgbims, so the last step is to average that image over both spatial dimensions for each channel and divide by the number of images. Two calls to mean over the first and second dimensions chained together will help. This produces a 1 x 1 x 3 vector, so use squeeze after this to remove the singleton dimensions and get a 3 x 1 vector representing the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(mean(mean(sumrgbims, 1), 2)) / length(filelist);
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image in addition to accumulating each image inside the loop. After that, you know that the standard deviation is the square root of the variance, and the variance is equal to the average sum of squares minus the square of the mean. We have the mean image, so you just have to square the mean image and subtract it from the average sum of squares. Just to be sure our math is right: supposing we have a signal X with mean mu and N values, the variance is
var(X) = (1/N) * sum(x_i^2) - mu^2 (Source: Science Buddies)
The standard deviation would simply be the square root of the above result. We would thus calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - for standard deviation
for i = 1:length(filelist)
    imname = ['set1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    sumrgbims = sumrgbims + rgbim;
    sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims/length(filelist);
% New - find standard deviation
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display standard deviation image
figure;
imshow(rgbstdset1, []);
Also to make sure, I've scaled the display of each imshow call so the smallest value gets mapped to 0 and the largest value gets mapped to 1. This does not change the actual contents of the images. This is just for display purposes.

How to modify d3js fisheye distortion so that it will support radius

I am trying to modify the fisheye in this project so that I can use the radius function to increase the fisheye size. My aim is to see more cells, enlarged, around the mouse. The current implementation does not support the radius function. If I use circular instead of scale, I can use radius, but in that case I don't know how to use circular.
Either way, help is appreciated :)
Thanks!
The radius parameter on the circular fisheye puts a boundary to the magnification effects. In contrast, in the scale/Cartesian fisheye, the entire graph is modified. The focus cell is enlarged, and other cells are compressed according to how far away they are from the focus. There is no boundary, the compression continues smoothly (getting progressively more compressed) until the edge of the plot. See http://bost.ocks.org/mike/fisheye/#cartesian
If what you want is that cells near to the focus aren't compressed as much (so you can still compare adjacent cells effectively), then the parameter to change is the distortion parameter. Lower distortion will reduce the amount by which the focus cell is magnified, and therefore leave more room for adjacent cells. The default distortion parameter is 3, you're using higher values, which increases the magnification of the focus cell at the expense of all the others.
If changing the distortion doesn't satisfy you, try changing the scale type by using d3.fisheye.scale(d3.scale.sqrt); this will change the function determining how the image magnification changes as you move away from the focus point. (I couldn't get other scale types to work -- log gives an error, and with power scales there is no way to set the exponent.)
Edit
After additional playing around, I'm not satisfied with the results from changing the input scale type. I misunderstood how that would affect it: it doesn't change the scale function for the distortion, but for the raw data, so the changes are different for points above the focus compared to points below the focus. The scale type you give as a parameter to the fisheye scale should be the underlying scale type that makes sense for the data, and is distinct from the fisheye effects.
Instead, I've tried some custom code to add an exponent into the calculation. To understand how it works, you need to first break down the original function:
The original code for the fisheye scale is:
function fisheye(_) {
  var x = scale(_),
      left = x < a,
      range = d3.extent(scale.range()),
      min = range[0],
      max = range[1],
      m = left ? a - min : max - a;
  if (m == 0) m = max - min;
  return a + (left ? -1 : 1) * m * (d + 1) / (d + (m / Math.abs(x - a)));
}
The _ is the input value, scale is usually a linear scale for which domain and range have been set, a is the focus point in the output range, and d is the distortion parameter.
In other words: to determine the point at which a value is drawn on the distorted scale:
calculate the range position of the value based on the default/undistorted scale;
calculate its distance from the focal point ({distance}, Math.abs(x-a));
calculate the distance between edge of the graph and the focal point ({total distance}, m);
the returned value is offset from the focal point by {total distance} multiplied by
(d + 1) / (d + ({total distance} / {distance}) );
adjust as appropriate depending on whether the value is below or above the focal point.
For an input point that is half-way between the focal point and the edge of the graph on the undistorted scale, the inner fraction {total distance}/{distance} will equal 2. The outer fraction will therefore be (d+1)/(d+2). If d is 0 (no distortion), this will equal 1/2, and the output point will also be half-way between the focal point and the edge of the graph. As the distortion parameter, d, increases, that fraction also increases: at d=1, the output point would be 2/3 of the way from the focal point to the edge of the graph; at d=2, it would be 3/4 of the way to the edge of the graph; and so on.
In contrast, when the input value is very close to the focal point, {distance} is nearly 0, so the inner fraction approaches infinity and the outer fraction approaches 0, so the returned point will be very close to the focal point.
Finally, when the input value is very close to the edge of the graph, {distance} is nearly {total distance}, and both the inner and outer fractions will be nearly 1, so the returned point will also be very close to the edge of the graph.
Those last two identities we want to keep. We just want to change the relationship in between -- how the offset from focal point changes as the input point gets farther away from the focal point and closer to the edge of the graph. Changing the distortion parameter changes the amount of distortion in both near and far values equally. If you reduce the distortion parameter you also reduce the overall magnification, since all the data still has to fit in the same space.
The OP wanted to reduce the rate of change in magnification between cells near the focal point. Reducing the distortion parameter does this, but only by reducing the magnification overall. The ideal approach would be to change the relationship between distance from the focal point and degree of distortion.
My changed code for the same function is:
function fisheye(_) {
  var x = scale(_),
      left = x < a,
      range = d3.extent(scale.range()),
      min = range[0],
      max = range[1],
      m = left ? a - min : max - a,
      dp = Math.pow(d, p);
  if (m == 0) return left ? min : max;
  return a + (left ? -1 : 1) * m *
    Math.pow(
      (dp + 1) / (dp + (m / Math.abs(x - a))),
      p);
}
I've changed two things: I raise the fraction (d + 1)/(d + {total distance}/{distance}) to a power, and I also replace the original d value with it raised to the same exponent (dp). The first change is what changes the relationship, the second is just an adjustment so that a given distortion parameter will have approximately the same overall magnification effect regardless of the power parameter.
The fraction raised to the power will still be close to zero if the fraction is close to zero, and will still be close to one if the fraction is close to one, so the basic identities remain the same. However, when the power parameter is less than one, the rate of change will be shallower at the edges, and steeper in the middle. For a power parameter greater than 1, the rate of change will be quite steep at the edges and shallower near the focal point.
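To get a feel for the numbers, here is a tiny Python re-implementation of my own of just the offset fraction above (d, p, and f stand for the distortion, the power, and the undistorted fraction of the focus-to-edge distance; this is an illustration, not the plugin code):
def offset_fraction(f, d=6.0, p=1.0):
    # f in (0, 1]: the input point's undistorted fraction of the focus-to-edge
    # distance; returns the distorted fraction, mirroring the formula above
    dp = d ** p
    return ((dp + 1) / (dp + 1 / f)) ** p

for f in (0.1, 0.5, 0.9):
    print(f, [round(offset_fraction(f, p=p), 3) for p in (0.5, 1, 2)])
# p = 2 magnifies near the focus harder and squeezes the far edge flatter;
# p = 0.5 is gentler, leaving the farthest points some visible spacing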
Example here: http://codepen.io/AmeliaBR/pen/zHqac
The horizontal fisheye scale has a square-root power function (p = 0.5), while the vertical has a square function (p = 2). Both have the same unadjusted distortion parameter (d = 6).
The effect of the square root function is that even the farthest columns still have some visible width, but the change in column width near the focal point is significant. The effect of the power of 2 function is that the rows far away from the focal point are compressed to nearly invisible height, but the rows above and below the focus are still of significant size. I think this latter version achieves what @piedpiper was hoping for.
I've of course also added a .power function to the fisheye scale in order to set the p parameter, and have set the default value for p to 1, which will give the same results as the original fisheye scale. I use the name power for the method to distinguish it from the exponent method of power scales, which would be used if the underlying scale (before distortion) had a power relationship.

How do I determine the size of an arbitrary number of equal squares to be fit into an fixed size 2D space?

I have a fixed size 2D space that I would like to fill with an arbitrary number of equal sized squares. I'd like an algorithm that determines the exact size (length of one side) that these squares should be in order to fit perfectly into the given space.
Notice that there must be an integer number of squares which fill the width and height. Therefore, the aspect ratio must be a rational number.
Input: width (float or int), height (float or int)
Algorithm:
aspectRatio = RationalNumber(width/height).lowestTerms  # must be a rational number
# our return values must satisfy
# numHorizontalSquares/numVerticalSquares == aspectRatio
return {
    numHorizontalSquares = aspectRatio.numerator,
    numVerticalSquares = aspectRatio.denominator,
    squareLength = width/aspectRatio.numerator
}
If the width/height is a rational number, your answer is merely any multiple of the aspect ratio! (e.g. if your aspect ratio was 4/3, you could fill it with 4x3 squares of length width/4=height/3, or 8x6 squares of half that size, or 12x9 squares of a third that size...) If it is not a rational number, your task is impossible.
You convert a fraction to lowest terms by factoring the numerator and denominator, and removing all duplicate factor pairs; this is equivalent to just using the greatest common divisor algorithm GCD(numer,denom) , and dividing both numerator and denominator by that.
Here is an example implementation in python3:
from fractions import Fraction

def largestSquareTiling(width, height, maxVerticalSquares=10**6):
    """
    Returns the minimum number (corresponding to the largest size) of squares
    which will perfectly tile a width x height rectangle.
    Return format:
        (numHorizontalTiles, numVerticalTiles), tileLength
    """
    if isinstance(width, int) and isinstance(height, int):
        aspectRatio = Fraction(width, height)
    else:
        aspectRatio = Fraction.from_float(width / height)
    # snap to the nearest fraction with a bounded denominator, to absorb
    # floating-point imprecision
    aspectRatio2 = aspectRatio.limit_denominator(max_denominator=maxVerticalSquares)
    if abs(aspectRatio2 - aspectRatio) > Fraction(1, 10**12):
        raise Exception('too many squares')  # optional sanity check
    aspectRatio = aspectRatio2
    squareLength = width / aspectRatio.numerator
    return (aspectRatio.numerator, aspectRatio.denominator), squareLength
e.g.
>>> largestSquareTiling(2.25, 11.75)
((9, 47), 0.25)
You can tune the optional parameter maxVerticalSquares to give yourself more robustness versus floating-point imprecision (but the downside is the operation may fail), or to avoid a larger number of vertical squares (e.g. if this is architecture code and you are tiling a floor); depending on the range of numbers you are working with, a default value of maxVerticalSquares=500 might be reasonable or something (possibly not even including the exception code).
Once you have this, and a range of desired square lengths (minLength, maxLength), you just multiply:
# inputs
desiredTileSizeRange = (0.09, 0.13)   # (minLength, maxLength)
(minHTiles, minVTiles), maxTileSize = largestSquareTiling(2.25, 11.75)

# calculate the integral shrinkFactor
shrinkFactorMin = maxTileSize / desiredTileSizeRange[1]   # smallest allowed factor
shrinkFactorMax = maxTileSize / desiredTileSizeRange[0]   # largest allowed factor
shrinkFactor = int(shrinkFactorMax)
if shrinkFactor < shrinkFactorMin:
    raise Exception('desired tile size range too restrictive; no tiling found')
If shrinkFactor is now 2 for example, the new output value in the example would be ((9*2,47*2), 0.25/2).
If the sides are integers, you need to find the greatest common divisor of the sides A and B; that is the side of the square you are looking for.
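For instance, a quick Python check of this (my sketch):
from math import gcd

width, height = 1920, 1080
side = gcd(width, height)   # 120: the largest square that evenly divides both sides
print(width // side, 'x', height // side, 'squares of side', side)   # 16 x 9 squares of side 120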
You could also use a space-filling curve or a spatial index. An SFC recursively subdivides the plane into 4 tiles and reduces the 2D complexity to a 1D complexity. You may want to look for Nick's hilbert curve quadtree spatial index blog.
I think this is what you want. At least it solved the problem I was googling for when I found this question.
// width  = width of area to fill
// height = height of area to fill
// squareCount = number of squares to fill the area with
static double CalcSquareSide(double width, double height, int squareCount)
{
    return Math.Sqrt((height * width) / squareCount);
}

How to compute frequency of data using FFT?

I want to know the frequency of my data. I had a vague idea that it can be done using FFT, but I am not sure how to do it. Once I passed the entire data to FFT, it gave me 2 peaks, but how can I get the frequency?
Thanks a lot in advance.
Here's what you're probably looking for:
When you talk about computing the frequency of a signal, you probably aren't so interested in the component sine waves. This is what the FFT gives you. For example, if you sum sin(2*pi*10x) + sin(2*pi*15x) + sin(2*pi*20x) + sin(2*pi*25x), you probably want to detect the "frequency" as 5, the greatest common divisor of the component frequencies and the rate at which the overall waveform repeats (take a look at the graph of this function). However, the FFT of this signal will report a magnitude of 0 for the frequency 5.
What you are probably more interested in is the periodicity of the signal. That is, the interval at which the signal becomes most like itself. So most likely what you want is the autocorrelation. Look it up. This will essentially give you a measure of how self-similar the signal is to itself after being shifted over by a certain amount. So if you find a peak in the autocorrelation, that would indicate that the signal matches up well with itself when shifted over that amount. There's a lot of cool math behind it, look it up if you are interested, but if you just want it to work, just do this:
Window the signal, using a smooth window (a cosine will do). The window should be at least twice as large as the largest period you want to detect; 3 times as large will give better results. (See http://zone.ni.com/devzone/cda/tut/p/id/4844 if you are confused.)
Take the FFT. However, make sure the FFT size is twice as big as the window, with the second half padded with zeros. If the FFT size is only the size of the window, you will effectively be taking the circular autocorrelation, which is not what you want. (See https://en.wikipedia.org/wiki/Discrete_Fourier_transform#Circular_convolution_theorem_and_cross-correlation_theorem .)
Replace all coefficients of the FFT with their square value (real^2+imag^2). This is effectively taking the autocorrelation.
Take the iFFT
Find the largest peak in the iFFT. This is the strongest periodicity of the waveform. You can actually be a little more clever in which peak you pick, but for most purposes this should be enough. To find the frequency, you just take f=1/T.
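Here's a minimal numpy sketch of my own of those steps, using the sum-of-sines example from above (the overall waveform repeats at 5 Hz):
import numpy as np

def dominant_period(x, fs):
    # windowed FFT -> squared magnitudes -> inverse FFT = autocorrelation
    n = len(x)
    X = np.fft.rfft(x * np.hanning(n), 2 * n)   # 2x FFT size avoids circular wrap
    acf = np.fft.irfft(np.abs(X) ** 2)[:n]      # real^2 + imag^2, then iFFT
    # skip the trivial peak at lag 0: search past the first zero crossing
    neg = np.where(acf < 0)[0]
    start = neg[0] if neg.size else 1
    lag = start + np.argmax(acf[start:])
    return lag, fs / lag                        # period T (samples), f = 1/T (Hz)

fs = 1000.0
t = np.arange(2048) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in (10, 15, 20, 25))
print(dominant_period(x, fs))                   # -> (200, 5.0): a 5 Hz periodicity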
Suppose x[n] = cos(2*pi*f0*n/fs) where f0 is the frequency of your sinusoid in Hertz, n=0:N-1, and fs is the sampling rate of x in samples per second.
Let X = fft(x). Both x and X have length N. Suppose X has two peaks at n0 and N-n0.
Then the sinusoid frequency is f0 = fs*n0/N Hertz.
Example: fs = 8000 samples per second, N = 16000 samples. Therefore, x lasts two seconds.
Suppose X = fft(x) has peaks at 2000 and 14000 (=16000-2000). Therefore, f0 = 8000*2000/16000 = 1000 Hz.
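That worked example is easy to verify numerically; a quick numpy sketch of my own:
import numpy as np

fs, N, f0 = 8000, 16000, 1000
x = np.cos(2 * np.pi * f0 * np.arange(N) / fs)
n0 = np.argmax(np.abs(np.fft.fft(x))[:N // 2])   # peak in the bottom half
print(n0, fs * n0 / N)                           # -> 2000, 8000*2000/16000 = 1000.0 Hz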
If you have a signal with a single frequency, for instance y = sin(2*pi*f*t), where y is the time signal, f the central frequency, and t time, then you'll get two peaks: one at a frequency corresponding to f, and one at a frequency corresponding to -f.
So, to get to a frequency, you can discard the negative-frequency part. It is located after the positive-frequency part. Furthermore, the first element in the array is the DC offset, i.e. frequency 0. (Beware that this offset is usually much larger than the rest, so the other frequency components might get dwarfed by it.)
In code: (I've written it in python, but it should be equally simple in c#):
import numpy as np
from pylab import *

x = np.random.rand(100)  # create 100 random numbers of which we want the fourier transform
x = x - mean(x)          # make sure the average is zero, so we don't get a huge DC offset
dt = 0.1                 # [s] 1/the sampling rate
fftx = np.fft.fft(x)     # the frequency-transformed part
# now discard anything that we do not need: keep only the positive-frequency half
fftx = fftx[:len(fftx) // 2]
# now create the frequency axis: it runs from 0 to the sampling rate / 2
freq_fftx = np.linspace(0, 1 / (2 * dt), len(fftx))
# and plot a power spectrum
plot(freq_fftx, abs(fftx) ** 2)
show()
Now the frequency is located at the largest peak.
If you are looking at the magnitude results from an FFT of the type most commonly used, then a strong sinusoidal frequency component of real data will show up in two places: once in the bottom half, plus its complex conjugate mirror image in the top half. Those two peaks both represent the same spectral peak and same frequency (for strictly real data). If the FFT result bin numbers start at 0 (zero), then the frequency of the sinusoidal component represented by a peak bin in the bottom half of the FFT result is:
Frequency_of_Peak = Data_Sample_Rate * Bin_number_of_Peak / Length_of_FFT ;
Make sure to work out your proper units within the above equation (to get units of cycles per second, per fortnight, per kiloparsec, etc.)
Note that unless the wavelength of the data is an exact integer submultiple of the FFT length, the actual peak will be between bins, thus distributing energy among multiple nearby FFT result bins. So you may have to interpolate to better estimate the frequency peak. Common interpolation methods to find a more precise frequency estimate are 3-point parabolic and Sinc convolution (which is nearly the same as using a zero-padded longer FFT).
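A sketch of my own of the 3-point parabolic method mentioned above (numpy assumed; the test tone is deliberately placed between bins):
import numpy as np

def parabolic_offset(mag, k):
    # vertex of the parabola through (k-1, a), (k, b), (k+1, c),
    # returned as a fractional bin offset in (-0.5, 0.5)
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    return 0.5 * (a - c) / (a - 2 * b + c)

fs, N = 8000.0, 1024
f_true = 1230.0                        # falls between bins (bin width ~7.81 Hz)
x = np.cos(2 * np.pi * f_true * np.arange(N) / fs)
mag = np.abs(np.fft.rfft(x * np.hanning(N)))
k = np.argmax(mag)
print(fs * (k + parabolic_offset(mag, k)) / N)   # ~1230, vs ~1226.6 for the raw bin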
Assuming you use a discrete Fourier transform to look at frequencies, then you have to be careful about how to interpret the normalized frequencies back into physical ones (i.e. Hz).
According to the FFTW tutorial on how to calculate the power spectrum of a signal:
#include <rfftw.h>
...
{
    fftw_real in[N], out[N], power_spectrum[N/2+1];
    rfftw_plan p;
    int k;
    ...
    p = rfftw_create_plan(N, FFTW_REAL_TO_COMPLEX, FFTW_ESTIMATE);
    ...
    rfftw_one(p, in, out);
    power_spectrum[0] = out[0]*out[0];              /* DC component */
    for (k = 1; k < (N+1)/2; ++k)                   /* (k < N/2 rounded up) */
        power_spectrum[k] = out[k]*out[k] + out[N-k]*out[N-k];
    if (N % 2 == 0)                                 /* N is even */
        power_spectrum[N/2] = out[N/2]*out[N/2];    /* Nyquist freq. */
    ...
    rfftw_destroy_plan(p);
}
Note that it handles data lengths that are not even. In particular, if the data length is even, FFTW will give you a "bin" corresponding to the Nyquist frequency (sample rate divided by 2); otherwise, you don't get it (i.e. the last bin is just below Nyquist).
A MATLAB example is similar, but they choose a length of 1000 (an even number) for the example:
N = length(x);
xdft = fft(x);
xdft = xdft(1:N/2+1);
psdx = (1/(Fs*N)).*abs(xdft).^2;
psdx(2:end-1) = 2*psdx(2:end-1);
freq = 0:Fs/length(x):Fs/2;
In general, it can be implementation (of the DFT) dependent. You should create a test pure sine wave at a known frequency and then make sure the calculation gives the same number.
Frequency = speed/wavelength.
Wavelength is the distance between the two peaks.

Measuring the average thickness of traces in an image

Here's the problem: I have a number of binary images composed of traces of different thickness. Below there are two images that illustrate the problem:
First Image - size: 711 x 643 px
Second Image - size: 930 x 951 px
What I need is to measure the average thickness (in pixels) of the traces in the images. In fact, the average thickness of traces in an image is a somewhat subjective measure. So, what I need is a measure that has some correlation with the radius of the trace, as indicated in the figure below:
Notes
Since the measure doesn't need to be very precise, I am willing to trade precision for speed. In other words, speed is an important factor to the solution of this problem.
There might be intersections in the traces.
The trace thickness might not be constant, but an average measure is OK (even the maximum trace thickness is acceptable).
The trace will always be much longer than it is wide.
I'd suggest this algorithm:
Apply a distance transformation to the image, so that all background pixels are set to 0, all foreground pixels are set to the distance from the background
Find the local maxima in the distance transformed image. These are points in the middle of the lines. Put their pixel values (i.e. distances from the background) image into a list
Calculate the median or average of that list
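A sketch of my own of those three steps in Python (scipy assumed; not code from this answer):
import numpy as np
from scipy import ndimage

def mean_thickness(binary):
    # 1) distance transform: background -> 0, foreground -> distance to background
    dist = ndimage.distance_transform_edt(binary)
    # 2) local maxima: pixels equal to the max of their 3x3 neighbourhood
    ridge = (dist == ndimage.maximum_filter(dist, size=3)) & (dist > 0)
    # 3) average the ridge distances; double them to go from radius to thickness
    return 2.0 * dist[ridge].mean()

# toy example: a 5-pixel-thick horizontal bar
img = np.zeros((50, 200), dtype=bool)
img[20:25, 10:190] = True
print(mean_thickness(img))   # ~6: the 2*distance estimate overshoots by ~1 px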
I was impressed by @nikie's answer, and gave it a try ...
I simplified the algorithm to just take the maximum value, not the mean, avoiding the local-maxima detection step. I think this is enough if the stroke is well behaved (although for self-intersecting lines it may not be accurate).
The program in Mathematica is:
m = Import["http://imgur.com/3Zs7m.png"] (* Get image from web*)
s = Abs[ImageData[m] - 1]; (* Invert colors to detect background *)
k = DistanceTransform[Image[s]] (* White Pxs converted to distance to black*)
k // ImageAdjust (* Show the image *)
Max[ImageData[k]] (* Get the max stroke width *)
The generated result is
The numerical value (28.46 px x 2) fits my measurement of 56 px pretty well (although your value is 100 px :* )
Edit - Implemented the full algorithm
Well ... sort of ... instead of searching the local maxima, finding the fixed point of the distance transformation. Almost, but not quite completely unlike the same thing :)
m = Import["http://imgur.com/3Zs7m.png"]; (*Get image from web*)
s = Abs[ImageData[m] - 1]; (*Invert colors to detect background*)
k = DistanceTransform[Image[s]]; (*White Pxs converted to distance to black*)
Print["Distance to Background*"]
k // ImageAdjust (*Show the image*)
Print["Local Maxima"]
weights =
  Binarize[FixedPoint[ImageAdjust@DistanceTransform[Image[#], .4] &, s]]
Print["Stroke Width = ",
  2 Mean[Select[Flatten[ImageData[k]] Flatten[ImageData[weights]], # != 0 &]]]
As you may see, the result is very similar to the previous one, obtained with the simplified algorithm.
From Here. A simple method!
3.1 Estimating Pen Width
The pen thickness may be readily estimated from the area A and perimeter length L of the foreground
T = A/(L/2)
In essence, we have reshaped the foreground into a rectangle and measured the length of the longest side. Stronger modelling of the pen, for instance as a disc yielding circular ends, might allow greater precision, but rasterisation error would compromise the significance.
While precision is not a major issue, we do need to consider bias and singularities.
We should therefore calculate area A and perimeter length L using functions which take into account "roundedness".
In MATLAB (with img the binary foreground image):
A = bwarea(img)
L = bwarea(bwperim(img, 8))
Since I don't have MATLAB at hand, I made a small program in Mathematica:
m = Binarize[Import["http://imgur.com/3Zs7m.png"]] (* Get Image *)
k = Binarize[MorphologicalPerimeter[m]] (* Get Perimeter *)
p = N[2 Count[ImageData[m], Except[1], 2]/
Count[ImageData[k], Except[0], 2]] (* Calculate *)
The output is 36 Px ...
Perimeter image follows
HTH!
It's been 3 years since the question was asked :)
Following the procedure of @nikie, here is a MATLAB implementation of the stroke width.
clc;
clear;
close all;

I = imread('3Zs7m.png');
X = im2bw(I, 0.8);
subplottight(2,2,1);   % subplottight is a user helper; plain subplot works too
imshow(X);

Dist = bwdist(X);
subplottight(2,2,2);
imshow(Dist, []);

RegionMax = imregionalmax(Dist);
[x, y] = find(RegionMax ~= 0);
subplottight(2,2,3);
imshow(RegionMax);

List(1:numel(x)) = 0;
for i = 1:numel(x)
    List(i) = Dist(x(i), y(i));
end
fprintf('Stroke Width = %f \n', mean(List));
Assuming that the trace has constant thickness, is much longer than it is wide, is not too strongly curved and has no intersections / crossings, I suggest an edge detection algorithm which also determines the direction of the edge, then a rise/fall detector with some trigonometry and a minimization algorithm. This gives you the minimal thickness across a relatively straight part of the curve.
I guess the error to be up to 25%.
First use an edge detector that gives us the information where an edge is and which direction (in 45° or PI/4 steps) it has. This is done by filtering with 4 different 3x3 matrices (Example).
Usually I'd say it's enough to scan the image horizontally, though you could also scan vertically or diagonally.
Assuming line-by-line (horizontal) scanning, once we find an edge, we check if it's a rise (going from background to trace color) or a fall (to background). If the edge's direction is at a right angle to the direction of scanning, skip it.
If you found one rise and one fall with the correct directions and without any disturbance in between, measure the distance from the rise to the fall. If the direction is diagonal, multiply by squareroot of 2. Store this measure together with the coordinate data.
The algorithm must then search along an edge (can't find a web resource on that right now) for neighboring (by their coordinates) measurements. If there is a local minimum with a padding of maybe 4 to 5 size units to each side (a value to play with - larger: less information, smaller: more noise), this measure qualifies as a candidate. This is to ensure that the ends of the trail or a section bent too much are not taken into account.
The minimum of that would be the measurement. Plausibility check: If the trace is not too tangled, there should be a lot of values in that area.
Please comment if there are more questions. :-)
Here is an answer that works in any computer language, without the need for special functions...
Basic idea: Try to fit a circle into the black areas of the image. If you can, try with a bigger circle.
Algorithm:
set image background = 0 and trace = 1
initialize array result[]
set minimalExpectedWidth
set w = minimalExpectedWidth
loop
    set counter = 0
    create a matrix of zeros of size w x w
    within a circle of diameter w in that matrix, put ones
    calculate the area of the circle (= PI * (w/2)^2)
    loop through all pixels of the image
        optimization: if the current pixel is of background color -> continue loop
        multiply the matrix with the image at each pixel (i.e. filter the image with that matrix)
        (you can do this using the current x and y position and a double for loop from 0 to w)
        take the sum of the result of each multiplication
        if the sum equals the calculated circle's area, increment counter by one
    store counter in result[w - minimalExpectedWidth]
    increment w by one
    optimization: include the algorithm from further down here
while counter is greater than zero
Now the result array contains the number of matches for each tested width.
Graph it to have a look at it.
For a width of one, this will be equal to the number of pixels of trace color. For greater width values, fewer circles will fit into the trace. The result array will thus steadily decrease until there is a sudden drop. This is because the filter matrix with the circular area of that width now only fits into intersections.
Right before the drop is the width of your trace. If the width is not constant, the drop will not be that sudden.
I don't have MATLAB here for testing and don't know for sure about a function to detect this sudden drop, but we do know that the decrease is continuous, so I'd take the maximum of the second derivative of the (zero-based) result array like this
Algorithm:
set maximum = 0
set widthFound = 0
set minimalExpectedWidth as above
set prevvalue = result[0]
set index = 1
set prevFirstDerivative = result[1] - prevvalue
loop until index is greater than result length
    firstDerivative = result[index] - prevvalue
    secondDerivative = firstDerivative - prevFirstDerivative
    if abs(secondDerivative) > maximum
        maximum = abs(secondDerivative)
        widthFound = index + minimalExpectedWidth
    prevFirstDerivative = firstDerivative
    prevvalue = result[index]
    increment index by one
return widthFound
Now widthFound is the trace width for which (in relation to width + 1) many more matches were found.
I know that this is in part covered in some of the other answers, but my description is pretty much straightforward and you don't have to have learned image processing to do it.
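The per-pixel multiply-and-sum test above is exactly a binary erosion (a position passes when the whole disc fits inside the trace), so in Python it can be sketched with scipy, something like this (my sketch, with a toy 7-px bar):
import numpy as np
from scipy.ndimage import binary_erosion

def fit_counts(trace, min_w, max_w):
    counts = []
    for w in range(min_w, max_w + 1):
        # disc of diameter w: ones inside the circle, zeros outside
        yy, xx = np.mgrid[:w, :w]
        disc = (xx - (w - 1) / 2) ** 2 + (yy - (w - 1) / 2) ** 2 <= (w / 2) ** 2
        # erosion marks every pixel where the full disc fits in the trace
        counts.append(int(binary_erosion(trace, disc).sum()))
    return counts   # look for the sudden drop, as described above

img = np.zeros((40, 120), dtype=bool)
img[15:22, 5:115] = True         # a 7-pixel-thick bar
print(fit_counts(img, 3, 10))    # counts collapse to 0 once w exceeds 7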
I have an interesting solution:
Do edge detection to extract the edge pixels.
Do a physical simulation: consider the edge pixels as positively charged particles.
Now put some number of free positively charged particles in the stroke area.
Calculate the electrical force equations to determine the movement of these free particles.
Simulate the particle movement for some time, until the particles reach an equilibrium position.
(As they are repelled from both stroke edges, after some time they will settle along the middle line of the stroke.)
Now stroke thickness/2 would be the average distance from an edge particle to the nearest free particle.
