I am attempting to extract pitch data from an audio stream. From what I can see, it looks as though FFT is the best algorithm to use.
Rather than digging straight into the math, could someone help me understand what this FFT algorithm does?
Please don't say something obvious like 'FFT extracts frequency data from a raw signal.' I need the next level of detail.
What do I pass in, and what do I get out?
Once I understand the interface clearly, this will help me to understand the implementation.
I take it I need to pass in an audio buffer, I need to tell it how many bytes to use for each computation (say the most recent 1024 bytes from this buffer), and maybe I need to specify the range of pitches I want it to detect. Now what is it going to pass back? An array of frequency bins? What are these?
(Edit:) I have found a C++ algorithm to use (if only I can understand it).
Performous extracts pitch from the microphone. Also the code is open source. Here is a description of what the algorithm does, from the guy that coded it.
PCM input (with buffering)
FFT (1024 samples at a time, remove 200 samples from front of the buffer afterwards)
Reassignment method (against the previous FFT that was 200 samples earlier)
Filtering of peaks (this part could be done much better or even left out)
Combining peaks into sets of harmonics (we call the combination a tone)
Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
Pick the best vocal tone (frequency limits, weighting, could use the harmonic array also but I don't think we do)
But could someone help me understand how this works? What is it that is getting sent from the FFT to the Reassignment method?
The FFT is just one building block in the process, and it may not be the best approach for pitch detection. Read up on pitch detection and decide which algorithm you want to use first (this will depend on what exactly you are trying to measure the pitch of: speech, a single musical instrument, other types of sound, etc.). Get this right before getting into low-level details such as the FFT (some, but not all, pitch detection algorithms use the FFT internally).
There are numerous similar questions on SO already, e.g. Real-time pitch detection using FFT and Pitch detection using FFT for trumpet, and there is good overview material on Wikipedia etc - read these and then decide whether you still want to roll your own FFT-based solution or perhaps use an existing library which is suitable for your particular application.
There is an element of choice here. The most straightforward version to implement takes 2^n complex numbers in and produces 2^n complex numbers out, so maybe you should start with that.
In the special case of a DCT (discrete cosine transform), what typically goes in is 2^n samples (often floats), and out come 2^n values, often floats too. A DCT is like an FFT that takes only real values and analyses the function in terms of cosines.
It is smart (but commonly skipped) to define a struct to handle the complex values. Traditionally FFTs are done in-place, but it works fine if you don't do that.
It can be useful to instantiate a class that contains a work buffer for the FFT (if you don't want to do the FFT in-place), and reuse that for several FFTs.
In go N samples of PCM (purely real complex numbers). Out come N bins of the frequency domain (each bin corresponding to a 1/N slice of the sample rate). Each bin is a complex number. Rather than real and imaginary parts, these values should generally be handled in polar form (absolute value and argument). The absolute value tells the amount of sound near the bin's center frequency, while the argument tells the phase (where in its cycle the sine wave is).
Most often coders only use the magnitude (absolute value) and throw away the phase angle (argument).
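To make that interface concrete, here is a minimal sketch in C++. It uses a naive O(N^2) DFT rather than a real FFT so it stays self-contained; a proper FFT library (FFTW, KissFFT, etc.) computes exactly the same bins, just much faster.

    #include <cmath>
    #include <complex>
    #include <vector>

    // N PCM samples in (purely real), N complex frequency bins out.
    std::vector<std::complex<double>> dft(const std::vector<double>& pcm) {
        const std::size_t n = pcm.size();
        std::vector<std::complex<double>> bins(n);
        for (std::size_t k = 0; k < n; ++k)       // one output bin at a time
            for (std::size_t t = 0; t < n; ++t)   // sum over all input samples
                bins[k] += pcm[t] * std::polar(1.0, -2.0 * M_PI * k * t / n);
        return bins;
    }

Bin k corresponds to frequencies near k * sampleRate / N; std::abs(bins[k]) is the magnitude and std::arg(bins[k]) is the phase. For a 1024-sample buffer at 44100 Hz, for example, bin 23 sits near 23 * 44100 / 1024 ≈ 990 Hz. Note that you do not tell the transform a pitch range; you get all N bins and simply ignore the ones outside the range you care about (and the upper half, which mirrors the lower half for real input).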
Articles on image compression often focus on generating the best possible image quality (PSNR) given a fixed compression ratio. I'm curious about getting the best possible compression ratio given a maximum permissible per-pixel error. My instinct is to greedily remove the smallest coefficients in the transformed data, keeping track of the error introduced, until I can't remove any more without exceeding the maximum error. But I can't find any papers to confirm that this is the right approach. Can anyone point me to a reference about this problem?
edit
Let me give some more details. I'm trying to compress depth images from 3D scanners, not regular images. Color is not a factor. Depth images tend to have large smooth patches, but accurate discontinuities are important. Some pixels will be empty - outside the scanner's range or low confidence level - and not require compression.
The algorithm will need to run fast - optimally at 30 fps like the Microsoft Kinect, or at least somewhere in the 100 millisecond area. The algorithm will be included in a library I distribute. I prefer to minimize dependencies, so compression schemes that I can implement myself in a reasonably small amount of code are preferable.
This answer won't satisfy your request for references, but it's too long to post as a comment.
First, depth buffer compression for computer generated imagery may apply to your case. Usually this compression is done at the hardware level with a transparent interface, so it's typically designed to be simple and fast. Given this, it may be worth your while to search for depth buffer compression.
One of the major issues you're going to have with transform-based compressors (DCTs, Wavelets, etc...) is that there's no easy way to find compact coefficients that meet your hard maximum error criteria. (The problem you end up with looks a lot like linear programming. Wavelets can have localized behavior in most of their basis vectors which can help somewhat, but it's still rather inconvenient.) To achieve the accuracy you desire you may need to add on another refinement step, but this will also add more computation time and complexity, and will introduce another layer of imperfect entropy coding, leading to a loss of compression efficiency.
What you want is more akin to lossless compression than lossy compression. In this light, one approach would be to simply throw away the bits under your error threshold: if your maximum allowable error is X and your depths are represented as integers, integer-divide your depths by X and then apply lossless compression.
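A sketch of that idea, assuming non-negative integer depths (rounding keeps the per-pixel reconstruction error within X/2; plain truncating division keeps it within X):

    #include <cstdint>
    #include <vector>

    // Quantize to multiples of X, then hand the quotients to any lossless
    // coder (zlib, PNG, ...). Decode by multiplying back by X.
    std::vector<int32_t> quantize(const std::vector<int32_t>& depth, int32_t X) {
        std::vector<int32_t> q(depth.size());
        for (std::size_t i = 0; i < depth.size(); ++i)
            q[i] = (depth[i] + X / 2) / X;   // round to the nearest multiple
        return q;
    }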
Another issue you're facing is the representation of depth -- depending on your circumstances it may be a floating-point number, an integer, it may be in a projective coordinate system, or something even more bizarre.
Given these restrictions, I recommend a scheme like BTPC (binary tree predictive coding), as it allows for a more easily adapted wavelet-like scheme where errors are more clearly localized and easier to understand and account for. Additionally, BTPC has shown great resilience across many types of images and a good ability to handle continuous gradients and sharp edges with low loss of fidelity -- exactly the sorts of traits you're looking for.
Since BTPC is predictive, it doesn't matter particularly how your depth format is stored -- you just need to modify your predictor to take your coordinate system and numeric type (integer vs. floating) into account.
Since BTPC doesn't do terribly much math, it can run pretty fast on general CPUs, too, although it may not be as easy to vectorize as you'd like. (It sounds like you're possibly doing low level optimized game programming, so this may be a serious consideration for you.)
If you're looking for something simpler to implement I'd recommend a "filter" type of approach (similar to PNG) with a Golomb-Rice coder strapped on. Rather than coding the deltas perfectly to end up with lossless compression, you can code to a "good enough" degree. The advantage of doing this as compared to a quantize-then-lossless-encode style compressor is that you can potentially maintain more continuity.
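A rough sketch of that approach, with a left-neighbour predictor and a Rice coder (the parameter k, the one-bit-per-byte output, and the quantizer are illustrative choices, not a tuned implementation):

    #include <cstdint>
    #include <vector>

    // Append one bit (stored one per byte for clarity, not compactness).
    static void putBit(std::vector<uint8_t>& out, int bit) { out.push_back(bit); }

    // Rice code: quotient v >> k in unary, then the k low bits verbatim.
    static void riceEncode(std::vector<uint8_t>& out, uint32_t v, int k) {
        for (uint32_t q = v >> k; q > 0; --q) putBit(out, 1);
        putBit(out, 0);
        for (int i = k - 1; i >= 0; --i) putBit(out, (v >> i) & 1);
    }

    // Code each pixel's residual against the left neighbour, quantized so
    // the reconstruction never drifts more than maxErr from the true value.
    static void encodeRow(const std::vector<int>& row, int maxErr, int k,
                          std::vector<uint8_t>& out) {
        int prev = 0;                 // the decoder starts from the same value
        const int step = 2 * maxErr + 1;
        for (int pix : row) {
            int r = pix - prev;
            int q = (r >= 0) ? (r + maxErr) / step : -((-r + maxErr) / step);
            uint32_t zz = (q >= 0) ? 2u * q : 2u * (-q) - 1;  // zigzag map
            riceEncode(out, zz, k);
            prev += q * step;         // track the decoder's reconstruction
        }
    }

Because prev tracks the decoder's reconstruction rather than the true pixel, the error bound holds per pixel instead of accumulating along the row.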
"greedily remove the smallest coefficients" reminds me of SVD compression, where you use the data associated with the first k largest eigenvalues to approximate the data. The rest of the eigenvalues that are small don't hold significant information and can be discarded.
Large k -> high quality, low compression
Small k -> lower quality, high compression
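As a sketch of the idea, using Eigen here purely for illustration (an assumed dependency), the rank-k reconstruction keeps the k largest singular values and their vectors:

    #include <Eigen/Dense>

    // Approximate A by keeping only the k largest singular values.
    Eigen::MatrixXf svdCompress(const Eigen::MatrixXf& A, int k) {
        Eigen::JacobiSVD<Eigen::MatrixXf> svd(
            A, Eigen::ComputeThinU | Eigen::ComputeThinV);
        return svd.matrixU().leftCols(k)
             * svd.singularValues().head(k).asDiagonal()
             * svd.matrixV().leftCols(k).transpose();
    }

The compressed representation is the k columns of U and V plus the k singular values, so storage scales with k(rows + cols + 1) instead of rows * cols.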
(disclaimer: I have no idea what I'm talking about here, but it might help)
edit:
here is a better illustration of SVD compression
I am not aware of any references for the problem you have proposed.
However, one direction I can think of is using optimization techniques to select the best coefficients. Techniques like genetic algorithms, hill climbing, and simulated annealing can be used in this regard.
Given that I have experience with genetic algorithms, I can suggest the following process. If you are not familiar with genetic algorithms, I recommend reading the wiki page on them.
Your problem can be thought of as selecting a subset of coefficients which gives the minimum reconstruction error. Say there are N coefficients. It is easy to establish that there are 2^N subsets. Each subset can be represented by a string of N binary digits. For example, for N=5,
the string 11101 means that the selected subset contains all the coefficients except coefficient 4. With genetic algorithms it is possible to find an optimal bit string. The objective function can be chosen as the absolute error between the reconstructed and the original signals. Note, however, that you get an error of zero when all the coefficients are kept.
To get around this problem, you may choose to modulate the objective function with an appropriate function which discourages objective function values near zero and is monotonically increasing after a threshold. A function like |log(\epsilon + f)| may suffice.
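As a tiny sketch of that modulation (epsilon here is an arbitrary small constant; the reconstruction error itself comes from whatever inverse transform you are using):

    #include <cmath>

    // Fitness to minimize: large both for errors near zero (which would
    // favour keeping every coefficient) and for very large errors.
    double fitness(double reconstructionError, double epsilon = 1e-9) {
        return std::fabs(std::log(epsilon + reconstructionError));
    }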
If what I propose seems interesting to you, do let me know. I have an implementation of a genetic algorithm with me. But it is tailored to my needs and you might not be in a position to adapt it for this problem. I am willing to work with you on this problem, as it seems interesting to explore.
Do let me know.
I think you are pretty close to the solution, but there is an issue that I think deserves some attention.
Because different wavelet coefficients correspond to functions with different scales (and shifts), the error introduced by eliminating a particular coefficient depends not only on its value but also on its position (especially its scale). So the weight of a coefficient should be something like w(c) = amp(c) * F(scale, shift), where amp(c) is the amplitude of the coefficient and F is a function that depends on the data being compressed. Once you determine weights like that, the problem reduces to the knapsack problem, which can be solved in many ways (for example, reorder the coefficients and eliminate the smallest ones until you hit the threshold error on a pixel affected by the corresponding function).
The hard part is determining F(scale, shift). You can do it in the following way: if the data you are compressing is relatively stable (for example, surveillance video), you can estimate F as the average probability of getting an unacceptable error when the component with the given scale and shift is eliminated from the wavelet decomposition. So you could perform an SVD (or PCA) decomposition on historical data and calculate F(scale, shift) as a sum of scalar products of the wavelet function with the eigenvectors, weighted by the eigenvalues:

F(scale, shift) = sum_i eValue(i) * <w(scale, shift), eVector(i)>

where eValue(i) is the eigenvalue corresponding to the eigenvector eVector(i), and w(scale, shift) is the wavelet function with the given scale and shift.
Iteratively evaluating different sets of coefficients will not help your goal of being able to compress frames as quickly as they are generated, and will not help you to keep complexity low.
Depth maps are different from intensity maps in several ways that can help you.
Large areas of "no data" can be handled very efficiently by run-length encoding.
Measurement error in intensity images is constant across the image after fixed-noise has been subtracted, but depth maps from both Kinects and stereo vision systems have errors that increase as an inverse function of depth. If these are the scanners you are targeting then you can use lossier compression for closer pixels - because the errors your lossy function introduces are independent of sensor error, the total error won't be increased until your lossy function's error is greater than the sensor error.
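As a toy illustration of the run-length point above (assuming, for the sake of the example, a sentinel depth value for "no data" pixels):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Collapse the image into (value, count) pairs; large empty regions
    // (runs of the sentinel value) shrink to a single pair each.
    std::vector<std::pair<uint16_t, uint32_t>>
    runLengthEncode(const std::vector<uint16_t>& pixels) {
        std::vector<std::pair<uint16_t, uint32_t>> runs;
        for (uint16_t p : pixels) {
            if (!runs.empty() && runs.back().first == p)
                ++runs.back().second;
            else
                runs.emplace_back(p, 1);
        }
        return runs;
    }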
A team at Microsoft had a lot of success with a very low-loss algorithm that relied heavily on run-length encoding (see paper here), beating out JPEG 2000 with better compression and excellent performance; however, part of their success seemed to stem from the relatively crude depth maps their sensor produces. If you are targeting Kinects, you may find it hard to improve on their method.
I think you are looking for something like the JPEG-LS algorithm, which tries to limit the maximum per-pixel error. However, it is mainly designed for compression of natural or medical images and is not well suited to depth images (which are smoother).
The term "near-lossless compression" refers to a lossy algorithm for which each reconstructed image sample differs from the corresponding original image sample by not more than a pre-specified value, the (usually small) "loss". Lossless compression corresponds to loss = 0. (link to the original reference)
I'd try preprocessing the image, then compressing with a general method, such as PNG.
Preprocessing for PNG (first read this)
    // C++ version of the idea: snap each pixel to its left or upper
    // neighbour when the difference is under the threshold, so PNG's
    // predictors see long runs of identical values. (std::abs is in <cstdlib>.)
    for (int y = 1; y < height; ++y)
        for (int x = 1; x < width; ++x) {
            if (std::abs(A[y][x - 1] - A[y][x]) < threshold)
                A[y][x] = A[y][x - 1];
            else if (std::abs(A[y - 1][x] - A[y][x]) < threshold)
                A[y][x] = A[y - 1][x];
        }
I have the following problem: I have 2 signals over time. They are from the same source so they should be the same. I want to check if they really are.
Complications:
they may be measured with different sample rates
the start/end times do not coincide; the measurements do not start and end at the same time.
there may be a time offset between the two signals.
My thoughts run along the lines of Fourier transforms, convolution, and statistical methods for comparison. Can someone post some links where I can find more information on how to handle this?
You can easily correct for the phase by just shifting them so their centers of mass line up. (Or alternatively, in the Fourier domain just multiplying by the inverse of the phase of the first coefficient.)
Similarly, if you want to line up the images given only partial data, you can just cross correlate and take the maximal value (which is again easy to do in the Fourier domain).
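A brute-force time-domain sketch of that (O(N*M); for long signals you would do the same thing with FFTs as described):

    #include <vector>

    // Slide b across a and return the lag where the correlation peaks.
    long bestLag(const std::vector<double>& a, const std::vector<double>& b) {
        const long n = (long)a.size(), m = (long)b.size();
        long bestShift = 0;
        double best = -1e300;
        for (long lag = -(m - 1); lag < n; ++lag) {
            double sum = 0;
            for (long i = 0; i < m; ++i) {
                const long j = i + lag;            // index into a
                if (j >= 0 && j < n) sum += a[j] * b[i];
            }
            if (sum > best) { best = sum; bestShift = lag; }
        }
        return bestShift;   // shift b by this many samples to line up with a
    }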
That leaves the only tricky part of this process as dealing with the sampling rates. Now if you know a-priori what the sample rates are, (and if they are related by a rational number), you can just use sinc interpolation/downsampling to rescale them to a common sampling rate:
https://ccrma.stanford.edu/~jos/st/Bandlimited_Interpolation_Time_Limited_Signals.html
If you don't know the sampling rate, you may be a bit screwed. Technically, you can try just brute forcing over all the different rescalings of your signal, but doing this tends to be either slow or else give mediocre results.
As a last suggestion, if you just want to match sounds exactly you can try using the cepstrum and verifying that the peaks of the signal are close enough to within some tolerance. This type of analysis is used a lot in sound and speech recognition, with some refinements to make it operate a bit more locally. It tends to work best with frequency modulated data like speech and music:
http://en.wikipedia.org/wiki/Cepstrum
Fourier transformation does sound like the right way.
There is too much mathematical background for me to just start explaining here, so if you really want to know what's going on (and I don't think you can just use the FT without understanding it), you should use this reference from MIT OpenCourseWare: http://ocw.mit.edu/courses/mathematics/18-103-fourier-analysis-theory-and-applications-spring-2004/lecture-notes/
Hope it helped.
If you are working with a linux box and the waveforms that need to be processed have already been recorded, you can try to use the file command to display details about the recording. It gives you the sampling rate when it is invoked on a wav file, though I am not sure what format you are recording in.
If the signals are time-shifted with respect to each other, you may try convolving one with a delta function at increasing delays and then comparing. In MATLAB, conv and friends should be good enough.
These are just 'crude' attempts (almost like hacking at the problem). There may be shift-invariant algorithms that do a better job.
Hope that helps.
I currently have an array full of data which, from what I believe, is the amplitude of my wave file. The values range from a low of -32768 to a high of 32767.
I also have the SampleRate, which was 16,000 Hz.
My understanding of sound isn't very good; does anyone know from this how I can calculate the Frequency?
Help greatly appreciated,
Monkeyguy.
What exactly is it that you're wanting to do? The method will depend entirely on what you're hoping to achieve. Do you have a signal that contains a single sinusoid, e.g. a detector from a piece of mechanical equipment? Or, more likely, are you wanting to play/sing into a microphone and transcribe the music?
In both cases, the FFT will be your first port of call. In the first case this may be pretty much all you need, as FFTs are good for isolated steady-state sinusoids. In the latter case, you have a very long road ahead of you in order to get any useful results at all. Pitch recognition is a difficult problem, and merely throwing some FFTs at it won't get you very far. You'll need a good grounding in digital signal processing and in the characteristics of musical signals, and then probably your best bet is to use an autocorrelation-based method instead.
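For what it's worth, the core of an autocorrelation estimator is short; what makes pitch detection hard is everything around it (normalization, octave errors, voicing decisions, and so on). A bare-bones sketch, with an assumed vocal-ish search range:

    #include <vector>

    // Search for the lag at which the signal best matches a shifted copy
    // of itself; that lag is one period of the dominant pitch.
    double estimatePitch(const std::vector<double>& x, double sampleRate,
                         double fMin = 80.0, double fMax = 1000.0) {
        const int minLag = (int)(sampleRate / fMax);
        const int maxLag = (int)(sampleRate / fMin);
        int bestLag = minLag;
        double best = -1e300;
        for (int lag = minLag; lag <= maxLag && lag < (int)x.size(); ++lag) {
            double sum = 0;
            for (std::size_t i = 0; i + lag < x.size(); ++i)
                sum += x[i] * x[i + lag];
            if (sum > best) { best = sum; bestLag = lag; }
        }
        return sampleRate / bestLag;   // period in samples -> frequency in Hz
    }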
See my previous answer on a related subject for some links which may be useful: Algorithms for determining the key of an audio sample
In almost all cases, an audio file has no single frequency. A sound in which the sound wave has a single frequency is (typically) a pure sine tone, and sounds like this:
http://www.wolframalpha.com/input/?i=sound+440+Hz&a=*MC.~-_*PlaySoundTone-&a=*FS-_**DopplerShift.fo-.*DopplerShift.vs-.*DopplerShift.c--&f3=10+m/s&f=DopplerShift.vs_10+m/s&f4=340.3+m/s&f=DopplerShift.c_340.3+m/s&a=*FVarOpt.1-_***DopplerShift.fo-.*DopplerShift.fs--.***DopplerShift.DopplerRatio---.*--&a=*FVarOpt.2-_**-.***DopplerShift.vo--.**DopplerShift.vw---.**DopplerShift.fo-.*DopplerShift.fs---
This is a pure 440 Hz sine wave. (It was not possible to make a proper link of this, due to MarkDown limitations.)
A general sound, such as a recording (of speech, music, or just urban noise), consists of (an infinite number of) combinations of such sine waves, superimposed. That is, if you were to draw the graph of pressure vs. time (at a given point in space) of the wave, or, (more or less) equivalently, the position of the speaker's membrane as a function of time, it would not be a pure sine wave, but something much more complicated. (Indeed, how could all the information of a Beethoven symphony be represented in a simple sine wave, which is completely determined by only its frequency, a single number?)
The sampling rate of a digital recording is merely the number of samples per second of the sound wave. A physical sound wave has an amplitude p(t) at each time t, so, because there are an infinite number of times t between 0 s and 10 s (say), theoretically we would need an infinite number of bytes to save the audio, each sample requiring a fixed number of bytes. For instance, a 16-bit recording uses 16 bits, or 2 bytes, per sample; the higher the bit depth, the higher the quality, since for 16-bit sound we have 2^16 = 65536 levels to choose from when specifying a single sample. In practice, a sound is sampled, so that the amplitude p(t) is saved only at fixed intervals. For instance, a typical audio CD has a sampling rate of 44.1 kHz; that is, a sample is saved every 22.7 µs.
Hence, a pure sine wave of any frequency, or any recording, could be stored on a computer using any sampling rate, the quality of the recording determined by the sampling rate (the higher the better). [Technical note: Of course there is a lower limit (in some sense) on the sampling rate. This is called the Nyquist rate.]
To determine the mean frequency of the sound at any small time, you could use some advanced techniques from Fourier analysis, but it is not entirely trivial.
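If you do go down that road, the last step is at least simple: given the magnitude spectrum of a block of samples (from an FFT), the strongest bin converts directly to a frequency. A sketch:

    #include <vector>

    // Find the strongest bin, skipping bin 0 (the DC offset) and the upper
    // half of the spectrum (a mirror image for real-valued input).
    double dominantFrequency(const std::vector<double>& magnitude,
                             double sampleRate) {
        std::size_t peak = 1;
        for (std::size_t k = 2; k < magnitude.size() / 2; ++k)
            if (magnitude[k] > magnitude[peak]) peak = k;
        return peak * sampleRate / magnitude.size();
    }

With a 1024-sample block at 16,000 Hz, the bins are 16000 / 1024 ≈ 15.6 Hz apart, which bounds the precision of this estimate.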
This is just what I remember from physics, and I'm definitely no music expert, either.
Unless it's a recording of a constant tone, it probably doesn't have a single frequency. Each tone has a different frequency, which is why they sound different. There is generally a relationship between a wave's (not wav) frequency and wavelength, but none that I know of regarding amplitude.
Your SampleRate is similar to a frequency, being measured in Hz, but it only tells you about the precision of the recording, rather than the actual frequency of the sounds recorded.
As a quick addendum to the other two answers, if you are trying to measure frequencies within the sound file itself, you will need to look into the Fast Fourier Transform (FFT), which is an algorithm used to determine the strength of the frequencies within a sampled data set.
While it's true that an audio recording will not have a single frequency, you can find the fundamental frequency easily enough. Start at the beginning of your sample, and trace through it; you're looking for the highest absolute value, and in a wave with multiple frequencies you will not know what it is until you're back to zero. Remember the highest or lowest value you've seen thus far. Now, trace forward, hopefully in the opposite direction. You're looking for the next peak or trough of similar absolute value as the one you found, using the same method as before. Find out how many samples there are between your two highest absolute value readings. Divide your sample rate by this number (it better not be zero) and then divide by 2. This is the lowest, or fundamental, frequency of your recording at this point.
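A literal sketch of that procedure (the 3/4 "similar value" tolerance is an arbitrary choice here; noise and strong harmonics will confuse this heuristic):

    #include <cstdlib>
    #include <vector>

    double roughFundamental(const std::vector<int>& s, double sampleRate) {
        // Find the sample with the largest absolute value.
        std::size_t peak = 0;
        for (std::size_t i = 1; i < s.size(); ++i)
            if (std::abs(s[i]) > std::abs(s[peak])) peak = i;
        // Trace forward to the next extremum of opposite sign and
        // comparable magnitude.
        for (std::size_t i = peak + 1; i < s.size(); ++i) {
            const bool oppositeSign = (s[i] > 0) != (s[peak] > 0);
            const bool similarSize  =
                std::abs(s[i]) >= std::abs(s[peak]) * 3 / 4;
            if (oppositeSign && similarSize)
                return sampleRate / (double)(i - peak) / 2.0;  // half a period
        }
        return 0.0;   // no matching trough found
    }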
You can also generate a sinusoidal function that represents a synthetic waveform at a given frequency, and subtract this waveform's instantaneous values from your sample. Find the difference in the root-mean-square amplitudes of the before and after samples. This difference is a rough approximation of the amplitude of the signal at that frequency. Repeat this process, doubling the frequency each time. You can use this to create a basic EQ spectrum.
I would like to get some sort of distance measure between two pieces of audio. For example, I want to compare the sound of an animal to the sound of a human mimicking that animal, and then return a score of how similar the sounds were.
It seems like a difficult problem. What would be the best way to approach it? I was thinking to extract a couple of features from the audio signals and then do a Euclidian distance or cosine similarity (or something like that) on those features. What kind of features would be easy to extract and useful to determine the perceptual difference between sounds?
(I saw somewhere that Shazam uses hashing, but that's a different problem because there the two pieces of audio being compared are fundamentally the same, but one has more noise. Here, the two pieces of audio are not the same, they are just perceptually similar.)
The process for comparing a set of sounds for similarities is called Content Based Audio Indexing, Retrieval, and Fingerprinting in computer science research.
One method of doing this is to:
Run several bits of signal processing on each audio file to extract features, such as pitch over time, frequency spectrum, autocorrelation, dynamic range, transients, etc.
Put all the features for each audio file into a multi-dimensional array and dump each multi-dimensional array into a database
Use optimization techniques (such as gradient descent) to find the best match for a given audio file in your database of multi-dimensional data.
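The distance part of that pipeline is the easy bit. Once each file is boiled down to a numeric feature vector, something like cosine similarity gives a score; a sketch:

    #include <cmath>
    #include <vector>

    // Cosine similarity between two equal-length feature vectors:
    // 1.0 means identical direction, 0.0 means unrelated.
    double cosineSimilarity(const std::vector<double>& a,
                            const std::vector<double>& b) {
        double dot = 0, na = 0, nb = 0;
        for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12);
    }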
The trick to making this work well is which features to pick. Doing this automatically and getting good results can be tricky. The guys at Pandora do this really well, and in my opinion they have the best similarity matching around. They encode their vectors by hand though, by having people listen to music and rate them in many different ways. See their Music Genome Project and List of Music Genome Project attributes for more info.
For automatic distance measurements, there are several projects that do stuff like this, including Marsyas, MusicBrainz, and EchoNest.
Echonest has one of the simplest APIs I've seen in this space. Very easy to get started.
I'd suggest looking into spectrum analysis. Whilst this isn't as straightforward as what you're most likely wanting, I'd expect that decomposing the audio into its underlying frequencies would provide some very useful data to analyse. Check out this link.
Your first step will definitely be taking a Fourier Transform (FT) of the sound waves. If you perform an FT on the data with respect to frequency over time [1], you'll be able to compare how often certain key frequencies are hit over the course of the noise.
Perhaps you could also subtract one wave from the other, to get a sort of stepwise difference function. Assuming the mock noise follows the same frequency and pitch trends [2] as the original noise, you could calculate the line of best fit to the points of the difference function. Comparing that best-fit line against a line of best fit taken from the original sound wave, you could average out a trend line to use as the basis of comparison. Granted, this would be a very loose comparison method.
[1] Hz/ms, perhaps? I'm not familiar with the unit magnitude being worked with here; I generally work in the femto- to nano- range.
[2] So long as ∀ΔT, ΔPitch/ΔT and ΔFrequency/ΔT are within some tolerance x.
(Edited for formatting, and because I actually forgot to finish writing the full answer.)
I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of the Lanczos and other more sophisticated methods for even higher quality image scaling, and I am very curious how they work.
Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality?
I do have a background in Fourier analysis and have done some signal processing stuff in the past, but not with relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :)
EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation.
(Note: I have already read the Wikipedia article on Lanczos resampling but it didn't have nearly enough detail for me)
The selection of a particular filter for image processing is something of a black art, because the main criterion for judging the result is subjective: in computer graphics, the ultimate question is almost always: "does it look good?". There are a lot of good filters out there, and the choice between the best frequently comes down to a judgement call.
That said, I will go ahead with some theory...
Since you are familiar with Fourier analysis for signal processing, you don't really need to know much more to apply it to image processing -- all the filters of immediate interest are "separable", which basically means you can apply them independently in the x and y directions. This reduces the problem of resampling a (2-D) image to the problem of resampling a (1-D) signal. Instead of a function of time (t), your signal is a function of one of the coordinate axes (say, x); everything else is exactly the same.
Ultimately, the reason you need to use a filter at all is to avoid aliasing: if you are reducing the resolution, you need to filter out high-frequency original data that the new, lower resolution doesn't support, or it will be added to unrelated frequencies instead.
So. While you're filtering out unwanted frequencies from the original, you want to preserve as much of the original signal as you can. Also, you don't want to distort the signal you do preserve. Finally, you want to extinguish the unwanted frequencies as completely as possible. This means -- in theory -- that a good filter should be a "box" function in frequency space: with zero response for frequencies above the cutoff, unity response for frequencies below the cutoff, and a step function in between. And, in theory, this response is achievable: as you may know, a straight sinc filter will give you exactly that.
There are two problems with this. First, a straight sinc filter is unbounded, and doesn't drop off very fast; this means that doing a straightforward convolution will be very slow. Rather than direct convolution, it is faster to use an FFT and do the filtering in frequency space...
However, if you actually do use a straight sinc filter, the problem is that it doesn't actually look very good! As the related question says, perceptually there are ringing artifacts, and practically there is no completely satisfactory way to deal with the negative values that result from "undershoot".
Finally, then: one way to deal with the problem is to start out with a sinc filter (for its good theoretical properties), and tweak it until you have something that also solves your other problems. Specifically, this will get you something like the Lanczos filter:
Lanczos filter: L(x) = sinc(pi x) sinc(pi x/a) box(|x|<a)
frequency response: F[L(x)](f) = box(|f|<1/2) * box(|f|<1/2a) * sinc(2 pi f a)
[note that "*" here is convolution, not multiplication]
[also, I am ignoring normalization completely...]
the sinc(pi x) determines the overall shape of the frequency response (for larger a, the frequency response looks more and more like a box function)
the box(|x|<a) gives it finite support, so you can use direct convolution
the sinc(pi x/a) smooths out the edges of the box and (consequently? equivalently?) greatly improves the rejection of undesirable high frequencies
the last two factors ("the window") also tone down the ringing; they make a vast improvement in both the perceptual artifact and the practical incidence of "undershoot" -- though without completely eliminating them
Please note that there is no magic about any of this. There are a wide variety of windows available, which work just about as well. Also, for a=1 and 2, the frequency response does not look much like a step function. However, I hope this answers your question "why sinc", and gives you some idea about frequency responses and so forth.
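To tie the theory to code, here is a sketch of 1-D Lanczos resampling; for images you would run it along x and then along y, since the filter is separable as noted above. (Caveat: when downscaling, the kernel should also be stretched by the scale factor so it acts as the low-pass filter discussed above; that is omitted here for brevity.)

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // L(x) = sinc(pi x) sinc(pi x/a) for |x| < a, else 0.
    double lanczos(double x, int a) {
        if (x == 0.0) return 1.0;
        if (std::fabs(x) >= a) return 0.0;
        const double px = M_PI * x;
        return a * std::sin(px) * std::sin(px / a) / (px * px);
    }

    // Resample src to dstLen samples by direct convolution with the
    // windowed kernel, normalizing by the weight sum and clamping at edges.
    std::vector<double> resample(const std::vector<double>& src,
                                 int dstLen, int a = 3) {
        std::vector<double> dst(dstLen);
        const double scale = (double)src.size() / dstLen;
        for (int i = 0; i < dstLen; ++i) {
            const double center = (i + 0.5) * scale - 0.5;  // source position
            const int lo = (int)std::floor(center) - a + 1;
            double sum = 0, wsum = 0;
            for (int j = lo; j < lo + 2 * a; ++j) {
                const double w = lanczos(center - j, a);
                const int idx = std::clamp(j, 0, (int)src.size() - 1);
                sum  += w * src[idx];
                wsum += w;
            }
            dst[i] = sum / wsum;
        }
        return dst;
    }

Normalizing by wsum compensates for the fact that the truncated window's weights do not sum exactly to one at every sub-sample position.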