Calculating frequency from amplitude and bitrate [closed] - algorithm

I currently have an array full of data which, I believe, is the amplitude of my wave file. The values range from a low of -32768 to a high of 32767.
I also have the SampleRate, which is 16,000 Hz.
My understanding of sound isn't very good; does anyone know from this how I can calculate the Frequency?
Help greatly appreciated,
Monkeyguy.

What exactly is it that you're wanting to do? The method will depend entirely on what you're hoping to achieve. Do you have a signal that contains a single sinusoid, e.g. a detector from a piece of mechanical equipment? Or, more likely, are you wanting to play/sing into a microphone and transcribe the music?
In both cases, the FFT will be your first port of call. In the first case this may be pretty much all you need, as FFTs are good for isolated, steady-state sinusoids. In the latter case, you have a very long road ahead of you to get any useful results at all. Pitch recognition is a difficult problem, and merely throwing some FFTs at it won't get you very far. You'll need a good grounding in digital signal processing and in the characteristics of musical signals, and then your best bet is probably an autocorrelation-based method instead.
See my previous answer on a related subject for some links which may be useful: Algorithms for determining the key of an audio sample
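For the simple single-sinusoid case, here is a minimal NumPy sketch (my own, not from the answer above) of finding the dominant frequency of the questioner's buffer; it assumes samples holds the raw 16-bit values at a 16,000 Hz sample rate and that the buffer contains one reasonably steady tone:

import numpy as np

def dominant_frequency(samples, sample_rate=16000):
    # Normalize the 16-bit integers to [-1.0, 1.0] and remove any DC offset.
    x = np.asarray(samples, dtype=np.float64) / 32768.0
    x -= x.mean()
    # For a real input of length N, FFT bin k sits at k * sample_rate / N Hz.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]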

In almost all cases, an audio file has no single frequency. A sound whose wave has a single frequency is (typically) a pure sine tone, and sounds like this:
http://www.wolframalpha.com/input/?i=sound+440+Hz&a=*MC.~-_*PlaySoundTone-&a=*FS-_**DopplerShift.fo-.*DopplerShift.vs-.*DopplerShift.c--&f3=10+m/s&f=DopplerShift.vs_10+m/s&f4=340.3+m/s&f=DopplerShift.c_340.3+m/s&a=*FVarOpt.1-_***DopplerShift.fo-.*DopplerShift.fs--.***DopplerShift.DopplerRatio---.*--&a=*FVarOpt.2-_**-.***DopplerShift.vo--.**DopplerShift.vw---.**DopplerShift.fo-.*DopplerShift.fs---
This is a pure 440 Hz sine wave. (It was not possible to make a proper link of this, due to MarkDown limitations.)
A general sound, such as a recording (of speech, music, or just urban noise), consists of (an infinite number of) combinations of such sine waves, superimposed. That is, if you were to draw the graph of pressure vs. time (at a given point in space) of the wave, or, (more or less) equivalently, the position of the speaker's membrane as a function of time, it would hence not be a pure sine wave, but something much more complicated. (Indeed, how could all the information of a Beethoven symphony be represented in a simple sine wave, that is completely determined by only its frequency, a single number?)
The sampling rate of a digital recording is merely the number of samples per second of the sound wave. A physical sound wave has an amplitude p(t) at each time t, so, because there are an infinite number of times t between 0 s and 10 s (say), theoretically we would need an infinite number of bytes to save the audio. Each sample requires a fixed number of bytes; for instance, a 16-bit recording uses 16 bits, or 2 bytes, per sample, giving 2^16 = 65536 levels to choose from when specifying a single sample, and of course the higher this bit depth, the higher the quality. In practice, a sound is sampled, so that the amplitude p(t) is saved only at fixed intervals. For instance, a typical audio CD has a sampling rate of 44.1 kHz; that is, a sample is saved every 22.7 µs.
Hence, a pure sine wave of any frequency, or any recording, could be stored on a computer using any sampling rate, with the quality of the recording determined by the sampling rate (the higher the better). [Technical note: of course there is a lower limit (in some sense) on the sampling rate needed to capture a given frequency. This is called the Nyquist rate.]
To determine the mean frequency of the sound at any small time, you could use some advanced techniques from Fourier analysis, but it is not entirely trivial.
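One common and simple version of such an estimate is the spectral centroid of a short frame, sketched here under the assumption that x is a frame of normalized samples; it is only one of many possible definitions of mean frequency:

import numpy as np

def mean_frequency(x, sample_rate):
    # Window the frame to reduce spectral leakage, then take the magnitude spectrum.
    frame = np.asarray(x, dtype=float) * np.hanning(len(x))
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Spectral centroid: the magnitude-weighted average of the bin frequencies.
    return np.sum(freqs * mag) / np.sum(mag)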

This is just what I remember from physics, and I'm definitely no music expert, either.
Unless it's a recording of a constant tone, it probably doesn't have a single frequency. Each tone has a different frequency, which is why they sound different. There is generally a relationship between a wave's (not wav) frequency and wavelength, but none that I know of regarding amplitude.
Your SampleRate is similar to a frequency, being measured in Hz, but it only tells you how finely the recording is sampled in time, not the actual frequency of the sounds recorded.

As a quick addendum to the other two answers, if you are trying to measure frequencies within the sound file itself, you will need to look into the Fast Fourier Transform (FFT), which is an algorithm used to determine the strength of the frequencies within a sampled data set.

While it's true that an audio recording will not have a single frequency, you can find the fundamental frequency easily enough. Start at the beginning of your sample and trace through it, looking for the highest absolute value; in a wave with multiple frequencies you will not know you've found it until you're back to zero. Remember the highest or lowest value you've seen so far. Now trace forward, hopefully in the opposite direction, looking for the next peak or trough with a similar absolute value, using the same method as before. Count how many samples lie between your two highest-absolute-value readings. Divide your sample rate by this number (it had better not be zero), then divide by 2, since peak to trough is only half a cycle. This is the lowest, or fundamental, frequency of your recording at this point; a rough sketch follows.
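Here is a minimal sketch of that peak-and-trough idea, assuming x is a normalized frame, the strongest peak is not the last sample, and the recording is clean enough that the largest opposite-sign excursion after the peak really is the matching trough:

import numpy as np

def fundamental_estimate(x, sample_rate):
    x = np.asarray(x, dtype=float)
    # Index of the strongest peak (largest absolute value) in the frame.
    i = int(np.argmax(np.abs(x)))
    if i + 1 >= len(x):
        return None
    # Find the largest excursion on the opposite side of zero after the peak.
    opposite = -np.sign(x[i])
    j = i + 1 + int(np.argmax(opposite * x[i + 1:]))
    half_period = j - i  # samples from peak to trough, i.e. half a cycle
    return sample_rate / (2.0 * half_period)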
You can also generate a sinusoidal function that represents a synthetic waveform at a given frequency, and subtract this waveform's instantaneous values from your sample. Find the difference in the root-mean-square amplitudes of the before and after samples. This difference is a rough approximation of the amplitude of the signal at that frequency. Repeat this process, doubling the frequency each time. You can use this to create a basic EQ spectrum.

Related

How do I interpret audio encoded binary data?

I have built a little program that encodes binary data into a sound. For example the following binary input:
00101101
will produce a 'sound' like this:
################..S.SS.S################
where each character represents a constant unit of time. # stands for an 880 Hz sine wave, which is used to mark the start and end of transmission; . stands for silence, representing the zeroes; and S stands for a 440 Hz sine wave, representing the ones. Obviously, the part in the middle is much longer in practice.
The essence of my question is: How can I invert this operation?
The sound file is transmitted to the recipient by simply playing back and recording the sound. That means I am not trying to decode the original sound file, which would be easy.
Obviously I have to analyze the recorded data with respect to frequency. But how? I have read a bit about Fourier Transform but I am quite lost here.
I am not sure where to start but I know that this is not trivial and probably requires quite some knowledge about signal processing. Can somebody point me in the right direction?
BTW: I am doing this in Ruby (I know, it's slow - it's just a proof of concept) but the problem itself is not programming language specific so any answers are very welcome.
Your problem is clearly one of demodulating an FSK (frequency-shift keyed) signal. I would recommend implementing a bank of correlators tuned to each frequency; it is a lot faster than an FFT if speed is one of your concerns.
If you know the frequencies and the modulation rate, you can try using 2 sliding Goertzel filters for FSK demodulation.
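To make the Goertzel suggestion concrete, here is a hedged sketch of one Goertzel filter plus a per-block classifier for the scheme in the question (880 Hz marker, 440 Hz ones, silence for zeroes). The block length and silence threshold are assumptions you would tune to the actual modulation rate and recording level:

import math

def goertzel_power(block, sample_rate, freq):
    # Standard Goertzel recurrence: energy at the DFT bin nearest freq.
    n = len(block)
    k = round(n * freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in block:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def classify_block(block, sample_rate, silence_threshold):
    # '#' = 880 Hz marker, 'S' = 440 Hz one, '.' = silence (a zero).
    p880 = goertzel_power(block, sample_rate, 880.0)
    p440 = goertzel_power(block, sample_rate, 440.0)
    if max(p880, p440) < silence_threshold:
        return '.'
    return '#' if p880 > p440 else 'S'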

Comparing 2 one dimensional signals

I have the following problem: I have 2 signals over time. They are from the same source so they should be the same. I want to check if they really are.
Complications:
they may be measured with different sample rates
the start/end times do not coincide; the measurements do not start and end at the same time
there may be a time offset between the two signals
My thoughts go along Fourier transformation, convolution and statistical methods for comparison. Can someone post me some links where I can find more information on how to handle this?
You can easily correct for the phase by just shifting them so their centers of mass line up. (Or, alternatively, in the Fourier domain, by multiplying by the inverse of the phase of the first coefficient.)
Similarly, if you want to line up the signals given only partial data, you can just cross-correlate them and take the maximal value (which is again easy to do in the Fourier domain).
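As an illustrative sketch of that cross-correlation step (the function name is mine, and both signals are assumed to already share a common sampling rate):

import numpy as np

def estimate_offset(a, b):
    # Zero-pad so circular correlation via the FFT equals linear correlation.
    n = len(a) + len(b) - 1
    xcorr = np.fft.irfft(np.fft.rfft(a, n) * np.conj(np.fft.rfft(b, n)), n)
    lag = int(np.argmax(xcorr))
    # Lags past the midpoint wrap around and represent negative offsets.
    return lag - n if lag > n // 2 else lag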
That leaves the only tricky part of this process as dealing with the sampling rates. If you know a priori what the sample rates are (and they are related by a rational number), you can just use sinc interpolation/downsampling to rescale them to a common sampling rate:
https://ccrma.stanford.edu/~jos/st/Bandlimited_Interpolation_Time_Limited_Signals.html
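In that known-rate case the rescaling is a few lines with a polyphase resampler; a sketch assuming integer sample rates (SciPy's resample_poly is one readily available implementation, and the denominator cap is arbitrary):

from fractions import Fraction
from scipy.signal import resample_poly

def to_common_rate(x, rate_in, rate_out):
    # Polyphase (windowed-sinc) resampling by a rational factor up/down.
    ratio = Fraction(rate_out, rate_in).limit_denominator(1000)
    return resample_poly(x, up=ratio.numerator, down=ratio.denominator)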
If you don't know the sampling rate, you may be a bit screwed. Technically, you can try just brute forcing over all the different rescalings of your signal, but doing this tends to be either slow or else give mediocre results.
As a last suggestion, if you just want to match sounds exactly you can try using the cepstrum and verifying that the peaks of the signal are close enough to within some tolerance. This type of analysis is used a lot in sound and speech recognition, with some refinements to make it operate a bit more locally. It tends to work best with frequency modulated data like speech and music:
http://en.wikipedia.org/wiki/Cepstrum
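As a sketch of the first step of such a cepstral comparison (the small constant guards the logarithm against zero bins and is my own addition):

import numpy as np

def real_cepstrum(x):
    # Inverse FFT of the log magnitude spectrum; a strong peak at a given
    # "quefrency" (measured in samples) indicates periodicity at that lag.
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.irfft(np.log(spectrum + 1e-12))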
Fourier transformation does sound like the right way.
There is too much mathematical background for me to just start explaining here, so if you really want to know what's going on (because I don't think you can just use the FT without understanding it), you should use this reference from MIT OpenCourseWare: http://ocw.mit.edu/courses/mathematics/18-103-fourier-analysis-theory-and-applications-spring-2004/lecture-notes/
Hope it helped.
If you are working with a linux box and the waveforms that need to be processed have already been recorded, you can try to use the file command to display details about the recording. It gives you the sampling rate when it is invoked on a wav file, though I am not sure what format you are recording in.
If the signals are time-shifted with respect to each other, you may try to convolve one with a delta function at increasing delays and then compare. In MATLAB, conv and the like should be good enough.
These are just 'crude' attempts (almost like hacking at the problem). There may be algorithms that are shift-invariant that may do a better job.
Hope that helps.

How to get volume from mic input on WP7 [duplicate]

Given two byte arrays of data captured from a microphone, how can I determine which one has more spikes in noise? I would assume there is an algorithm I can apply to the data, but I have no idea where to start.
Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room.
If it helps, I am using the Microsoft.Xna.Framework.Audio.Microphone class to capture the sound.
You can convert each sample (normalised to the range -1.0 to 1.0) into a decibel rating by applying the formula
dB = 20 * log10(|sample value|)
To be honest, so long as you don't mind the occasional false positive, and your microphone is set up OK, you should have no problem telling the difference between a baby crying and ambient background noise, without going through the hassle of doing an FFT.
I'd recommend having a look at the source code for a noise gate, which does pretty much what you are after, with configurable attack times and thresholds.
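A minimal sketch of that idea, skipping the attack/release smoothing a real noise gate adds; the -30 dB threshold is an assumption to tune against actual recordings:

import numpy as np

def level_db(frame):
    # RMS level of a frame normalized to [-1.0, 1.0], in dB relative to
    # full scale (0 dB = the loudest representable signal).
    rms = np.sqrt(np.mean(np.square(frame)))
    return 20.0 * np.log10(max(rms, 1e-10))  # floor avoids log(0) on silence

def is_loud(frame, threshold_db=-30.0):
    # Crude gate: flag any frame whose level exceeds the threshold.
    return level_db(frame) > threshold_db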
First use a Fast Fourier Transform to transform the signal into the frequency domain.
Then check if the signal in the typical "cry-frequencies" is significantly higher than the other amplitudes.
The preprocessor of the speex codec supports noise vs signal detection, but I don't know if you can get it to work with XNA.
Or, if you really want some kind of loudness measure, calculate the sum of squares of the amplitudes of the frequencies you're interested in (for example 50-20,000 Hz), and if the average of that over the last 30 seconds is significantly higher than the average over the last 10 minutes, or exceeds a certain absolute threshold, sound the alarm.
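A sketch of the band-energy part (the 50-8,000 Hz band is an assumption, since a typical 16 kHz microphone stream carries nothing above 8 kHz; the short-term versus long-term averaging would sit on top of this):

import numpy as np

def band_energy(frame, sample_rate, lo=50.0, hi=8000.0):
    # Sum of squared spectral magnitudes inside the band of interest.
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_band = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(mag[in_band] ** 2))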
Louder at what point? The signal's average amplitude will tell you which one is louder on average, but that is kind of a dumb, brute force way to go about it. It may work for you in practice though.
"Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room."
Ok, so, I'm just throwing out ideas here; I am by no means an expert on audio processing.
If you know your input, i.e., a baby crying (relatively loud with a high pitch) versus ambient noise (relatively quiet), you should be able to analyze the signal in terms of pitch (frequency) and amplitude (loudness). Of course, if during the recording someone drops some pots and pans onto the kitchen floor, that will be tough to discern.
As a first pass I would simply traverse the signal, maintaining a running standard deviation of pitch and amplitude throughout, and then set a flag when those measures jump beyond some threshold that you will have to define, something like the sketch below. When they come back down, you may be able to safely assume that you captured the baby's cry.
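A hedged sketch of that flagging pass; levels is assumed to be a per-frame amplitude (or pitch) measure, and the window length and 3-sigma factor are arbitrary starting points:

import numpy as np

def flag_events(levels, window=50, k=3.0):
    # Flag frames that jump more than k standard deviations away from the
    # mean of a trailing window: a crude "something changed" detector.
    levels = np.asarray(levels, dtype=float)
    flags = np.zeros(len(levels), dtype=bool)
    for i in range(window, len(levels)):
        base = levels[i - window:i]
        flags[i] = abs(levels[i] - base.mean()) > k * max(base.std(), 1e-9)
    return flags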
Again, just throwing you an idea here. You will have to see how it works in practice with actual data.
I agree with @Ed Swangren; it will take a lot of playing with samples of data from a lot of sources. To me, it sounds like the trick will be to limit, or hopefully eliminate, false positives. My experience with babies is that they are much louder crying than the environment. So keep track of the average measurements (frequency/amplitude/etc.) of the normal environment, and then classify how well any changes match the characteristics of a crying baby. Those characteristics change from kid to kid, so you'll probably want a system that 'learns'. Best of luck.
update: you might find this library useful http://naudio.codeplex.com/

FFT Algorithm: What goes IN/OUT? (re: real-time pitch detection)

I am attempting to extract pitch data from an audio stream. From what I can see, it looks as though FFT is the best algorithm to use.
Rather than digging straight into the math, could someone help me understand what this FFT algorithm does?
Please don't say something obvious like 'FFT extracts frequency data from a raw signal.' I need the next level of detail.
What do I pass in, and what do I get out?
Once I understand the interface clearly, this will help me to understand the implementation.
I take it I need to pass in an audio buffer and tell it how many bytes to use for each computation (say the most recent 1024 bytes from this buffer), and maybe I need to specify the range of pitches I want it to detect. Now what is it going to pass back? An array of frequency bins? What are these?
(Edit:) I have found a C++ algorithm to use (if only I can understand it).
Performous extracts pitch from the microphone. Also, the code is open source. Here is a description of what the algorithm does, from the guy who coded it.
PCM input (with buffering)
FFT (1024 samples at a time, remove 200 samples from front of the buffer afterwards)
Reassignment method (against the previous FFT that was 200 samples earlier)
Filtering of peaks (this part could be done much better or even left out)
Combining peaks into sets of harmonics (we call the combination a tone)
Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
Pick the best vocal tone (frequency limits, weighting, could use the harmonic array also but I don't think we do)
But could someone help me understand how this works? What is it that is getting sent from the FFT to the Reassignment method?
The FFT is just one building block in the process, and it may not be the best approach for pitch detection. Read up on pitch detection and decide which algorithm you want to use first; this will depend on what exactly you are trying to measure the pitch of (speech, a single musical instrument, other types of sound, etc.). Get this right before getting into low-level details such as the FFT (some, but not all, pitch detection algorithms use the FFT internally).
There are numerous similar questions on SO already, e.g. Real-time pitch detection using FFT and Pitch detection using FFT for trumpet, and there is good overview material on Wikipedia etc - read these and then decide whether you still want to roll your own FFT-based solution or perhaps use an existing library which is suitable for your particular application.
There is an element of choice here. The most straightforward form to implement takes 2^n samples in, as complex numbers, and gives 2^n complex numbers out, so maybe you should start with that.
In the special case of a DCT (discrete cosine transform), typically what goes in is 2^n samples (often floats), and out come 2^n values, often floats too. A DCT is like an FFT that takes only real values and analyses the function in terms of cosines.
It is smart (but commonly skipped) to define a struct to handle the complex values. Traditionally FFTs are done in-place, but it works fine if you don't.
It can be useful to instantiate a class that contains a work buffer for the FFT (if you don't want to do the FFT in-place), and reuse that for several FFTs.
In goes N samples of PCM (purely real complex numbers). Out come N bins of the frequency domain, each bin corresponding to a 1/N slice of the sample rate. Each bin is a complex number. Rather than as real and imaginary parts, these values should generally be handled in polar form (absolute value and argument). The absolute value tells you the amount of sound near the bin's center frequency, while the argument tells you the phase (where in its cycle the sine wave is).
Most often coders only use the magnitude (absolute value) and throw away the phase angle (argument).
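A small sketch of that interface using NumPy's real-input FFT (the function name is mine; rfft returns N/2 + 1 bins rather than N because the negative-frequency half is redundant for real input):

import numpy as np

def spectrum_polar(x, sample_rate):
    # N real samples in -> N/2 + 1 complex bins out.
    bins = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)  # bin centers in Hz
    magnitude = np.abs(bins)   # how much energy near each center frequency
    phase = np.angle(bins)     # where in its cycle each component sits
    return freqs, magnitude, phase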

Algorithms needed on filtering the noise caused by the vibration

For example, you measure data coming from some device; it could be the mass of an object moving on a bridge. Because the object is moving, the measurement will vibrate with some amplitude that depends on the mass of the object. The bigger the mass, the bigger the vibrations.
Are there any methods for filtering such kind of noise from that data?
Maybe by using some formulas for vibrations? I have no idea what kind of formulas or algorithms (filters) can be used here. Please suggest anything.
EDIT 2:
I have drawn a better picture for understanding (the image itself is not reproduced here). From that graph you can see that the frequency is the same every time, but the amplitude changes periodically. That is what I get when there are no objects on the moving road (conveyor belt): vibration near the zero value. When an object moves along it, I get the same waves, but with changing amplitude.
The graph suggests that there may be some force applied to the system which produces forced oscillations. So I am interested in removing that kind of noise. I do not know what force causes these oscillations. Soon I hope to get some data from the non-moving road, with and without an object on it, for comparison with the moving case.
What you have in your last plot is basically an amplitude modulated oscillation coming from a function like:
f[x] := 10 * (4 + Sin[x]) * Sin[80 * x]
The constants have been chosen to match your plot (using just a rule of thumb).
(The plot of this function, not reproduced here, shows the same fast oscillation inside a periodically swelling envelope as your graph.)
That isn't "noise" (although maybe some noise is there too), but it can be filtered easily.
Let's see your data for the static and moving payloads ....
Edit
Based on your response to several comments, and based in my previous experience with weighting devices:
You are interfacing with the physical world, not just getting input from a mouse and keyboard. It is very important for you to understand the device, how it works, and how it is designed.
You need a calibration procedure. You have to use several master weights to be sure that the device is working properly and linearly across the whole scale, and that the static case is measured much more accurately than your dynamic needs require.
You won't be able to predict whether you can measure several loads on the conveyor until you do some experiments and look very carefully at the resulting plots.
You need to be sure that a load placed anywhere in the conveyor shows the same reading. Or at least you should be able to correlate reading and position.
As I said before, you need a lot of info, and it seems that it is not available. I have always worked as a team with the engineers designing the device.
Don't hesitate to add more info ...
Have you tried filters with lowpass characteristics? There are different approaches to smoothing data (e.g. Savitzky-Golay, Gaussian, moving average), but often a simple N-point median filter is already sufficient.
It really depends on what you're after.
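For illustration, a minimal N-point median filter of the kind meant above (the edge padding and 5-point default are my assumptions):

import numpy as np

def median_smooth(x, n=5):
    # Slide an n-point window along the signal and keep each window's median;
    # unlike a moving average, isolated spikes barely affect the result.
    pad = n // 2
    padded = np.pad(np.asarray(x, dtype=float), pad, mode='edge')
    return np.array([np.median(padded[i:i + n]) for i in range(len(x))])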
Take a look at this book:
The Scientist and Engineer's Guide to Digital Signal Processing
You can download it for free. In particular, check chapters 14 and 15.
If the frequency changes with mass and you're trying to measure mass, why not measure the frequency of the oscillations and use that as your primary measure?
Otherwise you need a tunable notch filter: figure out the frequency of the "noise" and tune the notch filter to that.
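A sketch of such a notch using SciPy's iirnotch design; the Q of 30 is an assumption (higher Q means a narrower rejected band), and filtfilt applies the filter forward and backward so it adds no phase shift:

from scipy.signal import iirnotch, filtfilt

def remove_vibration(x, sample_rate, noise_freq, q=30.0):
    # Second-order notch centered on the identified vibration frequency.
    b, a = iirnotch(noise_freq, q, fs=sample_rate)
    return filtfilt(b, a, x)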
Another book to try is Lyons' Understanding Digital Signal Processing.
In order to smooth the signal, I'd average the previous 2*n samples, where n is the maximum expected period of the vibrations, in samples.
This should cause most of the noise to be eliminated.
If you have some idea of the range of frequencies, you could do a simple average, as long as the measurement period is sufficiently long to give you the level of accuracy you want to achieve. The more wavelengths' worth of data you average over, the smaller the error contributed by a partial wavelength.
I'd suggest first simulating/modeling this in software like Matlab.
Data you'll need to consider:
The expected range of vibration frequencies
The measurement accuracy you want to achieve
The expected range of mass you'll want to measure
The function relating mass to vibration amplitude
You should be able to apply the same principles as noise-cancelling microphones: put two sensors out, then subtract the secondary sensor's (farther away from the good signal source) signal from the primary sensor's (closer to the good signal source) signal.
Obviously, this works best if the "noise" will reach both sensors fairly equally while the "signal" reaches the primary sensor much more strongly.
For things like sound, this is pretty easy to do in the sensor itself, which makes your software a lot easier and more performant. Depending on what you're measuring, this might be easier to do with multiple sets of hardware and doing the cancellation in software.
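If the cancellation does end up in software, the core of it is a scaled subtraction; a sketch with a least-squares gain estimate (the names and the single fixed gain are my simplifications, where a real canceller would adapt over time):

import numpy as np

def cancel_noise(primary, secondary):
    # Estimate how strongly the noise reference appears in the primary
    # channel (least squares), then subtract the scaled reference.
    p = np.asarray(primary, dtype=float)
    s = np.asarray(secondary, dtype=float)
    gain = np.dot(p, s) / max(np.dot(s, s), 1e-12)
    return p - gain * s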
If you can characterize the frequency spectra of the unwanted vibration noise, you might be able to synthesize a set of (near) minimum phase notch or band reject filter(s) to allow you to acquire your desired signal at your desired S/N ratio with minimized latency or data set size.
Filtering noisy digital signals is straightforward, as previous posters have noted, and there are lots of references. You have not, however, stated your objectives clearly, so we cannot point you in a good direction. Are you looking for a single measurement of a single object on a bridge? [Then see the other answers.]
Are you monitoring traffic on this bridge and weighing each entity as it passes by? Then you need to determine when entities are on the sensor and when they are not. Typically, as long as the sensor's noise floor is significantly lower than the signal you're measuring this can be accomplished by simple thresholding.
Are you trying to measure the vibrations of the bridge caused by other vehicles? In that case you need either a more expensive sensor (if you're having problems doing this) or a clearer measuring objective.
