OLA FFT Windows: Blackman-Nuttall or Dolph–Chebyshev?

I found a web page describing all the existing windows for FFT. It's here:
http://en.wikipedia.org/wiki/Window_function
It's very interesting, as it shows the frequency response of each window.
Looking at the frequency responses, the Blackman–Nuttall and Dolph–Chebyshev windows seem the best,
but which is the best of the best?
And are they really better for audio processing than Hamming or
Hanning?
Many thanks,
Jeff

Blow your mind here:
http://www.rssd.esa.int/SP/LISAPATHFINDER/docs/Data_Analysis/GH_FFT.pdf
I can tell you a couple of things on the matter.
There is no "best" window function, because it depends on what your application is about. The main parameters on which to base your choice are:
Scalloping loss
Main lobe width (of a sine wave)
Sidelobes max level/decrease
Computational cost
For example, the simple rectangular window costs nothing to compute and provides the narrowest possible main lobe, but at the expense of large scalloping loss and very high sidelobes.
Blackman-style windows are usually built to minimize sidelobe levels, but they tend to have heavy scalloping. You might instead choose one of the so-called "flat-top" windows if you need more precise peak measurements, since their scalloping loss is usually below 1% even for the simplest ones, but their main lobes are very wide (perhaps 6-10 bins).
Example Nuttall window (taking z = 2*pi*x for x in [0, 1]):
1 - 1.369982685*cos(z) + 0.4054102674*cos(2*z) - 0.03542758202*cos(3*z)
Example flat-top window (SFT3M, same convention):
1 - 1.84540464*cos(z) + 0.6962635*cos(2*z)
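For illustration, here is a minimal C sketch of how such a cosine-sum window is typically filled in; the sampling convention z = 2*pi*n/(N-1) is my assumption about the [0, 1] notation, and the flat-top window works the same way with its two cosine terms:

#include <math.h>

/* Fill w[0..N-1] with the Nuttall window quoted above,
   sampled at z = 2*pi*n/(N-1). */
void nuttall_window(double *w, int N) {
    const double two_pi = 6.283185307179586;
    for (int n = 0; n < N; n++) {
        double z = two_pi * n / (N - 1);
        w[n] = 1.0 - 1.369982685   * cos(z)
                   + 0.4054102674  * cos(2.0 * z)
                   - 0.03542758202 * cos(3.0 * z);
    }
}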
A window function with no scalloping loss, a very narrow main lobe, and no sidelobes, if it existed, would be extremely expensive to compute.

Related

Mathematica's NDSolve is finding a stiff system very quickly when trying to solve a large set of differential equations

I'm a physics PhD student solving lubrication equations (non-linear PDEs) related to the evaporation of binary droplets in a cylindrical pixel, in order to simulate the shape evolution. So I split the system into N ODEs, each representing the height evolution (dh/dt) at point i, written as finite differences, and solve them simultaneously using NDSolve. For single-liquid droplets this works perfectly: the droplet evaporates cleanly.
However, for binary droplets I include N additional ODEs for the evolution of the composition fraction (dX/dt) at each point. This also adds a new term (Marangoni stress) to the dh/dt equations. Immediately, NDSolve says:
At t == 0.00029140763302667777`, step size is effectively zero;
singularity or stiff system suspected.
I plot the droplet height at this t-value and it shows a narrow spike at the origin (clearly exploding, hence the stiffness); plotting the composition shows a narrow plummet at the origin (also exploding, but negative; the physics obviously shouldn't permit the composition fraction to fall below 0).
Finally, both sets of equations have terms depending on dX/dr, the composition gradient. If this is set to zero, and I also set the evaporation interaction to zero (meaning the two liquids evaporate at the same rate, so no X gradient can arise), there should be no change in X anywhere, and the system should reduce to the single-liquid case (dX/dt = 0 and dh/dt no longer depending on X). However, the procedure introduces some small gradient in X nonetheless, which explodes and causes the same numerical instability.
My question is this: is there anything about NDSolve that might be causing this? I've been over the equations and the discretisation a hundred times and am sure they're correct. I've also looked through the NDSolve documentation but didn't find anything that helps. Could NDSolve be introducing a small numerical error into the composition gradient?
I can post an MRE below, but it's pretty dense and written in Mathematica (it doesn't transfer to the real world well...), so I don't know how much it would help. Anyway, thank you for reading this!

What is the motivation behind PixelOffsetModeHighSpeed and PixelOffsetModeHighQuality?

I do quite a bit of manual GDI+ drawing in C# and was always annoyed by the apparent (0.5, 0.5) pixel offset that GDI+ uses by default (my mind is more compatible with the IMO simpler definition of (0, 0) being the upper-left corner of the upper-left pixel). Until recently I thought it was probably just a silly .NET thing to make things "easier", and I simply translated by (-0.5, -0.5) before doing anything else -- until I stumbled upon the PixelOffsetMode enum.
.NET definition | C API definition
typedef enum {
    PixelOffsetModeInvalid     = QualityModeInvalid,
    PixelOffsetModeDefault     = QualityModeDefault,
    PixelOffsetModeHighSpeed   = QualityModeLow,
    PixelOffsetModeHighQuality = QualityModeHigh,
    PixelOffsetModeNone        = QualityModeHigh + 1,
    PixelOffsetModeHalf        = QualityModeHigh + 2
} PixelOffsetMode;
It seems that the "off by (0.5, 0.5)" is a deliberate GDI+ thing.
There are also these 2 answers on SO:
Looking for details on the PixelOffsetMode Enumeration in .Net, WinForms
What is PixelOffsetMode?
The answer to the latter question seems to be subtly incorrect as well. There is no difference between HighQuality and Half (the mode that puts the origin in the upper-left corner of the upper-left pixel), nor between HighSpeed and None (which puts the origin in the center of the upper-left pixel). The documentation of the C API enum definition even confirms this.
What bugs me most is that, even though two of the options contain the words "Speed" and "Quality", the value you choose has nothing to do with speed or quality; it merely selects a different coordinate system for drawing. Both can produce exactly the same result at exactly the same speed. In practice this is very obscure, and knowing the precise location of the origin is crucial for writing correct drawing code -- vague terms like "Quality" or "Speed" don't help here. Using the wrong enum value doesn't make the drawing slow or low-quality; it simply makes it wrong.
Yet someone must have come up with those enum values when GDI+ was developed and may have thought of a reason for HighQuality and HighSpeed to exist. I'd like to know that reason - maybe there is a subtle difference, or there used to be a difference but it's not relevant anymore.
I don't know the motivation, but I can make a guess.
GDI+ is a very old API, and that enum appeared around Windows 2000. The recommended hardware requirements for that OS were a Pentium II 300 MHz with 128 MB RAM; the minimum, a Pentium 133 MHz with 32 MB RAM. By today's standards that's extremely slow hardware. Very likely, that's why you aren't observing any difference in rendering speed on a modern Windows PC.

Ising 2D Optimization

I have implemented a Monte Carlo simulation of the 2D Ising model in C99.
Compiling with gcc 4.8.2 on Scientific Linux 6.5.
When I scale up the grid the simulation time increases, as expected.
The implementation simply uses the Metropolis–Hastings algorithm.
I have tried to find ways to speed up the algorithm, but I don't have any good ideas.
Are there any tricks for doing so?
As jimifiki wrote, try to do a profiling session.
In order to improve on the algorithmic side only, you could try the following:
Lookup Table:
When evaluating the Metropolis criterion you need to compute the exponential exp[-K / T * dE], where K is your scaling constant (in units of Boltzmann's constant) and dE is the energy difference between the original state and the one after a spin flip.
Calculating exponentials is expensive.
So you simply build a table beforehand in which to look up the values for each possible dE. For a nearest-neighbour interaction there are only a handful of possible neighbour configurations (choose 1, 2, 3, or 4 of the four neighbours); exploiting the problem's symmetry, you end up with five possible values for dE: 8, 4, 0, -4, -8. Instead of calling the exp function, use the precalculated table, as in the sketch below.
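For illustration, a minimal C sketch of such a table, assuming units in which the five possible values of dE are exactly {-8, -4, 0, 4, 8}:

#include <math.h>

/* Precompute exp(-dE/T) for the five possible energy differences.
   Indexing with dE + 8 covers dE in {-8, -4, 0, 4, 8}. */
static double acc_table[17];

void init_metropolis_table(double T) {
    for (int dE = -8; dE <= 8; dE += 4)
        acc_table[dE + 8] = exp(-dE / T);
}

/* Metropolis acceptance: flip if dE <= 0, or with probability exp(-dE/T).
   u is a uniform random number in [0, 1). */
int accept_flip(int dE, double u) {
    return dE <= 0 || u < acc_table[dE + 8];
}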
Parallelization:
As mentioned before, it is possible to parallelize the algorithm. To preserve physical correctness, you have to use the so-called checkerboard scheme: consider the two-dimensional grid as a checkerboard and update all the white cells in parallel first, then all the black ones. This should be clear when you consider the nearest-neighbour interaction, which introduces dependencies between the values (see the sketch below).
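A rough OpenMP sketch of one such half-sweep (compile with -fopenmp), reusing the acceptance table above; rand01() stands for a thread-safe uniform RNG in [0, 1) that you would have to supply yourself:

/* Update all sites of one colour (0 = white, 1 = black) of an L x L grid
   of +/-1 spins stored in a linear array s[i*L + j], periodic boundaries.
   double rand01(void) is assumed: thread-safe uniform in [0, 1). */
void half_sweep(int *s, int L, int colour) {
    #pragma omp parallel for
    for (int i = 0; i < L; i++) {
        for (int j = (i + colour) % 2; j < L; j += 2) {
            int nn = s[((i + L - 1) % L) * L + j]   /* up    */
                   + s[((i + 1) % L) * L + j]       /* down  */
                   + s[i * L + (j + L - 1) % L]     /* left  */
                   + s[i * L + (j + 1) % L];        /* right */
            int dE = 2 * s[i * L + j] * nn;         /* cost of flipping */
            if (accept_flip(dE, rand01()))
                s[i * L + j] = -s[i * L + j];
        }
    }
}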
Use GPGPU:
You can also implement the simulation on a GPGPU, e.g. using CUDA, if you're already working in C99.
Some tips:
- Don't forget to align C99 structs properly.
- Use linear arrays, not nested ones. Aligned memory is normally faster to access, if done properly.
- Try to let the compiler do loop unrolling and the like (gcc-specific options, not enabled by default at -O2).
Some more information:
If you are looking for an efficient method to calculate the critical point of the system, the method of choice is finite-size scaling: you simulate at different system sizes and different temperatures, then calculate a quantity which is system-size independent at the critical point, and thus gives an intersection point of the corresponding curves (please see the theory for a detailed explanation).
I hope I was helpful.
Cheers...
It's normal that your simulation time scales at least with the square of the size, isn't it?
Here are some suggestions:
If you are concerned with thermalization issues, try to use parallel tempering. It can be of help.
The Metropolis-Hastings algorithm can be made parallel. You could try to do it.
Check you are not pessimizing the code.
Are your spin arrays arrays of ints? You could pack many spins into the same int (see the sketch after this list). It's a lot of work.
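For what it's worth, a small C sketch of that packing idea (my own illustration, not tested against your code):

#include <stdint.h>

/* Pack 64 spins per word: a set bit means spin +1, a clear bit means -1. */
static inline int get_spin(const uint64_t *row, int j) {
    return ((row[j >> 6] >> (j & 63)) & 1) ? +1 : -1;
}

static inline void flip_spin(uint64_t *row, int j) {
    row[j >> 6] ^= (uint64_t)1 << (j & 63);
}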
Moreover, remember what Donald taught us:
premature optimisation is the root of all evil
Before optimising you should first understand where your program is slow. This is called profiling.

Generate Random Numbers non-algorithmically

I am looking for a satisfying answer to how to generate a random number.
I looked at this, this, this and this.
But I am looking for something else.
Most of the posts mention using R[n+1] = (a*R[n] + b) % m, this pseudo-random function (a linear congruential generator), or some other mathematical function.
But, oddly, I am not looking for these; I want some non-algorithmic answer. Precisely, an "interview" answer: something easy to understand, which doesn't make the interviewer feel that I memorized a method :).
For an interview question, a common answer might be to look at the intervals between keystrokes (ask the user to type something), disk seek times, or input from a disconnected source -- that will give you thermal noise from inside your mic socket or whatever.
LavaRnd uses a digital camera with the lens cap on, which is a variation on that last idea.
Some operating systems allow indirect access to some of this random input, usually through a secure random function; it's slower but more secure than the usual RNG.
Depending on what job the interview is for, you can talk about testing the raw data to check for entropy, and concentrating the entropy by using a cryptographic hash function like SHA-256.
There are also specialised, and expensive, hardware cards which use various quantum effects to generate true random numbers.
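On POSIX systems, for instance, the entropy gathered by the kernel is exposed as a device file; a minimal C sketch (error handling kept to a bare minimum):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t r;
    FILE *f = fopen("/dev/urandom", "rb"); /* kernel entropy pool */
    if (!f)
        return 1;
    if (fread(&r, sizeof r, 1, f) != 1) { fclose(f); return 1; }
    fclose(f);
    printf("%llu\n", (unsigned long long)r);
    return 0;
}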
Take the system time, add a seed, and take it modulo the upper limit. If the upper limit is less than 0, multiply it by -1 and then subtract the result from the max... not very strong, but does it meet your requirement?
If you have a UI and only need a couple of random numbers, you can ask the user to move the mouse around, or enter a few seeds or a few words, and use those as seeds.

Determining the level of dissonance between two frequencies [closed]

Using continued fractions, I'm generating integer ratios between frequencies to a certain precision, along with the error (difference from integer ratio to the real ratio). So I end up with things like:
101 Hz with 200 Hz = 1:2 + 0.0005
61 Hz with 92 Hz = 2:3 - 0.0036
However, I've run into a snag on actually deciding which of these will be more dissonant than the others. At first I thought low numbers = better, but something like 1:51 would likely not be very dissonant, since one frequency is 51 times the other (well over five octaves up). It might be a screaming-high, ear-bleeding pitch, but I don't think it would have dissonance.
It seems to me that it must be related to the product of the two sides of the ratio compared to the constituents somehow. 1 * 51 = 51, which doesn't "go up much" from one side. 2 * 3 = 6, which I would think would indicate higher dissonance than 1:51. But I need to turn this feeling into an actual number, so I can compare 5:7 vs 3:8, or any other combinations.
And how could I work error into this? Certainly 1:2 + 0 would be less dissonant than 1:2 + 1. Would it be easier to apply an algorithm that works for the above integer ratios directly to the frequencies themselves? Or does having the integer ratio with an error allow for a simpler calculation?
edit: Thinking on it, an algorithm that could extend to any set of N frequencies in a chord would be awesome, but I get the feeling that would be much more difficult...
edit 2: Clarification:
Let's consider that I am dealing with pure sine waves, and either ignoring the specific thresholds of the human ear or abstracting them into variables. If there are severe complications, then they are ignored. My question is how it could be represented in an algorithm, in that case.
Have a look at Chapter 4 of http://homepages.abdn.ac.uk/mth192/pages/html/maths-music.html. From memory:
1) If two sine waves are just close enough for the human ear to be confused, but not so close that the human ear cannot tell they are different, there will be dissonance.
2) Pure sine waves are extremely rare; most tones have all sorts of harmonics. Dissonance is very likely to arise from colliding harmonics rather than colliding main tones. To loosely follow your example: two tones many octaves apart are unlikely to be dissonant because their harmonics may not meet, whereas with just a couple of octaves' difference and loads of harmonics, a flute could sound out of tune with a double bass. So dissonance depends not only on the frequencies of the main tones but on the harmonics present, and this has been experimentally demonstrated by constructing sounds with peculiar pseudo-harmonics.
The answer is in Chapter 4 of Music: a Mathematical Offering. In particular, see the following two figures:
consonance/dissonance plotted against the critical bandwidth, in 4.3 "History of consonance and dissonance"
dissonance vs. frequency, in 4.5 "Complex tones"
Of course you still have to find a nice way to turn these data into a formula / program that gives you a measure of dissonance but I believe this gives you a good start. Good luck!
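To make that concrete: the curves in that chapter are the Plomp-Levelt dissonance curves, and a common parametrisation of them is due to Sethares. A rough C sketch (the constants come from Sethares' work, not from this thread, so treat them as an assumption):

#include <math.h>

/* Dissonance contributed by one pair of partials at f1, f2 (Hz)
   with loudnesses a1, a2, using Sethares' fit of the Plomp-Levelt curve. */
double diss_pair(double f1, double a1, double f2, double a2) {
    if (f1 > f2) {                           /* ensure f1 <= f2 */
        double t;
        t = f1; f1 = f2; f2 = t;
        t = a1; a1 = a2; a2 = t;
    }
    double s = 0.24 / (0.0207 * f1 + 18.96); /* critical-band scaling */
    double x = s * (f2 - f1);
    double amp = a1 < a2 ? a1 : a2;          /* weaker partial limits roughness */
    return amp * (exp(-3.5 * x) - exp(-5.75 * x));
}

/* Total dissonance of a chord: sum over all pairs of partials.
   This also extends naturally to N frequencies, as asked in the edit. */
double diss_total(const double *f, const double *a, int n) {
    double d = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            d += diss_pair(f[i], a[i], f[j], a[j]);
    return d;
}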
This will help:
http://www.acs.psu.edu/drussell/demos/superposition/superposition.html
You want to look at superposition.
The discrete (or fast) Fourier transform is the most generic means of getting what you're asking for.
