I have a 200 Hz EEG signal. I have applied a bandpass filter with cutoff frequencies of 1 to 60 Hz. If I now apply a discrete wavelet transform for a 5-level decomposition, how will the signal be decomposed? Starting from 60 Hz or 200 Hz?
Thanks in advance
Welcome to SO.
First of all, I think that the DSP Stack Exchange may be better suited to get help on this topic.
I suspect that you are mixing up the sampling rate (200 Hz) and the cutoff frequencies (1 Hz-60 Hz).
Your EEG data are sampled at 200 Hz, or 200 samples per second (SPS). This is the sampling rate.
Based on Nyquist's theorem, the highest frequency that this signal can represent is 100 Hz. This bounds the frequency content of your signal.
When you apply a band-pass filter to your data, you reject frequencies outside of the 1Hz-60Hz range, but your signal is still sampled at 200SPS.
So what you feed into your 5-level decomposition is a time series sampled at 200 Hz whose frequency content has been band-passed between 1 and 60 Hz. The DWT splits bands according to the sampling rate, so the decomposition starts from the Nyquist frequency of 100 Hz, not from 60 Hz.
I suggest that you plot the spectrum of your EEG before and after band-pass filtering to better understand what is going on.
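To make the band splitting concrete, here is a minimal sketch (assuming the PyWavelets package and an arbitrary 'db4' wavelet, neither of which your question specifies) that prints the approximate frequency band covered by each level of a 5-level DWT of a 200 Hz sampled signal:

    import numpy as np
    import pywt  # PyWavelets

    fs = 200.0                                 # sampling rate, Hz
    x = np.random.randn(10 * int(fs))          # stand-in for the band-passed EEG
    coeffs = pywt.wavedec(x, 'db4', level=5)   # 5-level DWT, example wavelet

    # Each detail level halves the band below it, starting at Nyquist (fs/2)
    hi = fs / 2
    for level in range(1, 6):
        print(f"D{level}: {hi / 2:7.3f} - {hi:7.3f} Hz")
        hi /= 2
    print(f"A5:   0.000 - {hi:7.3f} Hz")

The first detail level D1 covers 50-100 Hz; after your band-pass filter, whatever falls in D1 above 60 Hz will simply be near zero.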
I have a velocity signal that is calculated from the derivative of a logged position signal. As the calculation is done in post-processing, both previous and future samples are available. I want to improve the quality of my calculated velocity signal by filtering it. I can easily understand how, for example, a moving average filter can be applied non-causally by shifting the samples, i.e.
y(k) = (x(k-1) + x(k) + x(k+1))/3
But how does it work for more advanced filters? In signal processing I have worked with causal Chebyshev, Butterworth filters, etc. Does it make sense to apply these kinds of filters in post-processing and shift the data in a similar way as for the moving average, or are there other, more appropriate filters to use?
Yes, filtering a signal with a Chebyshev or Butterworth filter shifts the signal. This is known as the "group delay" of the filter: let φ(ω) be the filter's phase response, then the group delay is the derivative
τ_g(ω) = −dφ(ω)/dω.
The group delay is a function of frequency ω, meaning that in general each frequency might experience a different shift, i.e. dispersion.
For a linear phase filter like the 3-tap moving average filter, this does not happen; group delay is constant over frequency.
But for causal IIR filters, some variation in group delay is unavoidable. For popular IIR designs like Chebyshev or Butterworth, the group delay varies slowly across frequency except around the filter's cutoff.
Like you said, you can advance the signal by some number of samples in post-processing to counteract the shift. Since group delay generally varies with frequency, the number of samples to advance by is a choice about where you want it to match best. Something like the average group delay over the passband is a reasonable place to start.
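As a sketch of that approach (assuming scipy and an arbitrary 4th-order Butterworth low-pass at 100 Hz; substitute your own design and sampling rate):

    import numpy as np
    from scipy import signal

    fs = 1000.0                                  # assumed sampling rate, Hz
    b, a = signal.butter(4, 100.0, fs=fs)        # assumed 4th-order Butterworth low-pass

    w, gd = signal.group_delay((b, a), fs=fs)    # group delay in samples vs. frequency in Hz
    shift = int(round(gd[w < 100.0].mean()))     # average group delay over the passband

    x = np.random.randn(5000)                    # stand-in for the velocity signal
    y = signal.lfilter(b, a, x)
    y_aligned = y[shift:]                        # advance the output by `shift` samples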
Alternatively, you can address the shift with "forward-backward filtering": apply the filter running forward, then again going backward, so that the shift introduced by the forward pass is undone in the backward pass. You can use Matlab's filtfilt or scipy.signal's filtfilt to perform forward-backward filtering.
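A minimal forward-backward sketch with scipy (the filter design here is an arbitrary assumption):

    import numpy as np
    from scipy import signal

    fs = 1000.0                              # assumed sampling rate, Hz
    b, a = signal.butter(4, 100.0, fs=fs)    # assumed example design
    x = np.random.randn(5000)                # stand-in for the velocity signal

    # Forward pass then backward pass: the phase shifts cancel exactly,
    # but the magnitude response is applied twice (i.e. squared)
    y_zero_phase = signal.filtfilt(b, a, x)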
I'm reading about algorithms in frequency modulation. In most synthesizers, each operator in an algorithm has an "Out" level knob; on carriers this knob controls the output volume. For modulators, however, the level knob decides the amount of change applied to the carrier.
Is this amount the Modulation Index?
Short answer: yes, I think you're understanding the concept correctly.
The modulation index is the ratio of the carrier's peak frequency deviation to the frequency of the modulator. The modulation index is directly proportional to the amplitude of the modulator, and inversely proportional to the frequency of the modulator.
The formula for the modulation index is β = Δf / f_m, where Δf is the peak deviation of the carrier frequency and f_m is the frequency of the modulator.
You've mentioned that you can set the output level for each operator on your synth. For FM radio, the amplitude of the carrier wave is constant. In music synthesizers you can adapt it to tweak sounds.
That's also because you often have more complex algorithms than what's used for FM radio (one modulator+carrier only). In a DX7 you can cascade up to 6 operators, and in the FS1R and Montage you have 8.
In FM synths you'd use it to get more or fewer sideband frequencies in the resulting signal.
By the way, if you're talking about FM synths:
It's mostly an implementation detail, but they don't actually modulate the frequency; they modulate the phase.
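For illustration, here is a minimal sketch of a single modulator-carrier operator pair done as phase modulation (the frequencies and index value are arbitrary assumptions):

    import numpy as np

    fs = 44100                          # sample rate, Hz
    t = np.arange(fs) / fs              # one second of time axis

    fc, fm = 440.0, 220.0               # assumed carrier and modulator frequencies
    index = 2.0                         # the modulator's "out" level acts as the index

    modulator = index * np.sin(2 * np.pi * fm * t)
    # The modulator output is added to the carrier's *phase*, not its frequency
    carrier = np.sin(2 * np.pi * fc * t + modulator)

Raising `index` adds more audible sidebands around the carrier, which is exactly the "more or fewer sideband frequencies" effect mentioned above.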
I have sampled sensor data for 1 minute with 5kHz sampling.
So, one sampled data file includes 5,000 x 60 = 300,000 data points.
Note that the sensor measures periodic data such as 60Hz AC current.
Now, I would like to apply an FFT (using Python's numpy.fft.rfft function) to this one data file.
As I understand it, the number of FFT outputs is about half the number of input points (N/2 + 1 for rfft), i.e., 150,001 results in the case of 300,000 data points.
However, the number of FFT results is too large to analyze them.
So, I would like to reduce the number of FFT results.
Regarding that, my question is whether the following method is valid for the given sampled data file:
Segment the one sampled data file into M segments
Apply FFT to each segment
Average the M FFT results to get one averaged FFT result
Use the average FFT result as FFT result of the given one sampled data file
Thank you in advance.
It depends on your purposes.
If the source signal is sampled at 5 kHz, then the highest output element corresponds to 2.5 kHz. So for a 150K output length, the frequency resolution will be about 0.017 Hz. If you apply the transform to 3,000-point segments, you'll get a frequency resolution of about 1.7 Hz.
Is this important for you? Do you need to register all possible frequency components of AC current?
AC quality (magnitude, frequency, noise) might vary during one-minute interval. Do you need to register such instability?
Perhaps high frequency resolution and short-range temporal stability are not necessary for AC control; in that case your approach is quite reasonable.
Edit: A longer interval also diminishes the finite-duration windowing effect that gives false peaks.
P.S. Note that FFT implementations are usually fastest when the interval length is a power of two (numpy's rfft accepts any length, though), so if the input were padded to 2^19 = 524,288 points, the output would contain about 256K bins.
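For what it's worth, the segment-and-average scheme in the question is essentially Bartlett/Welch averaging. Here is a minimal numpy sketch (the segment count M = 100 and the choice to average power spectra rather than complex spectra are my assumptions):

    import numpy as np

    fs = 5000                           # sampling rate, Hz
    x = np.random.randn(300_000)        # stand-in for the one-minute recording

    M = 100                             # assumed number of segments
    segments = x.reshape(M, -1)         # 100 non-overlapping segments of 3,000 points

    # Average the *power* spectra; averaging the complex FFTs would cancel
    # components whose phase differs from segment to segment
    psd = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(segments.shape[1], d=1 / fs)

scipy.signal.welch packages the same idea, adding windowing and overlap.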
I want to do some modeling that will repeatedly call an iFFT. This will take as input a parametric model of the complex frequency response (amplitude, phase) and produce as output an impulse response. I would like to compare this to a "windowed" impulse response that I have measured for a loudspeaker in a room. The measured impulse can be characterized by an initial portion, corresponding to sound traveling directly through the air to the microphone, that lasts a few milliseconds, after which sounds that reflect off of the surfaces in the room (floor, walls, etc.) contaminate the signal. The uncontaminated portion is maybe 5% of the total measured impulse. I want to compare the impulse response that the iFFT generates from the frequency response to ONLY the uncontaminated portion of the measured impulse.
If needed I can calculate the entire impulse response from the frequency response and then just throw away 95% of the result but this seems to be very inefficient. The iFFT will be calculated many, many times (thousands probably) while my model is being optimized so I want to make sure that I can make it as efficient as possible. At this point my only option seems to be using FFTW and then just throwing away the data that is not needed (for lack of a better idea).
Is there a fast way to calculate the inverse FFT only for those time points of interest, e.g. not for the entire time span that the FFT can access? For instance, I may only need 5% of the time points. I am not intimately familiar with the computation of the FFT and iFFT, so I don't have insight on the answer to this question.
Edit: I rechecked, and if I record a 16k impulse at 96 kHz, there are only about 475 samples of "good data" before the reflections contaminate the signal. This is just under 3% of the total recorded signal. How can I efficiently calculate only these few hundred points from my frequency response?
It's all a matter of time length and frequency resolution.
If your original (measured) impulse response is under 512 samples, you may just use these 512 samples and calculate a 512-point FFT. This will give you poor frequency resolution, but you could interpolate the spectrum if you wish.
In the other direction, just "downsample" your long spectrum to a 512-bin spectrum (for example, take every 4th line) and do the inverse FFT; this will result in a short 512-sample impulse response. Decimating the spectrum time-aliases the impulse response, which is harmless as long as the true response really is shorter than 512 samples.
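If you truly only need a handful of output samples, a third option is to evaluate the inverse DFT directly at just those time indices; a plain-numpy sketch is below. Note the direct evaluation costs O(P·N) for P output points, so it only beats a full O(N log N) iFFT when P is very small; for ~475 points out of 16,384, a full FFTW inverse transform will likely still be faster.

    import numpy as np

    def partial_idft(X, n_indices):
        """Inverse DFT evaluated only at the requested time indices.
        X is the full complex spectrum of length N (np.fft.fft layout)."""
        N = len(X)
        k = np.arange(N)
        n = np.asarray(n_indices)[:, None]              # column of output indices
        return (np.exp(2j * np.pi * n * k / N) @ X) / N

    # consistency check against the full inverse FFT
    x = np.random.randn(1024)
    X = np.fft.fft(x)
    assert np.allclose(partial_idft(X, np.arange(32)).real, x[:32])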
I'm trying to find the pitch of a guitar string. Sound is coming in through the mic at a sample rate of 44100 Hz. I'm using 2048 samples for the buffer size. Considering the Nyquist rate, there is no point in using a bigger buffer size. After receiving the data, I apply a Hanning window... and this is the point where I get confused. Should I use a lowpass filter in the time domain, or take the FFT first? If I take the FFT first, wouldn't it be easier to just use the first part of the FFT output, disregarding the rest, since I need frequencies in the range of 50-1000 Hz? After the FFT I will use the Harmonic Product Spectrum to find the fundamental frequency.
What you suggest makes some sense: if you don't need low frequencies, you don't need long buffers. With a long buffer you gain frequency resolution, which might be useful in some circumstances, but you lose time resolution (in the sense that successive estimates are further apart in time).
A few things that don't make sense:
1) using a low-pass digital filter in the computation prior to the FFT (I'm assuming this is what you mean) just takes extra computation time and doesn't really gain you anything.
2) "Considering the Nyquist rate there is no point for using bigger buffer size": these aren't really related. The Nyquist rate determines the maximum frequency of the FFT, and the buffer size determines the frequency resolution, and therefore also the lowest frequency.
It really depends on your pitch detection algorithm, but why would you use a low-pass filter in the first place?
In addition, a guitar usually produces spectral information way beyond 1000Hz. Notes on the high E string easily produce harmonics at 4-5kHz and beyond, and these harmonics are exactly what will make your HPS nice and clear.
The less data used or the shorter your FFT, the lower the resulting FFT frequency resolution.
From what I read here, a guitar ranges from 82.4 Hz (open 6th string) to 659.2 Hz (12th fret on the 1st string), and the difference between the lowest two notes is about 5 Hz.
If possible, I would apply an analog filter after the mic, but before the sampling circuit. Failing that, you would normally apply an FIR filter before shaping everything with the Hanning function. You could also use Decimation to reduce the sample rate, or simply choose a lower sample rate to start with.
Since you are doing an FFT anyway, simply throw away results above 1000 Hz. Sadly, you can't cut back on the number of samples: using fewer samples reduces frequency resolution.
2048 samples at 44100 Hz will give the same resolution as 1024 samples at 22050 Hz, which is the same as 512 samples at 11025 Hz.
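For reference, a minimal sketch of the FFT + Harmonic Product Spectrum step discussed above (the window length, harmonic count, and 50 Hz floor are assumptions; note that 2048 samples at 44100 Hz gives ~21.5 Hz bins, so a longer buffer or peak interpolation is needed to separate adjacent low notes):

    import numpy as np

    def hps_pitch(frame, fs, n_harmonics=4):
        """Estimate the fundamental via the Harmonic Product Spectrum."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        hps = spectrum.copy()
        for h in range(2, n_harmonics + 1):
            decimated = spectrum[::h]            # compress the spectrum by factor h
            hps[:len(decimated)] *= decimated    # harmonics line up at the fundamental
        lo = int(50 * len(frame) / fs)           # ignore bins below ~50 Hz
        hi = len(spectrum) // n_harmonics        # only keep bins where all products exist
        peak = lo + np.argmax(hps[lo:hi])
        return peak * fs / len(frame)            # convert bin index to Hz

    # example: a 110 Hz tone with four harmonics, 8192-sample buffer
    fs, n = 44100, 8192
    t = np.arange(n) / fs
    tone = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in range(1, 5))
    print(hps_pitch(tone, fs))                   # ~110 Hz, within one ~5.4 Hz bin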