Calculate the std of a Kalman filter

I developed a Kalman filter (for k = 1 to 100).
How do I calculate the filter's standard deviations?
In the first question I showed that the Monte-Carlo (30 runs) means of the estimation errors are almost 0, and now I need to show that the KF standard deviations are very close to the Monte-Carlo standard deviations.
I would appreciate help on how to calculate the KF std.
Thanks

Like the α−β−(γ) filter, the Kalman filter utilizes the "Measure, Update, Predict" algorithm. Contrary to the α−β−(γ) filter, the Kalman filter treats measurements, the current state estimate, and the next state estimate (the prediction) as normally distributed random variables. A random variable is described by its mean and variance.
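That variance is where the filter's std comes from: at each step k, take the square root of the (diagonal of the) estimate covariance P_k and compare it to the Monte-Carlo standard deviation of the estimation errors. A minimal 1-D sketch of that comparison (the constant-state model and the values of p0 and r are assumptions, not your filter):

import numpy as np

r, p0 = 1.0, 1.0                 # assumed measurement variance and prior variance
K_STEPS, RUNS = 100, 30
rng = np.random.default_rng(0)

def kf_run(z):
    x, p = 0.0, p0               # prior mean and variance
    stds = np.empty(len(z))
    for i, zk in enumerate(z):
        k = p / (p + r)          # Kalman gain (no process noise in this sketch)
        x = x + k * (zk - x)     # measurement update
        p = (1 - k) * p          # covariance update
        stds[i] = np.sqrt(p)     # the filter's own std: sqrt of the covariance P
    return x, stds

errors = []
for _ in range(RUNS):
    true_x = rng.normal(0.0, np.sqrt(p0))              # truth drawn from the prior
    z = true_x + rng.normal(0.0, np.sqrt(r), K_STEPS)  # noisy measurements
    x_hat, kf_stds = kf_run(z)                         # kf_stds does not depend on z
    errors.append(x_hat - true_x)

print("Monte-Carlo std of final error:", np.std(errors))
print("KF std at k = 100:            ", kf_stds[-1])   # these should be close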

Related

power sampling in GNU Radio

I am using GNU Radio with a HackRF. I need to get a reading for each frequency at the selected decibel level, or at a level above a certain decibel threshold, and save the frequencies/dB to a file.
Trying to solve this, I decided to recreate the "QT GUI Frequency Sink" algorithm in an Embedded Python Block, but unfortunately I lack the knowledge of how to convert complex64 data to an FFT frequency/amplitude signal. I have been hitting a wall for several months, and I would be glad of any advice.
First off: the HackRF is not a calibrated measurement device. You need to use a calibrated device to know, at any given gain, sampling rate, and RF frequency, what a digital power corresponds to in physical units. No exceptions.
but unfortunately I lack the knowledge of how to convert complex64 data to an FFT frequency/amplitude signal.
You literally just apply an FFT of the length of your choosing, then calculate the magnitude of each complex value in the result. There's nothing more to it. (If the fact that the FFT is an algorithm that takes vectors of complex numbers and produces vectors of complex numbers of the same size confuses you, you don't have a programming problem but a math-basics problem!)
In GNU Radio, you can ask for your signal to come in multiples of any given length, so getting vectors of input samples to transform is trivial Python slicing.
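As a rough sketch of that recipe in plain NumPy (outside of GNU Radio; the 1024-point FFT length, the 2 MS/s rate, and the -40 dB threshold are arbitrary assumptions, and the dB values are uncalibrated):

import numpy as np

def fft_magnitude_db(samples, samp_rate):
    # Complex spectrum, shifted so DC sits in the middle like the QT GUI Frequency Sink
    n = len(samples)
    spectrum = np.fft.fftshift(np.fft.fft(samples))
    mag_db = 20 * np.log10(np.abs(spectrum) / n + 1e-12)  # magnitude in (relative) dB
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / samp_rate))
    return freqs, mag_db

# Example with random complex64 data standing in for HackRF samples
x = (np.random.randn(1024) + 1j * np.random.randn(1024)).astype(np.complex64)
freqs, mag_db = fft_magnitude_db(x, samp_rate=2e6)
hits = freqs[mag_db > -40.0]   # frequencies above an assumed -40 dB threshold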

Non-causal filtering in signal post-processing

I have a velocity signal that is calculated from the derivative of a logged position signal. As the calculation is done in post-processing, both previous and future samples are available. I want to improve the quality of my calculated velocity signal by filtering it. I can easily understand how, for example, a moving average filter can be made non-causal by shifting the samples, i.e.
y(k) = (x(k-1) + x(k) + x(k+1))/3
But how does it work for more advanced filters? In signal processing I have worked with causal Chebyshev, Butterworth filters, etc. Does it make sense to apply these kinds of filters in post-processing and shift the data in a similar way as for the moving average, or are there other, more appropriate filters to use?
Yes, filtering a signal with a Chebyshev or Butterworth filter shifts the signal. This is known as the "group delay" of the filter: let φ(ω) be the filter's phase response; then the group delay is the derivative
τ_g(ω) = −dφ(ω)/dω.
The group delay is a function of frequency ω, meaning that in general each frequency might experience a different shift, i.e. dispersion.
For a linear phase filter like the 3-tap moving average filter, this does not happen; group delay is constant over frequency.
But for causal IIR filters, some variation in group delay is unavoidable. For popular IIR designs like Chebyshev or Butterworth, the group delay varies slowly across frequency except around the filter's cutoff.
Like you said, you can advance the signal by some number of samples in post-processing to counteract the shift. Since group delay generally varies with frequency, the number of samples to advance by is a choice about where you want it to match best. Something like the average group delay over the passband is a reasonable place to start.
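As a sketch of that approach with scipy.signal (the 4th-order Butterworth with a 10 Hz cutoff at a 100 Hz sample rate is an assumed design, not a recommendation):

import numpy as np
from scipy import signal

fs = 100.0                                 # assumed sample rate, Hz
b, a = signal.butter(4, 10.0, fs=fs)       # assumed 4th-order Butterworth, 10 Hz cutoff
w, gd = signal.group_delay((b, a), fs=fs)  # group delay in samples vs. frequency in Hz
shift = int(round(gd[w < 10.0].mean()))    # average group delay over the passband

x = np.random.randn(1000)                  # stand-in for the velocity signal
y = signal.lfilter(b, a, x)                # causal filtering delays the signal...
y_aligned = np.roll(y, -shift)             # ...so advance it (crudely; end samples wrap)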
Alternatively, another way to address shifting is "forward-backward filtering". Apply the filter running forward, then again going backward. That way the shift introduced by the forward pass is counteracted in the backward pass. You can use Matlab's filtfilt or scipy.signal's filtfilt to perform forward-backward filtering.
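A minimal forward-backward example under the same assumed design:

import numpy as np
from scipy import signal

x = np.random.randn(1000)               # stand-in for the velocity signal
b, a = signal.butter(4, 10.0, fs=100.0) # assumed design, as above
y = signal.filtfilt(b, a, x)            # forward pass + backward pass: zero net phase shift

Note that the two passes apply the magnitude response twice, so the effective attenuation of the combined filter is doubled.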

Some details about adjusting cascaded AdaBoost stage threshold

I have implemented the AdaBoost algorithm, and currently I am trying to implement the so-called cascaded AdaBoost, based on P. Viola and M. Jones's original paper. Unfortunately, I have some doubts connected with adjusting the threshold for one stage. As we can read in the original paper, the procedure is described in literally one sentence:
Decrease threshold for the ith classifier until the current cascaded classifier has a detection rate of at least d × D_(i-1) (this also affects F_i)
I am mainly unsure about these things:
What is the threshold? Is it the value of the 0.5 * sum(alpha) expression, or only the 0.5 factor?
What should be the initial value of the threshold? (0.5?)
What does "decrease threshold" mean in detail? Do I need to iteratively select a new threshold, e.g. 0.5, 0.4, 0.3? What is the step of the decrease?
I have tried to search this info in Google, but unfortunately I could not find any useful information.
Thank you for your help.
I had the exact same doubt and have not found any authoritative source so far. However, this is my best guess on the issue:
1. (0.5 * sum(alpha)) is the threshold.
2. The initial value of the threshold is the above. Next, try to classify the samples using the intermediate strong classifier (what you currently have). You'll get the score each of the samples attains, and depending on the current value of the threshold, some of the positive samples will be classified as negative, etc. So, depending on the detection rate desired for this stage (strong classifier), reduce the threshold so that that many positive samples get correctly classified,
e.g.:
say the threshold was 10, and these are the current classifier outputs for the positive training samples:
9.5, 10.5, 10.2, 5.4, 6.7
and I want a detection rate of 80% => 80% of the above 5 samples classified correctly => 4 of the above => set the threshold to 6.7
Clearly, by changing the threshold, the FP rate also changes, so update that too; and if the desired FP rate for the stage is not reached, add another weak classifier to that stage.
I have not done a formal course on AdaBoost etc., but this is my observation based on some research papers I tried to implement. Please correct me if something is wrong. Thanks!
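A small sketch of that threshold-lowering step in plain Python (the function name is mine; the scores are the ones from the example above):

import math

def threshold_for_detection_rate(scores, target_rate):
    # scores: strong-classifier outputs for the positive training samples.
    # Returns the largest threshold that still classifies at least
    # target_rate of the positives as positive (score >= threshold).
    ranked = sorted(scores, reverse=True)
    needed = math.ceil(target_rate * len(ranked))
    return ranked[needed - 1]

scores = [9.5, 10.5, 10.2, 5.4, 6.7]
print(threshold_for_detection_rate(scores, 0.80))  # -> 6.7, as in the example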
I have found a Master's thesis on real-time face detection by Karim Ayachi (pdf) in which he describes the Viola-Jones face detection method.
As it is written in Section 5.2 (Creating the Cascade using AdaBoost), we can set the maximal threshold of the strong classifier to sum(alpha) and the minimal threshold to 0 and then find the optimal threshold using binary search (see Table 5.1 for pseudocode).
Hope this helps!
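A rough sketch of that binary search, assuming a detection_rate(threshold) helper that evaluates the current strong classifier on a validation set (the helper, target, and tolerance are my assumptions, not the thesis's pseudocode):

def find_threshold(detection_rate, target, lo, hi, tol=1e-4):
    # lo = 0, hi = sum(alpha). The detection rate is non-increasing in the
    # threshold, so binary-search for the largest threshold meeting the target.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if detection_rate(mid) >= target:
            lo = mid   # target still met: try a higher (stricter) threshold
        else:
            hi = mid   # target missed: the threshold must come down
    return lo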

Second order low-pass filter algorithm

I need to filter some noise from a signal, and a simple RC first-order filter seems not to be enough. I've been looking around, but I haven't found algorithms for other filters (although there are many examples of how to do it with analogue circuits). Can somebody point out where I can find such algorithms? Or at least write one here?
For clarification: I take the signal from an oscilloscope, and I only have one cycle. This cycle looks a little bit like:
125 * (x > 3 ? exp(-(x - 3) / 2) : exp(5*(x - 3)))
Now, the signal does not always have that shape, and I need to compute the derivative of the signal, which would be easy if it weren't for the fact that when one zooms into the signal enough (each point is 160 nanoseconds apart) one can see a lot of noise. So, before computing derivatives I need to flatten the signal.
If you are asking how to design a filter of higher order than a simple first-order one, how about choosing a filter from here: the Wikipedia article Filter_(signal_processing).
I'm just hypothesizing about your question, so here are a couple of design points.
1) You probably don't want to have ripple (varying gain) in your pass band, as that would distort your signal.
2) You probably don't care about having ripple in your stop band, as the signal should be close to 0 there anyway.
3) The higher the order of the filter, the more it looks like an ideal square-shaped filter.
4) The higher the rolloff the better, you want to cut down on the noise outside of your passband as quickly as possible.
5) You may or may not care about "group delay", which is a measure of the distortion caused by different frequencies taking different times to pass through the filter. For audio, you probably want a group delay that is not too high, as you can imagine that different frequency components undergoing different time (and thus phase) shifts will cause some distortion.
Once you select the filter you want based on these (and possibly other) considerations, then simply implement it using some topology, like those mentioned here
With only a vague description of your requirements it's hard to give any specific suggestions.
You need to specify the parameters of your filter: sample rate, cut-off frequency, width of transition band, pass-band ripple, minimum stop-band rejection, whether phase and group delay are an issue, etc. Once you have at least some of these parameters pinned down then you can start the process of selecting an appropriate filter design, i.e. basic filter type, number of stages, etc.
It would also be helpful to know what kind of signal you want to filter - is it audio, or something else? How many bits per sample?
You need a good definition of your signal, a good analysis of your noise, and a clear understanding of the difference between the two, in order to determine what algorithms might be appropriate for removing one and not eliminating information in the other. Then you need to define the computational environment (integer or float ALU, add and multiply cycles?), and set a computational budget. There's a big difference between a second-order IIR and a giga-point FFT.
Some very commonly used 2nd-order digital filters are described in RBJ's biquad cookbook.
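As a sketch, here is the cookbook's low-pass biquad as a straightforward Direct Form I implementation in plain Python (the 6.25 MHz sample rate follows from the 160 ns spacing mentioned above; the 100 kHz cutoff and the Butterworth-like Q are assumptions):

import math

def biquad_lowpass(x, fs, f0, q=0.7071):
    # RBJ cookbook low-pass coefficients
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    b0 = (1 - cosw0) / 2
    b1 = 1 - cosw0
    b2 = (1 - cosw0) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw0
    a2 = 1 - alpha
    # Direct Form I difference equation
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

# Example: 160 ns per point -> fs = 6.25 MHz; assumed 100 kHz cutoff
# smoothed = biquad_lowpass(samples, fs=6.25e6, f0=100e3)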

Aging a dataset

For reasons I'd rather not go into, I need to filter a set of values to reduce jitter. To that end, I need to be able to average a list of numbers, with the most recent having the greatest effect, and the least recent having the smallest effect. I'm using a sample size of 10, but that could easily change at some point.
Are there any reasonably simple aging algorithms that I can apply here?
Have a look at exponential smoothing. It is fairly simple, and might be sufficient for your needs. Basically, recent observations are given relatively more weight than older ones.
Also (depending on the application) you may want to look at various reinforcement learning techniques, for example Q-learning or TD-learning, or generally speaking any method involving a discount factor.
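A minimal exponential-smoothing sketch (the 0.3 smoothing factor is an arbitrary assumption; larger values weight the most recent samples more heavily):

def exponential_smoothing(values, alpha=0.3):
    # s[n] = alpha * x[n] + (1 - alpha) * s[n-1]
    smoothed = values[0]
    for x in values[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed

print(exponential_smoothing([10, 12, 11, 30]))  # -> 16.504; the recent 30 has the largest weight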
I ran into something similar in an embedded control application.
The simplest option that I came across was a 3/4 filter. This gets applied continuously over the entire data set:
current_value = (3*current_value + new_value)/4
I eventually decided to go with a 16-tap FIR filter instead:
Overview
FIR FAQ
Wikipedia article
Many weighted averaging algorithms could be used.
For example, for items I(n) for n = 1 to N in sequence (newest to oldest):
SUM(I(n) * (N + 1 - n)) / SUM(n)
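That formula in plain Python (linear weights, newest item first):

def aged_average(items):
    # items[0] is the newest sample, items[-1] the oldest
    n = len(items)
    weights = range(n, 0, -1)          # newest gets weight n, oldest gets 1
    return sum(w * x for w, x in zip(weights, items)) / sum(weights)

print(aged_average([30, 11, 12, 10]))  # the newest sample counts 4x the oldest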
It's not exactly clear from the question whether you're dealing with fixed-length data or if data is continuously coming in. A nice physical model for the latter would be a low-pass filter, using a capacitor and a resistor (R and C). Assuming your data is equidistantly spaced in time (is it?), this leads to the update prescription

U_aged[n+1] = U_aged[n] + deltat/Tau * (U_raw[n+1] - U_aged[n])

where Tau is the time constant of the filter. In the limit of small deltat, this gives an exponential decay (old values will be reduced to 1/e of their value after time Tau). In an implementation, you only need to keep the running weighted value U_aged. deltat would be 1, and Tau would specify the 'aging constant', the number of steps it takes to reduce a sample's contribution to 1/e.
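A sketch of that update rule (deltat = 1; the Tau value is an arbitrary assumption):

def rc_lowpass(samples, tau=10.0, deltat=1.0):
    # U_aged[n+1] = U_aged[n] + deltat/Tau * (U_raw[n+1] - U_aged[n])
    aged = samples[0]
    history = [aged]
    for u_raw in samples[1:]:
        aged += (deltat / tau) * (u_raw - aged)
        history.append(aged)
    return history

# A sample's contribution decays to 1/e after about Tau steps
print(rc_lowpass([0.0, 0.0, 10.0, 10.0, 10.0], tau=2.0))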
