I would like to run the Kalman smoother from the R package imputeTS to impute the missing values of several univariate time series. From the literature it seems that the initialization of the first value might have a significant effect on the inference results, but the documentation of the imputeTS package unfortunately does not make clear what the default initialization of its Kalman smoother is. I would appreciate any hint on this.
Thanks in advance,
Marco
I am currently using the CONOPT4 solver to solve a nonlinear programming problem. The nonlinearity is of the form z = x*y and z = x/y, and all variables are continuous. I specified some scaling factors and the solving performance improved a lot. However, when I further refined some scaling factors to project the values into the range from 0.01 to 100, the solving time became longer, which is really weird. I cannot provide my code here, and I know it's impossible to give a specific reason without the code, but could you share your experience with tuning scaling factors in general when using the CONOPT solver? Thanks a lot.
I'm implementing a Kalman filter which fuses 3D position data provided by two different computer vision algorithms. I am modeling the problem with a 9-dimensional state vector (position, velocity, and acceleration). However, the data from the two sensors does not arrive at the same time. Since I compute the velocity from the time step between the previous data point and the current one, two consecutive data points can be quite different yet separated by only a very small time step, which makes it look as if the position has changed very rapidly.
I am wondering if anyone has insight or direction on the best way to approach this problem: will the Kalman filter itself be tolerant of this behaviour? Or should I collect all data received within a time window into a bin and run the update/predict cycle less frequently on a batch of data? The resources I've seen on using Kalman filters for object tracking use only one camera (i.e. synchronous data), so I'm having trouble finding information related to my use case.
Any help is very much appreciated! Thank you!
From what I gathered from your question and our conversation in the comments, let me first briefly describe the issue and then suggest a solution.
A quick recap
You have a system with two independent sensors which take measurements at different rates (30 Hz and 5 Hz), possibly with some time jitter. The good news is that each such measurement is completely sufficient to perform an update step of your Kalman filter. Each measurement has a time stamp.
Another important point is that the measurements may have poor precision, so the change in position does not look plausible.
A possible solution
Define a smallest time interval for calling your Kalman filter, so that none of the received measurements has to wait too long to be processed. A 100 Hz rate looks like a good first choice; in this case your dt would be 0.01 s.
Design your F and Q matrices based on the chosen dt (they both strongly depend on this value).
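As a rough illustration (in Python/NumPy, since no particular environment is specified), this is one common way to build per-axis F and Q blocks for a constant-acceleration model with the chosen dt; the state ordering and the process-noise intensity q are assumptions you would adapt to your setup:

import numpy as np

dt = 0.01   # the 100 Hz scheduling interval suggested above
q = 0.5     # process-noise intensity; purely a tuning assumption

# Per-axis constant-acceleration transition block,
# state ordering per axis: (position, velocity, acceleration)
F_axis = np.array([[1.0, dt,  0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])

# One common discretized white-noise (jerk) process-noise block for the same axis
Q_axis = q * np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                       [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                       [dt**3 / 6,  dt**2 / 2, dt]])

# Full 9-state matrices, assuming the three axes are modeled independently
# (state ordering: x-block, then y-block, then z-block)
F = np.kron(np.eye(3), F_axis)
Q = np.kron(np.eye(3), Q_axis)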
In each call without a measurement, execute only the prediction step. As soon as a measurement arrives, perform an update. Your call sequence would then look like this (a rough sketch of the scheduling loop follows the sequence):
call sequence:
init()
predict()
predict()
predict()
predict()
update(sensor1)
predict()
update(sensor2)
update(sensor1)
predict()
predict()
update(sensor1)
predict()
and so on...
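As a rough sketch of such a scheduling loop (Python; the kf object with predict()/update() methods, the measurement queue, and R_for_sensor are illustrative placeholders, not the API of any specific library):

import queue

DT = 0.01   # 100 Hz base rate, as suggested above

def run_filter_step(kf, measurements, R_for_sensor):
    """One call every DT seconds: always predict, then update with whatever arrived.

    kf            -- a filter object exposing predict() and update(z, R) (illustrative API)
    measurements  -- a queue.Queue of (timestamp, sensor_id, z) tuples filled by the sensor threads
    R_for_sensor  -- dict mapping sensor id to its measurement-noise matrix R
    """
    kf.predict()                       # prediction step, propagates the state by DT
    while True:
        try:
            t, sensor_id, z = measurements.get_nowait()
        except queue.Empty:
            break                      # no (more) measurements in this interval
        kf.update(z, R=R_for_sensor[sensor_id])   # one update per received measurement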
To deal with the precision issue you could use a reference signal (the ground truth). Analyze the error of each sensor reading for each component (x, y, z) compared to the reference. A Kalman filter can work well ONLY with readings whose error is normally distributed with zero mean. If you see some systematic offset, maybe you can get rid of it. From the observed error you can calculate the standard deviation (and the variance), so you can tell your filter how good the measurements are. This will be your R matrix.
If you don't have a reference, you can take some measurements while standing still in the same place. Your reference position is then constant, and you can look at the dispersion of the readings.
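As a small illustration of estimating R from such a static recording (Python/NumPy; the file name is just a placeholder):

import numpy as np

# N x 3 array of (x, y, z) readings recorded while the sensor stands still
readings = np.loadtxt("sensor1_static.csv", delimiter=",")

mean_pos = readings.mean(axis=0)      # with a known reference, mean_pos - reference is the systematic offset
residuals = readings - mean_pos       # dispersion around the (constant) position
R = np.cov(residuals, rowvar=False)   # 3x3 measurement-noise covariance for this sensor

print("per-axis std dev:", residuals.std(axis=0))
print("R =\n", R)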
Tune the elements of your Q matrix to describe the possible dynamics of your state elements. A smaller Q element for position tells the filter not to change it too fast, so the (possibly) poor precision of your sensors is partially suppressed (think of a low-pass filter as intuition).
I hope this helps. Please correct me if I misunderstood something.
It would be helpful to see a plot of your sensor readings (and if possible of the reference trajectory).
I have a stream of data measurements with an initial increasing phase that is followed by a plateau. The measurements are noisy without clear bound. I would like to stop ingesting the stream when the plateau is detected:
bool not_const = true;
while (not_const)
{
    // ingest the next noisy measurement from the stream
    add_measurement( stream.get() );
    // stop once the plateau (constant phase) is detected
    not_const = !is_const();
}
Is there a well-known algorithm for dealing with such a problem? I know about Kalman filters, but I'm not sure whether they are specifically made for this.
The Kalman filter will cover your noise, so long as the variance is calculable. Yes, it can help in this situation. Depending on your application, you may find that the first derivative of a moving average will do as well for you. Kalman merely optimizes some linear parameters to give a "best" prediction of actual (vs observed-through-noise) values.
You still need to handle your interpretation of that prediction series. You need to define what constitutes a "plateau". How close to 0 do you need the computable slope? Does that figure depend on the preceding input? How abrupt is the transition between the increase and the plateau? The latter considerations would suggest looking at the second derivative as well: a quick-change detector of some ilk.
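As a minimal sketch of that idea (Python; the window length, slope tolerance, and hold count are assumptions you would tune for your data):

from collections import deque

class PlateauDetector:
    """Flags a plateau when the first difference of a moving average stays near zero."""

    def __init__(self, window=50, slope_tol=1e-3, hold=20):
        self.window = deque(maxlen=window)   # recent raw measurements for the moving average
        self.prev_ma = None                  # previous moving-average value
        self.flat_count = 0                  # consecutive near-zero slopes seen so far
        self.slope_tol = slope_tol           # |slope| below this counts as "flat" (tune!)
        self.hold = hold                     # how many consecutive flat steps define a plateau (tune!)

    def add_measurement(self, x):
        self.window.append(x)
        ma = sum(self.window) / len(self.window)
        if self.prev_ma is not None:
            slope = ma - self.prev_ma        # first derivative of the moving average (per sample)
            self.flat_count = self.flat_count + 1 if abs(slope) < self.slope_tol else 0
        self.prev_ma = ma

    def is_const(self):
        return self.flat_count >= self.hold

# usage, mirroring the loop in the question:
# det = PlateauDetector()
# while not det.is_const():
#     det.add_measurement(stream.get())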
I have a problem statement which requires that, if a particular error/event happens on
1-Jan-2017 and then on
22-Feb-2017,
3-April-2017,
9-July-2017,
I have to predict when the next event is going to occur. I am planning to try the Kalman filter theory, but it involves a lot of statistical terms, and on the internet I didn't find any easy explanation or simple programming example of a Kalman filter algorithm that estimates the next event dates. Can someone explain it in simple terms, or suggest a parallel algorithm that can be used for the same purpose?
Let Ei be the ith event, and let IETi = Ei+1 - Ei be the ith Inter-Event Time, i.e., the time between one event and the next. Then Ei+1 = Ei + IETi — the next event can be forecast from the most recent event based on IET.
Since the past is already determined, the only random quantity when you're projecting the next event is IET, so E[Ei+1] = Ei + E[IETi] (where E[] denotes expected value). You don't need to know the distribution of IET to estimate its expected value; you only need to assume that the IETs are identically distributed. (They don't even need to be independent.) In other words, if the IETs are identically distributed, the average of historical IETs is an unbiased estimator of their expected value.
There is a simple Kalman filter estimator to update estimates of an average as you obtain new data. See equations (2) & (3) from this post on math.stackexchange.
Note that this approach just gives a point predictor for the expected value. It won't allow you to make any probability statements about how likely it is the next event happens before or after some specified date. To do that you would need distributional information about the IETs.
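A small sketch of this point estimator using the dates from the question (Python; the recursive form of the running mean is just the usual "update an average as new data arrives" rule, shown here for illustration):

from datetime import date, timedelta

events = [date(2017, 1, 1), date(2017, 2, 22), date(2017, 4, 3), date(2017, 7, 9)]

# Inter-event times in days: here 52, 40 and 97 days
iets = [(b - a).days for a, b in zip(events, events[1:])]

# Running-mean update: mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n
mean_iet = 0.0
for n, x in enumerate(iets, start=1):
    mean_iet += (x - mean_iet) / n

# Point forecast for the next event: most recent event plus the average IET
next_event = events[-1] + timedelta(days=round(mean_iet))
print(mean_iet, next_event)   # 63.0 and 2017-09-10 for these dates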
Edit: Thanks to #pjs for his remarks. I will update my answer accordingly as soon as I can. However, many authors in the robotics/computer vision communities (e.g. Thrun et al.) seem to define Kalman filters directly as Gaussian filters (and, for those familiar with the computer vision/SLAM literature, some computer vision works seem to discard standard EKF-based SLAM since the Gaussian assumption doesn't hold for 3D points). In #pmj's answer, the Gaussian filter is actually nothing more than a running average and doesn't provide a covariance (which, in some applications, can be considered the only justification for using a Kalman filter instead of non-linear minimization of an equivalent cost function), so it seems pretty useless without an assumption on the distribution... So I wonder if this is what motivates those authors' choices, or if it is just to simplify the discussion.
I think that Kalman filtering has very little to do with what you want to achieve. I will detail why after briefly explaining, in simple terms, what a Kalman filter does.
A Kalman filter estimates the current state x_t of a dynamic system based on all the previous observations, or in more mathematical terms, it models the probability distribution
p(x_t|z_1,...,z_t)
where the z_i are your observations (i.e. measurements). Moreover, it is designed with a Gaussian assumption in mind. That is, it assumes that the distributions of your states/errors, including the one above, are Gaussian. Furthermore, it requires a model that links the measurements to the states, something like
z_t=f(x_t)+some_gaussian_noise
and you also need a transition model that links the previous state to the current one, e.g.
x_t=g(x_{t-1})+some_gaussian_noise
This comes with the assumption of having a "complete state": the knowledge of the current state is taken to be enough to predict the next one.
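To make these two models concrete, here is a minimal scalar (1-D, linear-Gaussian) predict/update step in Python; all numbers are illustrative:

# Scalar Kalman step: x_t = a*x_{t-1} + process noise (variance q),
#                     z_t = h*x_t    + measurement noise (variance r)
def kalman_step(x, p, z, a=1.0, h=1.0, q=0.01, r=1.0):
    # predict: propagate mean and variance through the transition model
    x_pred = a * x
    p_pred = a * p * a + q
    # update: correct with the new measurement z
    k = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

# usage: start from a prior (x0, p0) and feed measurements z_1, z_2, ...
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1]:
    x, p = kalman_step(x, p, z)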
So, this is why I think it won't work with your model:
Given the information you've provided, I see no sign that you can assume the distribution of the events is Gaussian. It probably is not.
You don't have any transition equation, and I don't even think it is possible to define one for your problem. Moreover, the complete-state assumption doesn't hold.
Your state, as well as its observations, seems to be discrete, while Kalman filters are designed with continuous parameter spaces in mind.
Unfortunately, you haven't provided much information, so I can only suggest that you model your problem as a Markov chain, which I think you already had thought about.
Hope this helps a bit.
I am looking for a way to combine data from a compass and gyro in order to determine attitude after the fact. I will be working with a complete data set in which the 3D compass and gyro readings have been recorded at regular intervals, but I want to recover an estimate of attitude in post-processing.
I've considered simply using a Kalman filter, since they are so well documented, but would rather use something more appropriate to a case where the complete data set is known. I have a feeling the solution is "simply" a least squares problem, but I'm hoping someone here can point me in the direction of a paper or two dealing with this problem (or problems like it).
At this point, I'm not even sure what this filter would be called, so I'm having a hard time finding useful search terms. Any help would be appreciated.
Thanks so much!
If you understand the Kalman filter in detail, you can also implement the so-called Kalman smoother, which operates on the complete data set.
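For a rough idea of what such a smoother involves, here is a sketch of the fixed-interval (Rauch-Tung-Striebel) backward pass in Python/NumPy, assuming you have stored the filtered and predicted means and covariances from a forward Kalman filter pass (the argument names are illustrative):

import numpy as np

def rts_smoother(xs, Ps, xs_pred, Ps_pred, F):
    """Backward (RTS) smoothing pass.

    xs, Ps           -- filtered means/covariances x_{t|t}, P_{t|t} from the forward pass
    xs_pred, Ps_pred -- predicted means/covariances x_{t|t-1}, P_{t|t-1} (same length)
    F                -- state-transition matrix used in the forward pass
    """
    n = len(xs)
    xs_s = [None] * n
    Ps_s = [None] * n
    xs_s[-1], Ps_s[-1] = xs[-1], Ps[-1]   # last smoothed estimate = last filtered estimate
    for t in range(n - 2, -1, -1):
        C = Ps[t] @ F.T @ np.linalg.inv(Ps_pred[t + 1])        # smoother gain
        xs_s[t] = xs[t] + C @ (xs_s[t + 1] - xs_pred[t + 1])
        Ps_s[t] = Ps[t] + C @ (Ps_s[t + 1] - Ps_pred[t + 1]) @ C.T
    return xs_s, Ps_s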
However, let me warn you about one thing. There is no such thing as a Kalman filter for programmers. The Kalman filter is difficult to understand, and you won't be able to implement and use it correctly if you do not understand it.
My implementation is almost what you are looking for. I used accelerometers and gyroscopes but no compass. It is based on this manuscript; read it first. The most detailed description I have at the moment is slides 29-32 of my presentation on sensor fusion. It is an open-source project, and I plan to release an updated version of the solver in the upcoming weeks.