Kalman filter with acceleration. State or control vector?

I have a basic understanding question in Kalman filter which I haven't found an answer yet.
Assume I want to implement a Kalman filter with a constant acceleration dynamic.
I can either add the acceleration to the state vector and the F matrix: X_t = X_(t-1) + V·Δt + 0.5·a·Δt².
OR, I can add the acceleration to the U control vector.
What is the profound difference between these two methods, and given that I have acceleration measurements, which is the better policy?
You can find both approaches on Google.
All the best,
Roi

U refers to the input to the system in the standard control-theory state-space representation (for more information see here), and therefore what you do depends on the context of your specific problem.
It sounds like you are trying to estimate the position of a target moving with constant acceleration. This means that the target's position has a defined model of motion, and this model will be encoded in F. Think of a bicyclist looking at the speedometer while rolling down a hill and not pedaling at all (the bicyclist has no input to the system). If the acceleration is not constant (the hill has a varying slope) and you have access to real-time measurements of the acceleration, then you can modify your system to estimate the acceleration as well. If the acceleration is unknown, then read the following Stack Overflow post: here.
If you were tracking the target while having direct control of the acceleration then you would put it in U. Think of YOU being the bicyclist looking at the speedometer (estimating the speed) and deciding whether you should pedal faster or slower.
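To make the contrast concrete, here is a minimal Python/NumPy sketch of the two formulations (a 1-D target and a 0.1 s time step, both made up for illustration). The prediction step gives the same position and velocity either way; what differs is whether the acceleration is part of what the filter estimates or something you supply:

```python
import numpy as np

dt = 0.1   # made-up time step

# Option 1: acceleration lives in the state vector x = [p, v, a].
# F encodes p <- p + v*dt + 0.5*a*dt^2 and v <- v + a*dt.
F_state = np.array([[1.0, dt, 0.5 * dt**2],
                    [0.0, 1.0, dt],
                    [0.0, 0.0, 1.0]])

# Option 2: state is x = [p, v]; the acceleration enters as a known
# control input u, mapped onto the state by B.
F_ctrl = np.array([[1.0, dt],
                   [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])

a = 2.0
x_state = np.array([0.0, 1.0, a])   # p=0, v=1, a=2
x_ctrl = np.array([0.0, 1.0])       # p=0, v=1
u = np.array([a])

x_state_pred = F_state @ x_state
x_ctrl_pred = F_ctrl @ x_ctrl + B @ u
# Both formulations predict the same p and v (0.11 and 1.2 here).
```

If the acceleration is something you measure (noisily), putting it in the state lets the filter smooth it along with everything else; U is better reserved for inputs you command and therefore know exactly.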

Related

Kalman Filter implementation to estimate position with IMU under high impacts and acceleration

I am trying to implement a Kalman filter to estimate the position of my arm moving in the sagittal plane (2D). To do this I have an IMU; as is usually done, I use the gyro as the input to my state model and the accelerometer as my observation.
Regarding the bias, I used 0.001 for the variances of the covariance matrix of the state estimation equation and 0.03 for the variance of the accelerometer (measurement).
This filter works really well if I move my arm slowly from 0 to 90°. But if I perform sudden movements, the accelerometer makes my estimate move downward and it is not very precise (I'm off by about 15°); once I move slowly it works well again. So the response under high acceleration/sudden movement is not good.
For this reason, I've thought of having a variance switch which tracks the variance of the last 10-20 values of my accelerometer angle measurements and if the variance is above a certain level I would increase the variance of the accelerometer in the covariance matrix.
Would this be an accurate approach in a system with very high accelerations? What would be a more correct way to estimate the angle under sudden movements? As I mentioned, the result I get when the accelerometer has low variance is very good, but not when "shaken fast".
Also, I would assume that due to this behavior, the accelerometer's variance does not behave according to a gaussian distribution, but I would not know how to model this behavior.
You can run a "bank of filters", that is, independent filters with different noise levels for the variance, and then compute a weighted average of the estimates based on their likelihood (link to a reference). You can find several references in the literature; during my recent work I discovered that Y. Bar-Shalom has documented such an approach.
In scientific terms, what you are describing is an adaptive stochastic state estimation problem; long story short, there exist methods to change the modelled measurement noise on-line depending on performance indications from the filter.
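As a rough illustration of the bank-of-filters idea (a hand-rolled 1-D sketch with made-up noise levels, not Bar-Shalom's exact formulation): run one filter per hypothesized measurement variance and weight their estimates by the likelihood of each filter's innovation:

```python
import numpy as np

def kf_step(x, P, z, q, r):
    """One predict/update cycle of a scalar random-walk Kalman filter.
    Returns the updated state, covariance, and the likelihood of z."""
    P = P + q                                   # predict (F = 1)
    S = P + r                                   # innovation covariance
    innov = z - x
    like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    K = P / S
    return x + K * innov, (1 - K) * P, like

rng = np.random.default_rng(0)
z_seq = 1.0 + 0.1 * rng.standard_normal(200)    # noisy measurements of 1.0

# Two hypotheses about the measurement noise: one trusts the sensor,
# one distrusts it. The true variance here is 0.01.
filters = [{"x": 0.0, "P": 1.0, "r": 0.01},
           {"x": 0.0, "P": 1.0, "r": 1.0}]
w = np.array([0.5, 0.5])

for z in z_seq:
    likes = []
    for f in filters:
        f["x"], f["P"], like = kf_step(f["x"], f["P"], z, 1e-4, f["r"])
        likes.append(like)
    w = w * np.array(likes)
    w = w / w.sum()                             # keep the weights normalized

# Combined estimate: likelihood-weighted average of the bank.
x_hat = sum(wi * f["x"] for wi, f in zip(w, filters))
```

Over time the weight concentrates on the filter whose noise hypothesis best explains the innovations, which is the adaptive behavior described above.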
All the best,
D.D.
Denmark

What is the best beta value in Madgwick filter?

When I use the default value, which is 0.0041 or 0.033, the rotations are weird when I send the quaternion data to Unity 3D. When I change the beta value to 0.001, the rotations are good but there is a slight drift over time. I am using an LSM9DS0 IMU sensor.
Here is my code: Madgwick_Arduino
Looking at the original article written by Sebastian Madgwick, we can find the following paragraph about beta value:
β is the divergence rate of {^S_E}q_ω expressed as the magnitude of a
quaternion derivative corresponding to the gyroscope measurement error.
Later, it says:
The filter computes {^S_E}q_est as the rate of change of orientation measured by the gyroscopes, {^S_E}q_ω, with the magnitude of the gyroscope measurement error, β, removed in the direction of the estimated error, {^S_E}q_e, computed from accelerometer and magnetometer measurements.
So, the beta magnitude is directly related to the gyroscope error (I understand bias to be the most important source of error here), but it is expressed directly over the components of a quaternion gradient. This means it has neither an intuitive unit nor an obvious optimal magnitude.
On the other hand, Madgwick filter assumes that the accelerometer measures gravity. This means that it is affected by horizontal accelerations. Filter parameters (the two it has) need to be adjusted for your specific case, achieving a tradeoff between gyro bias correction and sensitivity to horizontal accelerations.
As a rule of thumb, increasing beta leads to (a) faster bias correction and (b) higher sensitivity to lateral accelerations.
My previous experience with this filter required a few hours of experiments + manual tuning until we reached a satisfactory result. We didn't need to touch those values ever again.
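For intuition about what beta does, here is a stripped-down Python sketch of the gradient-descent IMU update (accelerometer + gyro only, no magnetometer) following the structure described in Madgwick's paper. This is an illustrative rewrite, not his reference implementation, and the beta/dt values are arbitrary:

```python
import numpy as np

def madgwick_imu_update(q, gyro, accel, beta, dt):
    """One gradient-descent IMU update in the style of Madgwick's filter.
    q = [w, x, y, z] unit quaternion, gyro in rad/s."""
    q0, q1, q2, q3 = q
    gx, gy, gz = gyro
    ax, ay, az = accel / np.linalg.norm(accel)

    # Objective function: predicted gravity direction minus measured one.
    f = np.array([2*(q1*q3 - q0*q2) - ax,
                  2*(q0*q1 + q2*q3) - ay,
                  2*(0.5 - q1*q1 - q2*q2) - az])
    # Jacobian of f with respect to the quaternion components.
    J = np.array([[-2*q2,  2*q3, -2*q0, 2*q1],
                  [ 2*q1,  2*q0,  2*q3, 2*q2],
                  [    0, -4*q1, -4*q2,    0]])
    grad = J.T @ f
    n = np.linalg.norm(grad)
    if n > 1e-12:
        grad = grad / n

    # Rate of change of orientation from the gyroscope: 0.5 * q ⊗ (0, ω).
    q_dot = 0.5 * np.array([-q1*gx - q2*gy - q3*gz,
                             q0*gx + q2*gz - q3*gy,
                             q0*gy - q1*gz + q3*gx,
                             q0*gz + q1*gy - q2*gx])
    # Beta scales how hard the accelerometer error pulls against the gyro.
    q = q + (q_dot - beta * grad) * dt
    return q / np.linalg.norm(q)

# Device held still: gyro = 0, accelerometer reads pure gravity. Repeated
# updates pull a perturbed quaternion back toward the accelerometer solution.
q = np.array([0.9, 0.1, 0.1, 0.0])
q = q / np.linalg.norm(q)
for _ in range(2000):
    q = madgwick_imu_update(q, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                            beta=0.1, dt=0.01)
```

A larger beta makes the per-step correction toward the accelerometer's gravity direction stronger, which is exactly the bias-correction vs. lateral-acceleration-sensitivity tradeoff described above.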

Filtering rotational acceleration (Appropriate use for Kalman filter?)

I'm working on a project in which a rod is attached at one end to a rotating shaft. So, as the shaft rotates from 0 to ~100 degrees back-and-forth (in the xy plane), so does the rod. I mounted a 3-axis accelerometer at the end of the moving rod, and I measured the distance of the accelerometer from the center of rotation (i.e., the length of the rod) to be about 38 cm. I have collected a lot of data, but I'm in need of help to find the best method to filter it. First, here's a plot of the raw data:
I think the data makes sense: if it's ramping up, then at that point the acceleration should be linearly increasing, and when it's ramping down, it should linearly decrease. If it's moving at constant speed, the acceleration will be ~zero. Keep in mind, though, that sometimes the speed changes (is higher) from one "trial" to the other. In this case there were ~120 "trials" (movements/sweeps), with data sampled at 148 Hz.
For filtering, I've tried a low-pass filter and then an exponentially decreasing moving average, and neither plot was too hot. And although I'm not good at interpreting these, here is what I got when coding a power-frequency plot:
What I was hoping to get help with here is a really good method by which I can filter this data. The one thing that keeps coming up time and time again (especially on this site) is the Kalman filter. While there's lots of code online that helps implement these in MATLAB, I haven't been able to actually understand it that well, and therefore neglect to include my work on it here. So, is a Kalman filter appropriate here, for rotational acceleration? If so, can someone help me implement one in MATLAB and interpret it? Is there something I'm not seeing that may be just as good or better and relatively simple?
Here's the data I'm talking about. Looking at it more closely/zooming in gives a better appreciation for what's going on in the movement, I think:
http://cl.ly/433B1h3m1L0t?_ga=1.81885205.2093327149.1426657579
Edit: OK, here is the plot of both relevant dimensions collected from the accelerometer. I am neglecting to include the up-and-down dimension, as the accelerometer shows a near-constant ~1 G there, so I think it's safe to say it's not capturing much of the rotational motion. Red is what I believe is the centripetal component, and blue is tangential. I have no idea how to combine them, though, which is why I (maybe wrongly?) ignored it in my post.
And here is the data for the other dimension:
http://cl.ly/1u133033182V?_ga=1.74069905.2093327149.1426657579
Forget the Kalman filter, see the note at the end of the answer for the reason why.
Using a simple moving-average filter (like I showed you in an earlier reply, if I recall), which is in essence a low-pass filter:
n = 30 ; %// length of the filter
kernel = ones(1,n)./n ;
ysm = filter( kernel , 1 , flipud(filter( kernel , 1 , flipud(y) )) ) ;
%// assuming your data "y" are in COLUMN (otherwise change 'flipud' to 'fliplr')
Note: if you have access to the Curve Fitting Toolbox, you can simply use ys = smooth(y,30); to get nearly the same result.
I get:
which once zoomed look like:
You can play with the parameter n to increase or decrease the smoothing.
The gray signal is your original signal. I strongly suspect that the noise spikes you are getting are just due to vibrations of your rod (depending on the length-to-cross-section ratio of your rod, you can get significant vibrations at the end of a 38 cm rod; these vibrations take the shape of oscillations around the main carrier signal, which definitely looks like what I am seeing in your data).
Note:
The Kalman filter is way overkill for simple filtering of noisy data. A Kalman filter is used when you want to estimate a value (a position, to follow your example) from noisy measurements, but to refine the estimate it also uses a prediction of the state based on the previous state and the inertial data (how fast you were rotating, for example). For that prediction you need a "model" of your system's behavior, which you do not seem to have.
In your case, you would need to predict the acceleration seen by the accelerometer from the (known or theoretical) rotation speed of the shaft at each point in time, the distance of the accelerometer to the center of rotation, and, probably, to make it more precise, a dynamic model of the main vibration modes of your rod. Then at each step you would compare that to the actual measurement... which seems a bit heavy for your case.
Look at the quick figure explaining the Kalman filter process in this wikipedia entry : Kalman filter, and read on if you want to understand it more.
I will also propose a low-pass filter, but an ordinary first-order inertial model instead of a Kalman filter. I designed a filter with a passband up to 10 Hz (~0.1 of your sampling frequency). The discrete model has the following equation:
y[k] = 0.9418*y[k-1] + 0.05824*u[k-1]
where u is your measured vector and y is the vector after filtering. The recursion starts at sample number 1, so you can just assign 0 to sample number 0.
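The recursion is trivial to sketch in Python; note that the two coefficients sum to ≈1, so a constant (DC) input passes through with gain ≈1, which is a quick sanity check on the design:

```python
def first_order_lowpass(u, a=0.9418, b=0.05824):
    """Apply y[k] = a*y[k-1] + b*u[k-1] with y[0] = 0."""
    y = [0.0]
    for k in range(1, len(u)):
        y.append(a * y[-1] + b * u[k - 1])
    return y

# DC gain is b/(1 - a) ≈ 1.0007: a constant input settles near itself.
y = first_order_lowpass([1.0] * 300)
```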

iOS: CoreMotion Acceleration Values

We can retrieve the acceleration data from CMAcceleration.
It provides 3 values, namely x, y and z.
I have been reading up on this and I seem to have gotten different explanation for these values.
Some say they are the acceleration values in respect to gravity.
Others have said they are not; they are the acceleration values with respect to the device's axes as it rotates about them.
Which is the correct version here? For example, does x represent the acceleration for pitch, or from left to right?
In addition, let's say we want to get the acceleration rate (how fast) for yaw; how could we derive that value when the callback is constantly feeding us values? Would we need to set up another timer for the calculation?
Edit (in response to #Kay):
Yes, that was basically it - I just wanted to make sure that x, y, z on the one hand and pitch, roll, yaw on the other are represented differently by the frame.
1.)
How are these related in certain situations? Besides getting a value for, say, yaw, would additional information from x, y, z be needed?
2.)
Can you explain a little more on this:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
Would we need to use a timer for the time values? And how would the above yield an angular acceleration? I thought angular acceleration entailed more complex maths.
3.)
In a real-world situation we can hardly rely on a single value from pitch, roll and yaw, because it is impossible for us to rotate about only one axis (our hand is not that "stable", especially after 5 cups of coffee...).
Let's say I would like to get the values of yaw (yes, rotation on the z-axis), but at the same time as yaw spins I want to check it against pitch (x-axis).
Yes, 2 motions combine here (imagine the phone is rotating around z with slight movement going towards and away from the user's face).
So: is there a mathematical model (or one from your own personal experience) to derive a value from the values of different axes? (Sample case: if the user is spinning on the z-axis and at the same time also making a movement on the x-axis - good. If not, not a motion we need.) Sample case just off the top of my head.
I hope my sample case above with both yaw and pitch makes sense to you. If not, please feel free to cite a better use case for explanation.
4.)
Lastly time. How can we get time as a reference frame to check how fast a movement is since the last? Should we provide a tolerance (Example: "less than 1/50 of a second since last movement - do something. If not, do nothing.")? Where and when do we set a timer?
The class reference of CMAccelerometerData says:
X-axis acceleration in G's (gravitational force)
The acceleration is measured in local coordinates as shown in figure 4-1 of the Event Handling Guide. It is always a translation and must not be confused with radial or circular motions, which are measured as angles.
Anyway, every rotation, even at constant angular velocity, involves a change of direction, and thus an acceleration is reported as well; see Circular Motion.
What do you mean by "get the acceleration rate (how fast) for yaw"?
Based on figure 4-2 in Handling Rotation Rate Data, yaw rotation occurs around the Z axis. That means there is a continuous linear acceleration in the X,Y plane. If you are interested in the angular acceleration, you need to take CMDeviceMotion.rotationRate and divide the change by the time delta, e.g.:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
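Numerically this is just a first-order finite difference. Here is a language-neutral Python sketch of the same computation (the sample values are invented; in CoreMotion you would take the timestamps from CMDeviceMotion rather than a timer of your own):

```python
def angular_acceleration(rate_z, t, prev_rate_z, prev_t):
    """Finite-difference estimate of the angular acceleration (rad/s^2)
    around the z axis from two consecutive rotation-rate samples."""
    return (rate_z - prev_rate_z) / (t - prev_t)

# Rotation rate rose from 0.10 to 0.15 rad/s over 10 ms:
alpha_z = angular_acceleration(0.15, 0.110, 0.10, 0.100)   # ≈ 5.0 rad/s^2
```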
Update:
It depends on what you want to do and which motions you are interested in tracking. I hope you don't want to get the exact device position in x, y, z when doing a translation, as this is impossible. The orientation, i.e. the rotation relative to g, can be determined very well, of course.
I think in >99% of all cases you won't need additional information from accelerations when working with angles.
Don't use your own timer. CMDeviceMotion inherits from CMLogItem and thus provides a perfectly matching timestamp of the sensor data, or respectively the interpolated time for the result of the sensor fusion algorithm.
I assume that you don't need angular acceleration.
You are totally right even without coffee ;-) If you look at the motions shown in this video there is exactly the situation you describe. Maths and algorithms were the result of some heavy R&D and I am bound to NDA.
But most use cases are covered by the properties available in CMAttitude. Be cautious with Euler angles when doing calculations, because of gimbal lock.
Again this totally depends on what you are up to.

What algorithm does DeviceMotion use to calculate attitude from the accelerometer and gyro?

According to the reference,
The deviceMotion property is only available on devices having both an accelerometer and a gyroscope. This is because its sub-properties are the result of a sensor fusion algorithm i.e. both signals are evaluated together in order to decrease the estimation errors.
Emm, my question is: where is the internal implementation, or the algorithm, that CMMotionManager uses to do the calculation? I want some detail about this so-called "sensor fusion algorithm".
Popular fusion algorithms are, for instance, the Kalman filter and its derivatives, but I guess CMMotionManager's internal implementation is based on simpler and thus faster algorithms. I expect some simple but good-enough math on the sensor data from the accelerometer and gyroscope to finally calculate roll, yaw and pitch.
It is not clear what is actually implemented in Core Motion.
As for filters other than the Kalman filter: I have implemented sensor fusion for Shimmer 2 devices based on this manuscript.
You may find this answer on Complementary Filters also helpful; see especially filter.pdf.
I would not use roll, pitch and yaw for two reasons: (1) it messes up the stability of your app and (2) you cannot use it for interpolation.
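To illustrate the complementary-filter idea behind the linked filter.pdf, here is a minimal one-axis Python sketch (the blend factor 0.98, the gyro bias, and the sample rate are all made-up values): the gyro term keeps the estimate smooth while the accelerometer term bounds the drift.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """One-axis complementary filter: blend the integrated gyro rate
    (smooth but drifting) with the accelerometer angle (noisy but
    drift-free)."""
    angle = accel_angles[0]
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
    return angle

n, dt = 2000, 0.01
bias = 0.05                       # rad/s gyro bias while the device is still
gyro = np.full(n, bias)           # gyro reports only its bias
accel = np.zeros(n)               # accelerometer says the true angle is 0

fused = complementary_filter(gyro, accel, dt=dt)
drift_only = bias * n * dt        # raw gyro integration would drift to 1.0 rad
# fused settles near alpha*bias*dt/(1-alpha) ≈ 0.025 rad, far below 1.0 rad.
```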
