iOS: CoreMotion Acceleration Values

We can retrieve the acceleration data from CMAcceleration.
It provides three values, namely x, y, and z.
I have been reading up on this and I seem to have gotten different explanations for these values.
Some say they are the acceleration values with respect to gravity.
Others have said they are not; rather, they are the acceleration values with respect to the axes as the device turns around them.
Which is the correct interpretation here? For example, does x represent the acceleration rate for pitch, or the acceleration from left to right?
In addition, let's say we want to get the acceleration rate (how fast) for yaw; how could we derive that value when the callback is constantly feeding us values? Would we need to set up another timer for the calculation?
Edit (in response to @Kay):
Yes, that was basically it - I just wanted to make sure that x, y, z and, respectively, pitch, roll and yaw are represented differently by the frame.
1.)
How are these related in certain situations? Would there ever be a need, besides getting a value for yaw for example, for additional information from the x, y, z values?
2.)
Can you explain a little more on this:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
Would we need to use a timer for the time values? And how would making use of the above generate an angular acceleration? I thought angular acceleration entailed more complex maths.
3.)
In a real-world situation, we can hardly rely on a single value from pitch, roll and yaw alone, because it would be impossible for us to make a rotation on only one axis (our hand is not that "stable", especially after 5 cups of coffee...).
Let's say I would like to get the values of yaw (yes, rotation around the z-axis), but at the same time as yaw spins I want to check it against pitch (x-axis).
Yes, two motions combine here (imagine the phone is rotating around z with a slight movement going towards and away from the user's face).
So: is there a mathematical model (or one from your own personal experience) to derive a value by combining the values of different axes? (Sample case: if the user is spinning on the z-axis and at the same time also making a movement on the x-axis - good. If not, it is not a motion we need.) This sample case is just off the top of my head.
I hope my sample case above with both yaw and pitch makes sense to you. If not, please feel free to cite a better use case for explanation.
4.)
Lastly, time. How can we use time as a reference to check how fast a movement is since the last one? Should we provide a tolerance (example: "less than 1/50 of a second since the last movement - do something; otherwise, do nothing")? Where and when do we set up a timer?

The class reference of CMAccelerometerData says:
X-axis acceleration in G's (gravitational force)
The acceleration is measured in local coordinates, as shown in figure 4-1 of the Event Handling Guide. It is always a translation and must not be confused with radial or circular motions, which are measured in angles.
Anyway, every rotation, even one with a constant angular velocity, involves a change in direction, and thus an acceleration is reported as well; see Circular Motion.
What do you mean by get the acceleration rate (how fast) for yaw?
Based on figure 4-2 in Handling Rotation Rate Data, the yaw rotation occurs around the Z axis. That means there is a continuous linear acceleration in the X,Y plane. If you are interested in angular acceleration, you need to take the change in CMDeviceMotion.rotationRate and divide it by the time delta, e.g.:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
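For illustration, here is a minimal Swift sketch of that calculation, using the timestamp that CMDeviceMotion inherits from CMLogItem instead of a separate timer (the previousRotationRateZ/previousTimestamp variables are just illustrative state):
import Foundation
import CoreMotion

// Sketch: estimate angular acceleration around Z (yaw) by differencing
// successive rotation rates, using the sensor timestamps from CMLogItem.
let motionManager = CMMotionManager()
var previousRotationRateZ: Double?
var previousTimestamp: TimeInterval?

motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    if let lastRate = previousRotationRateZ, let lastTime = previousTimestamp {
        let dt = motion.timestamp - lastTime
        if dt > 0 {
            let angularAccelZ = (motion.rotationRate.z - lastRate) / dt   // rad/s^2
            print("yaw angular acceleration: \(angularAccelZ) rad/s^2")
        }
    }
    previousRotationRateZ = motion.rotationRate.z
    previousTimestamp = motion.timestamp
}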
Update:
It depends on what you want to do and which motions you are interested in tracking. I hope you don't want to get the exact device position in x, y, z when doing a translation, as this is impossible. The orientation, i.e. the rotation relative to g, can of course be determined very well.
I think in >99% of all cases you won't need additional information from accelerations when working with angles.
Don't use your own timer. CMDeviceMotion inherits from CMLogItem and thus provides a perfectly matching timestamp of the sensor data, or respectively the interpolated time for the result of the sensor fusion algorithm.
I assume that you don't need angular acceleration.
You are totally right, even without coffee ;-) If you look at the motions shown in this video, there is exactly the situation you describe. The maths and algorithms were the result of some heavy R&D, and I am bound by an NDA.
But most use cases are covered by the properties available in CMAttitude. Be cautious with Euler angles when doing calculations, because of gimbal lock.
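As a rough illustration of working with CMAttitude and the rotation rates together, here is a Swift sketch for the combined yaw-plus-pitch case from the question (the thresholds are made-up illustrative values, not recommendations):
import CoreMotion

// Sketch: detect "spinning around z while also tilting around x" using
// CMDeviceMotion. attitude.quaternion is available if gimbal lock with
// Euler angles ever becomes a concern.
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let attitude = motion?.attitude, let rate = motion?.rotationRate else { return }

    let yawSpinning = abs(rate.z) > 1.0    // rad/s, illustrative threshold
    let pitchMoving = abs(rate.x) > 0.3    // rad/s, illustrative threshold

    if yawSpinning && pitchMoving {
        print("combined motion: yaw = \(attitude.yaw), pitch = \(attitude.pitch)")
    }
}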
Again this totally depends on what you are up to.

Related

DJIWaypointMissionHeadingTowardPointOfInterest, but the gimbal pitch angle?

Using DJIWaypointMissionHeadingTowardPointOfInterest as the heading mode in a DJIWaypointMission, I can automatically rotate the drone's heading towards a POI, but is there a way to also automatically tilt the camera to frame the POI?
(Unluckily, the "pointOfInterest" also has no altitude property.)
Also, I think it would be better to change/define the heading mode in each DJIWaypoint instead of it being a property of the entire DJIWaypointMission; is that possible?
I'm adding to what Ken said.
When you use 'isGimbalPitchRotationEnabled' you can set a pitch angle in each waypoint. The drone will change the pitch angle in a linear motion between every two waypoints.
Of course this will not work when, during the flight between the two points, the drone gets closer to the POI and then backs away.
What I'm doing in my app is dividing the straight line between the 2 points into several straight sections and calculating the correct pitch at each point. As I divide the original line I calculate the error (the difference between the calculated pitch and the linear extrapolated pitch). If the error is greater than some value (5 degrees, for example) I divide the line recursively and re-calculate the pitch, until the error at each point is small enough. It takes some geometric calculations when preparing the mission, but it produces amazing fly-by shots.
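Here is a rough Swift sketch of that subdivision idea, using a flat-earth approximation and a hypothetical Point3D type in a local metric frame (this is plain geometry, not DJI SDK code):
import Foundation

// Hypothetical local coordinates in metres; not a DJI SDK type.
struct Point3D { var x: Double; var y: Double; var altitude: Double }

// Gimbal pitch in degrees (negative = pointing down) needed to look at the
// POI from a given position, using simple flat-earth geometry.
func pitchToward(poi: Point3D, from position: Point3D) -> Double {
    let horizontal = hypot(poi.x - position.x, poi.y - position.y)
    let vertical = poi.altitude - position.altitude
    return atan2(vertical, horizontal) * 180.0 / .pi
}

// Recursively split the segment a-b until the linearly interpolated pitch is
// within maxErrorDegrees of the true pitch at the midpoint.
func subdivide(from a: Point3D, to b: Point3D, poi: Point3D,
               maxErrorDegrees: Double = 5.0) -> [Point3D] {
    let mid = Point3D(x: (a.x + b.x) / 2, y: (a.y + b.y) / 2,
                      altitude: (a.altitude + b.altitude) / 2)
    let truePitch = pitchToward(poi: poi, from: mid)
    let interpolated = (pitchToward(poi: poi, from: a) + pitchToward(poi: poi, from: b)) / 2
    let segmentLength = hypot(b.x - a.x, b.y - a.y)
    if abs(truePitch - interpolated) <= maxErrorDegrees || segmentLength < 1.0 {
        return [a, b]   // error small enough (or segment already tiny)
    }
    // Otherwise split and recurse, dropping the duplicated midpoint.
    let left = subdivide(from: a, to: mid, poi: poi, maxErrorDegrees: maxErrorDegrees)
    let right = subdivide(from: mid, to: b, poi: poi, maxErrorDegrees: maxErrorDegrees)
    return left + right.dropFirst()
}
Each returned point would then become an extra waypoint carrying its own gimbalPitch value.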
Take a look at these settings:
isGimbalPitchRotationEnabled and gimbalPitch
This isn't a 'true' POI location though, because the pitch is divided equally based on the distance between the waypoints, but it's close.
In my app I manually control the gimbal but that means a connection must be maintained or the gimbal stops moving.
Yes, it would be nice if the POI in waypoints were true POI functionality; I've made the suggestion many times, but to no avail so far. I'd like to see the POI be per waypoint (as a LocationCoordinate3D) rather than the current single POI for the whole mission.
I'd also like the location in the waypoint to be consistent and of type LocationCoordinate3D.

How to use raw gyroscope data (°/s) for calculating 3D rotation?

My question may seem trivial, but the more I read about it - the more confused I get... I have started a little project where I want to roughly track the movements of a rotating object. (A basketball to be precise)
I have a 3-axis accelerometer (low-pass-filtered) and a 3-axis gyroscope measuring °/s.
I know about the issues of a gyro, but as the measurements will only last several seconds and the angles tend to be huge, I don't care about drift and gimbal lock right now.
My gyro gives me the rotation speed around all 3 axes. As I want to integrate the acceleration twice to get the position at each timestep, I wanted to convert the sensor's coordinate system into an earthbound system.
For the first try, I want to keep things simple, so I decided to go with the big standard rotation matrix.
But as my results are horrible, I wonder if this is the right way to do it. If I understood correctly, the matrix is simply 3 matrices multiplied in a certain order. As the rotation of a basketball doesn't have any "natural" order, this may not be a good idea. My sensor measures 3 angular velocities at once. If I feed them into my system "step by step" it will not be correct, since my second matrix calculates the rotation around the "new y-axis", but my sensor actually measured an angular velocity around the "old y-axis". Is that correct so far?
So how can I correctly calculate the 3D rotation?
Do I need to go for quaternions? But how do I get one from 3 different rotations? And don't I have the same issue here again?
I start with an identity matrix ((1, 0, 0)(0, 1, 0)(0, 0, 1)) multiplied with the acceleration vector to give me the first movement.
Then I want to use the rotation matrix to find out where the next acceleration is really heading, so I can simply add the accelerations together.
But right now I am just too confused to find a proper way.
Any suggestions?
Btw, sorry for my poor English; I am tired and (obviously) not a native speaker ;)
Thanks,
Alex
Short answer
Yes, go for quaternions and use a first order linearization of the rotation to calculate how orientation changes. This reduces to the following pseudocode:
float pose_initial[4]; // quaternion describing original orientation
float g_x, g_y, g_z; // gyro rates
float dt; // time step. The smaller the better.
// quaternion with "pose increment", calculated from the first-order
// linearization of continuous rotation formula
delta_quat = {1, 0.5*dt*g_x, 0.5*dt*g_y, 0.5*dt*g_z};
// final orientation at start time + dt
pose_final = quaternion_hamilton_product(pose_initial, delta_quat);
This solution is used in PixHawk's EKF navigation filter (it is open source, check out formulation here). It is simple, cheap, stable and accurate enough.
The unit matrix (describing a "null" rotation) is equivalent to the quaternion [1 0 0 0]. You can get the quaternion describing other poses using a suitable conversion formula (for example, if you have Euler angles you can go for this one).
Notes:
Quaternions follow [w, i, j, k] notation.
These equations assume angular speeds in SI units, that is, radians per second.
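For illustration, here is a self-contained Swift sketch of that first-order update, with the Hamilton product written out by hand (the type and function names are mine, and the final normalization is just an extra safeguard to keep the quaternion close to unit length):
import Foundation

// Quaternion in [w, x, y, z] notation; minimal illustrative implementation.
struct Quaternion {
    var w, x, y, z: Double

    // Hamilton product (composition of rotations).
    static func * (lhs: Quaternion, rhs: Quaternion) -> Quaternion {
        return Quaternion(
            w: lhs.w*rhs.w - lhs.x*rhs.x - lhs.y*rhs.y - lhs.z*rhs.z,
            x: lhs.w*rhs.x + lhs.x*rhs.w + lhs.y*rhs.z - lhs.z*rhs.y,
            y: lhs.w*rhs.y - lhs.x*rhs.z + lhs.y*rhs.w + lhs.z*rhs.x,
            z: lhs.w*rhs.z + lhs.x*rhs.y - lhs.y*rhs.x + lhs.z*rhs.w)
    }

    func normalized() -> Quaternion {
        let n = (w*w + x*x + y*y + z*z).squareRoot()
        return Quaternion(w: w/n, x: x/n, y: y/n, z: z/n)
    }
}

// First-order update: gyro rates gx, gy, gz in rad/s, time step dt in seconds.
func integrate(pose: Quaternion, gx: Double, gy: Double, gz: Double, dt: Double) -> Quaternion {
    let delta = Quaternion(w: 1, x: 0.5*dt*gx, y: 0.5*dt*gy, z: 0.5*dt*gz)
    return (pose * delta).normalized()
}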
Long answer
A gyroscope describes the rotational speed of an object as a decomposition into three rotational speeds around the orthogonal local axes XYZ. However, you could equivalently describe the rotational speed as a single rate around a certain axis - either in a reference system that is local to the rotated body or in a global one.
The three rotational speeds affect the body simultaneously, continuously changing the rotation axis.
Here we have the problem of switching from the continuous-time real world to a simpler discrete-time formulation that can be easily solved using a computer. When discretizing, we are always going to introduce errors. Some approaches will lead to bigger errors, while others will be notably more accurate.
Your approach of concatenating three simultaneous rotations around orthogonal axes works reasonably well with small integration steps (let's say smaller than 1/1000 s, although it depends on the application), so that you are simulating the continuous change of rotation axis. However, this is computationally expensive, and the error grows as you make the time steps bigger.
As an alternative to first-order linearization, you can calculate pose increments as a small delta of angular speed gradient (also using quaternion representation):
quat_gyro = {0, g_x, g_y, g_z};
q_grad = 0.5 * quaternion_product(pose_initial, quat_gyro);
// Important to normalize result to get unit quaternion!
pose_final = quaternion_normalize(pose_initial + q_grad*dt);
This technique is used in the Madgwick rotation filter (here is an implementation), and it works pretty well for me.
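And a Swift sketch of this alternative as well, reusing the Quaternion type from the sketch above (the gyro quaternion has a zero real part, and normalization is required because the sum is no longer of unit length):
// Gradient-style update: q_grad = 0.5 * (pose * gyro), then integrate and normalize.
func integrateGradient(pose: Quaternion, gx: Double, gy: Double, gz: Double, dt: Double) -> Quaternion {
    let gyro = Quaternion(w: 0, x: gx, y: gy, z: gz)
    let rate = pose * gyro
    let next = Quaternion(w: pose.w + 0.5 * rate.w * dt,
                          x: pose.x + 0.5 * rate.x * dt,
                          y: pose.y + 0.5 * rate.y * dt,
                          z: pose.z + 0.5 * rate.z * dt)
    return next.normalized()
}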

Filtering rotational acceleration (Appropriate use for Kalman filter?)

I'm working on a project in which a rod is attached at one end to a rotating shaft. So, as the shaft rotates from 0 to ~100 degrees back-and-forth (in the xy plane), so does the rod. I mounted a 3-axis accelerometer at the end of the moving rod, and I measured the distance of the accelerometer from the center of rotation (i.e., the length of the rod) to be about 38 cm. I have collected a lot of data, but I'm in need of help to find the best method to filter it. First, here's a plot of the raw data:
I think the data makes sense: if it's ramping up, then I think at that point the acceleration should be linearly increasing, and then when it's ramping down, it should linearly decrease. If it's moving at constant speed, the acceleration will be ~zero. Keep in mind though that sometimes the speed changes (is higher) from one "trial" to the other. In this case, there were ~120 "trials" or movements/sweeps, with data sampled at 148 Hz.
For filtering, I've tried a low-pass filter and then an exponentially decreasing moving average, and both plots weren't too hot. And although I'm not good at interpreting these, here is what I got when coding a power frequency plot:
What I was hoping to get help with here is attaining a really good method by which I can filter this data. The one thing that keeps coming up again time and time again (especially on this site) is the Kalman filter. While there's lots of code online that helps with implementing these in MATLAB, I haven't been able to actually understand it that well, and therefore I neglect to include my work on it here. So, is a Kalman filter appropriate here, for rotational acceleration? If so, can someone help me implement one in MATLAB and interpret it? Is there something I'm not seeing that may be just as good/better and is relatively simple?
Here's the data I'm talking about. Looking at it more closely/zooming in gives a better appreciation for what's going on in the movement, I think:
http://cl.ly/433B1h3m1L0t?_ga=1.81885205.2093327149.1426657579
Edit: OK, here is the plot of both relevant dimensions collected from the accelerometer. I am neglecting to include the up-and-down dimension, as the accelerometer shows a near-constant ~1 G there, so I think it's safe to say it's not capturing much rotational motion. Red is what I believe is the centripetal component, and blue is tangential. I have no idea how to combine them though, which is why I (maybe wrongfully?) ignored it in my post.
And here is the data for the other dimension:
http://cl.ly/1u133033182V?_ga=1.74069905.2093327149.1426657579
Forget the Kalman filter, see the note at the end of the answer for the reason why.
Using a simple moving average filter (like I showed you in an earlier reply, if I recall), which is in essence a low-pass filter:
n = 30 ; %// length of the filter
kernel = ones(1,n)./n ;
ysm = filter( kernel , 1 , flipud(filter( kernel , 1 , flipud(y) )) ) ;
%// filtering forward, then backward (via the two flipud calls) gives a zero-phase result
%// assuming your data "y" is in a COLUMN (otherwise change 'flipud' to 'fliplr')
Note: if you have access to the Curve Fitting Toolbox, you can simply use ys = smooth(y,30); to get nearly the same result.
I get:
which, once zoomed, looks like:
You can play with the parameter n to increase or decrease the smoothing.
The gray signal is your original signal. I strongly suspect that the noise spikes you are getting are just due to the vibrations of your rod. (Depending on the length-to-cross-section ratio of your rod, you can get significant vibrations at the end of your 38 cm rod. These vibrations will take the shape of oscillations around the main carrier signal, which is definitely what I am seeing in your signal.)
Note:
The Kalman filter is way overkill for simple filtering of noisy data. A Kalman filter is used when you want to calculate a value (a position, if I follow your example) based on some noisy measurements, but to refine the calculation, the Kalman filter also uses a prediction of the position based on the previous state (position) and the inertial data (how fast you were rotating, for example). For that prediction you need a "model" of the behavior of your system, which you do not seem to have.
In your case, you would need to calculate the acceleration seen by the accelerometer based on the (known or theoretical) rotation speed of the shaft at any point in time, the distance of the accelerometer from the center of rotation, and, probably to make it more precise, a dynamic model of the main vibration modes of your rod. Then for each step, compare that to the actual measurement... which seems a bit heavy for your case.
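For concreteness, here is a small Swift sketch of what that measurement model would involve for this geometry (r is the measured 38 cm rod length; the angular velocity and acceleration values are placeholders):
import Foundation

// Expected reading for an accelerometer mounted at radius r on a rotating rod:
// centripetal component a_c = omega^2 * r (toward the center of rotation),
// tangential component  a_t = alpha * r   (along the direction of motion).
let r = 0.38          // m, distance from the center of rotation
let omega = 2.0       // rad/s, shaft angular velocity (placeholder)
let alpha = 5.0       // rad/s^2, shaft angular acceleration (placeholder)

let centripetal = omega * omega * r
let tangential = alpha * r
print("expected centripetal: \(centripetal) m/s^2, tangential: \(tangential) m/s^2")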
Look at the quick figure explaining the Kalman filter process in this wikipedia entry : Kalman filter, and read on if you want to understand it more.
I would propose a low-pass filter as well, but an ordinary first-order inertial model instead of a Kalman filter. I designed a filter with a pass band up to 10 Hz (~0.1 of your sample frequency). The discrete model has the following equation:
y[k] = 0.9418*y[k-1] + 0.05824*u[k-1]
where u is your measured signal and y is the signal after filtering. This equation starts at sample number 1, so you can just assign 0 to sample number 0.
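A minimal Swift sketch of that difference equation, with the coefficients taken from above (the function and variable names are illustrative):
// First-order IIR low-pass: y[k] = 0.9418*y[k-1] + 0.05824*u[k-1], with y[0] = 0.
func lowPass(_ u: [Double]) -> [Double] {
    guard !u.isEmpty else { return [] }
    var y = [Double](repeating: 0.0, count: u.count)
    for k in 1..<u.count {
        y[k] = 0.9418 * y[k - 1] + 0.05824 * u[k - 1]
    }
    return y
}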

How to make navigation with a 3-axis accelerometer, a 3-axis gyroscope and GPS

As the title states, I want to do dead reckoning with an accelerometer and a gyroscope. The accelerometer gives linear acceleration, which can be integrated once to get velocity, or twice to get the distance travelled within the sample interval. I can also get the change in angle by integrating the gyroscope output. So, given an initial position, I can get a new position by dead reckoning with distance and angle.
The idea is perfect, but reality is not so simple. The accelerometer and gyroscope are unstable and always affected by temperature or misaligned axes. I know there is a popular method called the Kalman filter to combine these sensors with GPS and protect the navigation from output noise, but I think it is beyond my competence for now.
Firstly, I want to know how to remove the gravity component that is mixed into the real accelerometer output.
Second, how do I correct gyroscope error?
Lastly, how do I implement a Kalman filter with accelerometer, gyroscope and GPS?
Any suggestion is welcome; if you can give some code, that would be best of all!
Thank you.
[edit #2013/12/12]:
I have given up on using the accelerometer for calculating velocity and distance due to its large drift, and the error grows more and more under double integration. Luckily, this is for a car in this case, so I prefer to read the velocity from the CAN bus; it proved to be more accurate than the accelerometer. The velocity question has now been resolved, but the other questions are still open in my opinion. I am expecting more good answers.
Look here: https://stackoverflow.com/a/19764828/2521214
And read the whole thing, including the comments!!!
Also view this: http://www.youtube.com/watch?v=C7JQ7Rpwn2k ...
The video link is from AlexWien's answer in the same thread.

Detecting acceleration's change frequency with accelerometer

I have a 3-axis accelerometer and I need to detect the frequency of changes in acceleration. It's OK when the object is moving straight, without rotation - in that case I can simply ignore g (the acceleration due to gravity); it has a constant direction and won't affect the frequency. But what can I do when the object is rotating while moving? Is it even possible to (somehow) subtract g with only an accelerometer, without a gyroscope? Maybe there are some methods of rough calculation or approximation?
The following could work if your accelerometer's gain is the same in all x, y and z directions.
You don't know the orientation of the car, so you cannot subtract gravity from the sensor readings. However, when the car starts moving, or there is sharp braking or acceleration, then the length of the measured acceleration vector changes too.
I would use a low-pass filter to filter out the very high-frequency noise that you certainly get in a car. It requires some tweaking and testing, but it should not be too difficult.
Of course this is rather a workaround; the true solution would be to use gyroscopes. Good luck anyway!
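As an illustration of that idea, here is a minimal Swift sketch that tracks the magnitude of the acceleration vector and smooths it with a simple exponential low-pass (the smoothing factor is an arbitrary starting value to tune):
import Foundation

// |a| = sqrt(x^2 + y^2 + z^2) is independent of orientation, so gravity alone
// keeps it near 1 g, while sharp acceleration or braking changes it.
struct MagnitudeLowPass {
    var value: Double = 1.0     // start near 1 g (gravity only)
    let alpha: Double           // smoothing factor in (0, 1]

    init(alpha: Double) { self.alpha = alpha }

    mutating func update(x: Double, y: Double, z: Double) -> Double {
        let magnitude = (x * x + y * y + z * z).squareRoot()
        value += alpha * (magnitude - value)    // exponential low-pass toward |a|
        return value
    }
}

var filter = MagnitudeLowPass(alpha: 0.1)                  // illustrative smoothing factor
let smoothed = filter.update(x: 0.02, y: -0.98, z: 0.05)   // feed each sample, in g
print(smoothed)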
