Calculating angular rate - algorithm

I'm simulating a physical object using a mass-spring system. By means of deltas and cross products, I can easily calculate the up, forward and side vectors.
I want to calculate the angular rate (how fast it's spinning) about the object-space X, Y and Z axes. Calculating the world-space angle first won't help, since I need the angular rate in object space (as a sensor glued to the object would see it).
Any 3D maths people out there know how to do this?

I believe you want to take the CG (center of gravity) of all the masses. Average the velocities of all the masses (using a mass-weighted average); this is the velocity of the object. Then take the velocity of each mass minus the velocity of the CG, and compute the angular velocity from this relative velocity and the position relative to the CG via a cross product. This gives you the angular velocity vector in world coordinates. Average it over all the masses, since the individual estimates will differ slightly as the springs allow deformation. Finally, project this angular velocity vector onto each (world-space) sensor axis via a dot product and you have your object-space angular rate about that axis. Each sensor axis must be a unit vector, and you'll need three of them - which you say you can get.
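A minimal sketch of that procedure in Python/NumPy (the array names and the three axis vectors are placeholders for whatever your simulation provides):

    import numpy as np

    def body_angular_rates(positions, velocities, masses, x_axis, y_axis, z_axis):
        # Mass-weighted CG and object velocity.
        w = masses / masses.sum()
        cg = (w[:, None] * positions).sum(axis=0)
        v_cg = (w[:, None] * velocities).sum(axis=0)

        # Per-mass state relative to the CG.
        r = positions - cg
        v = velocities - v_cg

        # For a rigid rotation v = omega x r, so omega ~= (r x v) / |r|^2 per mass.
        r2 = (r * r).sum(axis=1)
        ok = r2 > 1e-12                      # skip masses sitting on the CG
        omega_each = np.cross(r[ok], v[ok]) / r2[ok][:, None]

        # Average out the spring deformation to get one world-space vector.
        omega_world = (w[ok][:, None] * omega_each).sum(axis=0) / w[ok].sum()

        # Project onto the world-space body axes (unit vectors) for sensor readings.
        return np.array([omega_world @ x_axis,
                         omega_world @ y_axis,
                         omega_world @ z_axis])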

You might use Lagrangian mechanics to describe the system dynamics.

Related

How to extrapolate object position and rotation into the future/past?

Let's say we have the homogeneous transformation matrix of the position and rotation of a camera for n different points in time. We also have m different images taken by this camera, which weren't necessarily taken at the same instant the htm data was received. For example:
We have camera htms at t=1, 3, 5
And we have images at t=1.5, 4, 6
and so on.
I want to be able to roughly guess where the camera was and what its rotation was at the time a certain image was taken. For example:
We want the htm of the camera at t=6
We have htms at t=4, 5, 5.5
Another example:
We want the htm of the camera at t=6
We have htms at t=4, 5, 8
I was thinking of using a simple angular and linear velocity calculation from the two closest htms, but the angular velocity may need to be expressed in Euler angles, which suffer from gimbal lock.
Is there any better/easier way to achieve this effect? I am trying to map my environment using these values so precision is pretty important. Any help is appreciated!
I have the same problem. You can use linear interpolation/extrapolation. Separate the interpolation into position and orientation.
Use the LERP algorithm for position and the SLERP algorithm for orientation, as sketched below.
If you want a better model you can use a spline to interpolate the trajectory, but that can quickly become complicated.
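A minimal sketch of that approach in Python/NumPy, assuming each pose has been split into a position vector and a unit quaternion (x, y, z, w); values of t outside [0, 1] extrapolate:

    import numpy as np

    def lerp(p0, p1, t):
        # Linear interpolation/extrapolation of position.
        return (1.0 - t) * p0 + t * p1

    def slerp(q0, q1, t):
        # Spherical linear interpolation of unit quaternions; t outside [0, 1]
        # extrapolates along the same great arc.
        q0 = q0 / np.linalg.norm(q0)
        q1 = q1 / np.linalg.norm(q1)
        dot = float(np.dot(q0, q1))
        if dot < 0.0:                 # take the shorter way around
            q1, dot = -q1, -dot
        if dot > 0.9995:              # nearly parallel: normalized lerp is stable
            q = lerp(q0, q1, t)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    # Example: poses known at t=4 and t=5, image taken at t=6 -> parameter 2.0.
    t = (6.0 - 4.0) / (5.0 - 4.0)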

Scaling two meshes of the same object

I computed a reconstruction using SfM techniques and am able to extract a 3D mesh. However, the mesh has no absolute scale, as is expected with SfM.
To scale the mesh, I am able to generate planes of the scene with real-world scale.
I tried to play around with ICP to scale and register the SfM mesh to match the scale of the planes but was not very successful. Could anyone point me in the right direction on how to solve this issue? I would like to scale the SfM mesh to match the real world scale. (I do not need to register the two meshes)
You need to relate some distance in the model to some measurable distance in the physical world. The easiest is probably the camera height above the floor plane. If that is not available, then perhaps the height of the bed or the size of the pillow.
Let's say the physical camera height is 1.6 m and in the model the camera is 800 units of length above the floor plane; then the scale factor you need to apply (to get 1 unit of length = 1 mm) is:
scale_factor = 1600 / 800 = 2.0
I ended up doing the following; hope this helps someone, and if anyone has a better suggestion, I will take it.
1) I used pyrender to render the two meshes from known poses in the two worlds to get exact correspondences.
2) I then used Procrustes analysis to figure out the scaling factor by computing the transformation of one mesh to the other (see the sketch below). You can get Procrustes analysis from here.
I am able to retrieve a scaling factor that is in an acceptable range.
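A minimal sketch of step 2, assuming the rendered correspondences are stacked into matching (N, 3) arrays; only the scale term of the Procrustes/similarity alignment is needed here, since registration is not required:

    import numpy as np

    def procrustes_scale(src, dst):
        # src, dst: (N, 3) corresponding points from the two renders.
        src_c = src - src.mean(axis=0)       # center both point sets
        dst_c = dst - dst.mean(axis=0)
        # Ratio of RMS distances from the centroids is the least-squares scale.
        return np.sqrt((dst_c ** 2).sum() / (src_c ** 2).sum())

    # mesh.apply_scale(procrustes_scale(sfm_points, plane_points))  # e.g. with trimesh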

Initial camera intrinsic and extrinsic matrix and 3D point coordinates for Bundle Adjustment

I want to reconstruct a 3D scene using multiple RGB cameras. The input data has no camera calibration information, so I want to use a bundle adjustment algorithm (Ceres Solver) to estimate the calibration.
Now I have already obtained pairwise matched feature points, but I find that bundle adjustment (Ceres Solver) also needs initial camera intrinsic and extrinsic matrices and 3D point coordinates as input. However, I do not have this information and I do not know how to generate an initial guess, either.
What should I do to generate the initial camera intrinsic and extrinsic matrices and 3D point coordinates?
Thanks very much!
Initial parameters are important to help the algorithm converge to the right local minimum, and therefore to obtain a good reconstruction. You have different options to find the intrinsics of your camera(s):
If you know the camera brand(s) used for taking the pictures, you could try to find those intrinsics in a database. Important parameters for you are the CCD width and the focal length (in mm). Try this one.
Check the EXIF tags of your images. You can use tools like jhead or exiftool for that purpose.
You basically need the focal length in pixels and the lens distortion coefficients. To calculate the focal length in pixels you can use the following equation:
focal_pixels = res_x * (focal_mm / ccd_width_mm)
If you can't find the intrinsic parameters for your camera(s), you can use the following approximation as an initial guess:
focal_pixels = 1.2 * res_x
Don't set these parameters as fixed, so that the focal length and distortion parameters get optimized in the bundle adjustment step.
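Putting the two formulas above together, a minimal sketch of building an initial intrinsic matrix (Python/NumPy; the helper name is hypothetical, and the principal point is assumed at the image center):

    import numpy as np

    def initial_intrinsics(res_x, res_y, focal_mm=None, ccd_width_mm=None):
        if focal_mm is not None and ccd_width_mm is not None:
            focal_pixels = res_x * (focal_mm / ccd_width_mm)   # from EXIF/database
        else:
            focal_pixels = 1.2 * res_x                         # rough initial guess
        # Distortion coefficients start at zero; bundle adjustment refines everything.
        return np.array([[focal_pixels, 0.0,          res_x / 2.0],
                         [0.0,          focal_pixels, res_y / 2.0],
                         [0.0,          0.0,          1.0]])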
On the other hand, the extrinsic parameters are the values of the [R|t] (roto-translation) matrix of every camera, calculated and optimized in the reconstruction and bundle adjustment steps. Since scale is unknown in SfM scenarios, the first reconstructed pair of cameras (the ones with the highest score in the cross-matching step) is generated from points projected at a random depth value (Z towards the scene). You don't need any initial values for the extrinsics or the 3D point coordinates.

The IMU/location algorithm used by Tango?

My use case is only concerned with localization, in fact only 2D localization, so a lot of the cool capabilities in Tango are probably not useful to me. I'm trying to see if I could implement the location algorithm myself.
From teardown reports it seems the 9-DOF sensors are pretty commodity hardware, and the basic integration-based location algorithm (even with magnetic-field calibration) is mature knowledge. What algorithm does Tango use?
From the description it seems that Tango tries to aid navigation by using the images it sees as a reference, sort of like the "terrain-following" mode in cruise missiles. Is this right? That would be too complex for me to implement.
You can easily get a 2D position using the TangoPoseData with the correct coordinate system:
Project Tango uses a right-handed, local-level frame for the START_OF_SERVICE and AREA_DESCRIPTION coordinate frames. This convention sets the Z-axis aligned with gravity, with Z+ pointed upwards, and the X-Y plane is perpendicular to gravity and locally level with the ground plane. This local-level convention is based on the local east-north-up (ENU) earth-based coordinate system. Instead of true north, Project Tango uses the direction the back of the device is pointed when the service started as the Y axis, and the X axis is pointed to the right. The START_OF_SERVICE and AREA_DESCRIPTION base coordinate frames of the API will use this local-level frame convention.
Said more simply, use the pose data y/x coordinates for your space as you would latitude/longitude for the earth.
Heading data is also derived from the TangoPoseData and can be converted from a quaternion to Euler angles. Euler angles may be easier for you to use in your 2D location app.
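For a 2D app you only need the yaw about the gravity-aligned Z axis. A minimal sketch using the standard quaternion-to-Euler conversion, assuming the orientation is given as (x, y, z, w):

    import math

    def heading_from_quaternion(x, y, z, w):
        # Yaw (rotation about Z, which Tango aligns with gravity), in radians.
        return math.atan2(2.0 * (w * z + x * y),
                          1.0 - 2.0 * (y * y + z * z))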
Tango uses 3D to increase the confidence of its position within the space...even if you don't need 3D. I would let Tango do the hard stuff and extract the 2D position so you can focus on your app.
Tango uses the camera images to detect any change in position, and uses the IMU for device rotation and acceleration. Try blocking the camera and using the Motion Tracking app; it will fail.

Rigid Body Physics Rotations

I want to create a physics engine in Java. However, it's not the code I'm concerned about; it's simply the math of rigid-body physics, specifically forces and how they affect the rotation of an object.
Let's say, for example, that I have a square. The square is accelerating towards the ground due to gravity (no air resistance). This means there is an acceleration vector of (0, -9.8) m/s^2 acting on every point in the square.
Now let's say that this square is rotated slightly. When the rotated square comes into contact with the ground (a flat surface), there will be an impulse at the point of contact (most likely a corner of the square). However, what happens at the other corners of the square? How are they affected, given the original force of gravity?
I apologize if my question isn't detailed enough. I'd love to upload a diagram but I don't yet have the reputation.
Rotation is a form of kinetic energy.
First, the analogy to linear movement:
alpha - angular position [rad]
omega - angular speed [rad/s]
epsilon - angular acceleration [rad/s^2]
epsilon(t) = d(omega(t))/dt = d^2(alpha(t))/dt^2
Now the inertia:
I - moment of inertia [kg.m^2]
m - mass [kg]
M - torque [N.m]
And some equations to exploit:
#1: M = epsilon * I - torque needed to achieve angular acceleration, or vice versa [N.m]
#2: acc = epsilon * radius - perimeter acceleration [m/s^2]
#3: vel = omega * radius - perimeter speed [m/s]
Equation #1 can be used to directly compute the force. Equations #2 and #3 can be used to calculate friction-based forces like wheel grip/drag. Do not forget the kinetic energy Ek = 0.5*m*vel^2 + 0.5*I*omega^2, so you can exploit conservation of energy.
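As a small illustration of equation #1 and the energy formula (a Python sketch; all names are hypothetical):

    def step_rotation(alpha, omega, torque, inertia, dt):
        # Equation #1 rearranged: epsilon = M / I.
        epsilon = torque / inertia
        omega += epsilon * dt        # integrate angular speed [rad/s]
        alpha += omega * dt          # integrate angular position [rad]
        return alpha, omega

    def kinetic_energy(m, vel, inertia, omega):
        # Ek = 0.5*m*vel^2 + 0.5*I*omega^2 - useful as a conservation check.
        return 0.5 * m * vel**2 + 0.5 * inertia * omega**2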
During continuous contact of a rotating object1 with object2, the following happens:
Perimeter speed/acceleration creates an interaction force. This slows down the rotation, creating a drag force on object2 and a reaction force on object1.
If object1 is not fixed, then this force also creates a torque and rotates object1.
If the rotation is forced to stop suddenly, then the entire rotational part of the kinetic energy goes into the collision reaction force impulse.
If the object is in a more complicated rotational motion, you should compute the actual rotation axis and alpha, omega, epsilon and use those, because an object can undergo several simultaneous rotations, each with a different center of rotation.
Also, if an object is rotating and another rotation is applied about a different axis, this creates a gyroscopic torque, which also produces rotation about a third axis perpendicular to both.
So when you put all this together, you have an idea of what structures you need. Sorry, I can't be more specific without further info about the structures and properties of your simulation ...
Applied forces do not play a role in the calculation of contact impulses, because the impulses are assumed to occur on a time scale much smaller than the simulation time step. Basically, the change in velocity during an impact due to gravity or other forces is negligible.
If I understand correctly, you worry about the different corners of the square - one with an impact, three without.
However, since you want to do rigid body dynamics, it is more helpful to think about the rigid body as having a center of mass (in this case, the square's center), a position, a rotation, and a geometry (in this case the square, but it could be anything).
The corners (vertices) are at a fixed position and orientation with respect to the center of mass - it's only the rigid body's position and rotation that change all four corners' positions in the world at once. An advantage of this view is that it is independent of the geometry - you could have 10 or 20 corners, and the approach would be the same.
With regard to computing the rotation:
Gravity works as before. However, you now have another force (from the impulse over the time it acts), and you have to add the effects of the two in order to get the complete outcome for the system.
The impulse arises from one of the corners being in collision in the case you describe. It has to be computed at the contact point, with a contact normal - in this case the normal of the flat surface.
If the contact normal does not point through the center of mass, this will lead to a rotation (as well as a position change), as sketched below.
How large the position change is depends on how you model the contact computation and resolution: material properties, numerical stepper, impact velocity, time step, ...
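A minimal 2D sketch of that effect (Python/NumPy; J is whatever impulse your contact resolution produces, and all names are placeholders):

    import numpy as np

    def apply_impulse(v, omega, J, r, mass, inertia):
        # v: (2,) linear velocity, omega: scalar angular velocity (2D),
        # J: (2,) impulse at the contact, r: (2,) contact point minus center of mass.
        v = v + J / mass                                       # dv = J / m
        omega = omega + (r[0] * J[1] - r[1] * J[0]) / inertia  # dw = (r x J) / I
        return v, omega

If r is parallel to J (the impulse points through the center of mass), the angular term vanishes and the body only translates; otherwise it picks up spin.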
As others mentioned, reading up on physics (rigid body dynamics) and physics simulations might be a good starting point to understand the concepts better.
