How to extrapolate object position and rotation into the future/past? - rotation

Let's say we have the homogeneous transformation matrix (HTM) of the position and rotation of a camera for n different points in time. We also have m different images taken by this camera, which weren't necessarily taken at the same instants the HTM data was received. For example:
We have camera HTMs at t = 1, 3, 5
And we have images at t = 1.5, 4, 6
and so on.
I want to be able to roughly guess where the camera was and what its rotation was at the time a certain image was taken. For example:
We want the HTM of the camera at t = 6
We have HTMs at t = 4, 5, 5.5
Another example:
We want the HTM of the camera at t = 6
We have HTMs at t = 4, 5, 8
I was thinking of using a simple angular and linear velocity calculation from the two closest HTMs, but the angular velocity may need to be expressed in Euler angles, which suffer from gimbal lock.
Is there any better/easier way to achieve this effect? I am trying to map my environment using these values so precision is pretty important. Any help is appreciated!

I have the same problem. You can use linear interpolation/extrapolation. Separate the problem into position and orientation.
Use LERP for the position and SLERP (spherical linear interpolation on quaternions) for the orientation.
If you want a better model, you can use a spline to interpolate the trajectory, but that can quickly become complicated.
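A minimal sketch of that two-pose approach, assuming the rotation part of each HTM has already been converted to a unit quaternion [x, y, z, w] (the pose structure and function names are invented for illustration). Passing a time outside the two samples extrapolates instead of interpolating:

function lerpVec3(a, b, u) {
  return [a[0] + (b[0] - a[0]) * u,
          a[1] + (b[1] - a[1]) * u,
          a[2] + (b[2] - a[2]) * u];
}

function slerp(q0, q1, u) {
  let dot = q0[0]*q1[0] + q0[1]*q1[1] + q0[2]*q1[2] + q0[3]*q1[3];
  if (dot < 0) { q1 = q1.map(c => -c); dot = -dot; } // take the shorter arc
  if (dot > 0.9995) {
    // Nearly parallel: fall back to a normalized LERP to avoid dividing by sin(0).
    const q = q0.map((c, i) => c + (q1[i] - c) * u);
    const n = Math.hypot(q[0], q[1], q[2], q[3]);
    return q.map(c => c / n);
  }
  const theta = Math.acos(dot);
  const s0 = Math.sin((1 - u) * theta) / Math.sin(theta);
  const s1 = Math.sin(u * theta) / Math.sin(theta);
  return q0.map((c, i) => s0 * c + s1 * q1[i]);
}

// Poses sampled at times tA < tB; u > 1 extrapolates past tB, u < 0 before tA.
function extrapolatePose(poseA, tA, poseB, tB, t) {
  const u = (t - tA) / (tB - tA);
  return {
    position: lerpVec3(poseA.position, poseB.position, u),
    rotation: slerp(poseA.rotation, poseB.rotation, u),
  };
}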

Related

Scaling two meshes of the same object

I computed a 3D mesh using SfM techniques. However, the mesh has no absolute scale, as expected with SfM techniques.
To scale the mesh, I am able to generate planes of the scene with real-world scale.
I tried to play around with ICP to scale and register the SfM mesh to match the scale of the planes but was not very successful. Could anyone point me in the right direction on how to solve this issue? I would like to scale the SfM mesh to match the real world scale. (I do not need to register the two meshes)
You need to relate some distance in the model to some measurable distance in the physical world. The easiest is probably the camera height above the floor plane. If that is not available, then perhaps the height of the bed or the size of the pillow.
Let's say that the physical camera height is 1.6m and in the model the camera is 800 units of length above the floor plane, then the scale factor you need to apply (to get 1 unit of length = 1 mm) is:
scale_factor = 1600 / 800 = 2.0
I ended up doing this; I hope it helps someone, and if anyone has a better suggestion, I will take it.
1) I used pyrender to render the two meshes from known poses in the two worlds to get exact correspondences.
2) I then used Procrustes analysis to figure out the scaling factor by computing the transformation of one mesh to the other. You can get a Procrustes implementation here.
I am able to retrieve a scaling factor that is in an acceptable range.
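The scale-recovery step of a Procrustes analysis boils down to a ratio of RMS spreads around the centroids. A minimal sketch with corresponding 3D points from the two renders (the array layout and function names are assumptions; in Python, scipy.spatial.procrustes gives the full alignment):

function centroid(points) {
  const c = [0, 0, 0];
  for (const p of points) { c[0] += p[0]; c[1] += p[1]; c[2] += p[2]; }
  return c.map(v => v / points.length);
}

function rmsRadius(points, c) {
  let sum = 0;
  for (const p of points) {
    sum += (p[0] - c[0])**2 + (p[1] - c[1])**2 + (p[2] - c[2])**2;
  }
  return Math.sqrt(sum / points.length);
}

// Scale factor that maps the SfM mesh onto the metric (real-world) mesh.
function procrustesScale(sfmPoints, metricPoints) {
  return rmsRadius(metricPoints, centroid(metricPoints)) /
         rmsRadius(sfmPoints, centroid(sfmPoints));
}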

Rigid Body Physics Rotations

I want to create a physics engine in Java. However, it's not the code I'm bothered about; it's the math of rigid body physics, specifically forces and how they affect the rotation of an object.
Let's say, for example, that I have a square. The square is accelerating toward ground level due to gravity (no air resistance). This means there is a gravitational acceleration of (0, -9.8) m/s^2 acting on every point of the square.
Now let's say that this square is rotated slightly. When the rotated square comes into contact with the ground (a flat surface), there will be an impulse at the point of contact (most likely a corner of the square). However, what happens to the forces at the other corners of the square? How are they affected by the original force of gravity?
I apologize if my question isn't detailed enough. I'd love to upload a diagram but I don't yet have the reputation.
Rotation is a form of kinetic energy.
First, the analogy to linear motion:
alpha - angular position [rad]
omega - angular speed [rad/s]
epsilon - angular acceleration [rad/s^2]
d^2(alpha(t))/dt^2 = d(omega(t))/dt = epsilon(t)
Now the inertia:
I - moment of inertia [kg.m^2]
m - mass [kg]
M - torque [N.m]
And some equations to exploit:
#1: M = epsilon*I - torque needed to achieve an angular acceleration, or vice versa [N.m]
#2: acc = epsilon*radius - perimeter (tangential) acceleration [m/s^2]
#3: vel = omega*radius - perimeter (tangential) speed [m/s]
Equation #1 can be used to compute the force directly. Equations #2 and #3 can be used to calculate friction-based forces like wheel grip/drag. Do not forget about the kinetic energy, Ek = 0.5*m*vel^2 + 0.5*I*omega^2, so you can exploit conservation of energy.
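As a quick illustration of equation #1, here is a 2D sketch with made-up numbers (the force, contact point, and square size are assumptions):

// A force applied at a contact point produces a torque about the center
// of mass, which yields an angular acceleration epsilon = M / I.
const m = 1.0;                 // mass [kg]
const side = 0.5;              // square side length [m]
const I = m * side * side / 6; // moment of inertia of a square lamina about its center [kg.m^2]

const r = [0.25, -0.25];       // contact point relative to the center of mass [m]
const F = [0.0, 12.0];         // contact force [N]

// The 2D cross product gives the scalar torque M = r x F [N.m].
const M = r[0] * F[1] - r[1] * F[0];
const epsilon = M / I;         // angular acceleration [rad/s^2]
console.log('torque =', M, 'N.m, epsilon =', epsilon, 'rad/s^2');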
During continuous contact between object1 and a rotating object2, the following happens:
The perimeter speed/acceleration creates an interaction force; this slows down the rotation of object2, creating a drag force on object2 and a reaction force on object1.
If object1 is not fixed, then this force also creates a torque and rotates object1.
If the rotation is forced to stop suddenly, then the rotational part of the kinetic energy goes into the collision reaction force impulse.
If the object is in more complicated rotational motion, then you should compute the actual rotation axis and alpha, omega, epsilon and use those, because an object can undergo several rotations at once, each with a different center of rotation.
Also, if an object is rotating and another rotation is applied about a different axis, this creates a gyroscopic torque, which also produces rotation about a third axis perpendicular to both.
So when you put all of this together, you have an idea of what structures you need. Sorry, I cannot be more specific than this without further info about the structures and properties of your simulation ...
Applied forces do not play a role in the calculation of contact impulses, because the impulses are assumed to occur on a time scale much smaller than the simulation time step. Basically, the change in velocity during an impact due to gravity or other forces is negligible.
If I understand correctly, you are worried about the different corners of the square - one with an impact, three without.
However, since you want to do rigid body dynamics, it is more helpful to think about the rigid body as having a center of mass (in this case, the square's center), a position, a rotation, and a geometry (in this case the square, but it could be anything).
The corners (vertices) keep a constant position and orientation with respect to the center of mass - it is only the rigid body's position and rotation that change all four corners' world positions at once. An advantage of this view is that it is independent of the geometry - you could have 10 or 20 corners, and the approach would be the same.
With regard to computing the rotation:
Gravity works as before. However, you now have another force (from the impulse over the time it acts), and you have to add the effects of the two in order to get the complete outcome of the system.
The impulse will be due to one of the corners being in collision in the case you describe. It has to be computed at the contact point, with a contact normal - in this case the normal of the flat surface.
If the contact normal does not point through the center of mass, the impulse will lead to a rotation (as well as a position change).
The size of the position change depends on how you model the contact computation and resolution: material properties, the numerical stepper, impact velocity, time step, ...
As others mentioned, reading up on physics (rigid body dynamics) and physics simulations might be a good starting point to understand the concepts better.
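A minimal 2D sketch of such an impulse resolution at a single corner (all state values, and the restitution coefficient e, are made-up assumptions; friction is omitted):

const body = {
  m: 1.0,                      // mass [kg]
  I: 1.0 / 24,                 // moment of inertia about the center [kg.m^2]
  v: [0.0, -3.0],              // linear velocity of the center of mass [m/s]
  omega: 0.5,                  // angular velocity [rad/s]
};

const n = [0, 1];              // contact normal of the flat ground
const r = [0.25, -0.35];       // contact corner relative to the center of mass [m]
const e = 0.4;                 // coefficient of restitution (assumed)

// Velocity of the contact point: v + omega x r (2D cross with a scalar omega).
const vp = [body.v[0] - body.omega * r[1], body.v[1] + body.omega * r[0]];
const vRelN = vp[0] * n[0] + vp[1] * n[1];

if (vRelN < 0) {               // only resolve if the corner is approaching the ground
  const rCrossN = r[0] * n[1] - r[1] * n[0];
  const j = -(1 + e) * vRelN / (1 / body.m + (rCrossN * rCrossN) / body.I);
  body.v[0] += (j * n[0]) / body.m;
  body.v[1] += (j * n[1]) / body.m;
  body.omega += (j * rCrossN) / body.I; // off-center impulse changes the spin
}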

Is it possible to use GIS terrain vector data in three.js?

I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something; see here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you can see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera's near and far planes were likely off-range for the camera position and terrain dimensions, so I increased the far plane distance.
Your coordinate values were quite huge, so I just modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen by it. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing is wrong with your approach; just make sure you feed the data so that it makes sense considering the camera location, direction, and near + far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of points in your vertices array. Later on you might need to take winding order, normals, and UVs into account. You can study the geometry classes included in Three.js for reference.
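Putting those fixes together, a minimal sketch using the legacy Geometry/Face3 API from the r59 docs linked in the question (the two extra vertices, the local origin offset, and the camera values are assumptions chosen so the triangle lands in view; scene, width, and height are assumed to exist as in the basic sample):

var geometry = new THREE.Geometry();
// Subtract a local origin so the huge real-world coordinates stay small.
var origin = { x: 339400, y: 1294900 };
geometry.vertices.push(new THREE.Vector3(339494.5 - origin.x, 1294953.7 - origin.y, 0.75));
geometry.vertices.push(new THREE.Vector3(339500.0 - origin.x, 1294960.0 - origin.y, 1.10));
geometry.vertices.push(new THREE.Vector3(339510.0 - origin.x, 1294950.0 - origin.y, 0.90));

// Without a face, nothing is rasterized; Face3 indices refer to the vertices array.
geometry.faces.push(new THREE.Face3(0, 1, 2));
geometry.computeFaceNormals();

// DoubleSide helps debugging: the face stays visible from both directions.
var material = new THREE.MeshBasicMaterial({ color: 0x44aa88, side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));

// Wide field of view and a generous far plane; aim the camera at the data.
var camera = new THREE.PerspectiveCamera(60, width / height, 0.1, 10000);
camera.position.set(97, 55, 60);
camera.lookAt(new THREE.Vector3(97, 55, 0));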
Three.js does not assign any meaning to units. It's just floating-point numbers, and you can decide yourself what one unit (1.0) represents. Whether it's 1 mm, 1 inch, or 1 km depends on what makes the most sense for the application and its scale. Floating-point numbers can cause precision problems when the actual values are extremely small or extremely big. My own applications typically deal with stuff in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.

Orthographic 3D Backface Culling using Surface Normals

I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden.
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
Create a transformation matrix by which to transform the cube's vertices
For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
I then get the surface normal of the face using Newell's method, then take the dot product of that normal and some made-up vec3, e.g. [-1, 1, 1], since I couldn't think of a good value to put here. I've seen some folks use the position of the camera for this, but...
Skipping the usual step of using a camera matrix, I pull the x and y values from the resulting vectors to send to my line and face renderers, but only if they have a dot-product above 0. I realize it's rather arbitrary which ones I pull, really.
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on the faces is incorrect (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that normals aren't pertinent, that it's the direction the line is being drawn, but it's hard for me to wrap my head around that, or whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
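A minimal sketch of that winding test (assuming each projected face arrives as an array of [x, y] screen points; which sign counts as front-facing is an assumption that depends on your screen axes and how you wound the cube's faces):

// Shoelace formula: the sign of the signed area gives the 2D winding.
function signedArea2D(pts) {
  let area = 0;
  for (let i = 0; i < pts.length; i++) {
    const [x0, y0] = pts[i];
    const [x1, y1] = pts[(i + 1) % pts.length];
    area += x0 * y1 - x1 * y0;
  }
  return area / 2;
}

function isFrontFacing(projectedFace) {
  // Counterclockwise (positive area) is taken as front-facing here.
  return signedArea2D(projectedFace) > 0;
}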

Calculating angular rate

I'm simulating a physical object using a mass-spring system. By means of deltas and cross products, I can easily calculate the up, forward and side vectors.
I want to calculate what the angular rate (how fast it's spinning), for the object space X, Y and Z axis. Calculating the world space angle first won't help, since I need the angular rate in object space (how a sensor glued to the object would see it).
Any 3D maths people out there know how to do this?
I believe you want to take the CG (center of gravity) of all the masses. Average the velocities of all the masses (using a mass-weighted average); this is the velocity of the object. Then take the velocity of each mass minus the velocity of the CG, and compute the angular velocity from this relative velocity and the position relative to the CG - that's a cross product. This gives you the angular velocity vector in world coordinates. It should be averaged over all the masses, since they will differ slightly as the springs allow deformation. Finally, project this angular velocity vector onto each (world-space) sensor axis via a dot product, and you have your object-space angular rate about that axis. The sensor axes must be unit vectors, and you'll need three of them - which you say you can get.
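A minimal sketch of that procedure (the { m, pos, vel } structure and names are assumptions; note that each individual mass only constrains the component of omega perpendicular to its radius, so the average over many non-collinear masses is a rough estimate):

function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}

function angularRate(masses, sensorAxis /* unit vector, world space */) {
  // Mass-weighted average position (CG) and velocity of the object.
  let M = 0, cg = [0, 0, 0], vcg = [0, 0, 0];
  for (const p of masses) {
    M += p.m;
    for (let i = 0; i < 3; i++) { cg[i] += p.m * p.pos[i]; vcg[i] += p.m * p.vel[i]; }
  }
  cg = cg.map(c => c / M);
  vcg = vcg.map(c => c / M);

  // For each mass, omega ~= (r x v_rel) / |r|^2; average over all masses.
  const omega = [0, 0, 0];
  for (const p of masses) {
    const r = sub(p.pos, cg);
    const w = cross(r, sub(p.vel, vcg)).map(c => c / dot(r, r));
    for (let i = 0; i < 3; i++) omega[i] += w[i] / masses.length;
  }

  // Project onto the sensor axis to get the object-space rate about that axis.
  return dot(omega, sensorAxis);
}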
You might use Lagrangian mechanics to describe the system dynamics.
