I computed a mesh using SfM techniques and am able to extract a 3D mesh. However, as expected with SfM, the mesh has no absolute scale.
To scale the mesh, I am able to generate planes of the scene with real-world scale.
I tried to play around with ICP to scale and register the SfM mesh to match the scale of the planes but was not very successful. Could anyone point me in the right direction on how to solve this issue? I would like to scale the SfM mesh to match the real world scale. (I do not need to register the two meshes)
You need to relate some distance in the model to some measurable distance in the physical world. The easiest is probably the camera height above the floor plane. If that is not available, then perhaps the height of the bed or the size of the pillow.
Let's say the physical camera height is 1.6 m (1600 mm) and in the model the camera is 800 units of length above the floor plane. Then the scale factor you need to apply (to get 1 unit of length = 1 mm) is:

scale_factor = 1600 / 800 = 2.0
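For example, a minimal sketch of applying such a scale factor to the mesh, assuming trimesh is used (file names and heights are placeholders):

```python
# Minimal sketch: derive a uniform scale from camera height and apply it.
# Assumes the mesh is handled with trimesh; file name and heights are placeholders.
import trimesh

physical_height_mm = 1600.0   # measured camera height above the floor (1.6 m)
model_height_units = 800.0    # same height measured in the SfM model's units

scale_factor = physical_height_mm / model_height_units  # 2.0 -> 1 unit becomes 1 mm

mesh = trimesh.load("sfm_mesh.ply")   # placeholder path
mesh.apply_scale(scale_factor)        # uniform scaling about the origin
mesh.export("sfm_mesh_mm.ply")
```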
I ended up doing the following. Hope this helps someone; if anyone has a better suggestion, I will take it.
1) I used pyrender to render the two meshes from known poses in their two worlds to get exact correspondences.
2) I then used Procrustes analysis to figure out the scaling factor by computing the transformation from one mesh to the other. You can get a Procrustes implementation from here.
I am able to retrieve a scaling factor that is in an acceptable range.
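For reference, a small sketch of step 2, assuming you already have matched 3D point sets from the two renders (the arrays below are made up). For exact correspondences related by a similarity transform, the scale is just the ratio of the point spreads about the centroids:

```python
# Sketch of step 2: estimate the scale factor from matched point sets.
# Assumes you already have N corresponding 3D points from each mesh
# (e.g. back-projected from the pyrender depth renders); the arrays are placeholders.
import numpy as np

def procrustes_scale(points_src, points_dst):
    """Scale component of the similarity transform mapping src onto dst."""
    src = np.asarray(points_src, dtype=float)
    dst = np.asarray(points_dst, dtype=float)
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    # Ratio of RMS spreads about the centroids; rotation/translation drop out.
    return np.sqrt((dst_centered ** 2).sum() / (src_centered ** 2).sum())

# Example with made-up correspondences: dst is src scaled by 2 and moved.
src = np.random.rand(100, 3)
dst = 2.0 * src + np.array([5.0, 0.0, 1.0])
print(procrustes_scale(src, dst))  # ~2.0
```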
I want to create a physics engine in Java. However, it's not the code I'm bothered about; it's simply the math of rigid-body physics, specifically forces and how they affect the rotation of an object.
Let's say, for example, that I have a square with sides of equal length. The square is accelerating toward ground level due to gravity (no air resistance). This means there is an acceleration of (0, -9.8) m/s^2 acting on every point of the square.
Now let's say that this square is rotated slightly. When this rotated square comes into contact with the ground (a flat surface), there will be an impulse at the point of contact (most likely a corner of the square). However, what happens at the other corners of the square? How are they affected by the original force of gravity?
I apologize if my question isn't detailed enough. I'd love to upload a diagram but I don't yet have the reputation.
Rotation is a form of kinetic energy.
First, the analogy to linear motion:
alpha - angular position [rad]
omega - angular speed [rad/s]
epsilon - angular acceleration [rad/s^2]
d^2(alpha(t))/dt^2 = d(omega(t))/dt = epsilon(t)
Now the inertia:
I - moment of inertia [kg.m^2]
m - mass [kg]
M - torque [N.m]
And some equations to be exploited:
M = epsilon*I - torque needed to achieve angular acceleration, or vice versa [N.m]
acc = epsilon*radius - perimeter (tangential) acceleration [m/s^2]
vel = omega*radius - perimeter (tangential) speed [m/s]
Equation #1 can be used to directly compute the torque/force. Equations #2 and #3 can be used to calculate friction-based forces like wheel grip/drag. Do not forget about the kinetic energy Ek = 0.5*m*vel^2 + 0.5*I*omega^2, so you can exploit conservation of energy.
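As a quick illustration of these relations (in Python rather than Java, but it translates directly), here is a made-up example with a solid square plate rotating about its center:

```python
# Quick numeric check of the relations above (M = epsilon*I, vel = omega*radius),
# using a uniform square plate rotating about its center; all values are made up.
import math

m = 2.0                     # mass [kg]
a = 0.5                     # side length of the square [m]
I = m * a * a / 6.0         # inertia of a uniform square plate about the
                            # perpendicular axis through its center [kg.m^2]

M = 3.0                     # applied torque [N.m]
epsilon = M / I             # angular acceleration [rad/s^2]

omega, alpha = 0.0, 0.0     # angular speed [rad/s], angular position [rad]
dt = 0.01                   # time step [s]
for _ in range(100):        # integrate 1 second of constant torque
    omega += epsilon * dt
    alpha += omega * dt

radius = a * math.sqrt(2) / 2.0      # distance of a corner from the center [m]
corner_speed = omega * radius        # perimeter speed vel = omega*radius [m/s]
Ek_rot = 0.5 * I * omega ** 2        # rotational kinetic energy [J]
print(epsilon, omega, corner_speed, Ek_rot)
```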
During continuous contact of object1 with a rotating object2, the following happens:
The perimeter speed/acceleration creates an interaction force, which slows down the rotation of object2, creating a drag force on object2 and a reaction force on object1.
If object1 is not fixed, then this force also creates torque and rotates object1.
If the rotation is forced to stop suddenly, then all of the rotational part of the kinetic energy goes into the collision reaction force impulse.
If the object is in a more complicated rotational motion, then you should compute the actual rotation axis and its alpha, omega, epsilon and use those, because an object can undergo several rotations at once, each with a different center of rotation.
Also, if an object is rotating and another rotation is applied about a different axis, this creates a gyroscopic torque, which also produces rotation about a third axis perpendicular to both.
So when you put all of this together, you have an idea of what structures you need. Sorry, I cannot be more specific than this without further info about the structures and properties of your simulation...
Applied forces do not play a role in the calculation of contact impulses because the impulses are assumed to occur on a time scale much smaller than the simulation time step. Basically, the change in velocity during an impact due to gravity or other forces is negligible.
If I understand correctly, you worry about the different corners of the square - one with an impact, three without.
However, since you want to do rigid body dynamics, it is more helpful to think about the rigid body as having a center of mass (in this case, the square's center), a position, a rotation, and a geometry (in this case the square, but it could be anything).
The corners (vertices) keep a constant position and orientation with respect to the center of mass; it is only the rigid body's position and rotation that change all four corners' positions in the world at once. An advantage of this view is that it is independent of the geometry: you could have 10 or 20 corners, and the approach would be the same.
With regard to computing the rotation:
Gravity is working as before. However, you now have another force (from the impulse over the time it acts) - and you have to add the effects of the two in order to get the complete outcome of the system.
The impulse will be due to one of the corners being in collision in the case you describe. It has to be computed at the contact point, with a contact normal - in this case the normal of the flat surface.
If the impulse's line of action along the contact normal does not pass through the center of mass, this will lead to a rotation (as well as a position change).
The amount of the position change is due to how you model the contact computation and resolution, material properties, numerical stepper, impact velocity, time step, ...
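To make the contact-impulse idea concrete, here is a small 2D sketch using a common textbook frictionless-impulse formulation (restitution, mass and geometry values are made up; this is not a full contact solver):

```python
# Sketch of a single frictionless contact impulse in 2D, as a concrete example of
# "impulse at the contact point with a contact normal". The restitution value,
# mass and geometry are made up; this is a textbook formulation, not a full solver.
import numpy as np

def cross2(a, b):
    """2D scalar cross product."""
    return a[0] * b[1] - a[1] * b[0]

m = 1.0                                 # mass [kg]
I = m * (0.5 ** 2) / 6.0                # inertia of a 0.5 m square about its center
e = 0.3                                 # coefficient of restitution (assumed)

v = np.array([0.0, -3.0])               # linear velocity of the center of mass [m/s]
omega = 0.0                             # angular velocity [rad/s]

n = np.array([0.0, 1.0])                # contact normal (flat ground)
r = np.array([0.25, -0.25])             # contact point relative to the center of mass

# Velocity of the contact point = v + omega x r (in 2D: omega * perp(r)).
v_contact = v + omega * np.array([-r[1], r[0]])
v_rel_n = np.dot(v_contact, n)          # approaching if negative

if v_rel_n < 0.0:
    j = -(1.0 + e) * v_rel_n / (1.0 / m + cross2(r, n) ** 2 / I)
    v += (j / m) * n                    # change in linear velocity
    omega += cross2(r, n) * j / I       # change in angular velocity -> rotation

print(v, omega)
```

Because the contact point here is offset from the line through the center of mass, the same impulse both pushes the body up and starts it spinning, which is exactly the coupling described above.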
As others mentioned, reading up on physics (rigid body dynamics) and physics simulations might be a good starting point to understand the concepts better.
Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or multiple meshes?
I mean, is the shadow projected onto a single 'ground' surface, or onto more surfaces such as room walls or even nearby objects? Depending on this, the solutions are very different.
For flat ground/wall surfaces
a projected render onto the surface is usually the best way.
The camera direction is opposite to the light direction and the render target is the surface itself. The surface is usually not perpendicular to the light, so you need to use a projection to compensate... You need one render pass for each target surface, so this is not suitable if the shadow is projected onto a nearby mesh (it is just for ground/walls).
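As an illustration of the projection part only (not the rendering itself), here is a small sketch that projects vertices along the light direction onto a flat plane; the plane, light and vertices are placeholders:

```python
# Sketch of the "projected render" idea for a flat surface: project every vertex
# along the light direction onto the plane dot(n, x) = d, then draw the projected
# triangles as the shadow. Plane, light and vertices are placeholders.
import numpy as np

def project_onto_plane(vertices, light_dir, plane_normal, plane_d):
    """Project points along light_dir onto the plane {x : dot(n, x) = d}."""
    v = np.asarray(vertices, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    # Solve dot(n, v + t*l) = d for t, per vertex.
    t = (plane_d - v @ n) / np.dot(n, l)
    return v + t[:, None] * l

# Example: ground plane y = 0, light coming from above at an angle.
vertices = np.array([[0.0, 2.0, 0.0], [1.0, 2.0, 0.0], [0.5, 3.0, 0.5]])
shadow_vertices = project_onto_plane(vertices, light_dir=[0.3, -1.0, 0.2],
                                     plane_normal=[0.0, 1.0, 0.0], plane_d=0.0)
print(shadow_vertices)  # y-components are 0: the triangle's shadow on the ground
```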
For more complicated scenes
you need to use a more advanced approach. There are quite a number of them, and each has its advantages and disadvantages. I would use a voxel map, but if you are limited by space (memory) then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
If you want just self-shadowing, then the voxel map can cover only some bounding box around your mesh. In that case you do not incorporate the whole mesh volume; instead, for each surface point you just march along the light direction through the voxel map (ignoring the first voxel...) to avoid shadowing the lit surface itself.
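A rough sketch of that self-shadowing test on a boolean voxel grid (the grid contents, step size and light direction below are made up):

```python
# Sketch of the bounding-box voxel-map self-shadowing test described above:
# a boolean occupancy grid plus a march along the light direction, skipping the
# voxel being tested so a lit surface does not shadow itself. Grid contents are made up.
import numpy as np

def voxel_in_shadow(occupancy, voxel, light_dir, step=0.5, max_steps=1000):
    """March from `voxel` toward the light; True if another occupied voxel blocks it."""
    pos = np.asarray(voxel, dtype=float) + 0.5          # start at the voxel center
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)
    for i in range(1, max_steps):
        p = pos + i * step * d
        idx = tuple(np.floor(p).astype(int))
        if any(c < 0 or c >= s for c, s in zip(idx, occupancy.shape)):
            return False                                # left the bounding box: lit
        if idx == tuple(voxel):
            continue                                    # ignore the voxel being tested
        if occupancy[idx]:
            return True                                 # something occludes the light
    return False

# Tiny example: two voxels in a column; the lower one is shadowed when light is overhead.
occupancy = np.zeros((4, 4, 4), dtype=bool)
occupancy[1, 1, 1] = True
occupancy[1, 3, 1] = True
print(voxel_in_shadow(occupancy, (1, 1, 1), light_dir=(0.0, 1.0, 0.0)))  # True
print(voxel_in_shadow(occupancy, (1, 3, 1), light_dir=(0.0, 1.0, 0.0)))  # False
```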
I've got a fairly simple implementation of normal map lighting working for 2D sprites in webgl (GLSL shaders) which I was able to adapt & optimize from an example. It uses just one directional light and works fine for my purposes. Sprites are rendered flat (2D), only the light direction and normals are 3D vectors. Vertex rotation only happens around the z axis, so it's fairly easy-peasy.
I was hoping to add a bump (height) map to cast shadows. There are 3D bump map shadow casting examples and papers available online, but they're more complex than I need and the math goes over my head; I haven't found an example or explanation of how one might do a simple 2D case.
My first inclination is as follows: for the current pixel in the fragment shader, trace back along the direction of the light and check the altitude of the neighbouring bump map pixel. If it's higher than the light direction vector at that point, then that pixel is in the shade. However since "tall" pixels on the bump map may cast shadow across > 1 pixel distance, I'd have to keep testing pixel by pixel in that direction until I find one tall enough to cast a shadow (or reach the edge of the texture, or reach some arbitrary limit.)
This doesn't sound very optimal, especially for larger textures. I've read that if statements in shaders aren't so fast. Is there a faster/better method?
What you are looking for is called parallax (occlusion) mapping.
It's a technique that does exactly what you described, and it can be understood as on-bumpmap ray tracing in tangent space.
Here are some articles:
nVidia - Per-Pixel displacement (w/ sphere tracing)
nVidia - Cone Tracing for PM
AMD - POM
The ways to optimize the search are similar to ordinary ray tracing and include sphere tracing, cone tracing, binary search and the like, instead of a constant stepping function.
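For clarity, here is a CPU reference (in Python, not GLSL) of the constant-step march the question describes; the height map and light direction are made up, and in a shader you would replace the loop with one of the optimized searches above:

```python
# CPU reference of the height-map shadow march described above (constant stepping;
# in a real shader you would optimize with binary search, cone/sphere tracing, etc.).
# The height map and light direction are made up for illustration.
import numpy as np

def in_shadow(height_map, x, y, light_dir, step=1.0, max_steps=64):
    """March from texel (x, y) toward the light; True if a taller texel occludes it."""
    h, w = height_map.shape
    lx, ly, lz = light_dir / np.linalg.norm(light_dir)   # direction *toward* the light
    start_h = height_map[y, x]
    for i in range(1, max_steps):
        sx = x + lx * step * i
        sy = y + ly * step * i
        ray_h = start_h + lz * step * i                  # height of the ray at this point
        if not (0 <= int(sx) < w and 0 <= int(sy) < h):
            return False                                 # marched off the texture: lit
        if height_map[int(sy), int(sx)] > ray_h:
            return True                                  # bump is taller than the ray
    return False

height_map = np.zeros((64, 64))
height_map[30:34, 30:34] = 10.0                          # a tall bump
light = np.array([1.0, 0.0, 0.5])                        # light low in the +x direction
print(in_shadow(height_map, 25, 32, light))              # True: behind the bump
print(in_shadow(height_map, 40, 32, light))              # False: on the lit side
```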
P.S. If you know the name of some rendering technique, it's generally a good idea to Google it with 'nVidia', 'crytek' or 'gpu' added in front of the name; it will show you much more relevant results.
Hope this helps.
I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something; here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in early phases, since you can see both sides of a face.
Your camera was probably looking in a wrong direction. Added a lookAt() to point it to the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera near and far planes were likely off-range for the camera position and terrain dimensions. So I increased the far plane distance.
Your coordinate values were quite huge, so I just modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen in the camera. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing is wrong with your approach; just make sure you feed the data so that it makes sense considering the camera location, direction and near + far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of the points in your vertices array. Later on you might need to take winding order, normals and UVs into account. You can study the geometry classes included in Three.js for reference.
Three.js does not attach any meaning to units. It's just floating-point numbers, and you can decide yourself what one unit (1.0) represents. Whether it's 1 mm, 1 inch or 1 km depends on what makes the most sense considering the application and its scale. Floating-point numbers can bring precision problems when the actual values are extremely small or extremely big. My own applications typically deal with things in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.
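One language-agnostic trick for the huge-coordinate issue (sketched here with numpy; the extra points are made up) is to shift everything to a local origin before building the geometry:

```python
# Language-agnostic preprocessing idea (sketched with numpy): shift real-world
# coordinates to a local origin (and optionally rescale) before handing them to
# three.js, so the numbers stay small enough for 32-bit float precision.
import numpy as np

points = np.array([
    [339494.5, 1294953.7, 0.75],     # the example coordinates from the question
    [339500.0, 1294960.0, 1.20],     # additional made-up points
    [339490.0, 1294958.0, 0.50],
])

origin = points.mean(axis=0)          # local origin at the centroid
local = points - origin               # values now on the order of a few meters

scale = 1.0                           # e.g. 1 unit = 1 meter; pick what suits the scene
local *= scale

print(local)                          # feed these into your three.js geometry
```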
So, in general, when we think of single-view reconstruction we think of working with planes, simple textures and so on - generally, simple objects from nature's point of view. But what about something like wet beach stones? I wonder if there are any algorithms that could help with reconstructing 3D from a single picture of stones?
Shape from shading would be my first angle of attack.
Smooth wet rocks, such as those in the first image, may exhibit predictable specular properties allowing one to estimate the surface normal based only on the brightness value and the relative angle between the camera and the light source (the sun).
If you are able to segment individual rocks, like those in the second photo, you could probably estimate the parameters of the ground plane by making some assumptions about all the rocks in the scene being similar in size and lying on said ground plane.