3D mesh generation: How to choose up-axis when extruding 2D shape along 3D curve?

I have a 2D shape (a circle) that I want to extrude along a 3D curve to create a 3D tube mesh.
Currently the way I generate cross-sections along the curve (which form the basis of the resulting mesh) is to take every control point along the curve, create a 3D transform matrix for it, then multiply the 2D points of my circle by those curve-point matrices to determine their location in 3D space along the curve.
To create the matrix (from 3 vectors), I use the tangent on the curve as the up vector, world-up ([0,1,0]) as the forward vector, and the cross product of the up/forward vectors as the right vector. All three vectors are also orthogonalized during the process to create the final matrix.
The problem comes when my curve tangent is identical to the world-up axis, i.e. my tangent vector is [0,1,0] and the world-up is [0,1,0]. Since the cross product of two parallel vectors is the zero vector, its direction is undefined, and the resulting extruded mesh has artifacts along those areas of the curve (pinching, twisting, etc.).
I thought a potential solution would be to use the dot product of the curve tangent and the world-up as an interpolation value to shift my forward vector from world-up to world-right...in other words, as a curve tangent approaches [0,1,0], my forward vector approaches [1,0,0]...but that results in unwanted twisting along the final mesh as well.
How can I extrude my shape along a curve in a consistent manner that has no flipping/artifacts/twisting? I know it's possible since various off-the-shelf 3D applications can do it...I'm just not sure how.

One way I would approach this is to consider the tangent vector to the 3D curve as actually being the normal vector of the plane I am interested in.
Let's say the tangent vector is
u = (x, y, z)
All you need now are two other vectors that are orthogonal to it.
Construct v like so:
v = u × (y, z, x)
(rotating the coordinates). Because v is the result of the cross product of u and another vector, you know that v is orthogonal to u.
(This method will not work if u has equal x, y, z coordinates, because then the rotated vector is parallel to u; in that case, construct the helper vector by adding random numbers to at least two of its components, then rinse and repeat.)
Then you can simply construct w like before:
w = u × v
Normalize all three and go.
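For concreteness, here is a minimal sketch of that construction in plain JavaScript (the helper names cross, normalize and frameFromTangent are mine, not from any particular library):

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

function normalize(a) {
  var len = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
  return [a[0] / len, a[1] / len, a[2] / len];
}

// Build an orthonormal frame (u, v, w) from a curve tangent u = (x, y, z).
function frameFromTangent(tangent) {
  var u = normalize(tangent);
  var v = cross(u, [u[1], u[2], u[0]]); // cross with the rotated coordinates
  // Degenerate case: u has (nearly) equal x, y, z, so the rotated copy is
  // parallel to u. Perturb the helper vector and retry, as described above.
  if (Math.hypot(v[0], v[1], v[2]) < 1e-6) {
    v = cross(u, [u[1] + 0.5, u[2] - 0.5, u[0]]);
  }
  v = normalize(v);
  var w = cross(u, v); // already unit length, since u and v are orthonormal
  return { u: u, v: v, w: w };
}

Each 2D circle point (a, b) then lands at curvePoint + a·v + b·w, with u serving as the extrusion direction.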

Related

Transforming a 3D plane onto a 2D coordinate system

Say I have a set of points from a sensor which are all within a margin of error on a 2D plane somewhere in 3D space. How would I go about transforming the coordinates of the points onto a 2D coordinate system, so that, for example, the convex hull of the points or the distances between the points don't change?
Assuming you know the equation of the plane (otherwise you can fit it by least squares or similar), construct a new coordinate frame as follows:
get the normal vector;
form the cross product of the normal with an arbitrary vector having a different direction;
form the cross product of the normal and that second vector;
normalize all three and name the new axes z, x, y.
This creates an orthonormal basis to which you will transform the points. This corresponds to a rigid transform, which preserves all distances. You can drop the z to get the orthogonal projections of the points onto the plane.
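As a sketch in plain JavaScript (the function names are mine; the plane is assumed to be given by a point on it and its normal vector):

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function normalize(a) {
  var len = Math.sqrt(dot(a, a));
  return [a[0] / len, a[1] / len, a[2] / len];
}

// Build the orthonormal frame described above from the plane normal.
function planeBasis(normal) {
  var z = normalize(normal);
  // any vector with a different direction works as the arbitrary vector
  var arbitrary = Math.abs(z[0]) < 0.9 ? [1, 0, 0] : [0, 1, 0];
  var x = normalize(cross(z, arbitrary));
  var y = cross(z, x);
  return { x: x, y: y, z: z };
}

// Express point p in that frame; dropping the last component projects it
// onto the plane. Distances are preserved because the basis is orthonormal.
function toPlaneCoords(p, origin, basis) {
  var d = [p[0] - origin[0], p[1] - origin[1], p[2] - origin[2]];
  return [dot(d, basis.x), dot(d, basis.y), dot(d, basis.z)];
}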

Triangle pattern GLSL shader

Is there any simple algorithm, like a Voronoi diagram, to divide a rectangular plane into triangles, possibly using a number of pre-defined points?
To be honest, I have to write a very simple fragment shader like this.
Theoretically, this Voronoi shader could be 'upgraded' by Delaunay triangulation,
but I want to find a more elegant solution.
The first thing that comes to my mind is to create n random points (with a specific seed) to fill a cylinder volume. The triangle points will be the intersections of the lines between those points with a plane going through the axis of the cylinder. The animation would then be done simply by rotating the plane ...
I see it something like this:
So the neighboring points should be interconnected with each other, forming tetrahedra that fill the volume of the cylinder. So create a uniform tetrahedron grid and add random noise to the point positions (with a specific seed).
This whole task is very similar to rendering a cross-section of a 4D mesh; see:
4D rendering techniques
As the 4D simplex is also a tetrahedron, the only difference is that you are in 3D and cutting with a 3D plane.
You can reverse-engineer this example: shadertoy.com/view/MdfBzl
like I did. Thanks to mattz.
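A hedged sketch of the "uniform grid plus seeded jitter" part in plain JavaScript (mulberry32 is one common tiny seeded PRNG; the function names and jitter amount are mine, not taken from the shadertoy example):

// mulberry32: a small seeded PRNG, so the jitter is reproducible.
function mulberry32(seed) {
  return function () {
    var t = (seed += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Uniform 3D grid of points with seeded random noise added to each position.
function jitteredGrid(nx, ny, nz, seed, amount) {
  var rand = mulberry32(seed);
  var points = [];
  for (var i = 0; i < nx; i++)
    for (var j = 0; j < ny; j++)
      for (var k = 0; k < nz; k++)
        points.push([i + (rand() - 0.5) * amount,
                     j + (rand() - 0.5) * amount,
                     k + (rand() - 0.5) * amount]);
  return points;
}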

Can I use a 3-dimensional simplex noise implementation to generate noise over a spherical surface?

Say, I want to generate noise over a sphere.
I want to do this to procedurally generate three-dimensional 'blobs'. And use these blobs to generate low poly trees, somewhat like this:
Can I accomplish this as follows?
First define a sphere that consists of a certain number of vertices, each of them defined by known (x,y,z) coordinates
Then generate an additional entropy (or noise) value e as follows:
var e = simplex.noise3d(x,y,z)
then use scalar multiplication to offset, or extrude the original point into 3D space, by entropy value e:
point.position.multiplyScalar(e)
Then finally reconstruct a new mesh from these newly computed offset points.
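As a sketch, steps 1-3 might look like this in Three.js (assuming a noise object with a noise3d method, as in the snippet above, and attribute access as in current BufferGeometry-based Three.js):

// Start from a sphere with a known set of vertices (step 1).
var geometry = new THREE.IcosahedronGeometry(1, 3);
var position = geometry.attributes.position;
var v = new THREE.Vector3();
for (var i = 0; i < position.count; i++) {
  v.fromBufferAttribute(position, i);
  // Step 2: sample 3D noise at the vertex position.
  var e = simplex.noise3d(v.x, v.y, v.z);
  // Step 3: offset the vertex radially; 1 + 0.3 * e keeps the scale
  // positive so vertices are not reflected through the origin.
  v.multiplyScalar(1 + 0.3 * e);
  position.setXYZ(i, v.x, v.y, v.z);
}
position.needsUpdate = true;
geometry.computeVertexNormals();

Multiplying by the raw e, as written above, also works in principle, but it can collapse or invert vertices wherever the noise value is near zero or negative.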
I am considering this approach because it is widely used to generate terrain meshes with two-dimensional noise on a two-dimensional plane, resulting in a three-dimensional terrain surface:
Looking at examples I understand this concept of terrain generation using two-dimensional noise as follows:
You define a two-dimensional grid of points, essentially a plane. Thus each point has two known coordinates and is defined in three-dimensional space as ( X, Y = 0, Z ). In this case Y represents the height that will be computed by a noise generator.
You feed the X and Z coordinates of each point in the grid to a Simplex noise generator, that returns noise value Y.
point.y = simplex.noise2d(x, z);
Now our grid of points has been displaced along the Y axis of our three-dimensional space, and we can create a natural-looking terrain mesh from them.
Can I use the same approach to generate noise on a spherical surface using three-dimensional noise? Is this even a good idea? And is there a simpler way?
I am implementing this in WebGL and Three.js.
If you want something to look like a tree, you should use a tree-growth algorithm to first simulate the tree's branching pattern, then mesh the outer surface of the tree. Different types of trees have what are called "habits", or chiral patterns, that determine how they grow. One paper that describes some basic equations for modeling branch/leaf growth is:
http://www.math.washington.edu/~morrow/mcm/16647.pdf
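As a toy illustration of the branching idea (the angles, ratios, and names below are made up for the sketch, not taken from the paper):

// Recursively grow 2D branch segments; each branch spawns two shorter
// children tilted away from its own direction.
function grow(x, y, angle, length, depth, segments) {
  var x2 = x + Math.cos(angle) * length;
  var y2 = y + Math.sin(angle) * length;
  segments.push([x, y, x2, y2]);
  if (depth === 0) return;
  grow(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments);
  grow(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments);
}

var segments = [];
grow(0, 0, Math.PI / 2, 1, 6, segments); // trunk pointing up
// segments now holds line segments you could skin with mesh geometry.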

Camera homography

I am learning camera matrix stuff. I already know that I can get the homography of the camera (a 3x3 matrix) by using four points in a plane in object space. I want to know if we can get the homography with four points that are not in a plane? If yes, how can I get the matrix? What formulas should I look at?
I have also confused homography with another concept: I only need to know three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography?
A homography maps:
1. points on one plane to points on another plane;
2. projections of 3D points (not necessarily lying on the same plane) under a pure camera rotation or zoom.
The latter can be easily verified if you look at the rays that connect points while the sensor plane rotates: green shows the two sensor positions and black the 3D object.
Since a homography is between projections and not between objects in 3D, you don't care what these projections represent. But this can be confusing, I agree. For example, you can point your camera at a 3D scene (that is not flat!), then rotate your camera, and the two resulting pictures of the scene will be related by a homography. This is, by the way, the foundation of image panoramas.
The three point correspondences you mentioned may relate to a transformation called affine (which applies at large zooms, when perspective effects disappear) or to finding a rigid rotation and translation in 3D space. Both require 3 point correspondences, but the former needs only 2D points while the latter needs 3D points. The affine case has 6 DOF and each 2D correspondence provides 2 constraints, hence 6/2 = 3 correspondences; the rigid case also has 6 DOF (3 for rotation and 3 for translation) and likewise needs 3 (non-collinear) points. A homography has 8 DOF, so there should be 8/2 = 4 correspondences.
Below is a little diagram that explains the difference between affine and homography transformations when the original square tilts forward. In the affine case the perspective effect is negligible, that is, the far side has the same length as the near one. In the case of a homography the far side is shorter.
If you only have 4 points, and they're not on the same plane, then computing a homography will not work.
If you have loads of points, and 4 of them do lie on a plane but some don't, there are robust estimators you can use to reject the ones not lying on the plane. The ones implemented by OpenCV are called RANSAC and LMedS.
Also, as Hammer says in a comment under your question, the 4th point is there to figure out the perspective.
A homography is a 3x3 matrix with 8 independent unknowns, which means it requires 8 equations to solve for them; each point correspondence provides two, so in order to calculate a homography we need at least 4 points.
In homography we assume that Z = 0 in the world scene, so the imaged plane is treated as 2D. In the well-known ORB-SLAM paper, the authors formulated a model-selection approach that depends on the motion parallax in the scene.
A homography is the relation between two planes, and the homography transform has 8 degrees of freedom; hence you need a minimum of 4 corresponding points.
4 points give you 4 pairs of (x, y), hence 8 equations, enough to calculate the 8 unknowns. The homography is a homogeneous transform, hence the (3,3) value in the homography matrix is always 1.
So, for your first question, whether you can calculate a homography with 3 points on the plane and a 4th not on the plane: it's not possible. You need the projection of that point onto the plane, and then you can calculate the homography.
For your 2nd question, about how to calculate the homography matrix, you can look at the implementation of findHomography() in OpenCV.
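For illustration, here is a plain JavaScript sketch that sets up and solves those 8 equations directly (the function names are mine; OpenCV's findHomography() does this more robustly, with normalization and outlier rejection):

// Each correspondence (x, y) -> (u, v) contributes two rows of the
// 8x8 linear system A h = b, with h33 fixed to 1 as described above.
// (Fixing h33 = 1 fails in the rare case where the true h33 is 0.)
function homographyFrom4Points(src, dst) {
  var A = [], b = [];
  for (var i = 0; i < 4; i++) {
    var x = src[i][0], y = src[i][1];
    var u = dst[i][0], v = dst[i][1];
    A.push([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.push(u);
    A.push([0, 0, 0, x, y, 1, -v * x, -v * y]); b.push(v);
  }
  var h = solve(A, b);
  return [[h[0], h[1], h[2]],
          [h[3], h[4], h[5]],
          [h[6], h[7], 1]];
}

// Gaussian elimination with partial pivoting for an n x n system.
function solve(A, b) {
  var n = A.length;
  for (var col = 0; col < n; col++) {
    var pivot = col;
    for (var r = col + 1; r < n; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
    var t = A[col]; A[col] = A[pivot]; A[pivot] = t;
    var tb = b[col]; b[col] = b[pivot]; b[pivot] = tb;
    for (var row = col + 1; row < n; row++) {
      var f = A[row][col] / A[col][col];
      for (var c = col; c < n; c++) A[row][c] -= f * A[col][c];
      b[row] -= f * b[col];
    }
  }
  var x = new Array(n);
  for (var row2 = n - 1; row2 >= 0; row2--) {
    var s = b[row2];
    for (var c2 = row2 + 1; c2 < n; c2++) s -= A[row2][c2] * x[c2];
    x[row2] = s / A[row2][row2];
  }
  return x;
}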

How to transform a projected 3D rectangle into a 2D axis aligned rectangle

I have an image of a 3D rectangle (which, due to the projection distortion, is not a rectangle in the image). I know all the world and image coordinates of the corners of this rectangle.
What I need is to determine the world coordinate of a point in the image inside this rectangle. To do that I need to compute a transformation to unproject that rectangle to a 2D rectangle.
How can I compute that transform?
Thanks in advance
This is a special case of finding mappings between quadrilaterals that preserve straight lines. These are generally called homographic transforms. Here, one of the quads is a rectangle, so this is a popular special case. You can google these terms ("quad to quad", etc) to find explanations and code, but here are some sites for you.
Perspective Transform Estimation
a gaming forum discussion
extracting a quadrilateral image to a rectangle
Projective Warping & Mapping
Projective Mappings for Image Warping by Paul Heckbert.
The math isn't particularly pleasant, but it isn't that hard either. You can also find some code from one of the above links.
If I understand you correctly, you have a 2D point in the projection of the rectangle, and you know the 3D (world) and 2D (image) coordinates of all four corners of the rectangle. The goal is to find the 3D coordinates of the unique point on the interior of the (3D, world) rectangle which projects to the given point.
(Do steps 1-3 below for both the 3D (world) coordinates, and the 2D (image) coordinates of the rectangle.)
1. Identify (any) one corner of the rectangle as its "origin", and call it "A", which we will treat as a vector.
2. Label the other vertices B, C, D, in order, so that C is diagonally opposite A.
3. Calculate the vectors v = AB and w = AD. These form nice local coordinates for points in the rectangle. Points in the rectangle will be of the form A + rv + sw, where r and s are real numbers in the range [0,1]. This fact is true in world coordinates and in image coordinates. In world coordinates, v and w are orthogonal, but in image coordinates they are not. That's OK.
4. Working in image coordinates, from the point (x,y) in the image of your rectangle, calculate the values of r and s. This can be done by linear algebra on the vector equation (x,y) = A + rv + sw, where only r and s are unknown. It boils down to a 2x2 matrix equation, which you can solve in code using Cramer's rule. (This step will break if the determinant of the required matrix is zero, which corresponds to the case where the rectangle is seen edge-on. The solution isn't unique in that case; if that's possible, handle it as a special case.)
5. Using the values of r and s from step 4, compute A + rv + sw using the vectors A, v, w for world coordinates. That's the world point on the rectangle.
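A compact sketch of steps 3-5 in plain JavaScript (the function and parameter names are mine):

// imgA/imgB/imgD are the image-space corners, worldA/worldB/worldD the
// matching world-space corners, and p the 2D image point of interest.
function worldPointFromImage(imgA, imgB, imgD, worldA, worldB, worldD, p) {
  // Step 3: local coordinates in image space.
  var v = [imgB[0] - imgA[0], imgB[1] - imgA[1]];
  var w = [imgD[0] - imgA[0], imgD[1] - imgA[1]];
  var d = [p[0] - imgA[0], p[1] - imgA[1]];
  // Step 4: solve d = r*v + s*w by Cramer's rule.
  var det = v[0] * w[1] - v[1] * w[0]; // zero if the rectangle is edge-on
  var r = (d[0] * w[1] - d[1] * w[0]) / det;
  var s = (v[0] * d[1] - v[1] * d[0]) / det;
  // Step 5: rebuild A + r*v + s*w with the world-space vectors.
  return [worldA[0] + r * (worldB[0] - worldA[0]) + s * (worldD[0] - worldA[0]),
          worldA[1] + r * (worldB[1] - worldA[1]) + s * (worldD[1] - worldA[1]),
          worldA[2] + r * (worldB[2] - worldA[2]) + s * (worldD[2] - worldA[2])];
}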
