Triangle pattern GLSL shader - algorithm

Is there any simple algorithm, similar to a Voronoi diagram, for dividing a rectangular plane into triangles, possibly using a number of pre-defined points?
To be honest, I need to write a very simple fragment shader like this.
In theory, a Voronoi shader could be 'upgraded' with a Delaunay triangulation,
but I would like to find a more elegant solution.

The first thing that comes to my mind is to create n random points (with a specific seed) filling a cylinder volume. The triangle vertices would then be the intersections of the lines between those points with a plane passing through the cylinder's axis. The animation would simply be done by rotating the plane ...
I see it something like this:
The neighboring points should be interconnected with each other, forming tetrahedrons that fill the volume of the cylinder. So create a uniform tetrahedral grid and add random noise to the point positions (with a specific seed).
This whole task is very similar to rendering a cross section of a 4D mesh; see:
4D rendering techniques
There the simplex is also a tetrahedron; the only difference is that here you are in 3D, cutting with a plane.
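To make the cutting step concrete, here is a minimal CPU-side sketch in Python/NumPy of clipping one jittered tetrahedron against the plane; the function name and the point-plus-normal plane representation are my own assumptions, not code from the linked answer or shader.

```python
import numpy as np

def cut_tetrahedron(verts, plane_point, plane_normal, eps=1e-9):
    """Clip one tetrahedron (4x3 array of vertices) against a plane given by a
    point and a normal. Returns the crossing points (0, 3 or 4 of them)."""
    verts = np.asarray(verts, float)
    d = (verts - plane_point) @ plane_normal   # signed distance of each vertex
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # the 6 tetra edges
    points = []
    for i, j in edges:
        if d[i] * d[j] < -eps:                 # edge crosses the plane
            t = d[i] / (d[i] - d[j])           # crossing parameter along the edge
            points.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(points)
```

Running this over every tetrahedron of the jittered grid as the plane rotates gives the animated pattern of triangles and quads; for a 4-point cut the points may still need ordering (e.g. by angle) before drawing.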

You can reverse-engineer this example shadertoy.com/view/MdfBzl
like I did. Thanks to mattz.

Related

How do I find the corners of a plane in 3d space if I know three points

Apologies in advance for my feeble maths.
I'm trying to find the corners of a plane in space based on the equation of that plane. Here's what I know: I know three points on the plane, and I know both where they fall in the 2D coordinate space of the plane (x, y) and where they are in 3D space. I know the width and height of the plane, and I can now calculate the equation of the plane. The plane sits on the inside of a large sphere that surrounds the origin, so, in theory, it should more or less face where the camera is (though in my diagram it doesn't face the origin, as it's just for illustrative purposes).
But it's not clear to me how I can use that to figure out another point. One thought I had was to find the transform that rotates the plane parallel to the xy plane about one of the points (so that point stays in place), find the position of the new point, and then apply the inverse of that transform to it. But it's not clear to me how I would find that transform matrix or how to use it. Could I do this using the normal and vector maths? I understand what normals are, but I'm fuzzy about how to use them.
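No answer is included above, but since the 2D plane coordinates of the three points are known, one way to go is to skip the rotation entirely: solve for the plane's 2D origin and its two in-plane axes from the three correspondences, then map the rectangle's 2D corners straight into 3D. A rough NumPy sketch of that idea (the function name and the (0,0)-to-(w,h) corner convention are my own assumptions):

```python
import numpy as np

def plane_corners(p3d, p2d, width, height):
    """Given three non-collinear points known both in 3D (p3d, 3x3) and in the
    plane's own 2D coordinates (p2d, 3x2), return the four 3D corners of the
    width x height rectangle whose 2D corners are (0,0), (w,0), (w,h), (0,h)."""
    p3d = np.asarray(p3d, float)
    p2d = np.asarray(p2d, float)
    # Solve P_i = O + u_i * U + v_i * V for the in-plane axes U and V.
    A = p2d[1:] - p2d[0]          # 2x2: 2D offsets of points 1,2 from point 0
    D = p3d[1:] - p3d[0]          # 2x3: 3D offsets of points 1,2 from point 0
    UV = np.linalg.solve(A, D)    # rows are U and V
    O = p3d[0] - p2d[0, 0] * UV[0] - p2d[0, 1] * UV[1]   # 3D position of the 2D origin
    corners2d = np.array([[0, 0], [width, 0], [width, height], [0, height]])
    return O + corners2d @ UV
```

The cross product of U and V also gives the plane normal, if you need it.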

3D mesh generation: How to choose up-axis when extruding 2D shape along 3D curve?

I have a 2D shape (a circle) that I want to extrude along a 3D curve to create a 3D tube mesh.
Currently the way I generate cross-sections along the curve (which form the basis of the resulting mesh) is to take every control point along the curve, create a 3D transform matrix for it, then multiply the 2D points of my circle by those curve-point matrices to determine their location in 3D space along the curve.
To create the matrix (from 3 vectors), I use the tangent on the curve as the up vector, world-up ([0,1,0]) as the forward vector, and the cross product of the up/forward vectors as the right vector. All three vectors are also orthogonalized during the process to create the final matrix.
The problem comes when my curve tangent is identical to the world-up axis. I.e., my tangent vector is [0,1,0] and the world-up is [0,1,0]... since the cross product of two parallel vectors is degenerate (zero length), the resulting extruded mesh has artifacts along those areas of the curve (pinching, twisting, etc.).
I thought a potential solution would be to use the dot product of the curve tangent and the world-up as an interpolation value to shift my forward vector from world-up to world-right...in other words, as a curve tangent approaches [0,1,0], my forward vector approaches [1,0,0]...but that results in unwanted twisting along the final mesh as well.
How can I extrude my shape along a curve in a consistent manner that has no flipping/artifacts/twisting? I know it's possible since various off-the-shelf 3D applications can do it...I'm just not sure how.
One way I would approach this is to consider the tangent vector to the 3D curve as actually being the normal vector of the plane I am interested in.
Let's say the tangent vector is u = (x, y, z).
All you need now are two other vectors that are orthogonal to it.
Let's construct v like so:
v = u × (z, x, y)
(rotating the coordinates). Because v is the result of the cross product of u and something else, you know that v is orthogonal to u.
(This method will not work if u has equal x, y, z coordinates; in that case, construct the other vector by adding random numbers to at least two of the components, rinse & repeat.)
Then you can simply construct w like before:
w = u × v
Normalize and go.
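A minimal sketch of that frame construction in Python/NumPy (the function name and the perturbation values are my own choices); the circle's 2D points can then be placed in the (v, w) plane at each curve sample:

```python
import numpy as np

def frame_from_tangent(u):
    """Build an orthonormal basis (u, v, w) from a curve tangent u using the
    rotate-the-coordinates trick described above."""
    u = np.asarray(u, float)
    u = u / np.linalg.norm(u)
    rotated = u[[2, 0, 1]]                       # (z, x, y)
    if np.allclose(rotated, u):                  # degenerate: all components equal
        rotated = u + np.array([0.7, 0.3, 0.0])  # perturb and retry
    v = np.cross(u, rotated)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)                           # already unit length
    return u, v, w
```

Note that building the frame independently at every curve point can still flip between neighboring cross-sections; carrying the previous frame along the curve (parallel transport, or rotation-minimizing frames) is the usual way to remove the remaining twisting.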

Kabsch Algorithm for 2d to 3d Rotation and Translation

My problem involves matching a set of 2D points to a set of 3D points, with known correspondence between the two. Basically, I have points on an image, and I need the optimal translation and rotation to fit the points to a known 3D point cloud. The Kabsch algorithm is originally meant for finding the best fit of one set of 3D points to another point cloud, and there are implementations out there for 2D to 2D, but not something I can use. I do know it's possible, but I just don't know how to go about it. I searched for code out there and came up empty. I'm programming in MATLAB at the moment, but any language would do.
Thank you.
Edit: The goal is getting a rotation and translation of the 3d point cloud to best match the 2d points when it is projected onto an image plane.
I should also mention that the 3d to 2d projection is done using a weak perspective.
So basically, you have a "plane" or a "line" of points, as if the third dimension were 0. You could treat them like that and use the typical Kabsch algorithm of squared-distance minimization, couldn't you?
EDIT: Maybe it's nonsense, but what about projecting the 3D body to 2D coordinates and doing a 2D comparison? It is computationally expensive, since it involves exploring all the angles of the 3D object plus the projection, but it's easier to lose a dimension by applying a projection than to add a new dimension to a 2D point.
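A minimal NumPy sketch of the first suggestion: run the standard Kabsch fit after padding the image points with z = 0 (the function name is my own; with a weak-perspective projection this is only an approximation, since the padded z residuals also enter the minimization):

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimizing ||R @ P_i + t - Q_i|| over
    matched point sets P, Q of shape (n, 3)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                  # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# points3d: (n, 3) model points; points2d: (n, 2) image points padded with z = 0.
# R, t = kabsch(points3d, np.column_stack([points2d, np.zeros(len(points2d))]))
```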

rendering n-point polygon with face in 3D

Newbie to three.js. I have multiple n-sided polygons to be displayed as faces (I want the polygon faces to be opaque). Each polygon faces a different direction in 3D space (essentially these faces are part of some building).
Here are a couple of methods I tried, but they do not fit the bill:
I used a Geometry object, added the n vertices, and used a line mesh. It created the polygon as a hollow outline. As my number of points is not just 3 or 4, I could not use the Face3 or Face4 objects; essentially I need a Face-n object.
I looked at the WebGL geometric shapes example. The Shape object works in 2D with extrusion, and all the objects in the example are on one plane, while my requirement is that each polygon has a different 3D normal vector. Should I use a 2D shape, take note of the face normal, and rotate the 2D shape after rendering?
Or is there a better way to render multiple flat 3D polygons with opaque faces from just x, y, z vertices?
As long as your polygons are convex you can still use the Face3 object. If you take one n-sided polygon, let's say a hexagon, you can create Face3 polygons by taking vertices (0,1,2) as one face, vertices (0,2,3) as another face, vertices (0,3,4) as another face, and vertices (0,4,5) as the last face. I think you will get the idea if you draw it on paper. But this works only for convex polygons.
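The index pattern described above is a triangle fan. A tiny language-agnostic sketch (written in Python here; the resulting triples are what you would feed to Face3 objects or to an indexed BufferGeometry):

```python
def fan_triangulate(indices):
    """Split a convex polygon, given as an ordered list of vertex indices,
    into triangles that all share the first vertex (a triangle fan)."""
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

# A hexagon with vertices 0..5:
print(fan_triangulate([0, 1, 2, 3, 4, 5]))
# [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
```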

Merge overlapping triangles into a polygon

I've got a bunch of overlapping triangles from a 3D model projected into a 2D plane. I need to merge each island of touching triangles into a closed, non-convex polygon.
The resultant polygons shouldn't have any holes in them (since the source data doesn't).
Many of the source triangles share (floating point identical) edges with other triangles in the source data.
What's the easiest way to do this? Performance isn't particularly important, since this will be done at design time.
Try gpc, or the General Polygon Clipper Library.
Imagine the projection onto a plane as a "view" of the model (i.e. the direction of projection is the line of sight, and the projection is what you see). In that case, the borders of the polygons you want to compute correspond to the silhouette of the model.
The silhouette, in turn, is a set of edges in the model. For each edge in the silhouette, one of the two adjacent faces points toward the plane and the other points away from it. You can check this by taking the dot product of each face normal with the projection direction -- look for edges whose adjacent face normals have dot products of opposite signs with the projection direction.
Once you have found all the silhouette edges you can join them together into the boundaries of the desired polygons.
Generally, you can find more about silhouette detection and extraction by googling terms like "mesh silhouette detection". Maybe a good place to start is here.
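A minimal sketch of that edge test for an indexed triangle mesh (the data layout and function name are my own assumptions); edges with only one adjacent face also belong to the outline and could be collected the same way:

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return edges whose two adjacent faces have normals with dot products
    of opposite sign against the projection direction."""
    vertices = np.asarray(vertices, float)
    view_dir = np.asarray(view_dir, float)

    edge_signs = {}                              # undirected edge -> facing signs
    for tri in faces:
        a, b, c = (vertices[i] for i in tri)
        s = np.sign(np.cross(b - a, c - a) @ view_dir)
        for k in range(3):
            edge = tuple(sorted((tri[k], tri[(k + 1) % 3])))
            edge_signs.setdefault(edge, []).append(s)

    return [e for e, signs in edge_signs.items()
            if len(signs) == 2 and signs[0] * signs[1] < 0]
```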
I've also found this[1] approach, which I will be trying next.
[1] 2d outline algorithm for projected 3D mesh
