Can someone give me a pointer on where I should start?
I am trying to port this code → http://glennchun.github.io/free-form-deformation/
to the latest three.js version.
The major challenge I'm facing is how to divide a geometry into multiple faces.
Since SubdivisionModifier was removed from the latest three.js version, how do I subdivide a geometry so that I can attach the relevant faces to my transform controls and deform them?
Reference
SubdivisionModifier works on THREE.Geometry(), and THREE.Geometry() is no longer present in three.js r136.
What I've done: I started converting THREE.Geometry() meshes to THREE.BufferGeometry(), but SubdivisionModifier does not work on BufferGeometry.
So can anyone point me to a new library that replaces SubdivisionModifier, or any other three.js library I can use?
The original subdivision modifier in three.js was based on the Catmull-Clark subdivision surface algorithm, which works best for geometry with convex, coplanar n-gon faces. This modifier last appeared in release 59 (available here). To use the Catmull-Clark algorithm with the current triangle-based BufferGeometry, it would be best to separate convex coplanar faces. The general idea would be to go through the triangles one by one, first gathering neighboring triangles with the same normals (to ensure coplanarity) and then doing edge traversal to determine convex polygon faces. You would then run the Catmull-Clark algorithm and rebuild a new triangle-based BufferGeometry.
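For the first step (gathering coplanar patches), a rough sketch in current three.js might look like the code below. It assumes an indexed BufferGeometry whose coincident vertices have been merged (e.g. with BufferGeometryUtils.mergeVertices); it only clusters coplanar triangles, so the convex-polygon edge traversal and the Catmull-Clark pass themselves are not shown.

import * as THREE from 'three';

// Cluster triangles of an indexed BufferGeometry into coplanar patches by
// flood-filling across shared edges whose triangles have (nearly) equal normals.
function groupCoplanarTriangles(geometry, angleEpsilon = 1e-3) {
  const index = geometry.index.array;
  const pos = geometry.attributes.position;
  const triCount = index.length / 3;

  // Per-triangle face normals.
  const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
  const ab = new THREE.Vector3(), ac = new THREE.Vector3();
  const normals = [];
  for (let t = 0; t < triCount; t++) {
    a.fromBufferAttribute(pos, index[3 * t]);
    b.fromBufferAttribute(pos, index[3 * t + 1]);
    c.fromBufferAttribute(pos, index[3 * t + 2]);
    ab.subVectors(b, a);
    ac.subVectors(c, a);
    normals.push(new THREE.Vector3().crossVectors(ab, ac).normalize());
  }

  // Edge -> adjacent triangles map (edge key = sorted vertex index pair).
  const edgeKey = (i0, i1) => Math.min(i0, i1) + '_' + Math.max(i0, i1);
  const edgeToTris = new Map();
  for (let t = 0; t < triCount; t++) {
    for (let e = 0; e < 3; e++) {
      const key = edgeKey(index[3 * t + e], index[3 * t + (e + 1) % 3]);
      if (!edgeToTris.has(key)) edgeToTris.set(key, []);
      edgeToTris.get(key).push(t);
    }
  }

  // Flood fill across shared edges where the normals agree.
  const groupOf = new Array(triCount).fill(-1);
  const groups = [];
  for (let seed = 0; seed < triCount; seed++) {
    if (groupOf[seed] !== -1) continue;
    const group = [seed];
    const stack = [seed];
    groupOf[seed] = groups.length;
    while (stack.length) {
      const t = stack.pop();
      for (let e = 0; e < 3; e++) {
        const key = edgeKey(index[3 * t + e], index[3 * t + (e + 1) % 3]);
        for (const n of edgeToTris.get(key)) {
          if (groupOf[n] === -1 && normals[n].angleTo(normals[t]) < angleEpsilon) {
            groupOf[n] = groups.length;
            group.push(n);
            stack.push(n);
          }
        }
      }
    }
    groups.push(group); // each group is a list of triangle indices forming one coplanar patch
  }
  return groups;
}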
To simplify subdivision with triangle meshes, three.js switched to the Loop subdivision surface algorithm in r60, which remained until r125 (available here). This algorithm generally doesn't look as nice on some geometries because it heavily weights corners with shared vertices, and the modifier was eventually removed.
I have recently implemented a new modifier using Loop subdivision for modern three.js BufferGeometry and released it on GitHub under the MIT license. It includes a pre-subdivision pass to evenly split coplanar faces, smoothing sharp geometries more evenly.
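Usage is roughly like the sketch below; treat the package name, export name and options as assumptions based on the description above, and check the repository's README for the actual API.

import * as THREE from 'three';
// Assumed import; the real package/export names may differ.
import { LoopSubdivision } from 'three-subdivide';

const geometry = new THREE.BoxGeometry(1, 1, 1);
const iterations = 2;
// 'split' is assumed to toggle the pre-subdivision pass that splits coplanar faces first.
const smoothed = LoopSubdivision.modify(geometry, iterations, { split: true });
const mesh = new THREE.Mesh(smoothed, new THREE.MeshNormalMaterial());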
Here is a demo of it in action.
Related
I'm new to WebGL and OpenGL. I've worked with an OpenGL library elsewhere that gives me the option of choosing how my geometry is "drawn". For example, I can select triangles, quads, line_loop, points, etc.
My question is: based on my research so far, Three.js removed the option for quads due to rendering issues. Are there any other options for how geometric shapes are drawn?
Here's a graph depicting what I mean: http://www.opentk.com/files/tmp/persistent/opentk/files/GeometricPrimitiveTypes.gif
Polygons and quads are pretty much useless, because your typical rasterizer (software or GPU) can deal only with convex, coplanar primitives, and it's easy to construct a quad or polygon that's concave, not coplanar, or both. Hence quads and polygons have been removed from modern OpenGL. Triangles, in any form, are safe though; there's no way for a triangle to be concave or non-coplanar.
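For example, in three.js a quad is simply submitted as two triangles sharing an edge; a minimal sketch:

import * as THREE from 'three';

// A quad ABCD drawn as two CCW triangles (A,B,D) and (B,C,D) in an indexed BufferGeometry.
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array([
  -1, -1, 0,  // A
   1, -1, 0,  // B
   1,  1, 0,  // C
  -1,  1, 0,  // D
]);
geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));
geometry.setIndex([0, 1, 3,  1, 2, 3]); // two triangles covering the quad
const quad = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ side: THREE.DoubleSide }));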
I am new to the WebGL and shaders world, and I was wondering about the best way to paint only the pixels within a path. I have the 2D position of each point, and I would like to fill the inside of the path with a color.
(The original post includes two images: the 2D positions of the points, and the desired fill.)
Could someone give me a direction? Thanks!
Unlike with the canvas 2D API, doing this in WebGL requires you to triangulate the path. WebGL only draws points (squares), lines, and triangles. Everything else (circles, paths, 3D models) is up to you to build creatively from those 3 primitives.
In your case you need to turn your path into a set of triangles. There are tons of algorithms to do that. Each one has tradeoffs: some only handle convex paths, some don't handle holes, some add more points in the middle and some don't, and some are faster than others. There are also libraries that do it, like this one for example.
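As an illustration of that kind of library, here is a sketch using earcut, a commonly used ear-clipping triangulator (not necessarily the library linked above):

import earcut from 'earcut';

// Flat [x0, y0, x1, y1, ...] outline of the path, in order.
const path = [0, 0,  100, 0,  100, 80,  60, 40,  0, 80];

// Returns triangle indices into the list of points, three per triangle.
const indices = earcut(path);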
It's kind of a big topic, arguably too big to go into detail here. Other SO questions about it already have answers.
Once you have the path turned into triangles, it's pretty straightforward to pass those triangles to WebGL and have them drawn.
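A minimal raw-WebGL sketch of that drawing step (positions are assumed to already be triangulated and expressed in clip space; error checking omitted):

const gl = document.querySelector('canvas').getContext('webgl');

// Pass 2D clip-space positions straight through and fill with a flat color.
const vsSource = 'attribute vec2 position; void main() { gl_Position = vec4(position, 0.0, 1.0); }';
const fsSource = 'precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.4, 0.2, 1.0); }';

function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// Triangulated [x, y] pairs, e.g. produced by your triangulation step.
const positions = new Float32Array([-0.5, -0.5,  0.5, -0.5,  0.0, 0.5]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

const loc = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLES, 0, positions.length / 2);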
Plenty of answers on SO already cover that as well. Examples
Drawing parametric shapes in webGL (without three.js)
Or you might prefer some tutorials
There is a simple triangulation (mesh generation) for your case. First sort all your vertices into CCW order. Then calculate the middle point of all the vertices. Then iterate over your sorted vertices and push a triangle made of the middle point, the point at vertices[index], and the point at vertices[index + 1] to the mesh (wrapping around to the first vertex at the end).
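A minimal sketch of that fan triangulation (it assumes the outline is convex, or at least star-shaped around the middle point; vertices are plain {x, y} objects):

// Fan triangulation around the centroid of a CCW-sorted outline.
function fanTriangulate(vertices) {
  const center = { x: 0, y: 0 };
  for (const v of vertices) {
    center.x += v.x / vertices.length;
    center.y += v.y / vertices.length;
  }
  const triangles = [];
  for (let i = 0; i < vertices.length; i++) {
    const a = vertices[i];
    const b = vertices[(i + 1) % vertices.length]; // wrap around to close the path
    triangles.push([center, a, b]);
  }
  return triangles;
}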
I'm working on a simple building-planning editor. For the 3D preview I'm using the Three.js library for Dart (from GitHub). So far the algorithm is pretty simple: it converts single lines to rectangles and then extrudes them (based on thickness and height).
Is it possible to normalize vertex positions depending on adjacent walls? Technically I store a list of walls, from which I can query adjacent walls and calculate a Vector2 list for mesh generation for each wall. I have to apply changes to each wall separately due to the extrusion.
Thanks in advance!
Maybe you could instead try to properly tessellate the 2D thickened walls and then only extrude them (instead of extruding, tessellating, and then trying to fix the joints). For simple polylines, joint tessellation can be handled as described in this article: http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation.
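A minimal sketch of the miter-joint idea from that article, applied to the 2D centerline before extrusion (the article covers much more, e.g. caps and fallbacks for very sharp angles; points are {x, y} objects and w is half the wall thickness):

function normalize(v) {
  const len = Math.hypot(v.x, v.y) || 1;
  return { x: v.x / len, y: v.y / len };
}

// For each centerline point, compute left/right offset points that keep a
// constant wall thickness across joints; these 2D outlines can then be extruded.
function miterOffsets(points, w) {
  const offsets = [];
  for (let i = 0; i < points.length; i++) {
    const prev = points[Math.max(i - 1, 0)];
    const next = points[Math.min(i + 1, points.length - 1)];
    // Directions of the incoming and outgoing segments (fall back at the endpoints).
    const dIn = i === 0 ? normalize({ x: next.x - points[i].x, y: next.y - points[i].y })
                        : normalize({ x: points[i].x - prev.x, y: points[i].y - prev.y });
    const dOut = i === points.length - 1 ? dIn
                                         : normalize({ x: next.x - points[i].x, y: next.y - points[i].y });
    // Miter direction = normal of the averaged tangent.
    const tangent = normalize({ x: dIn.x + dOut.x, y: dIn.y + dOut.y });
    const miter = { x: -tangent.y, y: tangent.x };
    // Scale so the joint keeps the requested thickness (degenerates for 180-degree turns).
    const normalIn = { x: -dIn.y, y: dIn.x };
    const len = w / (miter.x * normalIn.x + miter.y * normalIn.y);
    offsets.push({
      left: { x: points[i].x + miter.x * len, y: points[i].y + miter.y * len },
      right: { x: points[i].x - miter.x * len, y: points[i].y - miter.y * len },
    });
  }
  return offsets;
}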
I am attempting to use Three.js to morph one geometry into another. Here's what I've done so far (see http://stemkoski.github.io/Three.js/Morph-Geometries.html for a live example).
I am attempting to morph from a small polyhedron to a larger cube (both triangulated and centered at the origin). The animating is done via shaders. Each vertex on the smaller polyhedron has two associated attributes, its final position and its final UV coordinate. To calculate the final position of each vertex, I raycasted from the origin through each vertex of the smaller polyhedron and found the point of intersection with the larger cube. To calculate the final UV value, I used barycentric coordinates and the UV values at the vertices of the intersected face of the larger cube.
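In current three.js (BufferGeometry) that projection step might look roughly like the sketch below; the live example above uses the older THREE.Geometry API, and the attribute names here are just illustrative. It assumes every ray actually hits the target (the small polyhedron sits inside the cube) and that the target geometry has UVs.

import * as THREE from 'three';

// For each vertex of the small source geometry, cast a ray from the origin through
// the vertex and record where it hits the larger target mesh (position and UV).
function projectOntoTarget(sourceGeometry, targetMesh) {
  targetMesh.material.side = THREE.DoubleSide; // rays start inside the target, so don't cull back faces
  const raycaster = new THREE.Raycaster();
  const origin = new THREE.Vector3(0, 0, 0);
  const position = sourceGeometry.attributes.position;
  const vertex = new THREE.Vector3();
  const finalPositions = [];
  const finalUVs = [];

  for (let i = 0; i < position.count; i++) {
    vertex.fromBufferAttribute(position, i);
    raycaster.set(origin, vertex.clone().normalize());
    const hit = raycaster.intersectObject(targetMesh)[0];
    finalPositions.push(hit.point.x, hit.point.y, hit.point.z);
    finalUVs.push(hit.uv.x, hit.uv.y); // UV interpolated at the hit point (needs a 'uv' attribute)
  }

  // Shader attributes holding each vertex's morph target (illustrative names).
  sourceGeometry.setAttribute('finalPosition', new THREE.Float32BufferAttribute(finalPositions, 3));
  sourceGeometry.setAttribute('finalUv', new THREE.Float32BufferAttribute(finalUVs, 2));
}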
That led to a not awful but not great first attempt. Since (usually) none of the vertices of the larger cube were the final position of any of the vertices of the smaller polyhedron, big chunks of the surface of the cube were missing. So next I refined the smaller polyhedron by adding more vertices as follows: for each vertex of the larger cube, I raycasted toward the origin, and where each ray intersected a face of the smaller polyhedron, I removed that triangular face and added the point of intersection and three smaller faces to replace it. Now the morph is better (this is the live example linked to above), but the morph still does not fill out the entire volume of the cube.
My best guess is that in addition to projecting the vertices of the larger cube onto the smaller polyhedron, I also need to project the edges: if A and B are vertices connected by an edge on the larger cube, then the projections of these vertices on the smaller polyhedron should also be connected by an edge. But then, of course, it is possible that the projected edge will cross over multiple pre-existing triangles in the mesh of the smaller polyhedron, requiring multiple new vertices to be added, retriangulation, etc. It seems that what I actually need is an algorithm to calculate a common refinement of two triangle meshes. Does anyone know of such an algorithm and/or examples (with code) of morphing between two meshes with different triangulations, as described above?
As it turns out, this is an intricate question. In the technical literature, the algorithm I am interested in is sometimes called the "map overlay algorithm"; the mesh I am constructing is sometimes called the "supermesh".
Some useful works I have been reading about this problem include:
Morphing of Meshes: The State of the Art and Concept. PhD thesis by Jindrich Parus. http://herakles.zcu.cz/~skala/MSc/Diploma_Data/REP_2005_Parus_Jindrich.pdf (chapter 4 especially helpful)
Computational Geometry: Algorithms and Applications (book). Mark de Berg et al. (chapter 2 especially helpful)
Shape Transformation for Polyhedral Objects (article). James R. Kent et al. Computer Graphics, 26, 2, July 1992. http://www.cs.uoi.gr/~fudos/morphing/structural-morphing.pdf
I have started writing a series of demos to build up the machinery needed to implement the algorithms discussed in the literature referenced above to solve my original question. So far, these include:
Spherical projection of a mesh: http://stemkoski.github.io/Three.js/Sphere-Project.html
Topological data structure of a THREE.Geometry: http://stemkoski.github.io/Three.js/Topology-Data.html
There is still more work to be done; I will update this answer periodically as I make additional progress, and still hope that others have information to contribute!
I am trying to make a quadrilateral mesh from a surface mesh (which is mostly triangular) generated by Mathematica. I am not looking for a high-quality mesher but a simple workaround algorithm. I use GMSH for doing it externally. We can make use of Mathematica's CAD import capabilities to generate 3D geometries that are understood by the Mathematica kernel.
We can see the imported Geometry3D objects and plots of the number of sides of each polygon they consist of. It becomes visible that the polygons that form the mesh are not always triangles.
Name3D = RandomChoice[ExampleData["Geometry3D"][[All, 2]], 6];
AllPic =
  Table[
   Vertex = ExampleData[{"Geometry3D", Name3D[[i]]}, "VertexData"];
   Polygons = ExampleData[{"Geometry3D", Name3D[[i]]}, "PolygonData"];
   GraphicsGrid[
    {{ListPlot[#, Frame -> True, PlotLabel -> Name3D[[i]]] &@(Length[#] & /@ Polygons),
      Graphics3D[GraphicsComplex[Vertex, Polygon[Polygons]], Boxed -> False]}},
    ImageSize -> 300, Spacings -> {0, 0}],
   {i, 1, Length@Name3D}];
GraphicsGrid[Partition[AllPic, 2], Spacings -> {0, 0}]
Now what I am looking for is an algorithm to form a quadrilateral mesh from the polygon information available to MMA. Any easy solution is very much welcome. By easy solution I mean one that is not going to work in a very general setting (where the mesh consists of polygons with more than 5 or 6 sides) and which might be quite inefficient compared to commercial software. But one can see that there are not many quadrilateral surface mesh generators available other than a few expensive commercial ones.
BR
This will produce quads regardless of the input topology (see the sketch after these steps):
insert one vertex in the center of each face
insert one vertex at the midpoint of each edge
insert edges connecting each face's center vertex with its edges' midpoint vertices
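A minimal sketch of that split on a simple polygon-soup representation (vertices as [x, y, z] arrays, faces as lists of vertex indices; names are illustrative):

function splitToQuads(vertices, faces) {
  const newVertices = vertices.map(v => v.slice());
  const newFaces = [];
  const edgeMidpoints = new Map(); // midpoints shared between faces, keyed by sorted vertex pair

  const midpointIndex = (i0, i1) => {
    const key = Math.min(i0, i1) + '_' + Math.max(i0, i1);
    if (!edgeMidpoints.has(key)) {
      const a = vertices[i0], b = vertices[i1];
      newVertices.push([(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2]);
      edgeMidpoints.set(key, newVertices.length - 1);
    }
    return edgeMidpoints.get(key);
  };

  for (const face of faces) {
    // Vertex at the face center (average of the face's corners).
    const center = [0, 0, 0];
    for (const i of face) {
      center[0] += vertices[i][0] / face.length;
      center[1] += vertices[i][1] / face.length;
      center[2] += vertices[i][2] / face.length;
    }
    newVertices.push(center);
    const centerIndex = newVertices.length - 1;

    // One quad per original corner: corner, next edge midpoint, center, previous edge midpoint.
    for (let k = 0; k < face.length; k++) {
      const prevV = face[(k - 1 + face.length) % face.length];
      const currV = face[k];
      const nextV = face[(k + 1) % face.length];
      newFaces.push([currV, midpointIndex(currV, nextV), centerIndex, midpointIndex(prevV, currV)]);
    }
  }
  return { vertices: newVertices, faces: newFaces };
}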