Face is not drawn - three.js

I am trying to draw a mesh with several faces.
Some of the faces are drawn and some are not.
When I create a face that is normally not drawn
with its indices in reverse order, it is drawn.
The following doesn't work:
geom.faces.push(new THREE.Face3(k,k+1,k+2,myface.normal));
This works:
geom.faces.push(new THREE.Face3(k+2,k+1,k,myface.normal));
To me this means that the order of the vertices is wrong,
so the normal points in the opposite direction.
But I pass the correct normal to the face (which I calculate myself),
and even if I negate the normal, the face is still not drawn.
As I understand it, if I pass the correct normal it should make
no difference whether the indices are in reverse order or not.
Where am I wrong?

The face front is determined by the winding order, not the face normal. The default winding order is counter-clockwise (CCW).
Have a look at the source of Geometry.computeFaceNormals(), and notice how the computed face normal is consistent with a CCW winding order.
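As an illustration, here is a minimal sketch (legacy Geometry/Face3 API as in the question, with geom, k and myface taken from there) that makes the winding agree with a precomputed normal instead of fighting it:
var vA = geom.vertices[k];
var vB = geom.vertices[k + 1];
var vC = geom.vertices[k + 2];

// Same cross product Geometry.computeFaceNormals() uses: for CCW winding
// (a, b, c seen from the front), this points out of the front face.
var cb = new THREE.Vector3().subVectors(vC, vB);
var ab = new THREE.Vector3().subVectors(vA, vB);
var windingNormal = cb.cross(ab).normalize();

// If the winding disagrees with the normal computed by hand,
// reverse the indices rather than negating the normal.
if (windingNormal.dot(myface.normal) >= 0) {
    geom.faces.push(new THREE.Face3(k, k + 1, k + 2, myface.normal));
} else {
    geom.faces.push(new THREE.Face3(k + 2, k + 1, k, myface.normal));
}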
three.js r.58

Related

How to detect reversed faces (flipped normals)?

I need to load several STL files in a scene, some of which have reversed/flipped faces. This can be easily fixed (visually) using a double-sided material, but what I'm trying to do is visualize the wrong faces, switching the material index from blue to red.
If I understand correctly, a mirrored/flipped/reversed face is drawn clockwise instead of counter-clockwise. But are we talking about the three vertices that form the face? If so, is the winding order the order given by face.a, face.b and face.c? Or do I have to compare the face index values to know the order the face is drawn in?
I set up a JSFiddle to illustrate the problem: this STL model has 10 faces, and 3 of them are reversed or flipped. Using a FaceNormalsHelper you can see that those 3 normal helpers point to the inside of the model instead of outside. I've tried 6 different ways to check this, some of them extracted from the code at computeFaceNormals(), with no luck. I've also examined the FaceNormalsHelper to see how it works, but it simply draws a line from the face centroid in the normal's direction; it never gets to calculate whether it's pointing inside or outside.
(Note: in order to change from one method to another, you must change the variable at the start of the script. Method 1 manually overrides these three faces' material index, just to visualize the wrong ones (as a validation of all the other methods). None of the others gets the right results, but methods 2 and 3 are really close. I wonder if the face angle has something to do with this.)
EDIT: The method suggested by @manthrax works perfectly, comparing the edges of each face with all the other ones. If an edge is drawn the same way in two faces (example: face 1 edge a-b matches face 2 edge a-b, instead of b-a), it's flipped. Basically, if a face has two or more edges marked as flipped, we can say the whole face is flipped. Check the JSFiddle to see it working.
function paintFlippedFaces(faceCheck) {
    for (var x = 0; x < Object.keys(faceCheck).length; x++) {
        // Count how many of this face's three edges were flagged as flipped.
        var flipCounter = 0;
        if (faceCheck[x]['a-b']) flipCounter++;
        if (faceCheck[x]['b-c']) flipCounter++;
        if (faceCheck[x]['c-a']) flipCounter++;
        console.log("face " + x + " has " + flipCounter + " flipped edges");
        // Two or more flipped edges: treat the whole face as flipped
        // and switch it to the red material.
        if (flipCounter >= 2) {
            geometry.faces[x].materialIndex = 1;
            //geometry.faces.elementsNeedUpdate = true;
        }
    }
}
First off, "flipped" faces are totally subjective. You need to decide on some parameters.
For a convex object, there is a non-subjective definition: for instance, that the face normal, dotted with the vector from the object's centroid to one of the face vertices, is > 0.
For non-convex objects you can make some guesses based on the winding order of the vertices, and whether any 2 edges contain the same ordering, implying that you have 1 front-facing and 1 back-facing triangle sharing an edge. (Which of those triangles is flipped isn't necessarily identifiable in the non-convex case, though.)
First, you'll have to call .mergeVertices() on your geometry so that vertices are shared across faces... I think STL defines unique vertices for every triangle, so there's no concept of an "edge" that is shared by two faces.
Then you can loop through each face: basically, any two edges that share the same two vertices should have opposite ordering; if they don't, the two faces are facing in opposite directions.
So if two faces have an edge like 3,2 and 2,3 then they are both facing the same direction, and the faces are consistent.
If two different faces both have an edge that is 3,2 and 3,2, then they are facing opposite directions, and you could flag both faces as possibly being flipped. The faces that get flagged the most times are the ones that are probably actually flipped.
Those are the best guess tests I can think of to determine inside/outside, but it's a tricky problem.
Blender has a couple "recalculate normals" methods for recomputing "inside"/"outside" that I think operate on a similar principle.. so you could also look at those.
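For reference, here is a hedged sketch of how the faceCheck structure consumed by paintFlippedFaces() above might be built. buildFaceCheck is a hypothetical helper reconstructing the approach; it assumes geometry.mergeVertices() has already been called so faces share vertex indices:
function buildFaceCheck(geometry) {
    var seen = {};      // directed edge "v1_v2" -> { face, label } of first occurrence
    var faceCheck = {}; // face index -> which of its edges look flipped

    geometry.faces.forEach(function (face, i) {
        faceCheck[i] = { 'a-b': false, 'b-c': false, 'c-a': false };
    });

    geometry.faces.forEach(function (face, i) {
        [['a-b', face.a, face.b],
         ['b-c', face.b, face.c],
         ['c-a', face.c, face.a]].forEach(function (edge) {
            var key = edge[1] + '_' + edge[2]; // directed: "3_2" differs from "2_3"
            var prev = seen[key];
            if (prev) {
                // The same directed edge occurs twice, so the two faces
                // wind in opposite directions: flag the edge on both.
                faceCheck[i][edge[0]] = true;
                faceCheck[prev.face][prev.label] = true;
            }
            seen[key] = { face: i, label: edge[0] };
        });
    });
    return faceCheck;
}

paintFlippedFaces(buildFaceCheck(geometry));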

UV-mapping a BufferGeometry's indices in Three.js

I'm having trouble UV mapping each side of a cube. The cube was made as a BufferGeometry where each side is a copy of one side (sort of a working template) rotated into place by applying a quaternion. UVs are copied as well. I'll skip any vertex coordinate I've seen before and rely on indices.
This leaves me with a total of 8 vertices and 12 faces. But I think I'm running short on vertices when I have to set all of my UVs. As is obvious in the screenshot, I've "correctly" mapped each side of the cube, but the top and bottom are lacking. I don't know how to set the vertex UVs for the top and bottom faces.
Can I in some way apply several UVs to the same vertex depending on which face it is used in, or have I completely lost the plot?
I could solve the problem by using 6 PlaneBufferGeometry instances, but that would leave me with 4*6=24 vertices. That is a whole lot more than 8.
I haven't been able to figure this one out. Either I've completely misunderstood how it works or what I'm trying to accomplish is impossible given my constraints.
With BufferGeometry, vertices can only be reused if all the attributes for that vertex match. Since each corner of the cube has 3 perpendicular normals, there must be 3 copies of that vertex.
If you have uvs, it is the same issue -- vertices must be duplicated if the uvs are different.
Study BoxBufferGeometry, which is implemented as "indexed-BufferGeometry".
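For example, a minimal sketch (using the r.90-era addAttribute API; newer releases call it setAttribute): two quads meeting at an edge, where the two shared corner positions are stored twice because each quad wants different uvs there, exactly as BoxBufferGeometry does at every cube corner.
var geometry = new THREE.BufferGeometry();

var positions = new Float32Array([
    // quad A (front), 4 vertices
    0, 0, 1,   1, 0, 1,   1, 1, 1,   0, 1, 1,
    // quad B (top) reuses positions (0,1,1) and (1,1,1),
    // but as NEW vertices because its uvs differ
    0, 1, 1,   1, 1, 1,   1, 1, 0,   0, 1, 0
]);

var uvs = new Float32Array([
    // quad A maps to the full texture
    0, 0,   1, 0,   1, 1,   0, 1,
    // quad B also maps to the full texture: different uvs at the
    // shared positions, hence the duplication
    0, 0,   1, 0,   1, 1,   0, 1
]);

geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
geometry.setIndex([0, 1, 2, 0, 2, 3, 4, 5, 6, 4, 6, 7]); // indices only reuse vertices within a quad
geometry.computeVertexNormals();
This is why a cube needs 24 vertices rather than 8 as soon as per-face uvs or normals are involved.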
three.js r.90

Tiled Terrain Normals

I am trying to create a terrain solution in ThreeJS and I'm running into some trouble with the generation of the normals. I am approaching the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created, I go through each one and set the UVs so that each tile represents a part of the whole. I also generate height values for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call this problem. Each of the normals on the plane's corners points in the same direction as the face when created. This makes the terrain look like a flat-shaded object. To prevent this, I thought perhaps what I needed to do was make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile has the same normal as all the immediate normals around it from the adjacent planes.
[figure A: visualising each of the 4 normals on the mesh; at each corner the normals are the same (on top of each other)]
[figures B, C and D: further screenshots, added in later edits]
Except that even when the verts all share the same normals, it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.
You're basically right about what should happen. The shading you're getting is not consistent with continuous normals. If all the vertex-face normals at a given location are the same, you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quads don't have a real 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same; if that were the case you'd get consistent shading across each quad, although the quads would not be continuous. Is it possible your original vertex-face normals are not in fact lined up with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is being done in your shader you may have incorrect binormals/bitangents rather than bad vert normals.
Instead of averaging the neighbourhood normals at each vertex / corner, you should average the four normals that each vertex has (4 tiles meet at each vertex).
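A hedged sketch of that averaging (legacy Geometry, which stores per-corner normals in face.vertexNormals). smoothTileSeams and the tiles array are assumptions, positions are matched by rounding world coordinates, and the tiles are assumed to be translated copies (no rotation), so local normals are comparable across tiles:
function smoothTileSeams(tiles) {
    var groups = {}; // rounded world position -> [{ tile, index }]

    tiles.forEach(function (tile) {
        tile.updateMatrixWorld();
        tile.geometry.vertices.forEach(function (v, i) {
            var w = v.clone().applyMatrix4(tile.matrixWorld);
            var key = w.x.toFixed(2) + '_' + w.y.toFixed(2) + '_' + w.z.toFixed(2);
            (groups[key] = groups[key] || []).push({ tile: tile, index: i });
        });
    });

    Object.keys(groups).forEach(function (key) {
        var shared = groups[key];
        if (shared.length < 2) return; // interior vertex, not on a seam

        // Sum every face-corner normal that references this position
        // (up to 4 tiles meet here), then normalize the average.
        var avg = new THREE.Vector3();
        shared.forEach(function (s) {
            s.tile.geometry.faces.forEach(function (face) {
                ['a', 'b', 'c'].forEach(function (corner, c) {
                    if (face[corner] === s.index) avg.add(face.vertexNormals[c]);
                });
            });
        });
        avg.normalize();

        // Write the averaged normal back into every corner that uses it.
        shared.forEach(function (s) {
            s.tile.geometry.faces.forEach(function (face) {
                ['a', 'b', 'c'].forEach(function (corner, c) {
                    if (face[corner] === s.index) face.vertexNormals[c].copy(avg);
                });
            });
            s.tile.geometry.normalsNeedUpdate = true;
        });
    });
}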

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture, and then drawing that with the convolution shader). I've also tried doing 2 passes, the first being single-colored overscaled geometry to represent an outline, and then normal drawing on top, but this results in different thicknesses or unaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is it possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one of them)...
If a point is interior then, if you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two edges don't have a polygon in common, then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
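A hedged sketch of that edge bookkeeping on the CPU, using an equivalent observation: with shared vertices, an edge used by exactly one triangle lies on the outline. The indices parameter is the triangle index buffer, with removed fixtures zeroed out as in the question:
function outlineEdges(indices) {
    var count = {}; // undirected edge "lo_hi" -> number of triangles using it
    for (var i = 0; i < indices.length; i += 3) {
        var t0 = indices[i], t1 = indices[i + 1], t2 = indices[i + 2];
        if (t0 === t1 || t1 === t2) continue; // degenerate (removed fixture)
        [[t0, t1], [t1, t2], [t2, t0]].forEach(function (e) {
            var key = Math.min(e[0], e[1]) + '_' + Math.max(e[0], e[1]);
            count[key] = (count[key] || 0) + 1;
        });
    }
    // Edges with no second polygon are exterior: they form the outline.
    return Object.keys(count)
        .filter(function (k) { return count[k] === 1; })
        .map(function (k) { return k.split('_').map(Number); });
}
Chaining the returned vertex pairs end to end by matching endpoints reproduces the walk described above and yields the contour.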
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
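For the rectangle case that might look like the following (a toy sketch; the {cx, cy, w, h} record is an assumption from the description above):
// Draw the outline pass first with these enlarged rectangles,
// then draw the normal geometry on top.
function expandedRect(rect, outlineWidth) {
    return {
        cx: rect.cx, // same centre
        cy: rect.cy,
        w: rect.w + 2 * outlineWidth, // grow by the outline on each side
        h: rect.h + 2 * outlineWidth
    };
}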

Is a closed polygonal mesh flipped?

I have a 3d modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single sided when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work.. :)
This is a robust, but slow way to get there:
Take a corner of a bounding box, offset from the centroid (to force it to be guaranteed outside your closed polygonal mesh), then create a line segment from that to the center point of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there are an odd number of intersections, the segment reaches that triangle from outside, so a correctly oriented normal should point back towards the segment's origin: the angle between the normal and the direction from the triangle to the outside point should be less than 90°.
If there are an even number of intersections, the segment arrives from the inside and the test is reversed: that angle should be greater than 90°. A triangle that fails its test has a flipped normal.
This should work for very complex surfaces, but they must be closed, or it breaks down.
"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL then there is a much better way to go that saves you all that work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all the polygons are always two sided.
The only reason why you would want to use one-sided lighting is if you have an open or partially inverted mesh and you want to somehow indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing and AFAIK there is no sure-fire general way that can always determine the orientation. As you suggest, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of the normal. If the ray hits an even number of faces, the normal points to the outside; if odd, it points to the inside. Take care not to count the point of origin as an intersection. This approach will work only if the mesh is a closed manifold, and it can hit edge cases if the ray happens to pass exactly between two polygons, so you might want to do it a number of times and take a majority vote.
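A hedged three.js sketch of that test (normalPointsOutward is a hypothetical helper; vertex and normal are assumed to be in world space, and the mesh's material must be THREE.DoubleSide so back faces register as hits):
function normalPointsOutward(mesh, vertex, normal) {
    // Start a hair off the surface so the origin itself doesn't count as a hit.
    var dir = normal.clone().normalize();
    var origin = vertex.clone().addScaledVector(dir, 1e-4);
    var raycaster = new THREE.Raycaster(origin, dir);
    var hits = raycaster.intersectObject(mesh);
    // Even number of hits: the ray escapes, so the normal faces outside.
    return hits.length % 2 === 0;
}
Repeating this from several vertices and taking a majority vote, as suggested above, guards against rays that pass exactly between two polygons.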
