OBJ format and Flat vs Smooth Shading - opengl-es

I'm using OpenGL ES 1.1 and working on converting an OBJ export from Blender into some binary files containing vertex data. I actually already have a working tool, but I'm working on changing some things and came across a question.
Even with smooth shading enabled, correct per-face normals (perpendicular to each face's plane) give the faces a flat appearance. With smooth shading and the proper normals (obtained simply by marking edges as sharp in Blender and applying an Edge Split modifier), I can get the effect of smooth areas with sharp edges.
This brings me to two questions.
Are the "s 1" or "s off" lines, which denote smooth or flat shading in the OBJ file, completely unnecessary from the standpoint of smooth shading with normals?
When actually set to Flat shading in OpenGL, are normals completely ignored (or just assumed to all be perpendicular to the faces)?

For a vertex to look smooth, its normal has to be the average of the adjacent face normals (or something like that), not perpendicular to the face plane (unless you meant the average plane of all its adjacent faces).
GL_FLAT means the color of a face is not interpolated across the triangle but taken from a single corner (the last vertex of the primitive, per the spec). This color comes either from vertex colors or vertex lighting, so in effect you get per-face normals, but each one is not necessarily the face's direction; it is the normal of a corner vertex.
If you have per-vertex normals in the OBJ file you do not need the s lines. But you can use them to compute vertex normals. The s lines are the smoothing groups and are to be interpreted as 32-bit bitfields. So there are actually 32 different smoothing groups, and every face can be part of more than one: all faces after an "s 5" line are part of smoothing groups 1 and 3 (first and third bits set). When two neighbouring faces are part of the same smoothing group, the edge between them is smooth (the vertices share normals). This way you can reconstruct the necessary per-vertex normals.
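A rough sketch of that reconstruction (the function names are mine, and face normals are taken from the first three vertices of each face, so strongly non-planar polygons would need something more careful): faces whose smoothing-group bitfields share a bit contribute to each other's vertex normals, while an "s off" face (mask 0) smooths with nothing:

```python
import math
from collections import defaultdict

def face_normal(verts):
    # Cross product of two edges; uses the first three vertices only.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = verts[:3]
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / l, ny / l, nz / l)

def vertex_normals(positions, faces):
    # faces: list of (vertex_index_tuple, smoothing_mask) pairs, where the
    # mask is the "s" value read as a 32-bit bitfield ("s off" -> 0).
    fnormals = [face_normal([positions[i] for i in idx]) for idx, _ in faces]
    by_vertex = defaultdict(list)            # vertex index -> [(face_id, mask)]
    for fid, (idx, mask) in enumerate(faces):
        for i in idx:
            by_vertex[i].append((fid, mask))
    result = []
    for fid, (idx, mask) in enumerate(faces):
        per_corner = []
        for i in idx:
            nx = ny = nz = 0.0
            for ofid, omask in by_vertex[i]:
                # Smooth across faces sharing at least one group bit.
                if ofid == fid or (mask & omask):
                    fx, fy, fz = fnormals[ofid]
                    nx, ny, nz = nx + fx, ny + fy, nz + fz
            l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
            per_corner.append((nx / l, ny / l, nz / l))
        result.append(per_corner)
    return result
```

Two coplanar triangles in the same smoothing group then share identical normals along their common edge.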

Changing the mode between GL_FLAT and GL_SMOOTH doesn't seem to affect my rendering when I'm using per-vertex normals. From what I can tell, your problem is that each face has only one normal. For smooth shading, each face should have three normals, one for each vertex, and they should all be different. For example, the normals of a cylinder, if extended inward, should all intersect at the axis of the cylinder. If your model has smooth normals, an OBJ export should write per-vertex normals; it sounds like you are assigning per-face normals instead. As for rendering in OpenGL ES, the smoothing groups aren't used, only the normals.
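To illustrate the cylinder example, here is a sketch (names are mine) of smooth per-vertex normals for the side wall of a cylinder along the Z axis: each normal is simply the radial direction, shared by every face touching that vertex, so all of them extended inward meet at the axis:

```python
import math

def cylinder_side_normals(segments):
    # Smooth per-vertex normals for a cylinder's side, axis along Z.
    # Each normal points radially outward from the axis.
    normals = []
    for s in range(segments):
        a = 2.0 * math.pi * s / segments
        normals.append((math.cos(a), math.sin(a), 0.0))
    return normals
```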

Related

how can I adapt a geometry to a surface?

How can I adapt a geometry (a box geometry to start with) to another one? I am looking for an effect like the one in the picture
where the cyan part was originally a box and then it got "adapted" to the plane and over the red part.
This is possible in some software packages (Modo, for example) but I'd like to do it in WebGL/three.js.
Consider modifying mesh geometry.
That implies that, for good results, the mesh will need a high polygon count.
If you want to hug a simple shape (box, sphere), vertex displacement can be sufficient:
Pass your red shape's parameters as uniforms when drawing the blue shape
For each blue-shape vertex, find whether it is inside the red shape and offset the vertex position if needed
Choosing the offset direction as the closest face normal of the red shape should be OK
That gives just the visuals; if you need a more robust solution, generate new geometry entirely on the CPU on demand.
For example:
Loop through all vertices and offset them, marking the offset vertices
Do an additional loop to relax hard edges
I suspect real algorithms from 3d modelling software are more complex.
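A CPU-side sketch of the displacement step, under the simplifying assumption that the red shape is a sphere (the function name and representation are mine): any vertex inside the sphere is pushed out to the surface along the radial direction, which is the sphere's surface normal, and flagged for a later relaxation pass:

```python
import math

def push_out_of_sphere(vertices, center, radius):
    # Offset any vertex inside the sphere onto its surface, moving it
    # along the radial direction. Returns (new_vertices, moved_flags)
    # so moved vertices can be relaxed in a second pass.
    cx, cy, cz = center
    out, moved = [], []
    for (x, y, z) in vertices:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if d < radius:
            s = radius / (d or 1e-9)   # project onto the surface
            out.append((cx + dx * s, cy + dy * s, cz + dz * s))
            moved.append(True)
        else:
            out.append((x, y, z))
            moved.append(False)
    return out, moved
```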

UV-mapping a BufferGeometry's indices in Three.js

I'm having trouble UV mapping each side of a cube. The cube was made as a BufferGeometry where each side is a rotated copy of one side (sort of a working template), rotated accordingly by applying a quaternion. UVs are copied as well. I skip any vertex coordinate I've seen before and rely on indices.
This leaves me with a total of 8 vertices and 12 faces. But I think I'm running short on vertices when I have to set all of my UVs. As is obvious in the screenshot, I've "correctly" mapped each side of the cube, but the top and bottom are lacking. I don't know how to set the vertex UVs for the top and bottom faces.
Can I in some way apply several UVs on the same vertex depending on which face it is used in or have I completely lost the plot?
I could solve the problem by applying 6 PlaneBufferGeometry but that would leave me with 4*6=24 vertices. That is a whole lot more than 8.
I haven't been able to figure this one out. Either I've completely misunderstood how it works, or what I'm trying to accomplish is impossible given my constraints.
With BufferGeometry, vertices can only be reused if all the attributes for that vertex match. Since each corner of the cube has 3 perpendicular normals, there must be 3 copies of that vertex.
If you have uvs, it is the same issue -- vertices must be duplicated if the uvs are different.
Study BoxBufferGeometry, which is implemented as "indexed-BufferGeometry".
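To see the constraint concretely, here is an illustrative sketch (names are mine, not three.js API) of indexing by full attribute tuples: a corner can reuse an existing index only if position, normal and uv all match, which is why a cube with per-face normals and UVs needs 24 vertices rather than 8:

```python
def index_by_attributes(corners):
    # corners: per-face-corner tuples of (position, normal, uv).
    # A corner reuses an index only when the whole tuple matches,
    # exactly BufferGeometry's reuse rule.
    unique, index = {}, []
    for c in corners:
        if c not in unique:
            unique[c] = len(unique)
        index.append(unique[c])
    return list(unique), index
```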
three.js r.90

Tiled Terrain Normals

I am trying to create a terrain solution in ThreeJS and I'm running into some trouble with the generation of the normals. I am approaching the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created, I go through each and set the UVs so that each tile represents a part of the whole. I also generate height values for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call it. Each of the normals on a plane's corners points in the same direction as the face when created, which makes the terrain look like a flat-shaded object. To prevent this, I thought I needed to make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile has the same normal as all the normals immediately around it from the adjacent planes.
figure A
Here I am visualising each of the 4 normals on the mesh. You can see that at each corner the normals are the same (on top of each other).
figure B
EDIT
figure C
EDIT
Figure D
Except even when the verts all share the same normals it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.
You're basically right about what should happen. The shading you're getting is not consistent with continuous normals: if all the vertex-face normals at a given location are the same, you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quad meshes don't have a real 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal-averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same; if that were the case you'd get consistent shading across each quad, although the quads would not be continuous. Is it possible your original vertex-face normals are not in fact lined up with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is being done in your shader you may have incorrect binormals/bitangents rather than bad vert normals.
Instead of averaging the neighbourhood normals at each vertex/corner, you should average the four normals that each vertex has (four tiles meet at each vertex).
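As an illustrative sketch (names are mine, not three.js API): collect the vertices of all tiles into one flat list, group coincident world positions, and give each group a single averaged, renormalized normal:

```python
import math
from collections import defaultdict

def average_shared_normals(vertices, normals, eps=1e-6):
    # vertices/normals: parallel lists across ALL tiles. Vertices at the
    # same world position (e.g. a corner where four tiles meet) end up
    # with one shared, averaged, renormalized normal.
    groups = defaultdict(list)
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / eps), round(y / eps), round(z / eps))
        groups[key].append(i)
    out = list(normals)
    for idxs in groups.values():
        nx = sum(normals[i][0] for i in idxs)
        ny = sum(normals[i][1] for i in idxs)
        nz = sum(normals[i][2] for i in idxs)
        l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        for i in idxs:
            out[i] = (nx / l, ny / l, nz / l)
    return out
```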

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture, then drawing that with the convolution shader). I've also tried doing two passes, the first being single-colored overscaled geometry to represent an outline, with normal drawing on top, but this results in different thicknesses or unaligned outlines. I've looked into how silhouettes in cel shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is it possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one)...
If a point is interior then if you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two edges don't have a polygon in common then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
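A minimal sketch of that outline walk, under the simplifying assumption that an outline edge is one used by exactly one polygon (function names are mine, not from the question):

```python
from collections import defaultdict

def outline(polygons):
    # polygons: lists of vertex indices, each a closed loop. An edge
    # used by exactly one polygon lies on the outline; chaining those
    # edges from any exterior point walks the outer contour.
    count = defaultdict(int)
    for poly in polygons:
        for a, b in zip(poly, poly[1:] + poly[:1]):
            count[frozenset((a, b))] += 1
    boundary = defaultdict(set)
    for e, c in count.items():
        if c == 1:
            a, b = tuple(e)
            boundary[a].add(b)
            boundary[b].add(a)
    start = next(iter(boundary))
    loop, prev, cur = [start], None, start
    while True:
        # Walk along any boundary edge that isn't the one just used.
        nxt = next(n for n in boundary[cur] if n != prev)
        if nxt == start:
            return loop
        loop.append(nxt)
        prev, cur = cur, nxt
```

For two triangles forming a quad, the shared diagonal is counted twice and dropped, leaving the four outer edges.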
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
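As a sketch of the rectangle idea above, assuming the hypothetical centre/width/height representation (the function name is illustrative): draw the expanded rectangle first in the outline colour, then the original rectangle on top of it.

```python
def expanded_rect(cx, cy, w, h, thickness):
    # Same centre, width and height grown by the outline thickness on
    # each side; drawn behind the original rect it reads as an outline.
    return (cx, cy, w + 2.0 * thickness, h + 2.0 * thickness)
```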

Vertex normals for a cube

A cube has 8 unique vertices. Is it true that each of these 8 vertex normals (unit vectors) makes a 135-degree angle with each of the edges that shares its vertex, with the vertex normal pointing outward, out of the cube? Your answer should be technically correct. Or does it depend on how the cube is defined (drawn), e.g. using triangle strips, or indices that define two triangles for each side of the cube? The purpose of the vertex normals is smooth shading and lighting in an OpenGL ES application.
If the cube is defined by 8 unique vertices, each vertex normal makes equal angles with the three edges sharing that vertex, though that angle is arccos(-1/√3) ≈ 125.26°, not quite the 135° you mention.
However, a cube is often defined using 24 vertices for exactly this reason. This allows you to have vertex normals that are perpendicular to each face, by "duplicating" vertices at each corner. Defining a cube this way is, effectively, just defining 6 individual faces, each pointing outwards appropriately.
There is no point in smoothing the cube with 8 vertices in order to make it look like a sphere. You'll get an extremely ugly sphere this way. The only reasonable way to draw the cube is using 24 unique vertices.
The center-oriented normals of the eight corner vertices of a cube actually form an angle of 125 degrees, 16 minutes (arccos(-1/√3)) with each connected edge.
There's a good discussion of this topic elsewhere on SO.
The 135-degree figure comes from a 2D picture: the normal vector at each vertex points outward and must make the same angle with each edge the vertex is part of. Since the inner angle is 90 degrees, 270 degrees lie outside the corner, and 270°/2 = 135°. In 3D, however, the angle between an edge and the body-diagonal corner normal is arccos(-1/√3), about 125°16′, as noted above.
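To double-check the figure, a tiny computation (the function name is mine) of the angle between a corner's averaged normal and one of its edges:

```python
import math

def corner_normal_edge_angle():
    # Corner normal points along the body diagonal, e.g. (1,1,1)/sqrt(3);
    # an edge leaving that corner points along e.g. (-1,0,0).
    # Their dot product is -1/sqrt(3).
    return math.degrees(math.acos(-1.0 / math.sqrt(3.0)))
```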
The normal vectors at the vertices are used to calculate the shading normal at each point of the triangle. If your 3D model, being a collection of flat triangles, had only a single normal per triangle to light from, the result would be flat shading (which is physically correct if the object really is that edgy). Using the vertex normals to interpolate a normal for each point of the triangle gives smooth lighting resembling a smooth surface.
The problem with this approach is that using only a single normal per vertex results in the cube shading like a sphere while still being a cube.
That is basically the reason why one wants to define a cube with 24 (= 6x4) vertices rather than 8. This way each face of the cube (and therefore each of its two triangles) can have correct (flat) normals.
Having 24 vertices, and therefore 24 normals, makes it possible to give each triangle/face only forward-facing normals, so the normals always point at a 90-degree angle away from the triangle/face. This yields flat shading throughout every triangle/face, which is more correct for a cube, since its surfaces really are flat.
Usually one does not want to shade a sharp angle like 90 (or 270) degrees in a smooth, continuous way. Normal interpolation is only there to mimic 'organic'/'smooth' surfaces. Since such organic/smooth surfaces are the norm (think of the teapot or a 3D figure), the decision was made to store the vertex normals with the position and UV coordinates, as fits most 'continuous' 3D surfaces; normally you add more triangles to represent a smooth topology in a model. Having vertex normals is, therefore, a trade-off that minimizes the amount of information for the average model.
A cube model, with its all-flat triangles, is therefore the worst case. This is why each of the cube's corners needs three vertex normals, one for each face it belongs to.
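A minimal sketch of such a 24-vertex layout (names are mine; positions and per-face normals only, UVs omitted): each of the 6 faces gets its own 4 corners, so every cube corner appears three times, once per incident face, each copy carrying that face's normal:

```python
def flat_cube():
    # 24 vertices (6 faces x 4 corners) with per-face normals: the
    # layout that gives a unit-ish cube correct flat shading.
    positions, normals = [], []
    for axis in range(3):                       # x, y, z face pairs
        for sign in (-1.0, 1.0):
            n = [0.0, 0.0, 0.0]
            n[axis] = sign                      # face normal
            u, v = (axis + 1) % 3, (axis + 2) % 3
            for du, dv in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
                p = [0.0, 0.0, 0.0]
                p[axis], p[u], p[v] = sign, float(du), float(dv)
                positions.append(tuple(p))
                normals.append(tuple(n))
    return positions, normals
```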
PS: Today those 'smooth' surfaces are further defined by using normal maps baked from a higher resolution model. Using normal maps, each point in the face gets its own normal vector (or the normal vector of each point can be interpolated from the normal vector samples provided by the mapped normal map).
