A cube has 8 unique vertices. Is it true that each of these 8 vertex normals (unit vectors) is making 135 degree angle to each of the edges which shares that vertex? And the vertex normal pointing outward/out of the cube? Your answer should be technically correct. Or it depends on how the cube is defined (drawn) like using triangle strips or indices that define 2 triangles for each side of the cube? The purpose of the vertex normal is smooth shading and lighting in OpenGL ES application.
If the cube is defined by 8 unique vertices, each vertex normal is typically the normalized average of the three adjacent face normals, so it points outward along the cube's space diagonal. It does make the same angle with each edge sharing that vertex, but the angle is arccos(-1/sqrt(3)), about 125.26 degrees, not 135.
However, a cube is often defined using 24 vertices for exactly this reason. This allows you to have vertex normals that are perpendicular to each face, by "duplicating" vertices at each corner. Defining a cube this way is, effectively, just defining 6 individual faces, each pointing outwards appropriately.
There is no point in smoothing the cube with 8 vertices in order to make it look like a sphere. You'll get an extremely ugly sphere this way. The only reasonable way to draw the cube is using 24 unique vertices.
The center-oriented normals of the eight corner vertices of a cube will actually form an angle of 125 degrees, 16 minutes with each connected edge.
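This value is easy to verify numerically. A quick sketch, assuming a unit cube with a corner at (1, 1, 1) and the averaged corner normal pointing along the space diagonal:

```python
import math

# Averaged vertex normal at corner (1, 1, 1): it points outward along
# the space diagonal (1, 1, 1) / sqrt(3).
n = (1 / math.sqrt(3),) * 3

# Direction of one of the three edges leaving that corner,
# e.g. toward the neighbouring vertex (0, 1, 1).
edge = (-1.0, 0.0, 0.0)

cos_theta = sum(a * b for a, b in zip(n, edge))  # = -1 / sqrt(3)
angle = math.degrees(math.acos(cos_theta))
print(round(angle, 2))  # 125.26
```

By symmetry the other two edges at the corner give the same angle, which matches the 125 degrees 16 minutes quoted above.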
There's a good discussion of this topic elsewhere on SO.
The 135 degree figure comes from 2D reasoning: the normal of a vertex points outward and must make the same angle with each edge the vertex is part of, and since the inner angle of the corner is 90 degrees, 270 degrees lie outside the corner, giving 270 / 2 = 135 degrees. Note, however, that this argument only holds in 2D. For a cube's corner in 3D, the averaged normal points along the space diagonal and makes arccos(-1/sqrt(3)), about 125.26 degrees (125 degrees 16 minutes, as another answer here notes), with each edge.
The vertex normals are used to compute the lighting at every point of each triangle. Since your 3D model is a collection of flat triangles, lighting each triangle from a single normal would result in flat shading (though that would be physically correct if the object really were that edgy). Interpolating the vertex normals across each triangle instead gives smooth lighting, resembling a smooth surface.
The problem with this approach is that using only a single normal per vertex makes the cube shade like a sphere while still being a cube.
That is basically the reason why one wants to define a cube with 24 (= 6 x 4) vertices rather than 8. This way every face (and therefore each of its two triangles) can have correct (flat) normals.
Having 24 vertices, and therefore 24 normals, makes it possible to give each triangle/face only its own forward-facing normals, so the normals always point at a 90 degree angle away from the triangle/face. This produces flat shading throughout every triangle/face, which is more correct for a cube, since its surfaces really are flat.
Usually one does not want to shade a sharp corner like 90 (or 270) degrees in a smooth, continuous way; normal interpolation is only used to mimic 'organic'/'smooth' surfaces. Since such organic/smooth surfaces are the norm (think of the teapot or a 3D figure), the decision was made to store vertex normals together with the position and UV coordinates, as that suits most 'continuous' 3D surfaces. Normally you add more triangles to represent a smooth topology in a model; having vertex normals is therefore a trade-off that minimizes the amount of information needed for the average model.
A cube model, with all its flat triangles, is therefore the worst case. This is why each of the cube's corners needs three vertex normals, one for each face it belongs to.
PS: Today those 'smooth' surfaces are further defined by using normal maps baked from a higher resolution model. Using normal maps, each point in the face gets its own normal vector (or the normal vector of each point can be interpolated from the normal vector samples provided by the mapped normal map).
I'm a newbie to three.js. I have multiple n-sided polygons to be displayed as faces (I want each polygon face to be opaque). Each polygon faces a different direction in 3D space (essentially these faces are part of some building).
Here are a couple of methods I tried, but they do not fit the bill:
I used a Geometry object, added the n vertices, and used a line mesh. It created the polygon as a hollow outline. As my number of points is not just 3 or 4, I could not use the Face3 or Face4 objects; essentially I need a Face-n object.
I looked at the WebGL geometric shapes example. The Shape object works in 2D plus extrusion, and all the objects in the example lie in one plane, while my requirement is that each polygon has a different 3D normal vector. Should I use a 2D shape, take note of the face normal, and rotate the 2D shape after rendering?
Or is there a better way to render multiple 3D flat polygons with opaque faces from just x, y, z vertices?
As long as your polygons are convex you can still use the Face3 object. If you take one n-sided polygon, let's say a hexagon, you can create Face3 polygons by taking vertices numbered (0,1,2) as one face, vertices (0,2,3) as another face, vertices (0,3,4) as a third face and vertices (0,4,5) as the last face. You can get the idea if you draw it on paper. But this works only for convex polygons.
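The fan triangulation described above can be sketched in a few lines (plain Python producing index triples; the function name is illustrative, not part of the three.js API):

```python
def fan_triangulate(n):
    """Triangulate a convex n-gon (vertices numbered 0..n-1) as a
    triangle fan anchored at vertex 0. Returns a list of (i, j, k)
    index triples, one per Face3-style triangle."""
    return [(0, i, i + 1) for i in range(1, n - 1)]

# A hexagon (n = 6) yields the four faces from the answer above.
print(fan_triangulate(6))  # [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
```

Any convex n-gon yields n - 2 triangles this way; for concave polygons you would need a real triangulation algorithm such as ear clipping.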
We are given a set of triangles. Each triangle is a triplet of points. Each point is a triplet of real numbers. We can calculate surface normal for each triangle. For Gouraud shading however, we need vertex normals. Therefore we have to visit each vertex and look at the triangles that share that vertex, average their surface normals and we get vertex normal.
What is the most efficient algorithm and data structure to achieve this?
A naive approach is this (pseudo python code):
MAP = dict()
for T in triangles:
    for V in T.vertices:
        key = hash(V)
        if key not in MAP:
            MAP[key] = []
        MAP[key].append(T)

VNORMALS = dict()
for key in MAP.keys():
    VNORMALS[key] = avg([T.surface_normal for T in MAP[key]])
Is there a more efficient approach?
Visit each triangle, calculate the normal for each triangle, ADD those to the vertex normal for each corner vertex.
Then at the end, normalise the normals for each vertex.
Then at least you only have to traverse the triangles once and you only store one normal/vertex.
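The single-pass accumulate-then-normalise approach might look like this in Python (assuming triangles are given as index triples into a separate position array; the names are illustrative):

```python
import math
from collections import defaultdict

def vertex_normals(positions, triangles):
    """positions: list of (x, y, z) tuples; triangles: list of (i, j, k)
    index triples. One pass over the triangles accumulates face normals
    into each corner vertex, then one pass normalises per vertex."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0])
    for i, j, k in triangles:
        ax, ay, az = positions[i]
        bx, by, bz = positions[j]
        cx, cy, cz = positions[k]
        # Face normal = (B - A) x (C - A). Its length is proportional to
        # the triangle's area, so summing un-normalised face normals also
        # area-weights the average.
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx = uy * vz - uz * vy
        ny = uz * vx - ux * vz
        nz = ux * vy - uy * vx
        for v in (i, j, k):
            acc[v][0] += nx
            acc[v][1] += ny
            acc[v][2] += nz
    normals = {}
    for v, (nx, ny, nz) in acc.items():
        length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        normals[v] = (nx / length, ny / length, nz / length)
    return normals
```

This visits each triangle exactly once and stores exactly one accumulated normal per vertex, as the answer suggests.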
Each vertex belongs to one or more faces (usually triangles, sometimes quads -- I'll use triangles in this answer).
A triangle that is not attached to any other triangles cannot be 'smoothed'. It is flat. Only when a face has neighbours can you reason about smoothing them together.
For a vertex where multiple faces meet, calculate the normals for each of these faces. The cross product of two edge vectors returns a vector perpendicular to both (the face normal), which is what we want.
A --- B
\ /
C
v1 = B - A
v2 = C - A
normal = v1 cross v2
Be careful to calculate these vectors in a consistent winding order across all faces, otherwise your normal may point in the opposite direction to the one you require.
So at a vertex where multiple faces meet, sum the normals of the faces, normalise the resulting vector, and apply it to the vertex.
Sometimes you have a mesh where some parts of it are to be smoothed, and others not. An easy to picture example is a cylinder made of triangles. The round surface of the cylinder would smooth well, but if you consider triangles from the flat ends at the vertices around the sharp ridge, it will look strange. To avoid this, you can introduce a rule that ignore normals from faces which deviate too far from the normal of the face you're calculating for.
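That rule is commonly implemented as a crease-angle test: only average in neighbouring face normals whose angle to the current face stays below a threshold. A minimal sketch (face normals assumed unit length; the 45 degree default is just an example value):

```python
import math

def smoothed_vertex_normal(face_normal, neighbour_normals, crease_deg=45.0):
    """Average face_normal with those neighbour face normals that deviate
    from it by less than crease_deg; sharper neighbours are ignored, so
    ridges like a cylinder's end caps stay sharp."""
    cos_limit = math.cos(math.radians(crease_deg))
    nx, ny, nz = face_normal
    for mx, my, mz in neighbour_normals:
        dot = face_normal[0] * mx + face_normal[1] * my + face_normal[2] * mz
        if dot >= cos_limit:
            nx, ny, nz = nx + mx, ny + my, nz + mz
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / length, ny / length, nz / length)

# A coplanar neighbour is averaged in; a perpendicular one (the cylinder
# end-cap case) is ignored.
print(smoothed_vertex_normal((0, 0, 1), [(0, 0, 1), (1, 0, 0)]))  # (0.0, 0.0, 1.0)
```

In practice the threshold is often exposed as a "crease angle" or "smoothing angle" parameter in modelling tools.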
EDIT: there's a really good video showing the technique for calculating Gouraud shading, though it doesn't discuss an actual algorithm.
You might like to take a look at the source of Three.js, specifically the computeVertexNormals function. It does not support maintaining sharp edges. The efficiency of your algorithm depends to a large extent upon the way in which you are modelling your primitives.
I'm using OpenGL ES 1.1 and working on converting an OBJ export from Blender into some binary files containing vertex data. I actually already have a working tool, but I'm working on changing some things and came across a question.
It seems that even with smooth shading, correct per-face normals (perpendicular to the face plane) achieve a flat appearance for faces. With smooth shading enabled and the proper normals (obtained simply by marking edges as sharp in Blender and applying an edge-split modifier), I can get the effect of smooth parts and sharp edges.
Where I'm going with this brings 2 questions.
Are the "s 1" or "s off" lines where smooth or flat shading is denoted in the OBJ file completely unnecessary from a smooth shading and use of normals standpoint?
When actually set to Flat shading in OpenGL, are normals completely ignored (or just assumed to all be perpendicular to the faces)?
For a vertex to look smooth, its normal has to be the average of the adjacent face normals (or something similar), not perpendicular to the face plane (unless you meant the average plane of all its adjacent faces).
GL_FLAT means the color of a face is not interpolated over the triangle, but taken from a single corner (in OpenGL this is the last vertex of the triangle, the so-called provoking vertex). This color comes either from vertex colors or vertex lighting, so in effect you get per-face shading, but it is not necessarily based on the face's direction, just on the normal of one corner vertex.
If you have per-vertex normals in the OBJ file, you do not need the "s" lines. But you can use them to compute vertex normals. The "s" lines denote smoothing groups and are to be interpreted as 32-bit bitfields, so there are 32 different smoothing groups and every face can be part of more than one. For example, all faces after an "s 5" line are part of smoothing groups 1 and 3 (first and third bits set). When two neighbouring faces are part of the same smoothing group, the edge between them is smooth (the vertices share normals). This way you can reconstruct the necessary per-vertex normals.
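Parsing those "s" lines under the 32-bit bitfield interpretation described in this answer might look like the following sketch (the function name is illustrative; note that some exporters instead treat the value as a plain group number):

```python
def parse_smoothing_groups(value):
    """Interpret the argument of an OBJ 's' statement as a 32-bit
    bitfield and return the set of smoothing-group numbers (1-based)
    it encodes. 's off' and 's 0' mean no smoothing group at all."""
    if value in ("off", "0"):
        return set()
    bits = int(value)
    return {i + 1 for i in range(32) if bits & (1 << i)}

print(parse_smoothing_groups("5"))    # {1, 3}: first and third bits set
print(parse_smoothing_groups("off"))  # set()
```

Two faces then share normals along their common edge whenever the intersection of their group sets is non-empty.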
Changing the mode between GL_FLAT and GL_SMOOTH doesn't seem to affect my rendering when I'm using per-vertex normals. From what I can tell, your problem is that each face only has one normal. For smooth shading, each face should have three normals, one for each vertex, and they should all be different. For example, the normals on a cylinder's side, if extended inward, should all intersect at the cylinder's axis. If your model has smooth normals, an OBJ export should export per-vertex normals; it sounds like you are probably assigning per-face normals. As far as rendering in OpenGL ES goes, the smoothing groups aren't used, only the normals.
If I have a cube whose edges are parallel to the axes and which is centered at the origin, is it correct that the normals are parallel to the axes, in other words that only one component of a normal vector can be non-zero while the other two must be zero? If (x, y, z) is a normal vector and x is not zero, must y and z be zero?
In OpenGL ES application how many normals are needed for proper lighting? Do We need one normal per vertex, or one normal per triangle or one normal per surface?
These 2 lines of code are related to this question:
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
How does OpenGL ES know which normal corresponds with which triangle, vertex, or surface of the mesh being drawn?
Normals are specified per vertex and do not have to be parallel to an axis (although they will be in your cube's case), they must be of unit length and perpendicular to the surface that your mesh is approximating.
Check out this answer to a similar question.
I'm drawing a simple cube using 8 vertices and 36 indices. No problem as long as I don't try to texture it.
However I want to texture it. Can I do that with only 8 vertices? It seems like I get some strange texture behaviour. Do I need to set up the cube with 24 vertices and 36 indices to be able to texture the cube correctly?
It just doesn't make sense to use vertices AND indices to draw, then. I could just as well use vertices only.
One index refers to one set of attributes (position, normal, color, texture coordinate, edge flag, etc.). If you are willing to have the texture mirrored on adjacent side faces of your cube, you could share both texture and vertex coordinates for the sides. However, the top and bottom faces could not share those same coordinates: one axis of the texture coordinate would not vary. Once you add other attributes (the normal in particular), a cube needs 24 individual attribute sets (each with position, texture coordinate and normal) to have "flat" sides.
Another approach that may work for you is texture coordinate generation. However, needing 24 individual vertices for a cube is perfectly normal.
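Generating those 24 vertices (plus the 36 indices) programmatically is straightforward. A sketch for a unit cube centered at the origin, with per-face flat normals and a full 0..1 texture quad per face; the exact winding and layout here are illustrative:

```python
def cube_geometry():
    """Build 24 vertices as (position, normal, uv) tuples and 36 indices
    for a unit cube centered at the origin: 4 vertices per face, so every
    face gets its own flat normal and its own full texture quad."""
    faces = [
        # (face normal, four corner offsets of that face)
        ((1, 0, 0),  [(1, -1, -1), (1, 1, -1), (1, 1, 1), (1, -1, 1)]),
        ((-1, 0, 0), [(-1, 1, -1), (-1, -1, -1), (-1, -1, 1), (-1, 1, 1)]),
        ((0, 1, 0),  [(1, 1, -1), (-1, 1, -1), (-1, 1, 1), (1, 1, 1)]),
        ((0, -1, 0), [(-1, -1, -1), (1, -1, -1), (1, -1, 1), (-1, -1, 1)]),
        ((0, 0, 1),  [(-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]),
        ((0, 0, -1), [(1, -1, -1), (-1, -1, -1), (-1, 1, -1), (1, 1, -1)]),
    ]
    uvs = [(0, 0), (1, 0), (1, 1), (0, 1)]
    vertices, indices = [], []
    for face_number, (normal, corners) in enumerate(faces):
        base = 4 * face_number
        for corner, uv in zip(corners, uvs):
            position = tuple(0.5 * c for c in corner)  # scale to unit cube
            vertices.append((position, normal, uv))
        # Two triangles per face quad.
        indices += [base, base + 1, base + 2, base, base + 2, base + 3]
    return vertices, indices

vertices, indices = cube_geometry()
print(len(vertices), len(indices))  # 24 36
```

Flattening the three attribute streams into interleaved buffers then gives exactly the 24-vertex, 36-index layout discussed above.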