Surface Normals OpenGL - opengl-es

So, I am still working on an OpenGL ES 2.0 terrain rendering program.
I have weird drawing happening at the tops of ridges. I am guessing this is due to surface normals not being applied.
So, I have calculated normals.
I know that in other versions of OpenGL you can just activate the normal array and it will be used for culling.
To use normals in OpenGL ES can I just activate the normals or do I have to use a lighting algorithm within the shader?
Thanks

OpenGL doesn't use normals for culling. It culls based on whether the projected triangle has its vertices arranged clockwise or anticlockwise. The specific decision is based on (i) which way around you said was considered to be front-facing via glFrontFace; (ii) which of front-facing and/or back-facing triangles you asked to be culled via glCullFace; and (iii) whether culling is enabled at all via glEnable/glDisable.
Culling is identical in both ES 1.x and 2.x. It's a fixed hardware feature. It's external to the programmable pipeline (and, indeed, would be hard to reproduce within the ES 2.x programmable pipeline because there's no shader with per-triangle oversight).
If you don't have culling enabled then you are more likely to see depth-buffer fighting at ridges, because the face with its back to the camera and the face with its front to the camera have very similar depths close to the ridge, and limited depth precision can make them impossible to distinguish correctly.
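For illustration, the fixed-function culling state described above boils down to a few calls like these (GL_CCW is just the GL default winding; whether it matches your mesh is an assumption):

/* Culling decides front/back purely from the projected winding order,
 * not from normals. */
glFrontFace(GL_CCW);      /* counter-clockwise vertices count as front-facing */
glCullFace(GL_BACK);      /* discard back-facing triangles */
glEnable(GL_CULL_FACE);   /* culling is off unless explicitly enabled */

/* A depth buffer is still what resolves ridges correctly. */
glEnable(GL_DEPTH_TEST);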
Lighting in ES 1.x is calculated from the normals. Per-vertex lighting can produce weird problems at hard ridges because normals at vertices are usually the average of those at the faces that join at that vertex, so e.g. a fixed mesh shaped like \/\/\/\/\ ends up with exactly the same normal at every vertex. But if you're not using 1.x then that won't be what's happening.
To implement lighting in ES 2.x you need to do so within your shader. As a result of that, and of normals not being used for any other purpose, there is no formal way to specify normals as anything special. They're just another vertex attribute and you can do with them as you wish.
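For example, feeding normals to an ES 2.0 shader is just ordinary attribute setup; the names a_normal, program and normalVbo below are placeholders, not anything mandated by the API:

/* Normals are just another per-vertex attribute in ES 2.0. */
GLint normalLoc = glGetAttribLocation(program, "a_normal"); /* whatever your shader declares */
glBindBuffer(GL_ARRAY_BUFFER, normalVbo);                   /* buffer holding 3 floats per vertex */
glVertexAttribPointer(normalLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(normalLoc);
/* The shader then uses a_normal however it likes, e.g. in a lighting calculation. */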

Related

Barycentric wireframes with full disclosure of back faces

I've implemented a barycentric-coordinates wireframe shader, something like this, and in general it is working nicely.
But, like in Florian Boesch's WebGL demo, some of the wire faces on the far side of the mesh are obscured (maybe this has to do with the order in which the GPU is constructing faces).
I've set the following in the hopes that they would clear things up:
glCullFace(GL_NONE);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
...but no go so far. Is this possible in OpenGL ES 2.0?
I had forgotten to discard on a transparent output, so the depth buffer was being written in spite of the apparently transparent geometry; thus the mesh was self-obscuring because depth tests were failing.
This would be the problem in Florian's demo too, though it may be that he explicitly avoids discard for mobile performance reasons.
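As a rough sketch of that fix, a wireframe fragment shader along these lines discards the "transparent" interior so it never writes depth (the varying name and edge threshold are assumptions, given as a C string literal):

static const char *wireFragmentShader =
    "precision mediump float;\n"
    "varying vec3 v_barycentric;   /* assumed varying from the wireframe vertex shader */\n"
    "void main() {\n"
    "    float edge = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);\n"
    "    if (edge > 0.02) {\n"
    "        discard;              /* interior fragment: write neither color nor depth */\n"
    "    }\n"
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);\n"
    "}\n";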

Hooking into hidden surface removal/backface culling to swap textures in WebGL?

I want to swap textures on the faces of a rotating cube whenever they face away from the camera; detecting these faces is not entirely equivalent to, but very similar to hidden surface removal. Is it possible to hook into the builtin backface culling function/depth buffer to determine this, particularly if I want to extend this to more complex polygons?
There is a simple solution using dot products described here but I am wondering if it's possible to hook into existing functions.
I don't think you can hook into WebGL's internal processing, but even if it were possible it wouldn't be the best approach for you. Doing this would require your GPU to switch the current texture on a triangle-by-triangle basis, messing with internal caches and so on, and in general GPUs don't like branching ("if" commands).
You can, however, render your mesh with the first texture and face culling enabled, then set your cull-face direction to the opposite side and render the mesh again with the second texture. This way you'll get a different texture on front- and back-facing triangles.
Thinking about it, if you have a correctly closed mesh that uses face culling you shouldn't see any difference, because you'll never see a back face; the only cases where this is useful are transparent meshes or not-so-correctly closed ones, like billboards. If you want to use this approach with transparency, you'll need to pick the rendering order carefully.
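A sketch of that two-pass idea in plain GL calls (the texture handles and the indexed draw are placeholders; WebGL exposes the same function names):

glEnable(GL_CULL_FACE);

/* Pass 1: back faces are culled, so only front-facing triangles are drawn,
 * using the first texture. */
glCullFace(GL_BACK);
glBindTexture(GL_TEXTURE_2D, frontTexture);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);

/* Pass 2: flip the cull direction so only back-facing triangles are drawn,
 * this time with the second texture. */
glCullFace(GL_FRONT);
glBindTexture(GL_TEXTURE_2D, backTexture);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);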

OpenGL ES 2.0 - glDrawElements and different indexing for vertex and other buffers (normals, colors for ex.)

My question is closely related to this one.
So it seems I can't use glDrawElements when I want flat shading, for instance (or when I want to give one face in the mesh a specific color), because I need different normals (colors) for the same vertex: one normal value for each triangle (face) in which that vertex participates, right? So glDrawArrays (a bunch of separate triangles) is the only way to do such things, right?
Thanks.
You need to repeat the vertices, or remove that face from your draw call and issue a second one with just that face (but with different normals). There is no flat shading in OpenGL ES 2.0.
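As a sketch of what repeating the vertices means in practice, each triangle gets its own copies of its corner positions, all carrying that face's normal, so no index buffer is needed (the interleaved layout and the posLoc/normLoc attribute locations below are assumptions):

/* One triangle, three vertices, each laid out as position (xyz) then normal (xyz).
 * All three vertices share the face normal, which gives flat shading. */
GLfloat tri[] = {
    /* position           face normal      */
    0.0f, 0.0f, 0.0f,     0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,     0.0f, 1.0f, 0.0f,
    1.0f, 0.0f, 0.0f,     0.0f, 1.0f, 0.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);
glVertexAttribPointer(posLoc,  3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)0);
glVertexAttribPointer(normLoc, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(normLoc);
glDrawArrays(GL_TRIANGLES, 0, 3);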

OpenGL ES 2.0 Per Fragment Lighting for Untextured Generated Geometry

I'm currently generating geometry rather than importing it as a model. This makes it necessary to calculate all normals within the application.
I've implemented Gouraud shading (per vertex lighting) successfully, and now wish to implement Phong shading (per fragment/pixel).
I've had a look at relevant tutorials online and there are two camps: one offers a simple Gouraud-to-Phong reshuffling of shader code which, while offering improved lighting, isn't truly per-pixel. The second does things the right way by utilising normal maps embedded within textures, but these are generated within a modelling toolkit such as RenderMonkey.
My questions are:
1. How should I go about programmatically generating normals for my generated geometry, considering it as a vertex set? In other words, given a set of discrete polygonal points, will it be necessary to manually calculate interpolated normals?
2. Should I store the generated normals within a texture, as exemplified online, and if so, how would I go about doing this in code rather than through modelling software?
Computing the lighting in the fragment shader on the interpolated per-vertex normals will definitely yield better results (assuming you properly re-normalize the interpolated normals in the fragment shader), and it is truly per-pixel, although the strength of the difference may vary depending on the model's tessellation and the lighting variation. Have you just tried it out?
As long as you don't have any changing normals inside a face (think of bump mapping) and only interpolate the per-vertex normals, a normal map is completely unnecessary, as you get interpolated normals from the rasterizer anyway. While normal mapping can give nicer effects if you really have per-pixel normal variations (like a very rough surface), it is not necessarily the right way to do per-pixel lighting.
It makes a huge difference if you compute the lighting per-vertex and interpolate the colors or if you compute the lighting per fragment (even if you just interpolate the per-vertex normals, that's what classical Phong shading is about), especially when you have quite large triangles or very shiny surfaces (very high frequency lighting variation).
As said, if you don't have high-frequency normal variations (changing normals inside a triangle), you don't need a normal map, nor do you need to interpolate the per-vertex normals yourself. You just generate per-vertex normals like you did for the per-vertex lighting (e.g. by averaging adjacent face normals). The rasterizer does the interpolation for you.
You should first try out simple per-pixel lighting before delving into techniques like normal mapping. If your geometry is not too finely tessellated or you have very shiny surfaces, you will surely see the difference from simple per-vertex lighting. Once this works you can try normal mapping techniques, but for them to work you first need to understand the meaning of per-pixel lighting and Phong shading in contrast to Gouraud shading.
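To make the contrast concrete, a minimal per-fragment diffuse setup might look roughly like this (GLSL given as C string literals; the attribute and uniform names are placeholders, and the normals are assumed to already be in the space you light in, i.e. no non-uniform scaling):

static const char *phongVertexShader =
    "attribute vec3 a_position;\n"
    "attribute vec3 a_normal;\n"
    "uniform mat4 u_mvp;\n"
    "varying vec3 v_normal;\n"
    "void main() {\n"
    "    v_normal = a_normal;                  /* interpolated across the triangle by the rasterizer */\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";

static const char *phongFragmentShader =
    "precision mediump float;\n"
    "uniform vec3 u_lightDir;                  /* normalized light direction */\n"
    "varying vec3 v_normal;\n"
    "void main() {\n"
    "    vec3 n = normalize(v_normal);         /* re-normalize after interpolation */\n"
    "    float diffuse = max(dot(n, u_lightDir), 0.0);\n"
    "    gl_FragColor = vec4(vec3(diffuse), 1.0);\n"
    "}\n";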
Normal maps are not a requirement for per-pixel lighting.
The only requirement, by definition, is that the lighting solution is evaluated for every output pixel/fragment. You can store the normals on the vertices just as well (and more easily).
Normal maps can either provide full normal data (RGB maps) or simply modulate the stored vertex normals (du/dv maps, which appear red/blue). The latter form is perhaps more common and relies on vertex normals to function.
How to generate the normals depends on your code and geometry. Typically, you compute face normals with cross products of edge vectors and average them over the surrounding faces or vertices for smooth normals, or just create a unit vector pointing in whatever direction is "out" for your geometry.
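As a sketch of that averaging approach (not the answerer's exact code), face normals from cross products are accumulated per vertex and then normalized; an indexed triangle mesh is assumed:

#include <math.h>
#include <string.h>

/* positions: 3 floats per vertex; indices: 3 per triangle.
 * Writes one averaged, normalized normal per vertex into 'normals'. */
void compute_vertex_normals(const float *positions, int vertexCount,
                            const unsigned short *indices, int triangleCount,
                            float *normals)
{
    memset(normals, 0, sizeof(float) * 3 * vertexCount);
    for (int t = 0; t < triangleCount; ++t) {
        const float *a = positions + 3 * indices[3 * t + 0];
        const float *b = positions + 3 * indices[3 * t + 1];
        const float *c = positions + 3 * indices[3 * t + 2];
        float e1[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float e2[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        /* Face normal = cross(e1, e2); accumulate it on each corner vertex. */
        float n[3] = { e1[1] * e2[2] - e1[2] * e2[1],
                       e1[2] * e2[0] - e1[0] * e2[2],
                       e1[0] * e2[1] - e1[1] * e2[0] };
        for (int k = 0; k < 3; ++k) {
            float *out = normals + 3 * indices[3 * t + k];
            out[0] += n[0]; out[1] += n[1]; out[2] += n[2];
        }
    }
    /* Normalize the accumulated sums to get the smooth per-vertex normals. */
    for (int v = 0; v < vertexCount; ++v) {
        float *n = normals + 3 * v;
        float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }
}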

How to draw a colored rectangle in OpenGL ES?

Is this easy to do? I don't want to use texture images. I want to create a rectangle, probably out of two polygons, and then set a color on it. A friend who claims to know a little OpenGL said that I must always use triangles for everything, and that I must use textures whenever I want something colored. I can't imagine that is true.
You can set per-vertex colors (which can all be the same) and draw the rectangle as two triangles, since OpenGL ES has no quad primitive. The tricky part about OpenGL ES is that it doesn't support immediate mode, so you have a much steeper initial learning curve compared to desktop OpenGL.
This question covers the differences between OpenGL and ES:
OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?
With OpenGL ES 2.0, you do have to use a shader, which (among other things) normally sets the color. As long as you want one solid color for the whole thing, you can do it in the vertex shader.
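For instance, a solid-colored rectangle can be drawn as a four-vertex triangle strip with the color supplied as a uniform (posLoc and colorLoc are placeholder locations queried from an assumed shader that reads the uniform for its output color):

/* Two triangles as a strip covering a rectangle in clip space. */
static const GLfloat quad[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
    -0.5f,  0.5f,
     0.5f,  0.5f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(posLoc);
glUniform4f(colorLoc, 1.0f, 0.0f, 0.0f, 1.0f);   /* solid red, no texture needed */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);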
