OpenGL ES 2.0 Per Fragment Lighting for Untextured Generated Geometry

I'm currently generating geometry rather than importing it as a model. This makes it necessary to calculate all normals within the application.
I've implemented Gouraud shading (per vertex lighting) successfully, and now wish to implement Phong shading (per fragment/pixel).
I've had a look at relevant tutorials online and there are two camps: one offers a simple Gouraud-to-Phong reshuffling of shader code which, while offering improved lighting, isn't truly per-pixel. The second does things the right way by utilising normal maps embedded within textures, but these are generated within a modelling toolkit such as RenderMonkey.
My questions are:
How should I go about programmatically generating normals for my generated geometry, considering it as a vertex set? In other words, given a set of discrete polygonal points, will it be necessary to manually calculate interpolated normals?
Should I store generated normals within a texture as exemplified online, and if so, how would I go about doing this in code rather than through modelling software?

Computing the lighting in the fragment shader on the interpolated per-vertex normals will definitely yield better results (assuming you properly re-normalize the interpolated normals in the fragment shader) and it is truly per-pixel, although the strength of the difference may vary depending on the model tessellation and the lighting variation. Have you just tried it out?
As long as you don't have any changing normals inside a face (think of bump mapping) and only interpolate the per-vertex normals, a normal map is completely unnecessary, as you get interpolated normals from the rasterizer anyway. Whereas normal mapping can give nicer effects if you really have per-pixel normal variations (like a very rough surface), it is not necessarily the right way to do per-pixel lighting.
It makes a huge difference whether you compute the lighting per vertex and interpolate the colors, or compute the lighting per fragment (even if you just interpolate the per-vertex normals, which is what classical Phong shading is about), especially when you have quite large triangles or very shiny surfaces (very high-frequency lighting variation).
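For reference, a minimal sketch of what that looks like in OpenGL ES 2.0, with the GLSL ES sources held as C++ string literals; the attribute/uniform names (aPosition, uNormalMatrix, uLightDir, ...) and the single directional light are my assumptions, not something prescribed by the question:

```cpp
// Minimal per-fragment (Phong) lighting sketch for ES 2.0: normals are a
// plain vertex attribute, interpolated by the rasterizer and re-normalized
// per fragment. Names and the lighting model are illustrative assumptions.
static const char* kVertexShader = R"(
attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform mat3 uNormalMatrix;      // inverse-transpose of the model-view 3x3
varying vec3 vNormal;            // interpolated by the rasterizer
varying vec3 vViewPos;
void main() {
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    vViewPos = viewPos.xyz;
    vNormal  = uNormalMatrix * aNormal;
    gl_Position = uProjection * viewPos;
}
)";

static const char* kFragmentShader = R"(
precision mediump float;
uniform vec3 uLightDir;          // direction toward the light, view space, normalized
uniform vec3 uDiffuseColor;
uniform vec3 uSpecularColor;
uniform float uShininess;
varying vec3 vNormal;
varying vec3 vViewPos;
void main() {
    vec3 n = normalize(vNormal);         // re-normalize after interpolation
    vec3 l = normalize(uLightDir);
    vec3 v = normalize(-vViewPos);       // camera sits at the origin in view space
    float diff = max(dot(n, l), 0.0);
    vec3 r = reflect(-l, n);             // classic Phong specular term
    float spec = diff > 0.0 ? pow(max(dot(r, v), 0.0), uShininess) : 0.0;
    gl_FragColor = vec4(uDiffuseColor * diff + uSpecularColor * spec, 1.0);
}
)";
```

The Gouraud version computes exactly the same terms in the vertex shader and interpolates the resulting color instead, which is where the difference on large triangles and shiny highlights comes from.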
As said, if you don't have high-frequency normal variations (changing normals inside a triangle), you don't need a normal map, nor do you need to interpolate the per-vertex normals yourself. You just generate per-vertex normals like you did for the per-vertex lighting (e.g. by averaging adjacent face normals). The rasterizer does the interpolation for you.
You should first try out simple per-pixel lighting before delving into techniques like normal mapping. If your geometry is not very finely tessellated or you have very shiny surfaces, you will surely see the difference compared to simple per-vertex lighting. Then, when this works, you can try normal mapping techniques, but for them to work you first need to understand the meaning of per-pixel lighting and Phong shading in contrast to Gouraud shading.

Normal maps are not a requirement for per-pixel lighting.
The only requirement, by definition, is that the lighting solution is evaluated for every output pixel/fragment. You can store the normals on the vertexes just as well (and more easily).
Normal maps can either provide full normal data (rgb maps) or simply modulate the stored vertex normals (du/dv maps, appear red/blue). The latter form is perhaps more common and relies on vertex normals to function.
How to generate the normals depends on your code and geometry. Typically, you use dot products and surrounding faces or vertexes for smooth normals, or just create a unit vector pointing in whatever is "out" for your geometry.
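As a sketch of one common way to do this (area-weighted averaging of cross-product face normals over an indexed triangle list; the data layout here is an assumption, not the only option):

```cpp
// Smooth per-vertex normals: accumulate the unnormalized face normal of
// every triangle that touches a vertex, then normalize the sums.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }

std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned tri[3] = { indices[i], indices[i + 1], indices[i + 2] };
        // Unnormalized face normal; its length weights the average by face area.
        Vec3 n = cross(sub(positions[tri[1]], positions[tri[0]]),
                       sub(positions[tri[2]], positions[tri[0]]));
        for (unsigned idx : tri) {
            normals[idx].x += n.x; normals[idx].y += n.y; normals[idx].z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```

These normals go into the vertex buffer exactly like the ones used for per-vertex lighting; only the shader stage where the lighting is evaluated changes.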

Related

What mesh / acceleration structure representation to use for raytracing in fragment shader?

I want to do some raytracing from a fragment shader. I don't want to use a compute shader, as I need to do rasterization anyway and want to do some simple raytracing as part of the fragment shader evaluation in a single pass. Simple in the sense that the mesh I want to raytrace against consists of only a few hundred triangles.
I have written plenty of shaders and have also written (CPU-based) raytracers, so I'm familiar with all the concepts. What I wonder, though, is what the best representation is for the mesh plus acceleration structure (some kd-tree probably) to pass it to the fragment shader? Most likely they'd be converted to textures in some way (for example, three pixels to represent the positions of a triangle's vertices).
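A sketch of that texture encoding (one RGB float texel per vertex, three texels per triangle, written as ES 2.0-style C++; float textures require OES_texture_float on ES 2.0 / WebGL 1, and the function name here is illustrative):

```cpp
#include <GLES2/gl2.h>   // or your platform's GL loader
#include <vector>

// Pack a flat triangle list (9 floats per triangle: xyz, xyz, xyz) into a
// single-row float texture so a fragment shader can fetch the vertices.
GLuint uploadTriangles(const std::vector<float>& vertices)
{
    const GLsizei texelCount = static_cast<GLsizei>(vertices.size() / 3);
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Exact texel fetches only: no filtering, no mipmaps, no wrapping.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texelCount, 1, 0,
                 GL_RGB, GL_FLOAT, vertices.data());
    return tex;
}
```

A kd-tree or BVH can be flattened into a second texture the same way (one texel per node holding child indices and split/bounds data), and once the maximum texture width is reached the data simply wraps onto additional rows.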

rendering millions of voxels using 3D textures with three.js

I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think I have hit the maximum of what rendering OpenGL triangles can accomplish.
Large amounts of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension - which has pretty good support - so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
Your voxels' material is double-sided. That may be on purpose, but it may create a huge amount of overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. It's a needlessly complex material. Your geometries don't have any normals, so you can't have any lighting. Try to use a simpler unlit material.
Your goal is to render a huge amount of vertices, so assuming the framerate is bound by the vertex shader:
try something like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the vertex prefetch cache and the vertex cache.
reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a "grid", you could store vertex positions as 3 shorts instead of 3 floats, halving your VBO size. You could use the same trick for normals if you had any, since all normals should be axis-aligned (cubes). See the sketch after this list.
generally reduce the number of varyings needed by the fragment shader
if you need more attributes than just a vec3 position, use a single interleaved VBO instead of one per attribute.
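A sketch of the last two points using raw GL ES 2.0 calls (the same layout can be fed to three.js through an interleaved buffer attribute); the PackedVertex layout and attribute locations are illustrative, not taken from the demo:

```cpp
// One interleaved VBO: grid-aligned positions stored as shorts (half the
// size of floats) plus a one-byte index selecting one of the six
// axis-aligned cube normals.
#include <GLES2/gl2.h>
#include <cstddef>
#include <vector>

struct PackedVertex {
    GLshort position[3];   // integer grid coordinates; scale in the vertex shader
    GLubyte normalAxis;    // 0..5 selecting an axis-aligned normal
};

void uploadInterleaved(const std::vector<PackedVertex>& vertices,
                       GLuint positionLoc, GLuint normalAxisLoc)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 static_cast<GLsizeiptr>(vertices.size() * sizeof(PackedVertex)),
                 vertices.data(), GL_STATIC_DRAW);

    const GLsizei stride = sizeof(PackedVertex);
    glEnableVertexAttribArray(positionLoc);
    glVertexAttribPointer(positionLoc, 3, GL_SHORT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(PackedVertex, position)));
    glEnableVertexAttribArray(normalAxisLoc);
    glVertexAttribPointer(normalAxisLoc, 1, GL_UNSIGNED_BYTE, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(PackedVertex, normalAxis)));
}
```

Keeping all attributes in one interleaved buffer means one fetch stream per vertex instead of one per attribute, which is exactly where a vertex-bound scene gains.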

Surface Normals OpenGL

So, I am working on an OpenGL ES 2.0 terrain rendering program still.
I have weird drawing happening at the tops of ridges. I am guessing this is due to surface normals not being applied.
So, I have calculated normals.
I know that in other versions of OpenGL you can just activate the normal array and it will be used for culling.
To use normals in OpenGL ES can I just activate the normals or do I have to use a lighting algorithm within the shader?
Thanks
OpenGL doesn't use normals for culling. It culls based on whether the projected triangle has its vertices arranged clockwise or anticlockwise. The specific decision is based on (i) which way around you said was considered to be front-facing via glFrontFace; (ii) which of front-facing and/or back-facing triangles you asked to be culled via glCullFace; and (iii) whether culling is enabled at all via glEnable/glDisable.
Culling is identical in both ES 1.x and 2.x. It's a fixed hardware feature. It's external to the programmable pipeline (and, indeed, would be hard to reproduce within the ES 2.x programmable pipeline because there's no shader with per-triangle oversight).
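For completeness, the whole configuration is three GL calls; the face and winding values shown are the GL defaults:

```cpp
// Culling is configured entirely with fixed-function state; the ES 2.0
// shaders are not involved. This mirrors points (i)-(iii) above.
#include <GLES2/gl2.h>

void enableBackFaceCulling()
{
    glEnable(GL_CULL_FACE);   // (iii) enable culling at all
    glFrontFace(GL_CCW);      // (i)   counter-clockwise winding counts as front-facing
    glCullFace(GL_BACK);      // (ii)  discard back-facing triangles
}
```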
If you don't have culling enabled then you are more likely to see depth-buffer fighting at ridges as the face with its back to the camera and the face with its front to the camera have very similar depths close to the ridge and limited precision can make them impossible to distinguish correctly.
Lighting in ES 1.x is calculated from the normals. Per-vertex lighting can produce weird problems at hard ridges, because normals at vertices are usually the average of those of the faces that join at that vertex, so e.g. a mesh shaped like \/\/\/\/\ ends up with exactly the same normal at every vertex. But if you're not using 1.x then that won't be what's happening.
To implement lighting in ES 2.x you need to do so within your shader. As a result of that, and of normals not being used for any other purpose, there is no formal way to specify normals as anything special. They're just another vertex attribute and you can do with them as you wish.
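For example (the attribute name "aNormal" is just an assumption about your shader):

```cpp
// ES 2.0 has no glNormalPointer; normals are bound like any other generic
// vertex attribute, under whatever name the shader declares for them.
#include <GLES2/gl2.h>

void bindNormalAttribute(GLuint program, GLuint normalVbo)
{
    GLint loc = glGetAttribLocation(program, "aNormal");  // assumed shader name
    if (loc < 0) return;                                   // attribute not active
    glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
    glEnableVertexAttribArray(static_cast<GLuint>(loc));
    glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, 0);
}
```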

What is the fastest shadowing algorithm (CPU only)?

Suppose I have a 3D model:
The model is given in the form of vertices, faces (all triangles) and normal vectors. The model may have holes and/or transparent parts.
For an arbitrarily placed light source at infinity, I have to determine:
[required] which triangles are (partially) shadowed by other triangles
Then, for the partially shadowed triangles:
[bonus] what fraction of the area of the triangle is shadowed
[superbonus] come up with a new mesh that describe the shape of the shadows exactly
My final application has to run on headless machines, that is, they have no GPU. Therefore, all the standard things from OpenGL, OpenCL, etc. might not be the best choice.
What is the most efficient algorithm to determine these things, considering this limitation?
Do you have a single mesh or more meshes?
Meaning, is the shadow projected onto a single 'ground' surface, or onto more surfaces, like room walls or even nearby objects? Depending on this, the solutions are very different.
for flat ground/wall surfaces
the best way is usually a projected render onto this surface.
The camera direction is opposite to the light normal and the screen is the surface being rendered onto. The surface is not usually perpendicular to the light, so you need to use a projection to compensate... You need one render pass for each target surface, so it is not suitable if the shadow is projected onto a nearby mesh (just for ground/walls).
for more complicated scenes
You need to use a more advanced approach. There are quite a number of them, and each has its advantages and disadvantages. I would use a voxel map, but if you are limited by space then some stencil/vector approach will be better. Of course, all of these techniques are quite expensive, and without a GPU I would not even try to implement them.
If you want just self-shadowing, then the voxel map can cover only some bounding box around your mesh. In that case you do not incorporate the whole mesh volume; instead you just march each surface pixel along the light direction through the map (ignoring the first voxel...) to avoid shadowing the lit surface itself.
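Setting the voxel map aside, here is a brute-force CPU sketch of just the [required] part, to make the problem concrete: one shadow ray from each triangle's centroid toward the light, tested against every other triangle with Moller-Trumbore. It is O(n^2), so an acceleration structure (kd-tree, BVH, or the voxel map above) would replace the inner loop for larger meshes; the partial-coverage [bonus] and the exact shadow mesh [superbonus] need more sample points or polygon clipping on top of this.

```cpp
// Brute-force shadow classification for a directional light; a sketch,
// not the voxel-map approach described above.
#include <cstddef>
#include <vector>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b)   { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Tri { V3 a, b, c; };

// Moller-Trumbore ray/triangle test; accepts hits strictly in front of the origin.
static bool occludes(V3 orig, V3 dir, const Tri& t, double eps = 1e-9)
{
    V3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a);
    V3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (det > -eps && det < eps) return false;     // ray parallel to triangle
    double inv = 1.0 / det;
    V3 s = sub(orig, t.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    V3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    return dot(e2, q) * inv > eps;                 // distance along the ray
}

// toLight: normalized direction from the surface toward the light at infinity.
std::vector<bool> shadowedTriangles(const std::vector<Tri>& tris, V3 toLight)
{
    std::vector<bool> shadowed(tris.size(), false);
    for (std::size_t i = 0; i < tris.size(); ++i) {
        V3 c = { (tris[i].a.x + tris[i].b.x + tris[i].c.x) / 3.0,   // centroid sample
                 (tris[i].a.y + tris[i].b.y + tris[i].c.y) / 3.0,
                 (tris[i].a.z + tris[i].b.z + tris[i].c.z) / 3.0 };
        for (std::size_t j = 0; j < tris.size() && !shadowed[i]; ++j)
            if (j != i && occludes(c, toLight, tris[j]))
                shadowed[i] = true;
    }
    return shadowed;
}
```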

what is the use of computeFaceNormals, computeVertexNormals and computeMorphNormals

What are the uses of computeFaceNormals(), computeVertexNormals() and computeMorphNormals()?
I commented out geometry.computeVertexNormals() in an example and the model appeared as if it was given flat shading. When I added geometry.computeVertexNormals(), the model appeared as if it was given smooth shading. Is this the use of geometry.computeVertexNormals()?
Yes, it is. As the name implies, it computes normals. Look up what vertex and face normals are. The computeVertexNormals function is quite simple in nature, and since you cannot pass any parameter to control the smoothing (other than "weighted" to give somewhat better visual results), the function will smooth the whole object. Search for this topic on the three.js GitHub; I implemented a basic method (not 100% correct) to control the smoothing by angle, so that, for example, a 45° threshold will only smooth the vertex normals of two adjacent faces if their face normals do not differ by more than 45 degrees. Thus, you can achieve much better smoothing results for inorganic objects.
Concerning the three.js method, computeVertexNormals is just for when you really want your whole object smoothed, or when your importer does not correctly import vertex normals; then you can apply some better shading to your model instead of flat shading, which in most cases looks quite ugly.
