What is the difference between invariance and polygon offset in OpenGL?

What is the difference between invariance and polygon offset in OpenGL? I am getting confused between the two, since both seem related to low-precision problems.

From the GLES 2.0 spec:
[...] variance refers to the possibility of getting different values
from the same expression in different shaders. For example, say two
vertex shaders each set gl_Position with the same expression in both
shaders, and the input values into that expression are the same when
both shaders run.
It is possible, due to independent compilation of the two shaders,
that the values assigned to gl_Position are not exactly the same when
the two shaders run. In this example, this can cause problems with
alignment of geometry in a multi-pass algorithm. In general, such
variance between shaders is allowed. To prevent variance, variables
can be declared to be invariant, either individually or with a global
setting.
In other words, invariant is a mechanism GLES provides for you (the programmer) to tell the implementation that a given output must receive exactly the same value whenever it is computed from the same expression and the same inputs, even across independently compiled shaders.
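For illustration, a minimal ESSL 1.00 sketch of both ways to request invariance (the shader body and names are made up):

#version 100

// Option 1: mark a single output invariant.
invariant gl_Position;

// Option 2: make all outputs invariant with a global pragma instead:
// #pragma STDGL invariant(all)

attribute vec3 a_position;
uniform mat4 u_mvp;

void main (void)
{
    // Any other shader computing gl_Position from this same expression
    // and the same inputs is now guaranteed to produce identical values,
    // so multi-pass geometry lines up exactly.
    gl_Position = u_mvp * vec4(a_position, 1.0);
}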
Polygon offset is, ummm, completely unrelated: it addresses z-fighting between coplanar primitives by nudging depth values during rasterization, so it is a depth-buffer precision workaround rather than a shader-compilation guarantee. I refer you to the official FAQ https://www.opengl.org/archives/resources/faq/technical/polygonoffset.htm

Related

Which shader stage is more efficient for matrix transformation

I have 2 questions about the glsl efficiency.
In the fully user-defined shader pipeline
vs -> tcs -> tes -> gs -> fs
the first 4 stages can be used for an operation like this:
gl_Position = MVP_matrices * vec4(in_pos, 1.0);
Which stage is more efficient for this? Is it hardware or version dependent?
Many GLSL tutorials show examples that pass the vertex position between the shaders instead of using only the built-in variable gl_Position.
Does it make sense in terms of efficiency?
Thank you!
Such transforms are commonly done in the VS
That is because the geometry and tessellation stages are not usually present in basic shader pipelines. Doing the transform in the fragment shader would mean multiplying on a per-fragment basis, and fragments are typically far more numerous than vertices, so performance drops. So people are used to placing such transforms in the VS and do not think about it too much.
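As a sketch, the usual vertex-stage version of that transform (identifiers are illustrative):

#version 100

attribute vec3 in_pos;
uniform mat4 MVP_matrices;   // projection * view * model, built on the CPU

void main (void)
{
    // One matrix multiply per vertex rather than per fragment.
    gl_Position = MVP_matrices * vec4(in_pos, 1.0);
}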
custom input/output variables
We sometimes need vertices in more than one coordinate system, and it is usually faster to use the built-in interpolators than to transform on a per-fragment basis.
For example, I sometimes need 3 coordinate systems at once (screen, world, TBN) for proper computations in the FS.
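A sketch of a vertex shader that feeds a second coordinate system to the FS through an interpolator (names are illustrative; TBN vectors would be added the same way):

#version 100

attribute vec3 a_position;
uniform mat4 u_model;      // object -> world
uniform mat4 u_mvp;        // object -> clip (screen)
varying vec3 v_worldPos;   // world-space position, interpolated for the FS

void main (void)
{
    // Transform once per vertex; the rasterizer interpolates the result,
    // which is cheaper than redoing the matrix math per fragment.
    v_worldPos  = (u_model * vec4(a_position, 1.0)).xyz;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}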
Another thing to consider is accuracy; see:
How to correctly linearize depth in OpenGL
ray and ellipsoid intersection accuracy improvement

Issues refactoring GLES30 shader?

I'm currently rewriting a shader written in GLES30 for the GLES20 shader language.
I've hit a snag where the shader I need to convert calls the function textureLod, which samples the currently bound texture at a specific level of detail. The call is made within the fragment shader, but under GLES20 the equivalent function may only be called within the vertex shader.
I'm wondering: if I replace this with a call to the function texture2D, am I likely to compromise the function of the shader, or just reduce its performance? All instances of the textureLod call within the original shader use a level of detail of zero.
If you switch calls from textureLod to texture2D, you will lose control over which mip-level is being sampled.
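As a sketch of the conversion in question (u_tex and v_uv are placeholder names):

#version 100
precision mediump float;

uniform sampler2D u_tex;
varying vec2 v_uv;

void main (void)
{
    // GLES30 original:  vec4 c = textureLod(u_tex, v_uv, 0.0);
    // GLES20 replacement: the mip level is now derived from screen-space
    // derivatives instead of being forced to the top level.
    vec4 c = texture2D(u_tex, v_uv);
    gl_FragColor = c;
}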
If the texture being sampled only has a single mip-level, then the two calls are equivalent, regardless of the lod parameter passed to textureLod, because there is only one level that could be sampled.
If the original shader always samples the top mip level (= 0), the change is unlikely to hurt performance; if anything, letting the hardware pick smaller mip levels tends to improve texture cache behaviour. If possible, you could give the sampled texture only a top level to guarantee equivalence (unless the mip levels are required somewhere else). If that isn't possible, then execution will differ. If the sample is used for 'direct' texturing, the results will likely be fairly similar, assuming a well-generated mip chain. If it is used for other purposes (e.g. logic within the shader), the divergence might be larger. It's difficult to predict without seeing the actual shader.
Also note that, if the texture sample is used within a loop or conditional, and the shader has been ported to/from a DirectX HLSL shader at any point in its lifetime, the call to textureLod may be an artifact of HLSL not allowing gradient instructions within dynamic loops (the HLSL equivalent of texture2D is a gradient instruction; the equivalent of textureLod is not). HLSL requires this even if the texture only has a single mip-level.

Why no access to texture lod in fragment shader

I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer it is not possible to use the bias parameter of texture2D to access a specific level of detail in the fragment shader. According to this post the level of detail is instead computed automatically from the parallel execution of adjacent fragments. I'll have to trust that that's the way things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, the whole OpenGL ecosystem is evolving away from fixed functionality. And OpenGL ES is intended to cover a broader range of hardware than OpenGL, and therefore supports only simpler versions of many things. So I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD, in whatever way he deems appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, which is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
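For illustration, a hedged sketch of that vertex-shader use case (heightmap displacement; the names are made up, and note that ES 2.0 only guarantees MAX_VERTEX_TEXTURE_IMAGE_UNITS >= 0, so vertex texture fetch may be unavailable):

#version 100

attribute vec3 a_position;
attribute vec2 a_uv;
uniform mat4 u_mvp;
uniform sampler2D u_heightMap;

void main (void)
{
    // No derivatives exist at the vertex stage, so the LOD must be
    // given explicitly; here we always read the top level.
    float h = texture2DLod(u_heightMap, a_uv, 0.0).r;
    gl_Position = u_mvp * vec4(a_position + vec3(0.0, h, 0.0), 1.0);
}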
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup for vertex shaders that actually takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), it could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Pseudo-code showing explicit LOD lookup in a fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require

precision mediump float;

varying vec2 tex_st;        // fragment shaders read varyings, not attributes
uniform sampler2D sampler;

void main (void)
{
    // Note the EXT suffix, that is very important in ESSL 1.00
    gl_FragColor = texture2DLodEXT (sampler, tex_st, 0.0); // lod must be a float
}

Variable number of lights in glsl-es shader

I'm working on a 3d engine that should work on mobile platforms. Currently I just want to make a prototype that will work on iOS and uses forward rendering. In the engine a scene can have a variable number of lights of different types (directional, spot, etc.). When rendering, for each object (mesh) an array of the lights that affect it is constructed. The array will always have 1 or more elements. I can pack the light source information into a 1D texture and pass it to the shader. The number of lights can be stored in this texture or passed as a separate uniform (I have not tried it yet; these are my thoughts after googling).
The problem is that not all glsl-es implementations support for-loops with variable limits. So I can't write a shader that loops through the light sources and expect it to work on a wide range of platforms. Are there any techniques to support a variable number of lights in a shader if for-loops with variable limits are not supported?
The idea I have:
Implement some preprocessing of the shader source to unroll loops manually for different numbers of lights.
In that case, if I render all objects with one type of shader and the number of lights ranges from 1 to 3, I will end up with 3 different shaders (generated automatically) for 1, 2 and 3 lights.
Is it a good idea?
Since the source code for a shader consists of strings that you pass in at runtime, there's nothing stopping you from building the source code dynamically, depending on the number of lights, or any other parameters that control what kind of shader you need.
If you're using a setup where the shader code is in separate text files, and you want to keep it that way, you can take advantage of the fact that you can use preprocessor directives in shader code. Say you use LIGHT_COUNT for the number of lights in your shader code. Then when compiling the shader code, you prepend it with a definition for the count you need, for example:
#define LIGHT_COUNT 4
Since glShaderSource() takes an array of strings, you don't even need any string operations to connect this to the shader code you read from the file. You simply pass it in as an additional string to glShaderSource().
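For example, the effective fragment shader after prepending the define might look like this (the light uniforms are hypothetical names for illustration, and this assumes the shader file has no #version directive, which would have to stay first):

#define LIGHT_COUNT 4       // prepended as an extra string in glShaderSource()
precision mediump float;

uniform vec3 u_lightDir[LIGHT_COUNT];
uniform vec3 u_lightColor[LIGHT_COUNT];
varying vec3 v_normal;

void main (void)
{
    vec3 n = normalize(v_normal);
    vec3 lit = vec3(0.0);
    // The loop bound is a compile-time constant, so this compiles even on
    // GLSL ES implementations that reject variable loop limits.
    for (int i = 0; i < LIGHT_COUNT; ++i)
        lit += max(dot(n, u_lightDir[i]), 0.0) * u_lightColor[i];
    gl_FragColor = vec4(lit, 1.0);
}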
Shader compilation is fairly expensive, so you'll probably want to cache the shader program for each light count.
Another option is what Andon suggested in a comment. You can write the shader for the upper limit of the light count you need, and then pass in uniforms that serve as multipliers for each light source. For the lights you don't need, you set the multiplier to 0. That's not very efficient since you're doing extra calculations for light sources you don't need, but it's simple, and might be fine if it meets your performance requirements.
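A sketch of that alternative, under the same hypothetical naming (every light beyond the active count gets a zero multiplier):

#define MAX_LIGHTS 8
precision mediump float;

uniform vec3  u_lightDir[MAX_LIGHTS];
uniform vec3  u_lightColor[MAX_LIGHTS];
uniform float u_lightOn[MAX_LIGHTS];   // 1.0 for active lights, 0.0 otherwise
varying vec3  v_normal;

void main (void)
{
    vec3 n = normalize(v_normal);
    vec3 lit = vec3(0.0);
    // Fixed upper bound; disabled lights contribute nothing.
    for (int i = 0; i < MAX_LIGHTS; ++i)
        lit += u_lightOn[i] * max(dot(n, u_lightDir[i]), 0.0) * u_lightColor[i];
    gl_FragColor = vec4(lit, 1.0);
}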

What are the implications of a constant vertex attribute vs. a uniform in OpenGL ES 2

When specifying a value that does not vary over each vertex to my vertex shader, I have the option of either specifying it as a uniform or as a constant vertex attribute (using glVertexAttrib1f and friends).
What are the reasons for which I should choose one over the other? Simply that there are a limited number of available vertex attributes and uniforms on any given implementation and thus I need to choose wisely, or are there also performance implications?
I've done some looking around and found a few discussions, but nothing that answers my concerns concretely:
- http://www.khronos.org/message_boards/showthread.php/7134-Difference-between-uniform-and-constant-vertex-attribute
- https://gamedev.stackexchange.com/questions/44024/what-is-the-difference-between-constant-vertex-attributes-and-uniforms
I'm by no means an OpenGL guru, so my apologies if I'm simply missing something fundamental.
Well, vertex attributes can be set up to vary per-vertex if you pass a vertex attribute pointer; you can swap between a constant value and varying per-vertex on the fly simply by changing how you feed data to a particular generic attribute location.
Uniforms can never vary per-vertex; they are more constant by far. Generally, GLSL ES guarantees you far fewer vertex attribute slots (8, with up to 4 components each) than uniform vectors (128, with 4 components each). Most implementations exceed these requirements, but the trend is the same: more uniforms than attributes.
Furthermore, uniforms are a per-program state. These are constants that can be accessed from any stage of your GLSL program. In OpenGL ES 2.0 this means Vertex / Fragment shader, but in desktop GL this means Vertex, Fragment, Geometry, Tessellation Control, Tessellation Evaluation.
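A sketch of how the two look from the shader side; the comments name the client API calls that feed each one (variable names are illustrative):

#version 100

attribute vec3 a_position;

// Fed either per-vertex via glVertexAttribPointer + glEnableVertexAttribArray,
// or as a constant via glDisableVertexAttribArray + glVertexAttrib4f;
// the shader cannot tell which.
attribute vec4 a_color;

// Per-program state, set once with glUniform4f and constant for the draw call.
uniform vec4 u_tint;
uniform mat4 u_mvp;

varying vec4 v_color;

void main (void)
{
    v_color     = a_color * u_tint;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}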
