I am developing a framework using OpenGL ES to create 3D applications.
I need to deploy the framework on both PowerVR and Mali GPU chipsets.
Are there any aspects that need special care when programming OpenGL ES for different GPUs (PowerVR and Mali)?
The only significant difference is that the older Mali cores (Mali-300/400 series) only support mediump in the fragment shader, so algorithms relying on highp precision won't work there.
There are surely fine-tuning differences, but it's hard to give a succinct answer to that one. Just focus on writing good, clean GL and it should work well everywhere.
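As a rough sketch of how to stay portable across such GPUs, an ESSL 1.00 fragment shader can pick its default float precision at compile time using the predefined GL_FRAGMENT_PRECISION_HIGH macro (the varying and sampler names below are just placeholders for the example):

#version 100
// GL_FRAGMENT_PRECISION_HIGH is predefined as 1 by ESSL 1.00 compilers
// whenever the fragment stage supports highp, so the shader can fall back
// to mediump on GPUs such as the Mali-300/400 series.
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

varying vec2 v_texcoord;      // placeholder varying
uniform sampler2D u_texture;  // placeholder sampler

void main (void)
{
    gl_FragColor = texture2D(u_texture, v_texcoord);
}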
Is it possible to force a GPU to use highp floats in shaders when the OpenGL version being used is 2.0? I know that precision modifiers are basically a GLES concept, but floats do seem to use mediump by default on a desktop I was provided for testing, and it's driving me nuts.
I've stumbled across this problem while running my GL project under Win10 with a GTX 1070 GPU (driver version 23.21.13-9135). I don't even have #version set in my shaders, so I expected the precision to be automatically maxed out.
AFAIK, the OpenGL 2.x standard does not define any way to hint to the GPU that at least 20 of 24 mantissa bits are significant (I'm passing integers through floats, since GL 2.0 cannot operate on integers natively), but this is the first time in 3+ years that I've seen a desktop GPU do such a thing, so I never even thought it possible.
Help needed.
http://webglstats.com/ seems to not have information on what percentage of devices/browsers support highp in the fragment shader.
Most sources report that highp won't work on older mobile hardware, and this SO post seems to indicate that most Intel GPUs (back in 2011) don't support it. I'm guessing the vast majority of hardware nowadays support it but I'm looking for some hard numbers.
Supporting highp in fragment shaders is optional in OpenGL ES 2.0 and mandatory in OpenGL ES 3.0, so a quick and dirty way to be sure is to check whether the device supports OpenGL ES 3.0. Unfortunately, there is still a large amount of mid-range mobile hardware out there which doesn't support OpenGL ES 3.0 and doesn't implement the optional highp support either (the Mali-300/400/450 GPUs, for example).
Pretty much all desktop hardware can support OpenGL 4.0, so it tends to have highp in fragment shaders (I'm not aware of anything even vaguely recent which doesn't).
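As a minimal illustration of the ES 3.0 route, an ESSL 3.00 fragment shader can simply request highp as its default, since ES 3.0 guarantees support (the variable names here are placeholders):

#version 300 es
// In ESSL 3.00, highp support in the fragment stage is mandatory,
// so this default precision declaration cannot fail on a conformant device.
precision highp float;

in vec2 v_texcoord;
uniform sampler2D u_texture;
out vec4 fragColor;

void main (void)
{
    fragColor = texture(u_texture, v_texcoord);
}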
I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer it is not possible to use the bias parameter of texture2D to access a specific level of detail in the fragment shader. According to this post the level of detail is instead computed automatically from the parallel execution of adjacent fragments. I'll have to trust that that's how things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, the whole OpenGL ecosystem is evolving away from fixed functionality. And OpenGL ES is intended to cover a broader range of hardware than OpenGL, and therefore only supports simpler versions of many things. So I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD in whatever way they deem appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, which is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
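To make that concrete, here is a hedged sketch of roughly what the hardware computes behind an ordinary texture2D() call, written out by hand using the (optional) GL_OES_standard_derivatives extension; u_texSize is a made-up uniform holding the texture dimensions in texels:

#version 100
#extension GL_OES_standard_derivatives : require
precision mediump float;

varying vec2 v_texcoord;
uniform vec2 u_texSize;   // hypothetical uniform: texture size in texels

void main (void)
{
    // Screen-space derivatives of the texel coordinates determine the mip level,
    // approximately lod = log2(max(length(dx), length(dy))).
    vec2 dx = dFdx(v_texcoord * u_texSize);
    vec2 dy = dFdy(v_texcoord * u_texSize);
    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
    gl_FragColor = vec4(vec3(lod / 10.0), 1.0);   // visualise the computed LOD
}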
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
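A hedged sketch of what that looks like in practice, using texture2DLod for displacement mapping in an ESSL 1.00 vertex shader (the uniform names are invented for the example, and note that vertex texture fetch is itself optional in OpenGL ES 2.0):

// Vertex shader: no screen-space derivatives exist at this stage,
// so a mipmapped lookup must name its LOD explicitly.
attribute vec3 a_position;
attribute vec2 a_texcoord;
uniform sampler2D u_heightMap;   // hypothetical heightmap texture
uniform mat4 u_mvpMatrix;        // hypothetical model-view-projection matrix
varying vec2 v_texcoord;

void main (void)
{
    float height = texture2DLod(u_heightMap, a_texcoord, 0.0).r;
    gl_Position = u_mvpMatrix * vec4(a_position + vec3(0.0, height, 0.0), 1.0);
    v_texcoord = a_texcoord;
}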
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup for vertex shaders that actually takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), it could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Example code showing an explicit LOD lookup in a fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require
precision mediump float;

varying vec2 tex_st;        // texture coordinates interpolated from the vertex shader
uniform sampler2D sampler;

void main (void)
{
    // Note the EXT suffix, that is very important in ESSL 1.00
    gl_FragColor = texture2DLodEXT (sampler, tex_st, 0.0);
}
I have a question regarding glPushMatrix, the matrix transformation functions, and OpenGL ES. The GLSL guide says that under OpenGL ES the matrices have to be computed:
However, when developing applications in modern versions of OpenGL and
OpenGL ES or in WebGL, the model matrix has to be computed.
and
In some versions of OpenGL (ES), a built-in uniform variable
gl_ModelViewMatrix is available in the vertex shader
As I understand it, gl_ModelViewMatrix is not available under all OpenGL ES specifications. So, are functions like glMatrixMode, glRotate, ..., still valid there? Can I use them to calculate the model matrix? If not, how do I handle those transformation matrices?
First: you shouldn't use the matrix manipulation functions in regular OpenGL either. In old versions they're just too inflexible and also redundant, and in newer versions they've been removed entirely.
Second: the source you're mentioning is a Wikibook, which means it's not an authoritative source. In the case of this Wikibook, it's been written to accommodate all versions of GLSL, and some of them, mainly the ones for OpenGL 2.1, have those built-in variables.
You deal with those matrices by calculating them yourself (no, this is not slower; OpenGL's matrix stuff was never GPU-accelerated) and passing them to OpenGL, either with glLoadMatrix/glMultMatrix (old versions of OpenGL) or as a shader uniform.
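A minimal sketch of the shader-uniform approach (the u_modelViewProjection name is just a placeholder; the matrix itself would be computed by the application and uploaded with glUniformMatrix4fv):

attribute vec4 a_position;
uniform mat4 u_modelViewProjection;  // computed on the CPU each frame

void main (void)
{
    gl_Position = u_modelViewProjection * a_position;
}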
If you're planning on doing this in Android, then take a look at this.
http://developer.android.com/reference/android/opengl/Matrix.html
It has functions to set up view, frustum, and transformation matrices, as well as some matrix operations.
I just came upon an interesting effect of Chrome's use of the GLSL compiler. The statement
#define addf(index) if(weights[i+index]>0.) r+=weights[i+index]*f##index(p);
does not compile stating
preprocessor command must not be preceded by any other statement in that line
It seems that the ## syntax is unsupported.
However, on the same platform (e.g. Linux 64-bit, NVIDIA GPU) the same shader compiles and runs fine. Why is this? I thought the shader compiler is part of the GPU's driver stack and would be used in both cases. So why the different behaviour?
Actually, WebGL is often described as "OpenGL ES 2.0 for the Web", so there are some differences from OpenGL.
The WebGL spec ( https://www.khronos.org/registry/webgl/specs/1.0/ ) tells us:
"A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00."
Looking into the GLSL ES 1.0 spec ( https://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf ) I found:
Section 3.4 defines the preprocessor and also states: "There are no number sign based operators (no #, ##, etc.), nor is there a sizeof operator."
So whatever the browser's implementation does internally, it follows the standard :)
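One portable workaround, assuming the shader really does define functions f0, f1, ... as in the question, is to drop the token pasting and pass the function name as an explicit macro argument; a sketch:

// No ## token pasting, so this stays within the GLSL ES 1.00 preprocessor.
#define addf(index, fn) if (weights[i + index] > 0.0) r += weights[i + index] * fn(p);

// Usage (relies on weights, i, r and p from the surrounding code):
// addf(0, f0)
// addf(1, f1)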
WebGL implementations need to conform to the WebGL specification. Many of its restrictions exist for security reasons. The ## restriction is not one of them, but it is nevertheless required by the WebGL spec.
To conform, they can either use a graphics stack that fully conforms (for example by providing a wrapper around an unextended OpenGL ES profile if the driver exposes one), or precheck the GLSL shader code and WebGL state themselves to ensure conformance before passing the commands on to some full OpenGL implementation.
So the WebGL behaviour may differ from the native OpenGL behaviour on the same machine.
That's because on Windows, Chrome does not use the OpenGL driver by default. It uses Direct3D, and the translation from OpenGL to Direct3D is done by the ANGLE project.
ANGLE has its own shader validator and preprocessor. And hence you can see differences between Windows and other operating systems even though you're using the same hardware. ANGLE was created because on Windows the Direct3D support is typically much better than the OpenGL support, and because it allows more control over the implementation and its conformance.