Any way to control FP precision in desktop OpenGL? (Windows)

Is it possible to force a GPU to use highp floats in shaders when the OpenGL version being used is 2.0? I know that precision modifiers are basically a GLES concept, but floats do seem to use mediump by default on a desktop machine I was provided for testing, and it's driving me nuts.
I stumbled across this problem while running my GL project under Win10 with a GTX 1070 GPU (driver version 23.21.13-9135). I don't even have #version set in my shaders, so I expected the precision to be automatically maxed out.
AFAIK, the OpenGL 2.x standard does not define any way to hint to the GPU that at least 20 of the 24 mantissa bits are significant (I'm passing integers through floats, since GL 2.0 cannot operate on integers natively), but this is the first time in 3+ years that I've seen a desktop GPU do such a thing, so I never even thought it possible.
Help needed.
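One common pattern, for what it's worth, is to guard the ES-only default-precision statement behind the GL_ES macro so the same source compiles under both desktop GLSL and GLSL ES. A minimal sketch in C (the helper name is illustrative, a GL loader such as GLEW/GLAD is assumed to be set up, and this only documents the intent; it cannot force a desktop GL 2.0 driver to carry more mantissa bits):

    /* Assumes a GL loader header (e.g. GLEW or GLAD) is included and initialized. */

    /* Guarded preamble: GLSL ES requires a default precision statement,
     * while desktop GLSL 1.10 (GL 2.0) would reject it, so emit it only
     * when the GL_ES macro is defined by the compiler. */
    static const char *precision_preamble =
        "#ifdef GL_ES\n"
        "precision highp float;\n"
        "#endif\n";

    /* Illustrative helper: compile `body` with the preamble prepended.
     * glShaderSource concatenates the strings in order. */
    static void compile_with_preamble(GLuint shader, const char *body)
    {
        const char *sources[2] = { precision_preamble, body };
        glShaderSource(shader, 2, sources, NULL);
        glCompileShader(shader);
    }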

Related

What is the texture sampling precision?

In OpenGL, when sampling a texture, what is the precision or format used for the location?
To elaborate: when sampling with texture(sampler, vTextureCoordinates) in a shader under, e.g., precision highp float, two 32-bit floats go in. However, is that precision actually used to sample the texture, or will it be degraded (e.g. "snapped to fixed point" like in D3D)?
While I am primarily interested in WebGL2, this would also be interesting to know for other OpenGL versions.
My current guess is that it will be truncated to a 16-bit normalized unsigned integer, but I am not sure. Perhaps it is also unspecified; in that case, what can be depended upon?
This is related to my texture-coordinate-inaccuracy question. Now that I have several hints that this degradation might really take place, I can ask about this specific part. Should sampling precision indeed be a 16-bit normalized integer, I could also close that one.
This is a function of the hardware, not the graphics API commanding that hardware. So it doesn't matter if you're using D3D, WebGL, Vulkan, or whatever, the precision of texture coordinate sampling is based on the hardware you're running on.
Most APIs don't actually tell you what this precision is. They will generally require some minimum precision, but hardware can vary.
Vulkan actually allows implementations to tell you the sub-texel precision. The minimum requirement is 4 bits of sub-texel precision (16 values). The Vulkan hardware database shows that hardware varies between 4 and 8, with 8 being 10x more common than 4.
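For reference, Vulkan exposes that value through the physical-device limits. A minimal C sketch, assuming a VkPhysicalDevice handle was already obtained via vkEnumeratePhysicalDevices:

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    /* Print the sub-texel precision the implementation reports.
     * The spec-mandated minimum is 4 bits, i.e. 16 positions between texels. */
    static void print_subtexel_precision(VkPhysicalDevice gpu)
    {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        printf("subTexelPrecisionBits = %u\n", props.limits.subTexelPrecisionBits);
    }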

gl_PointSize requires extension when shader version changed

I had GLSL shaders working fine with #version 150 core. The vertex shader output gl_PointSize to a triangle-strip geometry shader, which used it to set the size of the generated objects.
I changed to #version 300 es and got this error:
error C7548: 'gl_PointSize' requires "#extension GL_EXT_geometry_point_size : enable" before use
This is mildly surprising: I thought extensions were normally something you needed in older versions to enable functionality that is provided in later versions. Now it seems like I need to recover something that was lost, but this table seems to say that I can still use it.
What has changed which means I can't use gl_PointSize any more?
Desktop OpenGL and OpenGL ES are not the same thing. That table references desktop OpenGL, not OpenGL ES of any version. If you ask for GLSL 3.00 ES, you will get GLSL 3.00 ES.
Desktop GLSL 1.50 is not a lesser version of GLSL ES 3.00. Nor is it a greater version. They have no relationship to each other, except in the sense that the ES versions take stuff from the desktop versions. But even that is arbitrary, generally unrelated to version numbers.
The thing is, GLSL ES 3.00 does include gl_PointSize, but only as an output variable from the vertex shader. Assuming that's how your shader uses it, your implementation has a bug in its OpenGL ES support.
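To illustrate that last point, here is a minimal GLSL ES 3.00 vertex shader (shown as a C string literal; the attribute name and point size are arbitrary) that writes gl_PointSize with no #extension line:

    /* Per GLSL ES 3.00, gl_PointSize is a built-in vertex shader output
     * and needs no extension; the value written here is illustrative. */
    static const char *vs_es300 =
        "#version 300 es\n"
        "in vec4 aPosition;\n"
        "void main() {\n"
        "    gl_Position  = aPosition;\n"
        "    gl_PointSize = 4.0;\n"
        "}\n";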

Difference in opengl es programming for PowerVR and Mali GPU chipsets

I am developing a framework using OpenGL ES to create 3D applications.
I need to deploy the framework on both PowerVR and Mali GPU chipsets.
Are there any aspects to take care of when programming OpenGL ES for different GPUs (PowerVR and Mali)?
The only significant difference is that the older Mali cores (Mali-300/400 series) only support mediump in the fragment shader, so algorithms relying on highp precision won't work there.
There are surely fine-tuning differences, but it's hard to give a succinct answer to that. Just focus on writing good, clean GL and it should work well everywhere.
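One portable way to detect that difference at run time is to ask the driver how many bits a highp float actually carries in the fragment stage. A minimal ES 2.0 sketch in C; on hardware without fragment highp support, range and precision come back as 0:

    #include <stdio.h>
    #include <GLES2/gl2.h>

    /* Report the precision of highp float in the fragment shader stage.
     * Older Mali cores that only support mediump report zeros here. */
    static void report_fragment_highp(void)
    {
        GLint range[2] = { 0, 0 };
        GLint precision = 0;
        glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT,
                                   range, &precision);
        printf("fragment highp: range [%d, %d], precision %d bits\n",
               range[0], range[1], precision);
    }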

pow2 textures in FBO

Non-power-of-two textures are very slow in OpenGL ES 2.0.
But in every "render-to-texture" tutorial I've seen, people just take the screen size (which is almost never a power of two) and make a texture from it.
Should I render to a power-of-two texture (with a projection matrix correction), or is there some kind of magic with FBOs?
I don't buy into the "non power of two textures are very slow" premise in your question. First of all, these kinds of performance characteristics can be highly hardware dependent. So saying that this is true for ES 2.0 in general does not really make sense.
I also doubt that any GPU architectures developed within the last 5 to 10 years would be significantly slower when rendering to NPOT textures. If there's data that shows otherwise, I would be very interested in seeing it.
Unless you have conclusive data that shows POT textures to be faster for your target platform, I would simply use the natural size for your render targets.
If you're really convinced that you want to use POT textures, you can use glViewport() to render to part of them, as @MaticOblak also points out in a comment.
There's one slight caveat to the above: ES 2.0 has some limitations on how NPOT textures can be used. According to the standard, they do not support mipmapping, and only CLAMP_TO_EDGE wrapping is guaranteed to work. The GL_OES_texture_npot extension, which is supported on many devices, removes these limitations.
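Staying within those core ES 2.0 limits is straightforward: give the NPOT render target a non-mipmapped filter and CLAMP_TO_EDGE wrapping. A minimal C sketch, assuming the FBO is already bound and colorTex is a freshly generated texture name:

    #include <GLES2/gl2.h>

    /* Attach an NPOT texture as the color buffer of the currently bound FBO,
     * using only what core ES 2.0 guarantees: no mipmaps, CLAMP_TO_EDGE. */
    static void attach_npot_color(GLuint colorTex, GLsizei width, GLsizei height)
    {
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
    }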

glPushMatrix and OpenGL ES

I have a question regarding glPushMatrix, the matrix transformation functions, and OpenGL ES. The GLSL guide says that under OpenGL ES, the matrices have to be computed:
However, when developing applications in modern versions of OpenGL and
OpenGL ES or in WebGL, the model matrix has to be computed.
and
In some versions of OpenGL (ES), a built-in uniform variable
gl_ModelViewMatrix is available in the vertex shader
As I understand it, gl_ModelViewMatrix is not available in all OpenGL ES specifications. So are functions like glMatrixMode, glRotate, etc. still valid there? Can I use them to calculate the model matrix? If not, how should I handle those transformation matrices?
First: you shouldn't use the matrix manipulation functions in regular OpenGL either. In old versions they're just too inflexible and redundant, and in newer versions they've been removed entirely.
Second: the source you're citing is a Wikibook, which means it's not an authoritative source. This particular Wikibook was written to accommodate all versions of GLSL, and some of them, mainly the ones for OpenGL 2.1, have those built-in variables.
You deal with those matrices by calculating them yourself (no, this is not slower; OpenGL's matrix functions were never GPU-accelerated) and passing them to OpenGL either via glLoadMatrix/glMultMatrix (old versions of OpenGL) or via a shader uniform.
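A small sketch of the uniform route in C (the matrix here is just a translation, and the uniform name uModelView is illustrative; any loader exposing GL 2.0+ or ES 2.0 entry points will do):

    /* Assumes a GL loader header providing GL 2.0 / ES 2.0 entry points is included. */

    /* Upload a hand-built column-major model-view matrix to a shader uniform.
     * This one is only a translation by (tx, ty, tz). */
    static void upload_model_view(GLuint program, GLfloat tx, GLfloat ty, GLfloat tz)
    {
        const GLfloat modelView[16] = {
            1.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 1.0f, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            tx,   ty,   tz,   1.0f   /* translation in the last column */
        };
        GLint loc = glGetUniformLocation(program, "uModelView");
        glUniformMatrix4fv(loc, 1, GL_FALSE, modelView);
    }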
If you're planning on doing this on Android, then take a look at this:
http://developer.android.com/reference/android/opengl/Matrix.html
It has functions to set up view, frustum, and transformation matrices, as well as some general matrix operations.

Resources