Why does WebGL's EXT_texture_filter_anisotropic force linear interpolation when magnifying?

I'm beyond confused right now. I'm messing with porting Minecraft Java to WebGL, and when I set the texture parameter TEXTURE_MAX_ANISOTROPY to any value above 1.0 on the terrain texture, it immediately switches to a linear magnification filter, even though I explicitly set TEXTURE_MAG_FILTER to NEAREST on the very next line as I bind the texture. Figure 1 shows the sampling with TEXTURE_MAX_ANISOTROPY set to 1.0, and Figure 2 shows the sampling with it set to any value above 1.0.
Of course I could waste my time on a GLSL hack that rounds the UV to emulate NEAREST filtering while the texture is magnified, but that's a cheap hack that would definitely harm performance (and my severe OCD).
So can somebody explain to me why WebGL behaves this way? Accurate anisotropic filtering is a must for this project.
gl.bindTexture(gl.TEXTURE_2D, terrainTexture);
gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, 16.0);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST_MIPMAP_NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
No, there is not a single line in the program that calls gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, xxxxxx) between the initial NEAREST call and the terrain being rendered; that wasn't possible to begin with, and I still checked many times before making this post.
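For reference, the UV-rounding hack mentioned above would look roughly like the fragment shader below. This is only a sketch; uTerrain, uTexSize and vUV are illustrative names, and uTexSize would have to be set from JS to the texture dimensions:
const terrainFragSrc = `
  precision mediump float;
  uniform sampler2D uTerrain;
  uniform vec2 uTexSize;   // texture width/height in texels
  varying vec2 vUV;
  void main() {
    // snap the interpolated UV to the nearest texel center so magnification
    // still looks like NEAREST even though the sampler filters linearly
    vec2 snappedUV = (floor(vUV * uTexSize) + 0.5) / uTexSize;
    gl_FragColor = texture2D(uTerrain, snappedUV);
  }
`;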

Related

GLSL Smoothly change temperature(color) of mesh over time

I am writing a small program using three.js.
I have rendered a mesh from a PLY object, and I want to heat the polygons that are close to the mouse position. When I move the mouse, all polygons near it must smoothly change color to red, and the other polygons must smoothly return to their normal color over time.
I have succeeded in getting the mouse position and changing the color of the nearest polygons, but I don't know how to handle the smooth fading over time for the other polygons.
Should I do it in the shader, or should I pass additional data to the shader?
I would do something simple like this (in a timer):
dtemp = Vertex_temp - background_temp;            // difference to the ambient temperature
Vertex_temp -= temp_transfer * dtemp * (dt / T);  // relax toward the ambient each update
where temp_transfer = <0,1> is a unit-less coefficient that adjusts the speed of heat transfer, dt [sec] is the elapsed time (the interval of your timer or update routine), and T [sec] is the time scale the temp_transfer coefficient refers to.
So if your mouse is far away, let background_temp = 0.0 [C], and if not, set it to background_temp = 255.0 [C]. Now you can use Vertex_temp directly to compute the color, for example using it as the red channel <0,255>.
But as you can see, this is better suited to the CPU side than to GLSL, because you need to update the color VBO each frame using its previous values. In GLSL you would need to encode the temperatures into a texture, render the new values into another texture, and then convert that back into a VBO, which is too complicated... maybe a compute shader could do it in a single pass, but I am not familiar with those.
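A minimal CPU-side sketch of that relaxation, assuming a three.js BufferGeometry with a per-vertex temperature array and a 'color' attribute that the material actually uses (names and constants are illustrative):
// called from your timer / render loop with the elapsed time dt in seconds
function updateTemperatures(geometry, temperature, backgroundTemp, dt) {
  const T = 1.0;             // time scale [sec] for the transfer coefficient
  const tempTransfer = 0.5;  // unit-less transfer coefficient in <0,1>
  const colors = geometry.getAttribute('color');
  for (let i = 0; i < temperature.length; i++) {
    const dtemp = temperature[i] - backgroundTemp;
    temperature[i] -= tempTransfer * dtemp * (dt / T); // relax toward background
    colors.setX(i, temperature[i] / 255.0);            // temperature drives the red channel
  }
  colors.needsUpdate = true; // re-upload the color buffer
}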

OpenGL 2d (orthographic) rendering for GUI, the modern way

I want to render GUI components in my OpenGL program. They are still just simple textured (vbo) rectangles.
I would like to get the following things done, the right way.
Drawing using screen coordinates, or at least using a coordinate system that's not based on perspective-like floating points. For example: now the coordinate system is from -1f to 1f (left to right of the screen). It would be more logical to use screen/pixel coordinates.
If it's easy to do, I'd like that the GUI doesn't stretch when the window (viewport) resizes.
I know, previously you could do a lot using the deprecated function glOrtho. But since I want to do it the modern way, which is hopefully also better for performance, I don't know how to start.
After searching on the internet, I came to the conclusion that I have to use a shader. I'm not very familiar with shaders.
And another question: does performance increase when doing this using a shader?
What you do with modern OpenGL is essentially the same as using glOrtho to set up an orthographic projection matrix: create a transformation (matrix) that maps your pixel coordinates 1:1 onto the viewport and use that to transform the coordinates.
For example, you could create a vec2 uniform viewport and set it to the viewport width/height. Then, in the vertex shader, you can use it to transform your vertex positions (given in pixel coordinates) into the range [-1,1], like this:
gl_Position = vec4(2*vpos.xy / viewport.xy - 1, 0, 1);
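In WebGL terms, a minimal sketch of that setup might look like this (guiProg and canvas are illustrative names):
const guiVertSrc = `
  attribute vec2 vpos;     // vertex position in pixel coordinates
  uniform vec2 viewport;   // viewport width/height in pixels
  void main() {
    // map [0..width] x [0..height] into clip space [-1, 1]
    gl_Position = vec4(2.0 * vpos.xy / viewport.xy - 1.0, 0.0, 1.0);
  }
`;
// keep the uniform in sync with the canvas/viewport size, e.g. on resize:
gl.useProgram(guiProg);
gl.uniform2f(gl.getUniformLocation(guiProg, 'viewport'), canvas.width, canvas.height);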

How can I best implement a weight-normalizing blend operation in opengl?

Suppose I have a source color in RGBA format (sr, sg, sb, sa), and similarly a destination color (dr, dg, db, da), with all components assumed to be in [0.0, 1.0].
Let p = (sa)/(sa+da), and q = da/(sa+da). Note that p+q = 1.0. Do anything you want if sa and da are both 0.0.
I would like to implement blending in opengl so that the blend result =
(p*sr + q*dr, p*sg + q*dg, p*sb + q*db, sa+da).
(Or to be a smidge more rigorous, following https://www.opengl.org/sdk/docs/man/html/glBlendFunc.xhtml, I'd like f_R, f_G, and f_B to be either p for src or q for dst; and f_A = 1.)
For instance, in the special case where (sa+da) == 1.0, I could use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); but I'm specifically attempting to deal with alpha values that do not sum to 1.0. (That's why I call it 'weight-normalizing' - I want to treat the src and dst alphas as weights that need to be normalized into linear combination coefficients).
You can assume that I have full control over the data being passed to opengl, the code rendering, and the vertex and fragment shaders. I'm targeting WebGL, but I'm also just curious in general.
The best I could think of was to blend with ONE, ONE, premultiply all src rgb values by alpha, and do a second pass in the end that divides by alpha. But I'm afraid I sacrifice a lot of color depth this way, especially if the various alpha values are small.
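That accumulate-then-normalize idea would look roughly like this (a sketch, assuming a float-capable offscreen color target so the sums don't clamp; names are illustrative):
// 1) accumulate premultiplied colors and weights into the offscreen target
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);    // dst += vec4(src.rgb * src.a, src.a)
drawLayers();                    // illustrative helper: fragment shaders output premultiplied rgba
// 2) resolve pass: draw a full-screen quad that divides by the summed weights
const resolveFragSrc = `
  precision mediump float;
  uniform sampler2D uAccum;
  varying vec2 vUV;
  void main() {
    vec4 acc = texture2D(uAccum, vUV);
    gl_FragColor = vec4(acc.rgb / max(acc.a, 1e-5), acc.a);
  }
`;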
I don't believe the standard blend equation can do this. At least I can't think of a way.
However, this is fairly easy to do with OpenGL. Blending might just be the wrong tool for the job. I would make what you currently describe as "source" and "destination" both input textures to the fragment shader. Then you can mix and combine them any way your heart desires.
Say you have two textures you want to combine in the way you describe. Right now you might have something like this:
Bind texture 1.
Render to default framebuffer, sampling the currently bound texture.
Set up fancy blending.
Bind texture 2.
Render to default framebuffer, sampling the currently bound texture.
What you can do instead:
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
Now you have the values from both textures available in your shader code, and can apply any kind of logic and math to calculate the combined color.
The same thing works if your original data does not come from a texture, but is the result of rendering. Let's say that you have two parts in your rendering process, which you want to combine in the way you describe:
Attach texture 1 as render target to FBO.
Render first part of content.
Attach texture 2 as render target to FBO.
Render second part of content.
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
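The weight-normalizing combine itself is then just a few lines of fragment shader; here is a minimal sketch with the two layers bound as uSrc and uDst (illustrative names):
const combineFragSrc = `
  precision mediump float;
  uniform sampler2D uSrc;
  uniform sampler2D uDst;
  varying vec2 vUV;
  void main() {
    vec4 s = texture2D(uSrc, vUV);
    vec4 d = texture2D(uDst, vUV);
    float wsum = s.a + d.a;
    // p = s.a / (s.a + d.a), q = d.a / (s.a + d.a); guard the 0/0 case
    vec3 rgb = wsum > 0.0 ? (s.a * s.rgb + d.a * d.rgb) / wsum : vec3(0.0);
    gl_FragColor = vec4(rgb, wsum);
  }
`;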

OpenGL ES render to texture bound to shader

Can I render to the same texture, which I pass to my shader?
gl.glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, currentTex, 0);
gl.glActiveTexture(GL_TEXTURE0);
gl.glBindTexture(GL_TEXTURE_2D, currentTex);
gl.glUniform1i(texGlId, 0);
// ...
// drawCall
No, you're not supposed to do that. The OpenGL specs call this a "rendering feedback loop". There are cases where you can use the same texture, for example if you render to a mipmap level that is not used for texturing. But if you use a level that is sampled during texturing as the render target, the behavior is undefined.
From page 80 of the ES 2.0 spec, "Rendering Feedback Loops":
A rendering feedback loop can occur when a texture is attached to an attachment point of the currently bound framebuffer object. In this case rendering results are undefined. The exact conditions are detailed in section 4.4.4.
We should avoid it. The rendering result will be undefined; it may depend on the GPU.
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glFramebufferTexture2D.xml
Notes
Special precautions need to be taken to avoid attaching a texture image to the currently bound framebuffer while the texture object is currently bound and potentially sampled by the current vertex or fragment shader. Doing so could lead to the creation of a "feedback loop" between the writing of pixels by rendering operations and the simultaneous reading of those same pixels when used as texels in the currently bound texture. In this scenario, the framebuffer will be considered framebuffer complete, but the values of fragments rendered while in this state will be undefined. The values of texture samples may be undefined as well.
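The usual way to avoid the loop is to keep two textures and ping-pong between them: sample one while rendering into the other, then swap. A rough WebGL-style sketch (texRead, texWrite and texLoc are illustrative names):
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texWrite, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texRead);      // never the texture we are writing to
gl.uniform1i(texLoc, 0);
// ... draw call ...
[texRead, texWrite] = [texWrite, texRead];   // swap roles for the next pass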

Multi-pass shaders in OpenGL ES 2.0

First: do subroutines require GLSL 4.0+? So they are unavailable in the GLSL version of OpenGL ES 2.0?
I don't quite understand what multi-pass shaders are.
Here is my picture of it:
Draw a group of things (e.g. sprites) to an FBO using some shader.
Treat the FBO as a big texture for a big screen-sized quad, and use another shader which, for example, turns the texture colors to grayscale.
Draw the FBO-textured quad to the screen with the grayscaled colors.
Or is this called something else?
So multi-pass = using one shader's output as another shader's input? So we render one object twice or more? How does the shader output get to the other shader's input?
For example
glUseProgram(shader_prog_1);//Just plain sprite draw
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, /*some texture_id*/);
//Setting input for shader_prog_1
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//Disabling arrays, buffers
glUseProgram(shader_prog_2);//Uses same vertex, but different fragment shader program
//Setting input for shader_prog_2
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can anyone provide a simple example of this in a basic way?
In general, the term "multi-pass rendering" refers to rendering the same object multiple times with different shaders, and accumulating the results in the framebuffer. The accumulation is generally done via blending, not with shaders. That is, the second shader doesn't take the output of the first. They each perform part of the computation, and the blend stage combines them into the final value.
Nowadays, this is primarily done for lighting in forward-rendering scenarios. You render each object once for each light, passing different lighting parameters and possibly using different shaders each time you render a light. The blend mode used to accumulate the results is additive, since light reflectance is an additive property.
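A rough WebGL-style sketch of that per-light accumulation (lightProg, setLightUniforms and drawScene are illustrative helpers):
gl.disable(gl.BLEND);
gl.useProgram(lightProg);
setLightUniforms(lights[0]);
drawScene();                    // the first light writes the base contribution
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);   // every further light is added on top
for (let i = 1; i < lights.length; i++) {
  setLightUniforms(lights[i]);
  drawScene();
}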
Do subroutines require GLSL 4.0+? So they are unavailable in the GLSL version of OpenGL ES 2.0?
This is a completely different question from the entire rest of your post, but the answer is yes and no.
No, in the sense that ARB_shader_subroutine is an OpenGL extension, and it therefore could be implemented by any OpenGL implementation. Yes, in the practical sense that any hardware that actually could implement shader_subroutine could also implement the rest of GL 4.x and therefore would already be advertising 4.x functionality.
In practice, you won't find shader_subroutine supported by non-4.x OpenGL implementations.
It is unavailable in GLSL ES 2.0 because it's GLSL ES. Do not confuse desktop OpenGL with OpenGL ES. They are two different things, with different GLSL versions and different featuresets. They don't even share extensions (except for a very few recent ones).
