Storing floats in a texture in OpenGL ES - opengl-es

In WebGL, I am trying to create a texture with texels each consisting of 4 float values. Here I attempt to create a simple texture with one vec4 in it.
var textureData = new Float32Array(4);
var texture = gl.createTexture();
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
    // target, level, internal format, width, height
    gl.TEXTURE_2D, 0, gl.RGBA, 1, 1,
    // border, data format, data type, pixels
    0, gl.RGBA, gl.FLOAT, textureData
);
My intent is to sample it in the shader using a sampler like so:
uniform sampler2D data;
...
vec4 retrieved = texture2D(data, vec2(0.0, 0.0));
However, I am getting an error during gl.texImage2D:
WebGL: INVALID_ENUM: texImage2D: invalid texture type
WebGL error INVALID_ENUM in texImage2D(TEXTURE_2D, 0, RGBA, 1, 1, 0, RGBA, FLOAT,
[object Float32Array])
Comparing the OpenGL ES spec and the OpenGL 3.3 spec for texImage2D, it seems like I am not allowed to use gl.FLOAT. In that case, how would I accomplish what I am trying to do?

You can create a byte array from your float array. Each float takes 4 bytes (a 32-bit float), and that byte array can be uploaded into a texture using the standard RGBA format with unsigned bytes. This creates a texture where each texel contains a single 32-bit floating point number, which seems to be exactly what you want.
The only problem is that each float arrives split across the four channels of a vec4 when you retrieve it from the texture in your fragment shader, so what you are looking for is most likely "how to convert a vec4 into a single float".
You should also note that what you are trying to do, an RGBA internal format holding 32-bit floats per channel, will not work here: the texture will still be 32 bits per texel, so forcing floats into it results in clamping or precision loss. And even if a texel did consist of four 32-bit RGBA floats, your shader would most likely treat them as lowp when sampling with texture2D at some point.
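As a minimal sketch of that packing approach (reusing the Float32Array and texture from the question; the 4x1 size is simply what four packed floats occupy):

var textureData = new Float32Array(4);
// View the same buffer as bytes: 4 floats -> 16 bytes -> four RGBA texels,
// so each texel holds the 4 raw bytes of one 32-bit float.
var byteView = new Uint8Array(textureData.buffer);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
    gl.TEXTURE_2D, 0, gl.RGBA, 4, 1,
    0, gl.RGBA, gl.UNSIGNED_BYTE, byteView
);
// The fragment shader then has to reassemble each float from the vec4
// of byte values it gets back from texture2D.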

The solution to my problem is actually quite simple! I just needed to type
var float_texture_ext = gl.getExtension('OES_texture_float');
Now WebGL can use texture floats!
This MDN page tells us why:
Note: In WebGL, unlike in other GL APIs, extensions are only available if explicitly requested.
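For completeness, a minimal sketch of the working sequence (same data as in the question, with the extension requested first):

var float_texture_ext = gl.getExtension('OES_texture_float');
if (!float_texture_ext) {
    alert('OES_texture_float is not supported on this system');
}
var textureData = new Float32Array(4);
var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// With the extension enabled, gl.FLOAT is a valid type for texImage2D.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.FLOAT, textureData);
// Filtering float textures needs the separate OES_texture_float_linear
// extension, so stick to NEAREST here.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);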

Related

How can I properly create an array texture in OpenGL (Go)?

I have a total of two textures. The first is used as a framebuffer to work with inside a compute shader, which is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
    gl.TEXTURE_2D_ARRAY,
    0,
    gl.RGBA8,
    16,
    16,
    22*48,
    0,
    gl.RGBA, gl.UNSIGNED_BYTE,
    gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
    ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
    vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
    imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with the same color:
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open code questions, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the valid range is [0, 1] (in the s and t directions; the third dimension is the layer and is correct here). Coordinates outside that range are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since iCoords.x % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need the full texture sampling (texture filtering, sRGB conversions etc.), you have to use the normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer data instead.
Note that since you set the texture filter to GL_LINEAR, you seem to want filtering; however, your coordinates look as if you want to access the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy) / vec2(16) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then the GL_LINEAR filtering would yield results identical to GL_NEAREST.
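If per-texel access is all that's needed, a rough sketch of the sampling lines from the question's compute shader rewritten with texelFetch (integer texel coordinates plus an explicit mip level; no filtering or normalization involved):

ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
// texelFetch takes an ivec3 of (x, y, layer) and the mip level.
vec4 c = texelFetch(texAtlas, ivec3(iCoords.x % 16, iCoords.y % 16, 7), 0);
imageStore(img, iCoords, c);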

How to draw the data (from glReadPixels) onto the default (display) framebuffer in OpenGL ES 2.0

Sorry if I am asking something which is already available; so far I could not trace it. I read details about FBOs and got a fair idea about off-screen buffering. http://mattfife.com/?p=2813 is a nice little article on FBOs. In all the examples, including this one, I don't see details on how to write the data, retrieved through a glReadPixels call, onto the default display framebuffer. Sorry if I am missing anything silly. I did my due diligence but could not get any example.
Note: I am using OpenGLES 2.0, hence I cannot use calls such as glDrawPixels, etc.
Basically my requirement is off-screen buffering, because I am working on subtitles/captions, where scrolling the caption means re-rendering the lines until they move out of the caption display area.
I got a suggestion to use an FBO and bind the created texture to the main default framebuffer.
My actual need is captions/subtitles (which can be in scrolling mode).
Suppose the first time I had below on display,
This is Line Number - 1
This is Line Number - 2
This is Line Number - 3
After scrolling, then I want to have,
This is Line Number - 2
This is Line Number - 3
This is Line Number - 4
The second time I want to render, will I have to update the content in the offscreen FBO? That would mean re-writing line 2 and line 3 at a new position, removing line 1 and adding line 4.
Create a framebuffer with a texture attachment (see Attaching Images). Note that glFramebufferTexture2D is supported by OpenGL ES 2.0.
The color plane of the framebuffer can still be read back to the CPU with glReadPixels, the same way as when you use a Renderbuffer; the difference is that the rendering is stored in a 2D texture.
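A minimal sketch of that setup in WebGL-style JavaScript (the OpenGL ES 2.0 C calls have the same names and parameters; width and height are whatever your caption area needs):

// Texture that will receive the off-screen rendering.
var fboTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fboTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Framebuffer with the texture as its color attachment.
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, fboTexture, 0);

// ... render the caption lines into the FBO here ...

// Switch back to the default framebuffer for the final pass.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);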
Bind the texture and the default framebuffer and render a screen space quad with the texture on it.
Render a quad (GL_TRIANGLE_FAN) with the vertex coordinates (-1, -1), (1, -1), (1, 1), (-1, 1) and use the following simple OpenGL ES 2.0 shader:
Vertex shader
attribute vec2 pos;
varying vec2 uv;
void main()
{
    uv = pos * 0.5 + 0.5;
    gl_Position = vec4(pos, 0.0, 1.0);
}
Fragment shader
precision mediump float;
varying vec2 uv;
uniform sampler2D u_texture;
void main()
{
    gl_FragColor = texture2D(u_texture, uv);
}
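And a minimal sketch of the corresponding draw call (WebGL syntax again; quadProgram, posLocation and textureLocation are placeholder names for the linked program and its attribute/uniform locations, fboTexture is the texture rendered to above):

// Fullscreen quad in clip space, in the vertex order given above.
var quad = new Float32Array([-1, -1, 1, -1, 1, 1, -1, 1]);
var vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);

gl.useProgram(quadProgram);
gl.enableVertexAttribArray(posLocation);
gl.vertexAttribPointer(posLocation, 2, gl.FLOAT, false, 0, 0);

// Draw to the default framebuffer with the FBO texture on unit 0.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, fboTexture);
gl.uniform1i(textureLocation, 0);

gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);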

Using Sampler3D to read from a 3D texture in OpenGL ES 3.x

I am using OpenGL ES 3.2 to read from a 3D texture in the fragment shader and write that value out to an FBO. I then read from the FBO attachment using glReadPixels, and print out the values obtained.
I am attaching the sampler as:
GLuint texLoc = glGetUniformLocation(shader_program_new, "input_tex");
glUniform1i(texLoc, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, texture_output);
Inside the shader, I read from the texture as:
#version 300 es
precision highp float;
precision highp sampler3D;

uniform sampler3D input_tex;
in vec3 tex_pos;
out vec4 fragmentColor;

void main() {
    fragmentColor = texture(input_tex, vec3(0.0, 0.0, 0.0)); // nonzero z coordinate doesn't work
}
While reading from the texture, I am only able to read values where the z coordinate is 0. Reading values from any other depth gives garbage values or NaNs.
Shouldn't a 3D texture allow me to use (x, y, z) values as texture co-ordinates, where x, y, and even z can be between 0.0 and 1.0?
Let me guess: You didn't initialize the texture correctly.
Please show the texture creation code.
This is most likely due to the 3D texture not being bound as layered.
Take a look at this: Compute shader not modifying 3d texture

Repeat texture like stipple

I'm using orthographic projection.
I have 2 triangles creating one long quad.
On this quad I put a texture that repeats itself along the way.
The world zoom is constantly changed by the user, which makes the quad shorter or longer accordingly. The height is calculated in the shader so it is always the same size (in pixels).
My problem is that I want the texture to repeat according to its real (pixel) size and the length of the quad. In other words, the texture should always be the same size in pixels, and it should fill the quad by repeating more or fewer times depending on the quad length.
The rotation is important.
For example, my texture is a small repeating pattern (image omitted). I've added texture coordinates to my vertices so that it is duplicated 20 times. When the view is zoomed far out the texture appears squeezed, and when I zoom in it is stretched; it always repeats exactly 20 times (screenshots omitted).
I'm sure that I have to play with the texture coordinates in the fragment shader, but I don't see the solution; or perhaps there is a better approach to my problem.
---- ADDITION ----
Solved it by:
Calculating the repeat S value at the zoom level at which I add the vertices, and sending the map width (in world units) as an attribute. Every draw I send the current map width as a uniform for calculating the scale.
But I'm not happy with this solution.
OK, I found a way to do it with a minimum of attributes and minimal code in the shader.
Do Once:
Calculating the repeat count for each line; since my world and my screen are 1:1 (1 unit in my world is 1 pixel), this is LineDistance(inWorldUnits) / picWidth(inScreenUnits).
Saving it as an attribute.
Every Draw:
Calculating the scale (world to screen): WorldWidth / ScreenWidth.
Setting it as a uniform.
Drawing the buffer.
In the frag shader,
simply multiply this scale with the repeat attribute.
Works perfectly and looks good. Resizing the window is supported as well.
The general solution is to include a texture matrix. So your vertex shader might look something like
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
uniform mat4 u_matrix;
uniform mat4 u_texMatrix;
void main() {
    gl_Position = u_matrix * a_position;
    v_texcoord = (u_texMatrix * vec4(a_texcoord, 0.0, 1.0)).xy;
}
Now you can set up the texture matrix to scale your texture coordinates however you need. If your texture coordinates go from 0 to 1 across the texture and your pattern is 16 pixels wide, then if you're drawing a line 100 pixels long you'd need 100/16 as your X scale.
var pixelsLong = 100;
var pixelsTall = 8;
var textureWidth = 16;
var textureHeight = 16;
var xScale = pixelsLong / textureWidth;
var yScale = pixelsTall / textureHeight;
var texMatrix = [
    xScale, 0, 0, 0,
    0, yScale, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);
That seems like it would work. Because you're using a matrix, you can also easily offset or rotate the texture. See matrix math.
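For instance, a small sketch of scrolling the pattern along the line by adding a translation to the same matrix (the offset value here is made up):

// Same scale as above, plus an X offset that scrolls the pattern.
// Column-major order, as expected by uniformMatrix4fv.
var xOffset = 0.25;  // shift by a quarter of one pattern repeat
var texMatrix = [
    xScale, 0, 0, 0,
    0, yScale, 0, 0,
    0, 0, 1, 0,
    xOffset, 0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);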

How can I convey high precision uv coordinates between render passes in webgl?

Let's say I'm working on a problem in WebGL that requires values being retrieved from large textures (say, 2048x2048) with pixel perfect precision.
Everything's worked out great for me thus far. I can pass the large texture to a fragment shader, the fragment shader will transform it to a render target, and I can even supply that render target as input for another render pass, always retrieving the exact pixel I need.
But now let's say I want to mix things up. I want to create a render pass that returns a texture storing a pair of uv coordinates for each pixel in the large texture that I started out with. As the simplest use case, let's say:
precision highp float;
precision highp sampler2D;
varying vec2 vUv;
void main() {
    gl_FragColor = vec4(vUv, 0, 1);
}
Using the uv coordinates returned by this first render pass, I want to access a pixel from the large texture I started out with:
precision highp float;
precision highp sampler2D;
uniform sampler2D firstPass;
uniform sampler2D largeTexture;
varying vec2 vUv;
void main() {
    vec2 uv = texture2D(firstPass, vUv).xy;
    gl_FragColor = texture2D(largeTexture, uv);
}
This however does not work with adequate precision. I will most often get a color from a neighboring pixel as opposed to the pixel I intended to address. From some tinkering around I've discovered this only works on textures with sizes up to ~512x512.
You will note I've specified the use of high precision floats and sampler2Ds in these examples. This was about the only solution that came readily to mind, but this still does not address my problem. I know I can always fall back on addressing pixels with a relative coordinate system that requires lower precision, but I'm hoping I may still be able to address with uv for the sake of simplicity.
Ideas
Make your UV texture a floating point texture? Your texture is currently probably only 8 bits per channel, so it can only address 256 unique locations. A floating point texture would not have that problem.
Unfortunately rendering to floating point textures is not supported everywhere and the browsers have not uniformly implemented the required extensions to check if it will work or not. If you're on a modern desktop it likely will work though.
To find out if it will work, try to get the floating point texture extension, if it exists make a floating point texture and attach it to a framebuffer then check if the framebuffer is complete. If it is you can render to it.
var floatTextures = gl.getExtension("OES_texture_float");
if (!floatTextures) {
    alert("floating point textures are not supported on your system");
    return;
}

// If you need linear filtering then...
var floatLinearTextures = gl.getExtension("OES_texture_float_linear");
if (!floatLinearTextures) {
    alert("linear filtering of floating point textures is not supported on your system");
}

// check if we can render to floating point textures.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT, null);

// some drivers have a bug that requires you to turn off filtering before
// rendering to a texture.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);

// make a framebuffer
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

// attach the texture
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

// check if we can render
var status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status != gl.FRAMEBUFFER_COMPLETE) {
    alert("can't render to floating point textures");
    return;
}

// You should be good to go.
Increase your resolution by combining data into multiple channels
When writing the UV texture, convert the UV from the 0-1 range to the 0-65535 range, then write mod(uv, 256.0) / 255.0 to one channel and floor(uv / 256.0) / 255.0 to another channel. When reading, re-combine the channels with something like uv = (lowChannels * 255.0 + highChannels * 65280.0) / 65535.0.
That should work everywhere and give you enough resolution to address a 65536x65536 texture.
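A rough sketch of that packing as GLSL helper functions (one coordinate per channel pair, so a full UV uses all four channels of an RGBA8 target; the function names are mine):

// Pack one coordinate (0.0 - 1.0) into two 8-bit channels.
vec2 packCoord(float coord) {
    float v = coord * 65535.0;
    return vec2(mod(v, 256.0) / 255.0, floor(v / 256.0) / 255.0);
}

// Recombine the two channels back into one coordinate (0.0 - 1.0).
float unpackCoord(vec2 channels) {
    return (channels.x * 255.0 + channels.y * 65280.0) / 65535.0;
}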
