Using Sampler3D to read from a 3D texture in OpenGL ES 3.x - opengl-es

I am using OpenGL ES 3.2 to read from a 3D texture in the fragment shader and write that value out to an FBO. I then read from the FBO attachment using glReadPixels, and print out the values obtained.
I am attaching the sampler as:
GLuint texLoc = glGetUniformLocation(shader_program_new, "input_tex");
glUniform1i(texLoc, 0);                        // input_tex samples from texture unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, texture_output);  // attach the 3D texture to unit 0
Inside the shader, I read from the texture as:
#version 300 es
precision highp float;
precision highp sampler3D;

uniform sampler3D input_tex;
in vec3 tex_pos;
out vec4 fragmentColor;

void main() {
    fragmentColor = texture(input_tex, vec3(0.0, 0.0, 0.0)); // nonzero z co-ordinate doesn't work
}
While reading from the texture, I am only able to read values where the z co-ordinate is 0. Reading from any other depth gives garbage values or NaNs.
Shouldn't a 3D texture allow me to use (x, y, z) values as texture co-ordinates, where x, y, and even z can be between 0.0 and 1.0?

Let me guess: You didn't initialize the texture correctly.
Please show the texture creation code.
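For reference, a minimal sketch of a fully initialized 3D texture in ES 3.x. The size, format, and data source are assumptions, since the actual creation code is not shown; the point is that every depth slice must be uploaded and the min filter must not require mipmaps when only level 0 exists, otherwise the texture is incomplete and sampling returns undefined values:
static float data[8 * 8 * 8 * 4];                  // fill with your voxel values (8x8x8 RGBA)
GLuint texture_output;
glGenTextures(1, &texture_output);
glBindTexture(GL_TEXTURE_3D, texture_output);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32F, 8, 8, 8);        // immutable storage, 1 level
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, 8, 8, 8,           // upload ALL 8 depth slices
                GL_RGBA, GL_FLOAT, data);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // default filter needs mipmaps
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // 32F is not filterable in ES 3.0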

This is most likely due to the 3D texture not being bound as layered.
Take a look at this: Compute shader not modifying 3d texture
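In other words, if the texture is filled by a compute shader via imageStore, the image binding has to be layered. A minimal sketch (ES 3.1+), assuming an RGBA32F texture on image unit 0:
// With layered = GL_FALSE only a single slice is bound, so image stores to any
// other z coordinate never land in the texture.
glBindImageTexture(0,               // image unit
                   texture_output,  // the 3D texture being written
                   0,               // mip level
                   GL_TRUE,         // layered: bind ALL depth slices
                   0,               // layer (ignored when layered is GL_TRUE)
                   GL_WRITE_ONLY,
                   GL_RGBA32F);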

Related

OpenGL - trouble passing ALL data into shader at once

I'm trying to display textures on quads (2 triangles each) using OpenGL 3.3.
Drawing a texture on a single quad works great; however, when I have ONE texture (a sprite atlas) but use 2 quads (objects) to display different parts of the atlas, they end up switching back and forth in the draw loop (one disappears, then appears again, etc.) at their individual translated locations.
The way I'm drawing this is not the standard DrawElements call per quad (object): I package all quads, UVs, translations, etc. and send them up to the shader as one big chunk (as "in" variables). Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec3 vertexColor;
in vec2 vertexUV;
in vec3 translation;
in vec4 rotation;
in vec3 scale;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
...
void main(){
    mat4 Model = mat4(1.0);
    mat4 t = translationMatrix(translation);
    mat4 s = scaleMatrix(scale);
    mat4 r = rotationMatrix(vec3(rotation), rotation[3]);
    Model *= t * r * s;
    gl_Position = MVP * Model * vec4(vertexPosition_modelspace, 1); //* MVP;

    // The color of each vertex will be interpolated
    // to produce the color of each fragment
    fragmentColor = vertexColor;
    // UV of the vertex. No special space for this one.
    UV = vertexUV;
}
Is the vertex shader working the way I think it should with this large chunk of data, drawing each quad individually with the values passed up for it? It does not seem like it. Is my train of thought correct on this?
For completeness this is my fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec3 fragmentColor;
// Interpolated values from the vertex shaders
in vec2 UV;
// Output data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main()
{
    // Output color = color of the texture at the specified UV
    color = texture2D( myTextureSampler, UV ).rgba;
}
A request for more information was made, so I will show how I bind this data and send it up to the vertex shader. The following code is just what I use for my translations; I have more for color, rotation, scale, UV, etc.:
gl.BindBuffer(gl.ARRAY_BUFFER, tvbo)
gl.BufferData(gl.ARRAY_BUFFER, len(data.Translations)*4, gl.Ptr(data.Translations), gl.DYNAMIC_DRAW) // 4 bytes per float32
tAttrib := uint32(gl.GetAttribLocation(program, gl.Str("translation\x00")))
gl.EnableVertexAttribArray(tAttrib)
gl.VertexAttribPointer(tAttrib, 3, gl.FLOAT, false, 0, nil) // tightly packed vec3 per vertex
...
gl.DrawElements(gl.TRIANGLES, int32(len(elements)), gl.UNSIGNED_INT, nil)
You have just a single sampler2D, which means you have just a single texture at your disposal, regardless of how many you bind.
If you really need to pass the data as a single block, then you should add one sampler per texture. I am not sure how many objects/textures you have, but with this way of passing data you are limited by the hardware limit on texture units. You also need to add another per-vertex value telling which primitive uses which texture unit, and inside the fragment shader select the right sampler accordingly.
You should add stuff like this:
// vertex
in int usedtexture;
flat out int txr;          // integer varyings must be flat-qualified
void main()
{
    txr = usedtexture;
}

// fragment
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;
in vec2 UV;
flat in int txr;
out vec4 color;
void main()
{
    if      (txr==0) color = texture2D( myTextureSampler0, UV ).rgba;
    else if (txr==1) color = texture2D( myTextureSampler1, UV ).rgba;
    else if (txr==2) color = texture2D( myTextureSampler2, UV ).rgba;
    else if (txr==3) color = texture2D( myTextureSampler3, UV ).rgba;
    else             color = vec4(0.0,0.0,0.0,0.0);
}
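On the application side, each of those sampler uniforms then has to be pointed at its own texture unit. A rough sketch in C; the program and texture handles are assumptions, only the sampler names come from the shader above:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "myTextureSampler0"), 0); // sampler 0 -> unit 0
glUniform1i(glGetUniformLocation(program, "myTextureSampler1"), 1); // sampler 1 -> unit 1
glUniform1i(glGetUniformLocation(program, "myTextureSampler2"), 2);
glUniform1i(glGetUniformLocation(program, "myTextureSampler3"), 3);

glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex0);   // texture on unit 0
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, tex1);   // texture on unit 1
glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, tex2);
glActiveTexture(GL_TEXTURE3); glBindTexture(GL_TEXTURE_2D, tex3);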
This way of passing is not good for these reasons:
The number of textures you can use is limited by the hardware texture-unit limit.
If your rendering needs additional textures such as normal/shininess/light maps, then you need more than one texture per object type, and your limit is suddenly divided by 2, 3, 4, ...
You need if/switch statements inside the fragment shader, which can slow things down considerably. Yes, you can do it branchless, but then you would need to access all of the textures all the time, increasing heat stress on the GPU for no reason.
This kind of passing is suitable when all the textures fit inside a single image (the texture atlas you mentioned), which can actually be faster this way, and it is reasonable for scenes with a small number of object types (or materials) but a large object count.
Since I needed more input on this matter, I linked this page on reddit and someone was able to help me with one response! Anyway, the reddit link is here:
https://www.reddit.com/r/opengl/comments/3gyvlt/opengl_passing_all_scene_data_into_shader_each/
The issue of seeing the two textures/quads switch back and forth after passing all vertices as one data structure to the vertex shader was that my element indices were off. I needed to compute the correct indices for each set of vertices of my 2-triangle (quad) objects, so that quad number idx uses vertices idx*4 through idx*4+3. I simply had to do something like this:
vertexInfo.Elements = append(vertexInfo.Elements, uint32(idx*4), uint32(idx*4+1), uint32(idx*4+2), uint32(idx*4), uint32(idx*4+2), uint32(idx*4+3))

Retrieve Vertices Data in THREE.js

I'm creating a mesh with a custom shader. Within the vertex shader I'm modifying the original positions of the geometry's vertices. I then need to access these new vertex positions from outside the shader; how can I accomplish this?
In lieu of transform feedback (which WebGL 1.0 does not support), you will have to use a passthrough fragment shader and floating-point texture (this requires loading the extension OES_texture_float). That is the only approach to generate a vertex buffer on the GPU in WebGL. WebGL does not support pixel buffer objects either, so reading the output data back is going to be very inefficient.
Nevertheless, here is how you can accomplish this:
This will be a rough overview focusing on OpenGL rather than anything Three.js specific.
First, encode your vertex array this way (add a 4th component for index):
Vec4 pos_idx : xyz = Vertex Position, w = Vertex Index (0.0 through NumVerts-1.0)
Storing the vertex index as the w component is necessary because OpenGL ES 2.0 (WebGL 1.0) does not support gl_VertexID.
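Filling that array could look roughly like this (a plain C sketch; the positions and NumVerts names are assumptions):
// Pack each vertex as (x, y, z, index) so the shader can recover its own
// index without gl_VertexID.
float *pos_idx = (float *) malloc(NumVerts * 4 * sizeof(float));   // malloc from <stdlib.h>
for (int i = 0; i < NumVerts; ++i) {
    pos_idx[i * 4 + 0] = positions[i * 3 + 0]; // x
    pos_idx[i * 4 + 1] = positions[i * 3 + 1]; // y
    pos_idx[i * 4 + 2] = positions[i * 3 + 2]; // z
    pos_idx[i * 4 + 3] = (float) i;            // vertex index in the w component
}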
Next, you need a 2D floating-point texture:
MaxTexSize = Query GL_MAX_TEXTURE_SIZE
Width = MaxTexSize;
Height = max (ceil (NumVerts / MaxTexSize), 1);
Create an RGBA floating-point texture with those dimensions and use it as FBO color attachment 0.
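That setup might look roughly like this (a C-style sketch; in WebGL 1.0 the GL_FLOAT upload requires OES_texture_float, and rendering into a float texture additionally depends on float render-target support):
// Float RGBA texture used as color attachment 0 of an FBO.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0,
             GL_RGBA, GL_FLOAT, NULL);                          // floating-point storage
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
// Check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before
// drawing, and call glViewport(0, 0, Width, Height) so points cover the whole texture.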
Vertex Shader:
#version 100
attribute vec4 pos_idx;
uniform int width; // Width of floating-point texture
uniform int height; // Height of floating-point texture
varying vec4 vtx_out;
void main (void)
{
    float idx = pos_idx.w;

    // Position this vertex so that it occupies a unique pixel
    // (GLSL ES 1.00 has no `%` operator and no `f` literal suffix, so use mod())
    vec2 xy_idx = vec2 (mod (idx, float (width)) / float (width),
                        floor (idx / float (width)) / float (height)) * vec2 (2.0) - vec2 (1.0);
    gl_Position = vec4 (xy_idx, 0.0, 1.0);

    //
    // Do all of your per-vertex calculations here, and output to vtx_out.xyz
    //

    // Store the index in the W component
    vtx_out.w = idx;
}
Passthrough Fragment Shader:
#version 100
precision highp float; // fragment shaders have no default float precision; use mediump if highp is unsupported
varying vec4 vtx_out;
void main (void)
{
    gl_FragData [0] = vtx_out;
}
Draw and Read Back:
// Draw your entire vertex array for processing (as `GL_POINTS`)
glDrawArrays (GL_POINTS, 0, NumVerts);
// With the FBO still bound, read color attachment 0 back into an array `verts`
// (WebGL has no glGetTexImage, so read the framebuffer instead)
glReadPixels (0, 0, Width, Height, GL_RGBA, GL_FLOAT, verts);
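Because the vertex shader above writes vertex idx to the pixel at column idx % Width, row idx / Width, the read-back array is already ordered by vertex index. A small usage sketch (names taken from the snippets above):
// Transformed position of vertex `idx`:
float x = verts[idx * 4 + 0];
float y = verts[idx * 4 + 1];
float z = verts[idx * 4 + 2];
float w = verts[idx * 4 + 3];   // still holds the index itself; handy as a sanity check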

Draw GL_TRIANGLE_STRIP based on centre point and size

I am rendering TRIANGLE_STRIPs in OpenGL ES 2.0. I was wondering: would it be possible to modify the vertex shader so that, instead of feeding it 4 texture vertices, you give it only one vertex that represents the centre of the TRIANGLE_STRIP, together with parameters for the texture width and height?
Assuming my texture vertex is:
GLfloat textureVertices[] = {
x, y
};
Can the vertex shader be modified to work with a texSize uniform, which would represent the width/height of the TRIANGLE_STRIP? Something like:
attribute highp vec4 position;
attribute lowp vec4 inputPointCoordinate;
uniform mat4 MVP;
uniform lowp vec4 vertexColor;
uniform float texSize;
varying lowp vec2 textureCoordinate;
varying lowp vec4 color;
void main()
{
    gl_Position = MVP*position;
    textureCoordinate = inputPointCoordinate.xy;
    color = vertexColor;
}
No, at least not in the vertex shader alone. The vertex shader needs to receive the different corner points with different attribute values, so that the coordinate you receive in the fragment shader is interpolated between them.
What you actually can do is pass the centre into the vertex shader and multiply it with the same matrix as the vertex coordinates. Besides that, you would need some kind of radius (or a texture-dimensions vector), which will probably need to be scaled if the matrix contains scale as well. You can then pass both of these values to the fragment shader (using varyings), and in the fragment shader compute the texture coordinates from those two parameters and the fragment position.
A similar procedure is used to draw a very nice circle or sphere using only 2 triangles (a square), but I do not suggest you do this, as you will only lose performance and it is quite a lot of work...

Opengl Shader, what's the gl_FragColor's alpha components?

I think this will have a fairly simple answer, but I can't find it by googling.
It's an OpenGL ES shader question. I am using the cocos2d-x engine.
This is my fragment shader code.
precision lowp float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_lightPosition;
void main()
{
    vec4 col = texture2D(u_texture, v_texCoord);
    mediump float lightDistance = distance(gl_FragCoord, u_lightPosition);
    mediump float alpha = 100.0 / lightDistance;
    alpha = min(alpha, 1.0);
    alpha = max(alpha, 0.0);
    col.w = alpha;
    //col.a = alpha;
    gl_FragColor = col;
}
I just want to give opacity within a circular area, so I change the color's w value, because I thought it was the alpha value of the texel. But the result was very odd, and I am afraid it's not the alpha value.
Even if I set the value to 1.0 for testing, the whole sprite turns bright and white.
The vertex shader is completely ordinary, so there is nothing special to attach.
Any ideas, please?
Updated: for reference, I attached result images for case 1) col.w = alpha; and case 2) col.w = 1.0; along with the normal texture before applying the shader.
The GL ES 2.0 reference card defines:
Variable: mediump vec4 gl_FragColor
Description: fragment color
Units or coordinate system: RGBA color
It further states:
Vector Components: In addition to array numeric subscript syntax, names of vector components are denoted by a single letter. Components can be swizzled and replicated, e.g.: pos.xx, pos.zy
{x, y, z, w} Use when accessing vectors that represent points or normals
{r, g, b, a} Use when accessing vectors that represent colors
{s, t, p, q} Use when accessing vectors that represent texture coordinates
So, sure, using .a would be more idiomatic but it's explicitly the case that what you store to .w is the output alpha for gl_FragColor.
To answer the question you've set as the title rather than the question in the body: the value returned by texture2D will be whatever is correct for that texture, either an actual stored alpha value if the texture format has alpha (e.g. GL_RGBA or GL_LUMINANCE_ALPHA), or else 1.0.
So you're outputting alpha correctly.
If your output alpha isn't having the mixing effect that you expect then you must have glBlendFunc set to something unexpected, possibly involving GL_CONSTANT_COLOR.
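For comparison, a blend state in which the fragment's alpha actually controls its opacity would look like the sketch below; cocos2d-x normally sets its own blend function per node, so check what the engine configured:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // straight-alpha blending
// With premultiplied-alpha textures the usual pairing is instead:
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);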

GLSL: gl_FragCoord issues

I am experimenting with GLSL for OpenGL ES 2.0. I have a quad and a texture I am rendering. I can successfully do it this way:
//VERTEX SHADER
attribute highp vec4 vertex;
attribute mediump vec2 coord0;
uniform mediump mat4 worldViewProjection;
varying mediump vec2 tc0;
void main()
{
    // Transforming The Vertex
    gl_Position = worldViewProjection * vertex;

    // Passing The Texture Coordinate Of Texture Unit 0 To The Fragment Shader
    tc0 = vec2(coord0);
}
//FRAGMENT SHADER
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;
void main()
{
    gl_FragColor = texture2D(my_color_texture, tc0);
}
So far so good. However, I'd like to do some per-pixel filtering, e.g. a median filter, so I'd like to work in pixel coordinates rather than in normalized ones (tc0), and then convert the result back to normalized coords. Therefore, I'd like to use gl_FragCoord instead of a uv attribute (tc0). But I don't know how to go back to normalized coords, because I don't know the range of gl_FragCoord. Any idea how I could get it? I have got this far, using a fixed value for the 'normalization', though it's not working perfectly: it causes stretching and tiling (but at least it shows something):
//FRAGMENT SHADER
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;
void main()
{
    gl_FragColor = texture2D(my_color_texture, vec2(gl_FragCoord) / vec2(256, 256));
}
So, the simple question is: what should I use in place of vec2(256, 256) so that I get the same result as if I were using the uv coords?
Thanks!
gl_FragCoord is in screen coordinates, so to get normalized coords you need to divide by the viewport width and height. You can use a uniform variable to pass that information to the shader, since there is no built in variable for it.
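A small sketch of passing it in from the application side; the viewportSize uniform name is an assumption, and in the shader you would then compute gl_FragCoord.xy / viewportSize:
GLint vp[4];                                   // x, y, width, height of the current viewport
glGetIntegerv(GL_VIEWPORT, vp);
glUseProgram(program);
glUniform2f(glGetUniformLocation(program, "viewportSize"),
            (float) vp[2], (float) vp[3]);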
You can also sample the texture with un-normalized coordinates: by texture() from a GL_TEXTURE_RECTANGLE texture, or by texelFetch() from a regular texture or texture buffer.
