How to fast flip OpenGL ES FBO? - opengl-es

Version: Android OpenGL ES 2.0
I use 5 filters and FBOs to render a bitmap. Every filter needs the bitmap texture and the bitmap's mask texture. My problem is that after each filter renders, the next filter receives the FBO contents upside-down: the mask and texture orientations are opposite in the even-numbered filters. How can I quickly flip the FBO before the next filter uses it?
#version 100
precision mediump float;
uniform sampler2D uTexture;
uniform sampler2D uMaskTexture;
varying vec2 vTexCoord;
void main(){
float mask = texture2D(uMaskTexture, vTexCoord).r;
gl_FragColor = texture2D(uTexture, vTexCoord) * mask;
}
To simplify the problem: the 5 filters are similar to the code above. uTexture comes from the FBO of the previous filter, and uMaskTexture is a fixed texture that never changes.

It seems that your texture coordinates are wrong. Correct the texture coordinate attribute: you have to "flip" the y component of the texture coordinate, so 0 becomes 1 and 1 becomes 0.
Of course this can also be done in the fragment shader:
void main()
{
vec2 uv = vec2(vTexCoord.x, 1.0-vTexCoord.y);
float mask = texture2D(uMaskTexture, uv).r;
gl_FragColor = texture2D(uTexture, uv) * mask;
}
or in the vertex shader:
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main
{
vTexCoord = vec2(aTexCoord.x, 1.0-aTexCoord.y);
// [...]
}
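If, as described in the question, the mask and the FBO contents end up flipped relative to each other, a third option is to flip only the mask lookup and sample the FBO texture unchanged. A minimal sketch of that variant:
#version 100
precision mediump float;
uniform sampler2D uTexture;     // previous filter's FBO color attachment
uniform sampler2D uMaskTexture; // mask bitmap, loaded the other way up
varying vec2 vTexCoord;
void main(){
    // flip only the mask's y coordinate so it lines up with the FBO contents
    float mask = texture2D(uMaskTexture, vec2(vTexCoord.x, 1.0 - vTexCoord.y)).r;
    gl_FragColor = texture2D(uTexture, vTexCoord) * mask;
}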

Related

How to texture non-unwrapped model using a cubemap

I have lots of models that aren't unwrapped (they don't have UV coordinates), and they are quite complex to unwrap. So I decided to texture them using a seamless cubemap:
[VERT]
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
...
void main()
{
gl_Position = u_projTrans * u_worldTrans * a_position;
texCoord = vec3(a_position);
}
[FRAG]
varying vec3 texCoord;
uniform samplerCube u_cubemapTex;
void main()
{
gl_FragColor = textureCube(u_cubemapTex, texCoord);
}
It works, but the result is quite weird because the texturing depends on the vertex positions. If my model is more complex than a cube or a sphere, I see visible seams and low texture resolution on some parts of the object.
Reflection maps well onto the model, but it has a mirror effect.
Reflection:
[VERT]
attribute vec3 a_normal;
varying vec3 v_reflection;
uniform mat4 u_matViewInverseTranspose;
uniform vec3 u_cameraPos;
...
void main()
{
mat3 normalMatrix = mat3(u_matViewInverseTranspose);
vec3 n = normalize(normalMatrix * a_normal);
//calculate reflection
vec3 vView = a_position.xyz - u_cameraPos.xyz;
v_reflection = reflect(vView, n);
...
}
How can I implement something like a reflection, but with a "sticky" effect, meaning the texture appears attached to a certain vertex (not moving)? Each side of the model must display its own side of the cubemap, so the result should look like common 2D texturing. Any advice will be appreciated.
UPDATE 1
I summed up all the comments and decided to calculate the cubemap UV. Since I use LibGDX, some names may differ from the OpenGL ones.
Shader class:
public class CubemapUVShader implements com.badlogic.gdx.graphics.g3d.Shader {
ShaderProgram program;
Camera camera;
RenderContext context;
Matrix4 viewInvTraMatrix, viewInv;
Texture texture;
Cubemap cubemapTex;
...
@Override
public void begin(Camera camera, RenderContext context) {
this.camera = camera;
this.context = context;
program.begin();
program.setUniformMatrix("u_matProj", camera.projection);
program.setUniformMatrix("u_matView", camera.view);
cubemapTex.bind(1);
program.setUniformi("u_textureCubemap", 1);
texture.bind(0);
program.setUniformi("u_texture", 0);
context.setDepthTest(GL20.GL_LEQUAL);
context.setCullFace(GL20.GL_BACK);
}
@Override
public void render(Renderable renderable) {
program.setUniformMatrix("u_matModel", renderable.worldTransform);
viewInvTraMatrix.set(camera.view);
viewInvTraMatrix.mul(renderable.worldTransform);
program.setUniformMatrix("u_matModelView", viewInvTraMatrix);
viewInvTraMatrix.inv();
viewInvTraMatrix.tra();
program.setUniformMatrix("u_matViewInverseTranspose", viewInvTraMatrix);
renderable.meshPart.render(program);
}
...
}
Vertex:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec3 a_binormal;
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform mat4 u_matProj;
uniform mat4 u_matView;
uniform mat4 u_matModel;
uniform mat4 u_matViewInverseTranspose;
uniform mat4 u_matModelView;
void main()
{
gl_Position = u_matProj * u_matView * u_matModel * a_position;
v_texCoord = a_texCoord0;
//CALCULATE CUBEMAP UV (WRONG!)
//I decided that tm_l2g mentioned in comments is u_matView * u_matModel
v_cubeMapUV = vec3(u_matView * u_matModel * vec4(a_normal, 0.0));
/*
mat3 normalMatrix = mat3(u_matViewInverseTranspose);
vec3 t = normalize(normalMatrix * a_tangent);
vec3 b = normalize(normalMatrix * a_binormal);
vec3 n = normalize(normalMatrix * a_normal);
*/
}
Fragment:
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform sampler2D u_texture;
uniform samplerCube u_textureCubemap;
void main()
{
vec3 cubeMapUV = normalize(v_cubeMapUV);
vec4 diffuse = textureCube(u_textureCubemap, cubeMapUV);
gl_FragColor = vec4(diffuse.rgb, 1.0);
}
The result is completely wrong:
I expect something like this:
UPDATE 2
The texture looks stretched on the sides and distorted in some places if I use the vertex position as the cubemap coordinate in the vertex shader:
v_cubeMapUV = a_position.xyz;
I uploaded euro.blend, euro.obj and cubemap files to review.
That code works only for meshes that are centered around (0,0,0). If that is not the case, or if (0,0,0) is not even inside the mesh, artifacts occur...
I would start by computing the bounding box BBOXmin(x0,y0,z0), BBOXmax(x1,y1,z1) of your mesh and translating the position used for the texture coordinate so it is centered around it:
center = 0.5*(BBOXmin+BBOXmax);
texCoord = vec3(a_position-center);
However, non-uniform vertex density would still lead to texture scaling artifacts, especially if the BBOX side sizes differ too much. Rescaling it to a cube would help:
vec3 center = 0.5*(BBOXmin+BBOXmax); // center of BBOX
vec3 size = BBOXmax-BBOXmin; // size of BBOX
vec3 r = a_position-center; // position centered around center of BBOX
r.x/=size.x; // rescale it to cube BBOX
r.y/=size.y;
r.z/=size.z;
texCoord = r;
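Put together as a complete vertex shader (a sketch: uBBoxMin/uBBoxMax are assumed uniform names, filled from the CPU-side bounding-box computation; the transform uniforms are the ones from the question):
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
uniform vec3 uBBoxMin; // assumed uniform: bounding-box minimum corner
uniform vec3 uBBoxMax; // assumed uniform: bounding-box maximum corner
void main()
{
    gl_Position = u_projTrans * u_worldTrans * a_position;
    vec3 center = 0.5 * (uBBoxMin + uBBoxMax); // center of BBOX
    vec3 size = uBBoxMax - uBBoxMin;           // size of BBOX
    // center the position inside the BBOX, then rescale it to a unit cube
    texCoord = (a_position.xyz - center) / size;
}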
Again, if the center of the BBOX is not inside the mesh, this would not work...
The reflection part is not clear to me. Do you have some images/screenshots?
[Edit1] simple example
I see it like this (without the center offsetting and aspect ratio corrections mentioned above):
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
pixel_col=col;
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec3 pixel_txr;
uniform samplerCube txr_skybox;
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
frag_col=texture(txr_skybox,pixel_txr);
}
//------------------------------------------------------------------
And here is a preview:
The white torus in the first few frames uses the fixed-function pipeline; the rest uses shaders. As you can see, the only inputs I use are the vertex position, the color, and the transform matrices: tm_l2g, which converts from mesh coordinates to the global world, and tm_g2s, which holds the perspective projection...
As you can see, I render the BBOX with the same cube map texture as I use for rendering the model, so it looks like a cool reflection/transparency effect :) (which was not intentional).
Anyway, when I change the line
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
into:
pixel_txr=pos;
in my vertex shader, the object becomes solid again:
You can combine both by passing two texture coordinate vectors and fetching two texels in the fragment shader, adding them together with some ratio. Of course you would then need to pass two cube map textures, one for the object and one for the skybox...
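A sketch of that combined fragment shader (the second coordinate, the second sampler, and the ratio uniform are illustrative names, not from the original code):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec3 pixel_txr;       // object-based coordinate, as above
in smooth vec3 pixel_txr_sky;   // assumed second coordinate (e.g. reflection vector)
uniform samplerCube txr_object; // assumed second cube map for the object
uniform samplerCube txr_skybox; // cube map for the skybox, as above
uniform float mix_ratio;        // assumed blend ratio, e.g. 0.5
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    // fetch both texels and blend them together with the given ratio
    frag_col = mix(texture(txr_object, pixel_txr),
                   texture(txr_skybox, pixel_txr_sky), mix_ratio);
}
//------------------------------------------------------------------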
The red warnings are from my CPU-side code, reminding me that I am trying to set uniforms that are not present in the shaders (I started from the bump-mapping example without changing the CPU-side code...).
[Edit2] Here is a preview of your mesh with the offset
The vertex shader changes a bit (I just added the offsetting described above):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
uniform vec3 center=vec3(0.0,0.0,2.0);
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
pixel_col=col;
pixel_txr=pos-center;
gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
So by offsetting the center point you can get rid of the singular-point distortion. However, as I mentioned in the comments, for arbitrary meshes there will always be some distortion with cheap texturing tricks like this, compared to proper texture coordinates.
Beware: my mesh was resized/normalized (sadly I do not remember if it is the <-1,+1> range or a different one, and I am too lazy to dig into the source code of the GLSL engine I tested this in), so the offset might need a different magnitude in your environment to achieve the same result.

OpenGL - trouble passing ALL data into shader at once

I'm trying to display textures on quads (2 triangles) using OpenGL 3.3.
Drawing a texture on a quad works great. However, I have ONE texture (a sprite atlas) and use 2 quads (objects) to display different parts of the atlas. In the draw loop, they end up switching back and forth (one disappears then appears again, etc.) at their individual translated locations.
I'm not drawing each quad (object) with its own standard DrawElements call; instead I package all quads, UVs, translations, etc. and send them up to the shader as one big chunk (as "in" variables). Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec3 vertexColor;
in vec2 vertexUV;
in vec3 translation;
in vec4 rotation;
in vec3 scale;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
...
void main(){
mat4 Model = mat4(1.0);
mat4 t = translationMatrix(translation);
mat4 s = scaleMatrix(scale);
mat4 r = rotationMatrix(vec3(rotation), rotation[3]);
Model *= t * r * s;
gl_Position = MVP * Model * vec4 (vertexPosition_modelspace,1); //* MVP;
// The color of each vertex will be interpolated
// to produce the color of each fragment
fragmentColor = vertexColor;
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
Is the vertex shader handling the large chunk of data the way I think it should, drawing each segment passed up individually? Because it does not seem like it. Is my train of thought correct on this?
For completeness this is my fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec3 fragmentColor;
// Interpolated values from the vertex shaders
in vec2 UV;
// Ouput data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main()
{
// Output color = color of the texture at the specified UV
color = texture2D( myTextureSampler, UV ).rgba;
}
A request for more information was made, so here is how I bind this data for the vertex shader. The following code is just the part I use for my translations; I have more like it for color, rotation, scale, UV, etc.:
gl.BindBuffer(gl.ARRAY_BUFFER, tvbo)
gl.BufferData(gl.ARRAY_BUFFER, len(data.Translations)*4, gl.Ptr(data.Translations), gl.DYNAMIC_DRAW)
tAttrib := uint32(gl.GetAttribLocation(program, gl.Str("translation\x00")))
gl.EnableVertexAttribArray(tAttrib)
gl.VertexAttribPointer(tAttrib, 3, gl.FLOAT, false, 0, nil)
...
gl.DrawElements(gl.TRIANGLES, int32(len(elements)), gl.UNSIGNED_INT, nil)
You have just a single sampler2D, which means you have just a single texture at your disposal, regardless of how many textures you bind. If you really need to pass the data as a single block, then you should add one sampler per texture. I am not sure how many objects/textures you have, but you are limited by the hardware texture-unit limit with this way of passing data. You also need to add another value to your data telling which primitive uses which texture unit, and then select the right texture sampler inside the fragment shader...
You should add stuff like this:
// vertex
in int usedtexture;
flat out int txr; // integer varyings must be flat-qualified
void main()
{
txr=usedtexture;
}
// fragment
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;
in vec2 UV;
flat in int txr;
out vec4 color;
void main()
{
if (txr==0) color = texture2D( myTextureSampler0, UV ).rgba;
else if (txr==1) color = texture2D( myTextureSampler1, UV ).rgba;
else if (txr==2) color = texture2D( myTextureSampler2, UV ).rgba;
else if (txr==3) color = texture2D( myTextureSampler3, UV ).rgba;
else color=vec4(0.0,0.0,0.0,0.0);
}
This way of passing is not good for these reasons:
The number of used textures is limited by the HW texture-unit limit.
If your rendering needs additional textures like normal/shininess/light maps, then you need more than one texture per object type, and your limit is suddenly divided by 2, 3, 4...
You need if/switch statements inside the fragment shader, which can slow things down considerably.
Yes, you can do it branchless (see the sketch below), but then you would need to access all the textures all the time, increasing heat stress on the GPU without reason...
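For completeness, a branchless version of the fragment shader above might look like this (a sketch against GLSL 3.30 core, hence texture() rather than the older texture2D()):
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;
in vec2 UV;
flat in int txr;
out vec4 color;
void main()
{
    // sample every texture, then zero out all but the one selected by txr
    color = texture(myTextureSampler0, UV) * float(txr == 0)
          + texture(myTextureSampler1, UV) * float(txr == 1)
          + texture(myTextureSampler2, UV) * float(txr == 2)
          + texture(myTextureSampler3, UV) * float(txr == 3);
}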
This kind of passing is suitable when all the textures are inside a single image (the texture atlas you mentioned), which can be faster this way and is reasonable for scenes with a small number of object types (or materials) but a large object count...
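For the atlas route, each quad only needs an offset/scale into the shared image rather than its own sampler. A minimal vertex-shader sketch, reusing the question's names; atlasRegion is an assumed extra attribute:
#version 330 core
in vec3 vertexPosition_modelspace;
in vec2 vertexUV;
in vec4 atlasRegion; // assumed attribute: xy = sub-rectangle offset, zw = sub-rectangle scale
uniform mat4 MVP;
out vec2 UV;
void main()
{
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
    // remap the quad's local [0,1] UV into its sub-rectangle of the atlas
    UV = atlasRegion.xy + vertexUV * atlasRegion.zw;
}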
Since I needed more input on this matter, I linked this page on reddit and someone was able to help me with one response! The reddit link is here:
https://www.reddit.com/r/opengl/comments/3gyvlt/opengl_passing_all_scene_data_into_shader_each/
The issue of the two textures/quads switching back and forth after passing all vertices up as one data structure was that my element indices were off. I needed to determine the correct indices for each set of vertices of my two-triangle (quad) objects. I simply had to do something like this:
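// four vertices per quad: triangles (idx*4, idx*4+1, idx*4+2) and (idx*4, idx*4+2, idx*4+3)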
vertexInfo.Elements = append(vertexInfo.Elements, uint32(idx*4), uint32(idx*4+1), uint32(idx*4+2), uint32(idx*4), uint32(idx*4+2), uint32(idx*4+3))

WebGL - which API to use?

I want to draw multiple polygon shapes, where each shape has its own set of vertices.
I want to be able to position these shapes independently of each other.
Which API can I use to set a_Position for the vertex shader?
A) gl.vertexAttrib3f
B) gl.vertexAttribPointer + gl.enableVertexAttribArray
thanks.
Your question makes it sound like you're really new to WebGL, so maybe you should read some tutorials. But in answer to your question:
gl.vertexAttrib3f only lets you supply a single constant value to a GLSL attribute, so you'll need to use gl.vertexAttribPointer and gl.enableVertexAttribArray. You'll also need to set up buffers with your vertex data.
The point of gl.vertexAttrib3f is arguably to let you pass in a constant in the case where you have a shader that uses multiple attributes but you don't have data for all of them. For example, let's say you have a shader that uses textures, and so needs texture coordinates, and that also has vertex colors. Something like this:
vertex shader
attribute vec4 a_position;
attribute vec2 a_texcoord;
attribute vec4 a_color;
varying vec2 v_texcoord;
varying vec4 v_color;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * a_position;
// pass texcoord and vertex colors to fragment shader
v_texcoord = a_texcoord;
v_color = a_color;
}
fragment shader
precision mediump float;
varying vec2 v_texcoord;
varying vec4 v_color;
uniform sampler2D u_texture;
void main() {
vec4 textureColor = texture2D(u_texture, v_texcoord);
// multiply the texture color by the vertex color
gl_FragColor = textureColor * v_color;
}
This shader requires vertex colors. If your geometry doesn't have vertex colors, then you have 2 options: (1) use a different shader, or (2) turn off the attribute for vertex colors and set it to a constant color, probably white.
gl.disableVertexAttribArray(aColorLocation);
gl.vertexAttrib4f(aColorLocation, 1, 1, 1, 1);
Now you can use the same shader even though you have no vertex color data.
Similarly, if you have no texture coordinates, you could bind a 1-pixel white texture and set the texture coordinates to some constant.
gl.disableVertexAttribArray(aTexcoordLocation);
gl.vertexAttrib2f(aTexcoordLocation, 0, 0);
gl.bindTexture(gl.TEXTURE_2D, some1x1PixelWhiteTexture);
In that case you could also decide what color to draw with by setting the vertex color attribute.
gl.vertexAttrib4f(aColorLocation, 1, 0, 1, 1); // draw in magenta

Desktop GLSL without ftransform()

I'm porting a codebase of mine from fixed-function OpenGL 1.x to OpenGL 2.x (technically OpenGL ES 2.0, but I'm still coding on the desktop, just keeping in mind the limitations that ES 2.0 imposes, which are similar to the 3.1 'new' profile).
The problem is, it seems like for anything other than 2D, creating a shader and passing in the model-view-projection matrix as a uniform does not work. Normally I get a black screen, but if I set the Z value of all my vertices to 0, things show up.
Putting my shaders in RenderMonkey works when I have ES 2.0 mode enabled, but on standard desktop GL it's just a black screen (no compiler errors/warnings):
vert shader:
uniform mat4 mvp_matrix;
uniform mat4 obj_matrix;
uniform vec4 u_color;
attribute vec3 a_vertex;
attribute vec2 a_texcoord0;
varying vec4 v_color;
varying vec2 v_texcoord0;
void main(void)
{
v_color = u_color;
gl_Position = mvp_matrix * (obj_matrix * vec4(a_vertex, 1.0));
v_texcoord0 = a_texcoord0;
}
frag shader:
uniform sampler2D t_texture0;
varying vec2 v_texcoord0;
varying vec4 v_color;
void main(void)
{
vec4 color = texture2D(t_texture0, v_texcoord0);
gl_FragColor = color * v_color;
}
I am passing in the matrices as glUniformMatrix4fv(location, 1, GL_FALSE, mvpMatrix);
This shader works like gold for anything drawn in 2D. What am I doing wrong here? Or am I required to use ftransform() on desktop GL?
One thing I think needs a bit of clarification:
A model matrix transforms an object from object coordinates to world coordinates.
A view matrix transforms the world coordinates to eye coordinates.
A projection matrix converts eye coordinates to clip coordinates.
Based on standard naming conventions, the mvpMatrix is projection * view * model, in that order. There are no other matrices that you need to multiply by. Projection is your projection matrix (either orthographic or perspective), view is the camera transform matrix (NOT the modelview), and model is the position, scale, and rotation of your object.
I believe the issue lies either in multiplying matrices that don't need to be multiplied together, or in multiplying matrices in the wrong order (matrix multiplication isn't commutative).
If you haven't already solved this, I would recommend sending all 3 matrices over separately and later dumping the values back to make sure there are no issues sending the matrices over.
Vertex shader:
attribute vec4 a_vertex;
attribute vec2 a_texcoord0;
varying vec2 v_texcoord0;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main(void)
{
gl_Position = projection * view * model * a_vertex;
v_texcoord0 = a_texcoord0;
}
Fragment Shader:
uniform sampler2D t_texture0;
uniform vec4 u_color;
varying vec2 v_texcoord0;
void main(void)
{
vec4 color = texture2D(t_texture0, v_texcoord0);
gl_FragColor = color * u_color;
}
Also, I moved the color uniform to the fragment shader; passing it through as a varying is unnecessary when all the vertices have the same color.

GLSL: gl_FragCoord issues

I am experimenting with GLSL for OpenGL ES 2.0. I have a quad and a texture I am rendering. I can successfully do it this way:
//VERTEX SHADER
attribute highp vec4 vertex;
attribute mediump vec2 coord0;
uniform mediump mat4 worldViewProjection;
varying mediump vec2 tc0;
void main()
{
// Transforming The Vertex
gl_Position = worldViewProjection * vertex;
// Passing The Texture Coordinate Of Texture Unit 0 To The Fragment Shader
tc0 = vec2(coord0);
}
//FRAGMENT SHADER
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;
void main()
{
gl_FragColor = texture2D(my_color_texture, tc0);
}
So far so good. However, I'd like to do some pixel-based filtering, e.g. a median filter, so I'd like to work in pixel coordinates rather than in normalized ones (tc0) and then convert the result back to normalized coordinates. Therefore I'd like to use gl_FragCoord instead of a uv attribute (tc0), but I don't know how to go back to normalized coordinates, because I don't know the range of gl_FragCoord. Any idea how I could get it? I have gotten this far using a fixed value for the 'normalization', though it's not working perfectly, as it causes stretching and tiling (but at least it shows something):
//FRAGMENT SHADER
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;
void main()
{
gl_FragColor = texture2D(my_color_texture, vec2(gl_FragCoord) / vec2(256, 256));
}
So the simple question is: what should I use in place of vec2(256, 256) so that I get the same result as if I were using the uv coords?
Thanks!
gl_FragCoord is in screen coordinates, so to get normalized coordinates you need to divide by the viewport width and height. You can use a uniform variable to pass that information to the shader, since OpenGL ES 2.0 has no built-in variable for it.
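A minimal fragment-shader sketch of that approach (uViewport is an assumed uniform name, set from the CPU side to the viewport size in pixels; this matches the uv coordinates only when the quad fills the whole viewport):
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;
uniform mediump vec2 uViewport; // assumed uniform: viewport size in pixels
void main()
{
    // gl_FragCoord.xy is in pixels; dividing by the viewport size gives [0,1]
    mediump vec2 uv = gl_FragCoord.xy / uViewport;
    gl_FragColor = texture2D(my_color_texture, uv);
}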
You can also sample the texture by un-normalized coordinates if:
sampling by texture() from a GL_TEXTURE_RECTANGLE texture
sampling by texelFetch() from a regular texture or texture buffer
(Note that neither option is available in OpenGL ES 2.0; texelFetch requires OpenGL ES 3.0 or desktop OpenGL 3.0, and rectangle textures are desktop-only.)
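For completeness, a texelFetch sketch (requires OpenGL ES 3.0 / GLSL ES 3.00, not the ES 2.0 targeted by the question):
#version 300 es
precision mediump float;
uniform sampler2D my_color_texture;
out vec4 fragColor;
void main()
{
    // texelFetch takes integer pixel coordinates directly, no normalization needed
    ivec2 p = ivec2(gl_FragCoord.xy);
    fragColor = texelFetch(my_color_texture, p, 0);
}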
